The doctrine of nuclear deterrence and a belief in its importance underpins many aspects of United States policy; it informs strategic force structures within the military, incentivizes multi-billion-dollar weapon-modernization programs within the Department of Energy, and impacts international alliances with the 29 member states of the North Atlantic Treaty Organization (NATO). The doctrine originally evolved under the stewardship of some of the most impressive minds of the twentieth century, including the physicist and H-bomb designer Herman Kahn, the Nobel Prize-winning economist Thomas Schelling, and the preeminent political scientist and diplomat Henry Kissinger.
Geological carbon storage (GCS) is a promising technology for mitigating increasing concentrations of carbon dioxide (CO2) in the atmosphere. The injection of supercritical CO2 into geological formations perturbs the physical and chemical state of the subsurface. The reservoir rock, as well as the overlying caprock, can experience changes in pore fluid pressure, thermal state, chemical reactivity, and stress distribution. These changes can cause mechanical deformation of the rock mass, opening or closure of preexisting fractures, and/or initiation of new fractures, which can influence the integrity of the overall GCS system over the thousands of years required for successful carbon storage. GCS sites are inherently unified systems; however, during scientific investigations these systems are usually divided according to the governing physics and the temporal and spatial scales involved. For many applications, decoupling the physics by treating the adjacent system as a boundary condition works well. Unfortunately, in the case of water and gas flow in porous media, because of the complexity of geological subsurface systems, the decoupling approach does not accurately capture the behavior of the larger relevant system. The coupled processes include various combinations of thermal (T), hydrological (H), chemical (C), mechanical (M), and biological (B) effects. These coupled processes are time- and length-scale dependent and can manifest in one- or two-way coupled behavior. There is an undeniable need to understand the coupling of processes during GCS and how these coupled phenomena can result in emergent behaviors arising from the interplay of physics and chemistry, including self-focusing of flow, porosity collapse, and changes in fracture networks. In this chapter, the first section addresses the subsurface system response to the injection of CO2, examined at field and laboratory scales, as well as in model systems, from the perspective of individual disciplines. The second section reviews coupling between processes during GCS observed either in the field or anticipated based on laboratory results.
When making computational simulation predictions of multi-physics engineering systems, sources of uncertainty in the prediction need to be acknowledged and included in the analysis within the current paradigm of striving for simulation credibility. A thermal analysis of an aerospace geometry was performed at Sandia National Laboratories. For this analysis, a verification, validation, and uncertainty quantification workflow provided structure for the analysis, resulting in the quantification of significant uncertainty sources including spatial numerical error and material property parametric uncertainty. It was hypothesized that the parametric uncertainty and numerical errors were independent and separable for this application. This hypothesis was supported by performing uncertainty quantification simulations at multiple mesh resolutions, while resource constraints limited the number of medium- and high-resolution simulations. Based on this supported hypothesis, a prediction including parametric uncertainty and a systematic mesh bias is used to make a margin assessment that avoids obscuring the results with unnecessary uncertainty and optimizes computing resources.
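As an illustration only, the sketch below shows one way a systematic mesh bias might be combined with a coarse-mesh parametric-uncertainty ensemble in a margin assessment, under the separability assumption described above. All quantities, names, and numbers are hypothetical and are not from the Sandia analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parametric-uncertainty ensemble run on the coarse mesh (hypothetical values):
# e.g., peak temperatures [K] from sampled material-property inputs.
coarse_ensemble = rng.normal(loc=610.0, scale=12.0, size=200)

# Systematic spatial-discretization bias estimated from a small number of
# medium/high-resolution runs at nominal inputs (hypothetical values).
T_coarse_nominal = 610.0
T_fine_nominal = 618.0
mesh_bias = T_fine_nominal - T_coarse_nominal  # treated as a deterministic shift

# If parametric uncertainty and numerical error are independent and separable,
# the bias can simply be added to every ensemble member.
corrected = coarse_ensemble + mesh_bias

# Margin assessment against a hypothetical requirement (failure threshold).
requirement = 700.0
margin = requirement - np.percentile(corrected, 95)
print(f"Estimated margin at the 95th percentile: {margin:.1f} K")
```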
Methods for the efficient representation of fracture response in geoelectric models impact an impressively broad range of problems in applied geophysics. We adopt the recently developed hierarchical material property representation in finite element analysis (Weiss, 2017) to model the electrostatic response of a discrete set of vertical fractures in the near surface and compare these results to those from anisotropic continuum models. We also examine the power law behavior of these results and compare it to continuum theory. We find that in measurement profiles from a single point source in directions both parallel and perpendicular to the fracture set, the fracture signature persists over all distances. Furthermore, the homogenization limit (the distance at which the individual fracture anomalies are too small to be either measured or of interest) is not strictly a function of the geometric distribution of the fractures, but also of their conductivity relative to the background. Hence, we show that the definition of “representative elementary volume”, that distance over which the statistics of the underlying heterogeneities are stationary, is incomplete as it pertains to the applicability of an equivalent continuum model. We also show that detailed interrogation of such intrinsically heterogeneous models may reveal power law behavior that appears anomalous, thus suggesting a possible mechanism to reconcile emerging theories in fractional calculus with classical electromagnetic theory.
Many types of dark regions occur naturally or artificially in Synthetic Aperture Radar (SAR) and Coherent Change Detection (CCD) products. Occluded regions in SAR imagery, known as shadows, are created when a target with height obstructs incident radar energy from illuminating resolution cells immediately behind the target in the ground plane. No-return areas are also created by objects or terrain that produce little scattering in the direction of the receiver, such as still water or flat plates for monostatic systems. Depending on the size of the dark region, additive and multiplicative noise levels are commonly measured for SAR performance testing. However, techniques for radar performance testing of CCD using dark regions are not common in the literature. While dark regions in SAR imagery also produce dark regions in CCD products, additional dark regions in CCD may further arise from decorrelation of bright regions in SAR imagery due to clutter or terrain that has poor wide-sense stationarity (such as foliage in wind), man-made disturbances of the scene, or unintended artifacts introduced by the radar and image processing. By comparing dark regions in CCD imagery over multiple passes, one can identify unintended decorrelation introduced by poor radar performance rather than phenomenology. This paper addresses selected automated dark-region measurement techniques for the evaluation of radar performance during SAR and CCD field testing.
Effectively using a graphics processing unit (GPU) for Monte Carlo particle transport is a challenging task due to its memory storage requirements and traditionally divergent algorithms. Most efforts in this area have focused on the entire transport process, choosing to use atomic operations or tally replication for computing tallies. This work isolates the performance of the tallies from the rest of the transport process and studies the impact of using different approaches for tallying on the GPU. Five implementations of a photon escape tally are compared, using both single and double precision data types. Results show that replicating tallies is clearly the best option overall, if there is enough memory available on the GPU to store them. When insufficient memory becomes an issue, the best method to use depends on the size, data type, and update frequency of the tally. Global atomic updates can be a reasonable option in some cases, especially if they are used infrequently. However, there are two alternatives for general-purpose tallying that were shown to be more effective in most of the scenarios considered. These two alternatives are based on NVIDIA's warp shuffle feature, which allows 32 threads to simultaneously exchange or broadcast data, minimizing the number of atomic operations needed to get the final tally result.
This paper introduces the "Discrete Direct" (DD) model calibration and uncertainty propagation approach for computational models calibrated to data from sparse replicate tests of stochastically varying systems. The DD approach generates and propagates various discrete realizations of possible calibration parameter values corresponding to possible realizations of the uncertain inputs and outputs of the experiments. This is in contrast to model calibration methods that attempt to assign or infer continuous probability density functions for the calibration parameters, which adds unjustified information to the calibration and propagation problem. The DD approach straightforwardly accommodates aleatory variabilities and epistemic uncertainties in system properties and behaviors, in input initial and boundary conditions, and in measurement uncertainties in the experiments. The approach appears to have several advantages over Bayesian and other calibration approaches for capturing and utilizing the information obtained from the typically small number of experiments in model calibration situations. In particular, the DD methodology better preserves the fundamental information from the experimental data in a way that enables model predictions to be more directly traced back to the supporting experimental data. The approach is also presently more viable for calibration involving sparse realizations of random function data (e.g., stress-strain curves) and random field data. The DD methodology is conceptually simpler than Bayesian calibration approaches and is straightforward to implement. The methodology is demonstrated and analyzed in this paper on several illustrative calibration and uncertainty propagation problems.
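The following toy sketch illustrates the general flavor of the idea as we read it, not the authors' implementation: invert the model for each plausible realization of the measured output to obtain a discrete set of calibration parameter values, then propagate that discrete set without fitting a probability density. The model form, values, and helper names are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)

# Hypothetical model: measured response y as a function of a calibration
# parameter k at a fixed experimental condition x_exp.
def model(k, x):
    return k * x / (1.0 + 0.1 * k * x)

x_exp = 2.0
y_measured = 1.5          # single sparse test result (illustrative)
sigma_meas = 0.05         # measurement uncertainty (illustrative)

# Generate discrete realizations of what the "true" experimental output could
# have been, then solve for the parameter value consistent with each one.
k_realizations = []
for _ in range(50):
    y_possible = y_measured + rng.normal(0.0, sigma_meas)
    k = brentq(lambda kk: model(kk, x_exp) - y_possible, 1e-6, 100.0)
    k_realizations.append(k)

# Propagate the discrete parameter set (no fitted PDF) to a prediction condition.
x_pred = 5.0
predictions = [model(k, x_pred) for k in k_realizations]
print(f"Prediction range: {min(predictions):.3f} to {max(predictions):.3f}")
```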
Many companies rely on user experience metrics, such as Net Promoter scores, to monitor changes in customer attitudes toward their products. This paper suggests that similar metrics can be used to assess the user experience of the pilots and sensor operators who are tasked with using our radar, EO/IR, and other remote sensing technologies. As we have previously discussed, the problem of making our national security remote sensing systems useful, usable, and adoptable is a human-system integration problem that does not get the sustained attention it deserves, particularly given the high-throughput, information-dense task environments common to military operations. In previous papers, we have demonstrated how engineering teams can adopt well-established human-computer interaction principles to fix significant usability problems in radar operational interfaces. In this paper, we describe how we are using a combination of Situation Awareness design methods, along with techniques from the consumer sector, to identify opportunities for improving human-system interactions. We explain why we believe that all stakeholders in remote sensing, including program managers, engineers, and operational users, can benefit from systematically incorporating some of these measures into the evaluation of our national security sensor systems. We also provide examples of our own experience adapting consumer user experience metrics in operator-focused evaluations of currently deployed radar interfaces.
We discuss uncertainty quantification in multisensor data integration and analysis, including estimation methods and the role of uncertainty in decision making and trust in automated analytics. The challenges associated with automatically aggregating information across multiple images, identifying subtle contextual cues, and detecting small changes in noisy activity patterns are well-established in the intelligence, surveillance, and reconnaissance (ISR) community. In practice, such questions cannot be adequately addressed with discrete counting, hard classifications, or yes/no answers. For a variety of reasons ranging from data quality to modeling assumptions to inadequate definitions of what constitutes "interesting" activity, variability is inherent in the output of automated analytics, yet it is rarely reported. Consideration of these uncertainties can provide nuance to automated analyses and engender trust in their results. In this work, we assert the importance of uncertainty quantification for automated data analytics and outline a research agenda. We begin by defining uncertainty in the context of machine learning and statistical data analysis, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues and discuss methods for data-driven uncertainty quantification in the context of a multi-source image analysis example. We conclude by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics, including sensor tasking and analyst trust in automated analytics.
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. As a result, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
The Defense Advanced Research Projects Agency has identified a need for low-standby-power systems that react to physical environmental signals in the form of an electrical wakeup signal. To address this need, we design piezoelectric aluminum nitride based microelectromechanical resonant accelerometers that couple with a near-zero-power, complementary metal-oxide-semiconductor application-specific integrated circuit. The piezoelectric accelerometer operates near resonance to form a passive mechanical filter of the vibration spectrum that targets a specific frequency signature. Resonant vibration sensitivities as large as 490 V/g (in air) are obtained at frequencies as low as 43 Hz. The integrated circuit operates in the subthreshold regime, employing current starvation to minimize power consumption. Two accelerometers are coupled with the circuit to form the wakeup system, which requires only 5.25 nW before wakeup and 6.75 nW after wakeup. The system is shown to wake up to a generator signal and reject confusers in the form of other vehicles and background noise.
Microtubules exhibit a dynamic instability between growth and catastrophic depolymerization. GTP-tubulin (αβ-dimer bound to GTP) self-assembles, but dephosphorylation of GTP- to GDP-tubulin within the tubule results in destabilization. While the mechanical basis for destabilization is not fully understood, one hypothesis is that dephosphorylation causes tubulin to change shape, frustrating bonds and generating stress. To test this idea, we perform molecular dynamics simulations of microtubules built from coarse-grained models of tubulin, incorporating a small compression of α-subunits associated with dephosphorylation in experiments. We find that this shape change induces depolymerization of otherwise stable systems via unpeeling "ram's horns" characteristic of microtubules. Depolymerization can be averted by caps with uncompressed α-subunits, i.e., GTP-rich end regions. Thus, the shape change is sufficient to yield microtubule behavior.
Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational trade-offs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find that ADMM demonstrates empirical advantages through consistently lower errors and faster computational times.
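The abstract's core ingredient, cross-validated selection of the LASSO regularization constant for a polynomial-chaos-type regression, can be illustrated with a short, hedged sketch using scikit-learn in place of the specialized solvers named above. The 1-D Legendre basis, sample counts, and sparsity pattern are all illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import legvander
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)

# Hypothetical setup: a sparse 1-D polynomial chaos expansion in Legendre
# polynomials, recovered from fewer samples than basis terms.
n_samples, degree = 40, 60
x = rng.uniform(-1.0, 1.0, n_samples)
Psi = legvander(x, degree)                           # measurement (design) matrix
true_coeffs = np.zeros(degree + 1)
true_coeffs[[0, 3, 7, 15]] = [1.0, 0.5, -0.8, 0.3]   # sparse "truth"
y = Psi @ true_coeffs + 0.01 * rng.standard_normal(n_samples)

# Cross-validated selection of the LASSO regularization constant, standing in
# for the automated selection described in the abstract.
solver = LassoCV(cv=5, fit_intercept=False, max_iter=50_000).fit(Psi, y)
recovered = solver.coef_

print("selected alpha:", solver.alpha_)
print("largest recovered coefficients:", np.argsort(np.abs(recovered))[-4:])
```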
The mechanism by which aerodynamic effects of jet/fin interaction arise from the flow structure of a jet in crossflow is explored using particle image velocimetry measurements of the crossplane velocity field as it impinges on a downstream fin instrumented with high-frequency pressure sensors. A Mach 3.7 jet issues into a Mach 0.8 crossflow from either a normal or inclined nozzle, and three lateral fin locations are tested. Conditional ensemble-averaged velocity fields are generated based upon the simultaneous pressure condition. Additional analysis relates instantaneous velocity vectors to pressure fluctuations. The pressure differential across the fin is driven by variations in the spanwise velocity component, which substitutes for the induced angle of attack on the fin. Pressure changes at the fin tip are strongly related to fluctuations in the streamwise velocity deficit, wherein lower pressure is associated with higher velocity and vice versa. The normal nozzle produces a counter-rotating vortex pair that passes above the fin, and pressure fluctuations are principally driven by the wall horseshoe vortex and the jet wake deficit. The inclined nozzle produces a vortex pair that impinges the fin and yields stronger pressure fluctuations driven more directly by turbulence originating from the jet mixing.
Multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, therefore making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure preserving (also termed physics compatible) discretizations of the Maxwell equations in first order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.
Ceramic fiber insulation materials, such as Fiberfrax and Min-K products, are used in a number of applications (e.g. aerospace, fire protection, and military) for their stability and performance in extreme conditions. However, the thermal properties of these materials have not been thoroughly characterized for many of the conditions that they will be exposed to, such as high temperatures and pressures. This complicates the design of systems using these insulations as the uncertainty in the thermal properties is high. In this study, the thermal conductivity of three ceramic fiber insulations, Fiberfrax T-30LR laminate, Fiberfrax 970-H paper, and Min-K TE1400 board, was measured as a function of atmospheric temperature and compression. Measurements were taken using the transient plane source technique. The results of this study are compared against three published data sets.
Data movement is considered the main performance concern for exascale, including both on-node memory and off-node network communication. Indeed, many application traces show significant time spent in MPI calls, potentially indicating that faster networks must be provisioned for scalability. However, equating MPI times with network communication delays ignores synchronization delays and software overheads independent of network hardware. Using point-to-point protocol details, we explore the decomposition of MPI time into communication, synchronization and software stack components using architecture simulation. Detailed validation using Bayesian inference is used to identify the sensitivity of performance to specific latency/bandwidth parameters for different network protocols and to quantify associated uncertainties. The inference combined with trace replay shows that synchronization and MPI software stack overhead are at least as important as the network itself in determining time spent in communication routines.
The solution of the Optimal Power Flow (OPF) and Unit Commitment (UC) problems (i.e., determining generator schedules and set points that satisfy demands) is critical for efficient and reliable operation of the electricity grid. For computational efficiency, the alternating current OPF (ACOPF) problem is usually formulated with a linearized transmission model, often referred to as the DCOPF problem. However, these linear approximations do not guarantee global optimality or even feasibility for the true nonlinear alternating current (AC) system. Nonlinear AC power flow models can and should be used to improve model fidelity, but successful global solution of problems with these models requires the availability of strong relaxations of the AC optimal power flow constraints. In this paper, we use McCormick envelopes to strengthen the well-known second-order cone (SOC) relaxation of the ACOPF problem. With this improved relaxation, we can further include tight bounds on the voltages at the reference bus, and we demonstrate the effectiveness of this approach for improved bounds tightening. We present results on the optimality gap of both the base SOC relaxation and our Strengthened SOC (SSOC) relaxation for the National Information and Communications Technology Australia (NICTA) Energy System Test Case Archive (NESTA). For the cases where the SOC relaxation yields an optimality gap of more than 0.1%, the SSOC relaxation with bounds tightening further reduces the optimality gap by an average of 67% and ultimately reduces the optimality gap to less than 0.1% for 58% of all the NESTA cases considered. Stronger relaxations enable more efficient global solution of the ACOPF problem and can improve the computational efficiency of MINLP problems with AC power flow constraints, e.g., unit commitment.
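For readers unfamiliar with McCormick envelopes, the building block behind the strengthened relaxation is the standard linear envelope of a bilinear product on a box. The hedged Pyomo sketch below writes these four inequalities for a single product w = x*y with hypothetical voltage-like bounds; it is a generic illustration, not the paper's ACOPF formulation.

```python
# A minimal Pyomo sketch of McCormick envelopes for a bilinear term w = x*y,
# the building block used to tighten relaxations; bounds are hypothetical.
import pyomo.environ as pyo

m = pyo.ConcreteModel()
xL, xU = 0.9, 1.1   # e.g., voltage-magnitude-like bounds (illustrative)
yL, yU = 0.9, 1.1
m.x = pyo.Var(bounds=(xL, xU))
m.y = pyo.Var(bounds=(yL, yU))
m.w = pyo.Var()     # relaxation of the product x*y

# The four McCormick inequalities: the convex/concave envelopes of x*y
# on the box [xL, xU] x [yL, yU].
m.mc1 = pyo.Constraint(expr=m.w >= xL * m.y + yL * m.x - xL * yL)
m.mc2 = pyo.Constraint(expr=m.w >= xU * m.y + yU * m.x - xU * yU)
m.mc3 = pyo.Constraint(expr=m.w <= xU * m.y + yL * m.x - xU * yL)
m.mc4 = pyo.Constraint(expr=m.w <= xL * m.y + yU * m.x - xL * yU)

# m.w can now replace the nonconvex product in an objective or constraint,
# yielding a linear relaxation whose tightness improves with tighter bounds.
```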
Real-time energy pricing has caused a paradigm shift for process operations, with flexibility becoming a critical driver of economics. As such, incorporating real-time pricing into planning and scheduling optimization formulations has received much attention over the past two decades (Zhang and Grossmann, 2016). These formulations, however, focus on 1-hour or longer time discretizations and neglect process dynamics. Recent analysis of historical price data from the California electricity market (CAISO) reveals that a majority of economic opportunities come from fast market layers, i.e., the real-time energy market and ancillary services (Dowling et al., 2017). We present a dynamic optimization framework to quantify the revenue opportunities of chemical manufacturing systems providing frequency regulation (FR). Recent analysis of first-order systems finds that slow process dynamics naturally dampen high-frequency harmonics in FR signals (Dowling and Zavala, 2017). As a consequence, traditional chemical processes with long time constants may be able to provide fast flexibility without disrupting product quality, performance of downstream unit operations, etc. This study quantifies the ability of a distillation system to provide sufficient dynamic flexibility to adjust energy demands every 4 seconds in response to market signals. Using a detailed differential algebraic equation (DAE) model (Hahn and Edgar, 2002) and historical data from the Texas electricity market (ERCOT), we estimate revenue opportunities for different column designs. We implement our model using the algebraic modeling language Pyomo (Hart et al., 2011) and its dynamic optimization extension Pyomo.DAE (Nicholson et al., 2017). These software packages enable rapid development of complex optimization models using high-level modeling constructs and provide flexible tools for initializing and discretizing DAE models.
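As a flavor of the Pyomo.DAE workflow referenced above, the hedged sketch below declares a toy first-order dynamic model, discretizes it by collocation, and attaches a stand-in tracking objective. The dynamics, time constant, and objective are illustrative placeholders for the detailed distillation model and FR revenue terms, not the authors' model.

```python
import pyomo.environ as pyo
import pyomo.dae as dae

m = pyo.ConcreteModel()
m.t = dae.ContinuousSet(bounds=(0, 3600))       # one hour, in seconds

m.x = pyo.Var(m.t)                              # e.g., a composition-like state
m.u = pyo.Var(m.t, bounds=(0.5, 1.5))           # e.g., a manipulated energy input
m.dxdt = dae.DerivativeVar(m.x, wrt=m.t)

tau = 600.0  # illustrative first-order time constant [s]

def _ode(m, t):
    return tau * m.dxdt[t] == -m.x[t] + m.u[t]
m.ode = pyo.Constraint(m.t, rule=_ode)
m.x[0].fix(1.0)

# Collocation discretization converts the DAE into an algebraic NLP.
pyo.TransformationFactory("dae.collocation").apply_to(m, nfe=60, ncp=3)

# Stand-in objective: track a constant setpoint; in the paper's setting this
# would instead encode revenue tied to the 4-second regulation signal.
m.obj = pyo.Objective(expr=sum((m.x[t] - 1.0) ** 2 for t in m.t))
# The discretized model can then be passed to an NLP solver such as Ipopt,
# e.g., pyo.SolverFactory("ipopt").solve(m).
```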
Turbulent viscosities have been calculated from stereoscopic particle image velocimetry (PIV) data for a supersonic jet exhausting into a transonic crossflow. Image interrogation must be optimized to produce useful turbulent viscosity fields. High-accuracy image reconstruction should be used for the final iteration, whereas efficient algorithms produce spatial artifacts in derivative fields. Mean strain rates should be calculated from large windows (128 pixels) with 75% overlap. Turbulent stresses are optimally computed using multiple (more than two) iterations of image interrogation and 75% overlap, both of which increase the signal bandwidth. However, the improvement is modest and may not justify the considerable increase in computational expense. The turbulent viscosity may be expressed in tensor notation to include all three axes of velocity data. In this formulation, a least-squares fit to the multiple equations comprising the tensor generated a scalar turbulent viscosity that eliminated many of the artifacts and noise present in the single-component formulation. The resulting experimental turbulent viscosity fields will be used to develop data-driven turbulence models that can improve the fidelity of predictive computations.
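The tensor least-squares idea mentioned above amounts to finding the single scalar that best fits the Boussinesq relation across all measured stress and strain-rate components at a point. The sketch below shows the standard closed-form least-squares solution with synthetic single-point values; it is a generic illustration under that assumption, not the authors' processing code.

```python
import numpy as np

def scalar_turbulent_viscosity(reynolds_stress, strain_rate):
    """Least-squares scalar nu_t fitting -a_ij ~ 2 nu_t S_ij at one point.

    reynolds_stress : 3x3 array of <u_i' u_j'> velocity covariances
    strain_rate     : 3x3 array of mean strain rates S_ij
    """
    k = 0.5 * np.trace(reynolds_stress)                # turbulent kinetic energy
    a = reynolds_stress - (2.0 / 3.0) * k * np.eye(3)  # anisotropic part
    # Minimize sum_ij (a_ij + 2 nu_t S_ij)^2 over all tensor components.
    return -np.sum(a * strain_rate) / (2.0 * np.sum(strain_rate * strain_rate))

# Illustrative (synthetic) single-point values, not PIV data:
S = np.array([[0.0, 50.0, 0.0], [50.0, 0.0, 0.0], [0.0, 0.0, 0.0]])      # 1/s
R = np.array([[40.0, -8.0, 0.0], [-8.0, 30.0, 0.0], [0.0, 0.0, 25.0]])   # m^2/s^2
print(f"nu_t ~ {scalar_turbulent_viscosity(R, S):.4f} m^2/s")
```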
The thermal environment generated during an intense radiation event like a nuclear weapon airburst, lightning strike, or directed energy weaponry has a devastating effect on many exposed materials. Natural and engineered materials can be damaged and ignite from the intense thermal radiation, potentially resulting in sustained fires. Understanding material behavior in such an event is essential for mitigating the damage to a variety of defense systems, such as aircraft and weaponry. Flammability and ignition studies in this regime (very high heat flux, short duration) are less plentiful than in the heat flux regimes representative of typical fires. The flammability and ignition behavior of a material may differ at extreme heat flux due to the balance of the heat conduction into the material compared to other processes. Length scale effects may also be important in flammability and ignition behavior, especially in the high heat flux regime. A variety of materials have recently been subjected to intense thermal loads (~100–1000 kW/m2) in testing at both the Solar Furnace and the Solar Tower at the National Solar Thermal Test Facility at Sandia National Laboratories. The Solar Furnace, operating at a smaller scale (≈30 cm2 area), provides the ability to test a wide range of materials under controlled radiative flux conditions. The Solar Tower exposes objects and materials to the same flux on a much larger scale (≈4 m2 area), integrating complex geometry and scale effects. Results for a variety of materials tested in both facilities are presented and compared. Material response often differs depending on scale, suggesting a significant scale effect. Mass loss per unit energy tends to go down as scale increases, and ignition probability tends to increase with scale.
Intense, dynamic radiant heat loads damage and ignite many common materials, but are outside the scope of typical fire studies. Explosive, directed-energy, and nuclear-weapon environments subject materials to this regime of extreme heating. The Solar Furnace at the National Solar Thermal Test Facility simulated this environment for an extensive experimental study of the response of many natural and engineered materials. Solar energy was focused onto a spot (∼10 cm2 area) in the center of the tested materials, generating an intense radiant load (∼100–1000 kW m−2) for approximately 3 seconds. The response of the material to the extreme heat flux was carefully monitored using video photography, and the initiation times of various events were recorded, including charring, pyrolysis, ignition, and melting. These ignition and damage thresholds are compared to historical ignition results, predominantly for black, α-cellulose papers. Reexamination of the historical data indicates that ignition behavior can be predicted from simplified empirical models based on thermal diffusion. When normalized by the thickness and the thermal properties, ignition and damage thresholds exhibit comparable trends across a wide range of materials. This technique substantially reduces the complexity of the ignition problem, improving ignition models and experimental validation.
This study demonstrates a systematic methodology for establishing the design loads of a wave energy converter. The proposed design load methodology incorporates existing design guidelines, where they exist, and follows a typical design progression; namely, advancing from many, quick, order-of-magnitude accurate, conceptual-stage design computations to a few, computationally intensive, high-fidelity, design validation simulations. The goal of the study is to streamline and document this process based on quantitative evaluations of the design loads' accuracy at each design step and consideration of the computational efficiency of the entire design process. For the wave energy converter, loads, and site conditions considered, this study demonstrates an efficient and accurate methodology for evaluating the design loads.
ASME 2018 12th International Conference on Energy Sustainability, ES 2018, collocated with the ASME 2018 Power Conference and the ASME 2018 Nuclear Forum
This paper presents an evaluation of alternative particle heat-exchanger designs, including moving packed-bed and fluidized-bed designs, for high-temperature heating of a solar-driven supercritical CO2 (sCO2) Brayton power cycle. The design requirements for high-pressure (> 20 MPa) and high-temperature (> 700 °C) operation associated with sCO2 posed several challenges requiring high-strength materials for piping and/or diffusion bonding for plates. Designs from several vendors for a 100 kW-thermal particle-to-sCO2 heat exchanger were evaluated as part of this project. Cost, heat-transfer coefficient, structural reliability, manufacturability, parasitics and heat losses, scalability, compatibility, erosion and corrosion, transient operation, and inspection ease were considered in the evaluation. An analytical hierarchy process was used to weight and compare the criteria for the different design options. The fluidized-bed design fared the best on heat-transfer coefficient, structural reliability, scalability, and inspection ease, while the moving packed-bed designs fared the best on cost, parasitics and heat losses, manufacturability, compatibility, erosion and corrosion, and transient operation. A 100 kWt shell-and-plate design was ultimately selected for construction and integration with Sandia's falling particle receiver system.
We report on progress for increasing the laser-induced damage threshold of dichroic beam combiner coatings for high transmission at 527 nm and high reflection at 1054 nm (22.5° angle of incidence, S-polarization). The initial coating consisted of HfO2 and SiO2 layers deposited with electron beam evaporation, and the laser-induced damage threshold was 7 J/cm2 at 532 nm with 3.5 ns pulses. This study introduces different coating strategies that were utilized to increase the laser damage threshold of this coating to 12.5 J/cm2.
Nuclear weapon airbursts can create extreme radiative heat fluxes for a short duration. The radiative heat transfer from the fireball can damage and ignite materials in a region that extends beyond the zone damaged by the blast wave itself. Directed energy weapons also create extreme radiative heat fluxes. These scenarios involve radiative fluxes much greater than the environments typically studied in flammability and ignition tests. Furthermore, the vast majority of controlled experiments designed to obtain material response and flammability data at high radiative fluxes have been performed at relatively small scales (order 10 cm2 area). A recent series of tests performed on the Solar Tower at the National Solar Thermal Test Facility exposed objects and materials to fluxes of 100–2,400 kW/m2 at a much larger scale (≈1 m2 area). This paper provides an overview of testing performed at the Solar Tower for a variety of materials including aluminum, fabric, and two types of plastics. Tests with meter-scale objects such as tires and chairs are also reported, highlighting some potential effects of geometry that are difficult to capture in small-scale tests. The aluminum sheet melted at the highest heat flux tested. At the same flux, the tire ignited but the flames were not sustained when the external heat flux was removed; the damage appeared to be limited to the outer portion of the tire, and internal pressure was maintained.
We use broken symmetry III-V semiconductor Fano metasurfaces to substantially improve the efficiency of second-harmonic generation (SHG) in the near infrared, compared to SHG obtained from metasurfaces created using symmetrical Mie resonators.
In this work, we experimentally demonstrate the simultaneous occurrence of second-, third-, and fourth-harmonic generation, sum-frequency generation, four-wave mixing, and six-wave mixing processes in III-V semiconductor metasurfaces with spectra spanning from the UV to the near-IR.
The radiation effects community embraces the importance of quantifying uncertainty in model predictions and the importance of propagating this uncertainty into the integral metrics used to validate models, but they are not always aware of the importance of addressing the energy- and reaction-dependent correlations in the underlying uncertainty contributors. This paper presents a rigorous high-fidelity Total Monte Carlo approach that addresses the correlation in the underlying uncertainty components and quantifies the role of both energy and reaction-dependent correlations in a sample application that addresses the damage metrics relevant to silicon semiconductors.
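To make the role of correlation concrete, the hedged sketch below propagates a few-group response uncertainty to a simple damage-metric integral with and without an assumed inter-group correlation. The group structure, spectrum, uncertainties, and correlation matrix are all made-up illustrative numbers, not the paper's nuclear-data covariances.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 3-group damage response and a fixed fluence spectrum.
response = np.array([1.0, 2.5, 4.0])          # damage per unit fluence, per group
fluence = np.array([3.0e13, 1.0e13, 5.0e12])

# Relative (1-sigma) uncertainties per group and an assumed correlation matrix
# between groups (energy correlation); both are made-up numbers.
rel_sigma = np.array([0.05, 0.10, 0.15])
corr = np.array([[1.0, 0.6, 0.2],
                 [0.6, 1.0, 0.6],
                 [0.2, 0.6, 1.0]])
cov = np.outer(rel_sigma, rel_sigma) * corr * np.outer(response, response)

def sample_metric(covariance, n=100_000):
    samples = rng.multivariate_normal(response, covariance, size=n)
    return samples @ fluence                   # damage metric per sample

correlated = sample_metric(cov)
uncorrelated = sample_metric(np.diag(np.diag(cov)))

print(f"relative std, correlated:   {correlated.std() / correlated.mean():.3%}")
print(f"relative std, uncorrelated: {uncorrelated.std() / uncorrelated.mean():.3%}")
```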
Maintenance, Safety, Risk, Management and Life-Cycle Performance of Bridges - Proceedings of the 9th International Conference on Bridge Maintenance, Safety and Management, IABMAS 2018
Economic barriers to the replacement of bridges and other civil structures have created an aging infrastructure and placed greater demands on the deployment of effective and rapid health monitoring methods. To gain access for inspections, structure and sealant must be removed, disassembly processes must be completed, and personnel must be transported to remote locations. Reliable Structural Health Monitoring (SHM) systems can automatically process data, assess structural condition, and signal the need for specific maintenance actions. They can reduce the costs associated with the increasing maintenance and surveillance needs of aging structures. In-situ sensors, coupled with remote interrogation, can be employed to overcome a myriad of inspection impediments stemming from accessibility limitations, complex geometries, the location of hidden damage, and the isolated location of the structure. Furthermore, prevention of unexpected flaw growth and structural failure could be improved if on-board SHM systems were used to regularly, or even continuously, assess structural integrity. A research program was completed to develop and validate Comparative Vacuum Monitoring (CVM) sensors for crack detection. Sandia National Labs, in conjunction with private industry and the U.S. Department of Transportation, completed a series of CVM validation and certification programs aimed at establishing the overall viability of these sensors for monitoring bridge structures. Factors that affect SHM sensitivity include flaw size, shape, orientation, and location relative to the sensors, along with operational environments. Statistical methods using one-sided tolerance intervals were employed to derive Probability of Flaw Detection (POD) levels for typical application scenarios. Complementary, multi-year field tests were also conducted to study the deployment and long-term operation of CVM sensors on aircraft and bridges. This paper presents the quantitative crack detection capabilities of the CVM sensor, its performance in actual operating environments, and the prospects for structural health monitoring applications on a wide array of civil structures.
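The one-sided tolerance-interval calculation mentioned above can be illustrated with the standard normal-theory tolerance factor obtained from the noncentral t distribution. The sketch below uses synthetic detection data and a hypothetical 90/95-style requirement; it illustrates the statistical machinery only and is not the CVM study's analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Illustrative data: smallest crack length (mm) reliably flagged by the sensor
# in each of n repeated trials (synthetic, not CVM data).
detectable = rng.normal(loc=1.2, scale=0.15, size=20)
n = detectable.size

p, conf = 0.90, 0.95   # "90/95"-style detection requirement
# One-sided normal tolerance factor via the noncentral t distribution.
k = stats.nct.ppf(conf, df=n - 1, nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)

upper_bound = detectable.mean() + k * detectable.std(ddof=1)
print(f"With {conf:.0%} confidence, {p:.0%} of trials detect cracks "
      f"no larger than {upper_bound:.2f} mm")
```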
Proceedings of ISMA 2018 - International Conference on Noise and Vibration Engineering and USD 2018 - International Conference on Uncertainty in Structural Dynamics
Structural dynamic models of mechanical, aerospace, and civil structures often involve connections of multiple subcomponents with rivets, bolts, press fits, or other joining processes. Recent model order reduction advances have been made for jointed structures using appropriately defined whole joint models in combination with linear substructuring techniques. A whole joint model condenses the interface nodes onto a single node with multi-point constraints resulting in drastic increases in computational speeds to predict transient responses. One drawback to this strategy is that the whole joint models are empirical and require calibration with test or high-fidelity model data. A new framework is proposed to calibrate whole joint models by computing global responses from high-fidelity finite element models and utilizing global optimization to determine the optimal joint parameters. The method matches the amplitude dependent damping and natural frequencies predicted for each vibration mode using quasi-static modal analysis.
Flow maldistribution in microchannel heat exchangers (MCHEs) can negatively impact heat exchanger effectiveness. Several rules of thumb exist about designing for uniform flow, but very little data are published to support these claims. In this work, complementary experiments and computational fluid dynamics (CFD) simulations of MCHEs enable a solid understanding of flow uniformity to a higher level of detail than previously seen. Experiments provide a validation data source to assess CFD predictive capability. The traditional semi-circular header geometry is tested. Experiments are carried out in a clear acrylic MCHE and water flow is measured optically with particle image velocimetry. CFD boundary conditions are matched to those in the experiment and the outputs, specifically velocity and turbulent kinetic energy profiles, are compared.
Self-assembled giant polymer vesicles prepared from double-hydrophilic diblock copolymers, poly(ethylene oxide)-b-poly(acrylic acid) (PEO-PAA), show significant degradation in response to pH changes. Because of the switching behavior of the diblock copolymers at biologically relevant pH environments (2 to 9), these polymer vesicles have potential biomedical applications as smart delivery vehicles.
Performance modeling of networks through simulation requires application endpoint models that inject traffic into the simulation models. Endpoint models today for system-scale studies consist mainly of post-mortem trace replay, but these off-line simulations may lack flexibility and scalability. On-line simulations running so-called skeleton applications run reduced versions of an application that generate traffic that is the same or similar to the full application. These skeleton apps have advantages for flexibility and scalability, but they often must be custom written for the simulator itself. Auto-skeletonization of existing application source code via compiler tools would provide endpoint models with minimal development effort. These source-to-source transformations have been only narrowly explored. We introduce a pragma language and corresponding Clang-driven source-to-source compiler that performs auto-skeletonization based on provided pragma annotations. We describe the compiler toolchain, validate the generated skeletons, and show scalability of the generated simulation models beyond 100 K endpoints for example MPI applications. Overall, we assert that our proposed auto-skeletonization approach and the flexible skeletons it produces can be an important tool in realizing balanced exascale interconnect designs.
Proceedings of ISMA 2018 - International Conference on Noise and Vibration Engineering and USD 2018 - International Conference on Uncertainty in Structural Dynamics
Advanced friction models are often mathematically defined as nonlinear differential equations or complicated algebraic operations acting in single degree-of-freedom systems; however, such simplified conditions are not relevant to most design applications. As a result, current designers of practical structures typically simplify friction modeling to classical, Coulomb-like descriptions. In order to be viable for design purposes, friction models must be applicable to realistic structures and available in standard commercial codes. The goal of this work is to implement several different friction models into the commercial code, Abaqus, as user-defined contact models and to explore their properties in a dynamic simulation. A verification problem of interest to the joints community is utilized to evaluate efficacy. Several output quantities of the model will be presented and discussed, including frictional energy dissipation, amplitude, and frequency. The selected results are comparable to commonly observed experimental phenomena in mechanics of jointed structures.
This paper details the development and validation of a numerical model of the Wavestar wave energy converter (WEC) developed in WEC-Sim. This numerical model was developed in support of the WEC Control Competition (WECCCOMP), a competition with the objective of maximizing WEC performance over costs through innovative control strategies. WECCCOMP has two stages: numerical implementation of control strategies, and experimental implementation. The work presented in this paper is for support of the numerical implementation, where contestants are provided a WEC-Sim model of the 1:20 scale Wavestar device to develop their control algorithms. This paper details the development of the numerical model in WEC-Sim and of its validation through comparison to experimental data.
Modern supercomputers are shared among thousands of users running a variety of applications. Knowing which applications are running in the system can bring substantial benefits: knowledge of applications that intensively use shared resources can aid scheduling; unwanted applications such as cryptocurrency mining or password cracking can be blocked; system architects can make design decisions based on system usage. However, identifying applications on supercomputers is challenging because applications are executed using esoteric scripts along with binaries that are compiled and named by users. This paper introduces a novel technique to identify applications running on supercomputers. Our technique, Taxonomist, is based on the empirical evidence that applications have different and characteristic resource utilization patterns. Taxonomist uses machine learning to classify known applications and also detect unknown applications. We test our technique with a variety of benchmarks and cryptocurrency miners, and also with applications that users of a production supercomputer ran during a 6 month period. We show that our technique achieves nearly perfect classification for this challenging data set.
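A hedged sketch of the general pattern described above, classifying jobs from resource-utilization features and labeling low-confidence predictions as unknown, is shown below using scikit-learn. The synthetic features, class structure, and confidence threshold are illustrative assumptions and not the Taxonomist pipeline itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Synthetic stand-in for per-job resource-utilization features
# (e.g., summary statistics of CPU, memory, and network counters).
n_jobs, n_features = 600, 12
X = rng.normal(size=(n_jobs, n_features))
y = rng.integers(0, 3, size=n_jobs)            # three "known" applications
X += y[:, None] * 0.8                          # give each class a signature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Classify known applications; label low-confidence jobs as "unknown".
proba = clf.predict_proba(X_te)
labels = np.where(proba.max(axis=1) >= 0.6, proba.argmax(axis=1), -1)  # -1 = unknown
confident = labels >= 0
print("accuracy on confident predictions:",
      (labels[confident] == y_te[confident]).mean())
```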
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Klinkenberg, Jannis; Samfass, Philipp; Terboven, Christian; Duran, Alejandro; Klemm, Michael; Teruel, Xavier; Mateo, Sergi; Olivier, Stephen L.; Muller, Matthias S.
In modern shared-memory NUMA systems which typically consist of two or more multi-core processor packages with local memory, affinity of data to computation is crucial for achieving high performance with an OpenMP program. OpenMP* 3.0 introduced support for task-parallel programs in 2008 and has continued to extend its applicability and expressiveness. However, the ability to support data affinity of tasks is missing. In this paper, we investigate several approaches for task-to-data affinity that combine locality-aware task distribution and task stealing. We introduce the task affinity clause that will be part of OpenMP 5.0 and provide the reasoning behind its design. Evaluation with our experimental implementation in the LLVM OpenMP runtime shows that task affinity improves execution performance up to 4.5x on an 8-socket NUMA machine and significantly reduces runtime variability of OpenMP tasks. Our results demonstrate that a variety of applications can benefit from task affinity and that the presented clause is closing the gap of task-to-data affinity in OpenMP 5.0.
Experimental hydrology data from the Mizunami Underground Research Laboratory in Central Japan have been used to develop a site-scale fracture model and a flow model for the study area. The discrete fracture network model was upscaled to a continuum model to be used in flow simulations. A flow model centered on the research tunnel was developed using a highly refined regular mesh. In this study, the development and utilization of the model are presented. The modeling analysis used permeability and porosity fields from the discrete fracture network model as well as a homogeneous model using fixed values of permeability and porosity. The simulations were designed to reproduce the hydrology of the modeling area and to predict the inflow of water into the research tunnel during excavation. Modeling results were compared with the project hydrology data. Successful matching of the experimental data was obtained for simulations based on the discrete fracture network model.
Developing sound methods to evaluate the risk of seabed mobility and alteration of sediment transport patterns in near-shore coastal regions due to the presence of Offshore Wind (OW) infrastructure is critical to project planning, permitting, and operations. OW systems may include seafloor foundations, cabling, floating structures with gravity anchors, or a combination of several of these systems. Installation of these structures may affect the integrity of the sediment bed, thus affecting seabed dynamics and stability. It is therefore necessary to evaluate hydrodynamics and seabed dynamics and the effects of OW subsea foundations and cables on sediment transport. A methodology is presented here to map a site's sediment (seabed) stability, which can in turn support the evaluation of the potential for these processes to affect OW deployments and the local ecology. Sediment stability risk maps are developed for a site offshore of Central Oregon. A combination of geophysical site characterization, metocean analysis, and numerical modeling is used to develop a quantitative assessment of local scour and overall seabed stability. The findings generally show that the presence of structures reduces sediment transport in the lee area of the array by altering current and wave fields. The results illustrate how the overall regional patterns of currents and waves influence local scour near pilings and cables.
Architecture simulation can aid in predicting and understanding application performance, particularly for proposed hardware or large system designs that do not exist. In network design studies for high-performance computing, most simulators focus on the dominant message passing (MPI) model. Currently, many simulators build and maintain their own simulator-specific implementations of MPI. This approach has several drawbacks. Rather than reusing an existing MPI library, simulator developers must implement all semantics, collectives, and protocols. Additionally, alternative runtimes like GASNet cannot be simulated without again building a simulator-specific version. It would be far more sustainable and flexible to maintain lower-level layers like uGNI or IB-verbs and reuse the production runtime code. Directly building and running production communication runtimes inside a simulator poses technical challenges, however. We discuss these challenges and show how they are overcome via the macroscale components for the Structural Simulation Toolkit (SST), leveraging a basic source-to-source tool to automatically adapt production code for simulation. SST is able to encapsulate and virtualize thousands of MPI ranks in a single simulator process, providing a “supercomputer in a laptop” environment. We demonstrate the approach for the production GASNet runtime over uGNI running inside SST. We then discuss the capabilities enabled, including investigating performance with tunable delays, deterministic debugging of race conditions, and distributed debugging with serial debuggers.
A new time domain electric field integral equation is proposed to solve low frequency problems. This new formulation uses the current and charge densities as unknowns, with a form of the continuity equation that is weighted by a Green's function as a second constraining equation. This equation can be derived from a scalar potential equivalence principle integral equation, which is in contrast to the traditional strong form of the continuity equation that has been used in an ad-hoc manner in the augmented EFIE. Numerical results demonstrate the improved stability of this approach, as well as the accuracy at low frequencies.
Femtosecond coherent anti-Stokes Raman scattering thermometry in a solid-fuel propellant flame is demonstrated by tuning the lasers to the rovibrational Raman transitions of diatomic hydrogen (H2).
The surface topology of a solid subjected to destructive environments is often difficult to quantify. In thermal environments, the size and shape of the solid changes as it pyrolyzes, ablates, warps, or chars. Quantitative descriptions of such responses are valuable for data reporting and model validation. In this work, a three-dimensional scanner is evaluated for non-destructive material analysis. The scans spatially resolve the response of materials to a high-heat-flux environment. To account for the effect of distortion induced in thin materials, back-side scans of the sample are used to characterize the displacement of the bulk material. Data spanning the area of the sample, rather than using a net or average quantity, enhances the evaluation of the crater formed by the incident flux. The 3D reconstruction of the sample also provides the ability to perform volumetric calculations. The data obtained from this methodology may be useful for characterizing materials exposed to a variety of destructive environments.
Process-induced residual stresses occur in composite structures composed of dissimilar materials. As these residual stresses could result in fracture, their consideration when designing composite parts is necessary. However, the experimental determination of residual stresses in prototype parts can be time and cost prohibitive. Alternatively, it is possible for computational tools to predict potential residual stresses. Therefore, a process modeling methodology was developed and implemented into Sandia National Laboratories' SIERRA/Solid Mechanics code. This method requires the specification of many model parameters to form accurate predictions. These parameters, which are related to the mechanical and thermal behaviors of the modeled composite material, can be determined experimentally, but at a potentially prohibitive cost. Furthermore, depending upon a composite part's specific geometric and manufacturing process details, it is possible that certain model parameters may have an insignificant effect on the simulated prediction. Therefore, to streamline the material characterization process, formal parameter sensitivity studies can be applied to determine which of the required input parameters are truly relevant to the simulated prediction. Then, only those model parameters found to be critical will require rigorous experimental characterization. Numerous sensitivity analysis methods exist in the literature, each offering specific strengths and weaknesses. Therefore, the objective of this study is to compare the performance of several accepted sensitivity analysis methods during the simulation of a bi-material composite strip's manufacturing process. The examined sensitivity analysis methods include both simple techniques, such as Monte Carlo and Latin Hypercube sampling, as well as more sophisticated approaches, such as the determination of Sobol indices via a polynomial chaos expansion or a Gaussian process. The relative computational cost and critical parameter list are assessed for each of the examined methods, and conclusions are drawn regarding the ideal sensitivity analysis approach for future residual stress investigations.
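For readers unfamiliar with variance-based sensitivity analysis, the hedged sketch below computes first-order and total Sobol indices for a toy stand-in model using the SALib package (assumed available); the paper's Sierra-based process model and parameter choices are replaced by an illustrative algebraic function with made-up parameter names and bounds.

```python
import numpy as np
from SALib.sample import saltelli     # assumes the SALib package is installed
from SALib.analyze import sobol

# Toy stand-in for the process model: a residual "stress" from three inputs
# (e.g., a modulus, a thermal-expansion mismatch, and a cure-shrinkage term).
def model(x):
    E, dalpha, shrink = x
    return E * dalpha * 150.0 + 0.3 * E * shrink + 5.0 * dalpha

problem = {
    "num_vars": 3,
    "names": ["E", "dalpha", "shrink"],
    "bounds": [[2.0, 4.0], [1.0e-5, 3.0e-5], [0.001, 0.01]],
}

# Saltelli sampling plus Sobol analysis: a variance-based importance ranking
# that identifies which parameters merit rigorous experimental characterization.
X = saltelli.sample(problem, 1024)
Y = np.array([model(x) for x in X])
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order = {s1:.2f}, total = {st:.2f}")
```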
Containment bypass scenarios in nuclear power plants can lead to large early release of radionuclides. A residual heat removal (RHR) system interfacing system loss of coolant accident (ISLOCA) has the potential to cause a hazardous environment in the auxiliary building, a loss of coolant from the primary system, a pathway for early release of radionuclides, and the failure of a system important to safely shutting down the plant. Prevention of this accident sequence relies on active systems that may be vulnerable to cyber failures in new or retrofitted plants with digital instrumentation and control systems. RHR ISLOCA in a hypothetical pressurized water reactor is analyzed in a dynamic framework to evaluate the time-dependent effects of various uncertainties on the state of the nuclear fuel, the auxiliary building environment, and the release of radionuclides. The ADAPT dynamic event tree code is used to drive both the MELCOR severe accident analysis code and the RADTRAD dose calculation code to track the progression of the accident from the initiating event to its end states. The resulting data set is then mined for insights into key events and their impacts on the final state of the plant and radionuclide releases.
Sodium Fast Reactors (SFRs) have an extensive operational history that can be leveraged to accelerate the licensing process for modern designs. Sandia National Laboratories (SNL) has recently reconstituted the United States SFR data from the Centralized Reliability Database Organization (CREDO) into a new modern database called the Sodium (Na) System Component Reliability Database (NaSCoRD). This new database is currently undergoing validation and usability testing to better understand the strengths and limitations of this historical data. The most common class of equipment found in the NaSCoRD database is valves. NaSCoRD contains a record of over 4,000 valves that have operated in EBR-II, FFTF, and test loops including those operated by Westinghouse and the Energy Technology Engineering Center. Valve failure events in NaSCoRD can be categorized by working fluid (e.g., sodium, water, gas), valve type (e.g., butterfly, check, throttle, block), failure mode (e.g., failure to open, failure to close, rupture), operating facility, operating temperature, or other user-defined categories. Sodium valve reliability estimates will be presented in comparison to estimates provided in historical studies. The impacts of EG&G Idaho’s suggested corrections and various prior distributions on these reliability estimates will also be presented.
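For readers unfamiliar with how prior distributions enter such reliability estimates, the following sketch shows a standard gamma-Poisson (conjugate) update for a valve failure rate; the counts, exposure time, and prior are invented for illustration and are not NaSCoRD data or the EG&G Idaho corrections.

```python
# Minimal sketch of how a prior distribution shifts a valve failure-rate
# estimate; the counts and prior below are hypothetical.
from scipy import stats

failures = 3            # hypothetical observed failure events
exposure_hours = 2.0e6  # hypothetical cumulative valve operating hours

# Gamma(alpha, beta) prior on the failure rate (conjugate to Poisson counts);
# alpha=0.5, beta=0 is a weakly informative Jeffreys-like choice.
alpha0, beta0 = 0.5, 0.0

# Posterior is Gamma(alpha0 + failures, beta0 + exposure)
alpha_post = alpha0 + failures
beta_post = beta0 + exposure_hours

posterior = stats.gamma(a=alpha_post, scale=1.0 / beta_post)
mean = posterior.mean()
lo, hi = posterior.ppf([0.05, 0.95])
print(f"posterior mean rate = {mean:.2e} /hr, 90% interval = ({lo:.2e}, {hi:.2e})")
```

Swapping in different prior parameters makes the sensitivity of the estimate to the prior explicit, which is the kind of comparison the database study describes.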
Metamaterials provide a means to tailor the spectral response of a surface. Given the periodic nature of the metamaterial, proper design of the unit cell requires intimate knowledge of the parameter space for each design variable. We present a detailed study of the parameter space surrounding vertical split-ring resonators and planar split-ring resonators, and demonstrate widening of the perfect absorption bandwidth based on the understanding of its parameter space.
In this work, we describe new capabilities for the Pyomo.GDP modeling environment, moving beyond classical reformulation approaches to include non-standard reformulations and a new logic-based solver, GDPopt. Generalized Disjunctive Programs (GDPs) address optimization problems involving both discrete and continuous decision variables. For difficult problems, advanced reformulations such as the disjunctive “basic step” to intersect multiple disjunctions or the use of procedural reformulations may be necessary. Complex nonlinear GDP models may also be tackled using logic-based outer approximation. These expanded capabilities highlight the flexibility that Pyomo.GDP offers modelers in applying novel strategies to solve difficult optimization problems.
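A minimal sketch of a generalized disjunctive program written with Pyomo.GDP, followed by its classical big-M reformulation, is shown below; the toy model and data are invented, and the commented solver calls assume a suitable MILP solver (or the logic-based GDPopt solver) is installed.

```python
# Toy generalized disjunctive program in Pyomo.GDP: choose between two
# operating regimes, then reformulate to a MILP with the big-M transformation.
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           TransformationFactory, minimize)
from pyomo.gdp import Disjunct, Disjunction

m = ConcreteModel()
m.x = Var(bounds=(0, 10))
m.y = Var(bounds=(0, 10))

# Two alternative operating regimes expressed as disjuncts
m.regime_a = Disjunct()
m.regime_a.c = Constraint(expr=m.x + m.y >= 6)

m.regime_b = Disjunct()
m.regime_b.c = Constraint(expr=m.x - m.y >= 2)

# Exactly one disjunct must hold
m.choice = Disjunction(expr=[m.regime_a, m.regime_b])

m.obj = Objective(expr=m.x + 2 * m.y, sense=minimize)

# Classical reformulation to a MILP via big-M ('gdp.hull' is another option)
TransformationFactory('gdp.bigm').apply_to(m)
# SolverFactory('glpk').solve(m)      # any MILP solver
# Alternatively, GDPopt can solve the untransformed GDP with a logic-based
# algorithm (exact options vary by Pyomo version).
```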
Torque feedback control and series elastic actuators are widely used to enable compact, highly-geared electric motors to provide low and controllable mechanical impedance. While these approaches provide certain benefits for control, their impact on system energy consumption is not widely understood. This paper presents a model for examining the energy consumption of drivetrains implementing various target dynamic behaviors in the presence of gear reductions and torque feedback. Analysis of this model reveals that under cyclical motions for many conditions, increasing the gear ratio results in greater energy loss. A similar model is presented for series elastic actuators and used to determine the energy consequences of various spring stiffness values. Both models enable the computation and optimization of power based on specific hardware manifestations, and illustrate how energy consumption sometimes defies conventional best-practices. Results of evaluating these two topologies as part of a drivetrain design optimization for two energy-efficient electrically driven humanoids are summarized. The model presented enables robot designers to predict the energy consequences of gearing and series elasticity for future robot designs, helping to avoid substantial energy sinks that may be inadvertently introduced if these issues are not properly analyzed.
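To make the gearing argument concrete, the toy model below integrates winding (Joule) losses over one sinusoidal trajectory cycle for several gear ratios; all parameter values are made up, and this is a simplified stand-in for the drivetrain model developed in the paper, not that model itself.

```python
# Toy model of winding (Joule) losses for a geared motor driving an inertial
# plus viscous load through a sinusoidal trajectory. All values hypothetical.
import numpy as np

kt = 0.05     # motor torque constant, N*m/A (hypothetical)
R = 0.5       # winding resistance, ohm (hypothetical)
Jm = 1.0e-4   # rotor inertia, kg*m^2 (hypothetical)
Jl = 0.05     # load inertia, kg*m^2 (hypothetical)
b = 0.2       # viscous load coefficient, N*m*s (hypothetical)
omega = 10.0  # trajectory frequency, rad/s

t = np.linspace(0.0, 2.0 * np.pi / omega, 2000)
qd = 0.5 * omega * np.cos(omega * t)          # joint velocity
qdd = -0.5 * omega**2 * np.sin(omega * t)     # joint acceleration
tau_load = Jl * qdd + b * qd                  # torque required at the joint

def joule_loss_per_cycle(N):
    """Winding energy loss over one trajectory cycle for gear ratio N."""
    # Motor torque = reflected load torque + torque to accelerate the rotor
    tau_motor = tau_load / N + Jm * N * qdd
    current = tau_motor / kt
    return np.trapz(R * current**2, t)

for N in (5, 10, 20, 50, 100, 200):
    print(f"gear ratio {N:4d}: Joule loss per cycle ~ {joule_loss_per_cycle(N):7.2f} J")
```

For this toy load the loss is minimized near N = sqrt(Jl/Jm) and grows for larger reductions, which is the kind of counterintuitive energy result the paper's model is intended to expose.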
New modelling and optimization platforms have enabled the creation of frameworks for solution strategies that are based on solving sequences of dynamic optimization problems. This study demonstrates the application of the Python-based Pyomo platform as a basis for formulating and solving Nonlinear Model Predictive Control (NMPC) and Moving Horizon Estimation (MHE) problems, which enables fast on-line computations through large-scale nonlinear optimization and Nonlinear Programming (NLP) sensitivity. We describe these underlying approaches and sensitivity computations, and showcase the implementation of the framework with large DAE case studies including tray-by-tray distillation models and Bubbling Fluidized Bed Reactors (BFB).
A Butler-matrix-inspired beam-forming network has been developed to provide phasing for a switched-beam 2×2 element antenna array. The network uses an arrangement of double-box quadrature hybrids to achieve wide instantaneous bandwidth in a small, planar form factor. The planar feed structure has been designed to integrate with an array aperture to form a low-profile array stackup.
An ENUN 32P cask supplied by Equipos Nucleares S.A. (ENSA) was transported 9,600 miles by road, sea, and rail in 2017 in order to collect shock and vibration data on the cask system and surrogate spent fuel assemblies within the cask. The task of examining 101,857 ASCII data files – 6.002 terabytes of data (this includes binary and ASCII files) – has begun. Some results of preliminary analyses are presented in this paper. A total of seventy-seven accelerometers and strain gauges were attached by Sandia National Laboratories (SNL) to three surrogate spent fuel assemblies, the cask basket, the cask body, the transport cradle, and the transport platforms. The assemblies were provided by SNL, Empresa Nacional de Residuos Radiactivos, S.A. (ENRESA), and a collaboration of Korean institutions. The cask system was first subjected to cask handling operations at the ENSA facility. The cask was then transported by heavy-haul truck in northern Spain and shipped from Spain to Belgium and subsequently to Baltimore on two roll-on/roll-off ships. From Baltimore, the cask was transported by rail using a 12-axle railcar to the American Association of Railroads’ Transportation Technology Center, Inc. (TTCI) near Pueblo, Colorado, where a series of special rail tests were performed. Data were continuously collected during this entire sequence of multi-modal transportation events. (We did not collect data on the transfer between modes of transportation.) Of particular interest – indeed the original motivation for these tests – are the strains measured on the zirconium-alloy tubes in the assemblies. The strains for each of the transport modes are compared to the yield strength of irradiated Zircaloy to illustrate the margin against rod failure during normal conditions of transport. The accelerometer data provide essential comparisons of the accelerations on the different components of the cask system, showing both amplification and attenuation as vibrations pass from the transport platforms through the cradle and cask to the interior of the cask. These data are essential for modeling cask systems. This paper concentrates on analyses of the testing of the cask on a 12-axle railcar at TTCI.
In the summer of 2017, Sandia National Laboratories (SNL) conducted its third fracture challenge (i.e., the Third Sandia Fracture Challenge or SFC3). The challenge, which was open to the public, asked participants to predict, without foreknowledge of the outcome, the fracture response of an additively manufactured tensile test coupon of moderate geometric complexity when loaded to failure. This paper outlines the approach taken by our team, one of the SNL teams that participated in the challenge, to make a prediction. To do so, we employed a traditional finite element approach coupled with a continuum damage mechanics constitutive model. Constitutive model parameters were determined by calibrating the model response to the provided longitudinal and transverse tensile test coupon data. Comparison of model predictions with the challenge coupon test results is presented and general observations gleaned from the exercise are provided.
Fiber reinforced composites are increasingly used in advanced applications due to advantageous qualities including a high strength-to-weight ratio. The ability to tailor composite structures to meet specific performance criteria is particularly desirable. In practice, designs must often balance multiple objectives with conflicting behavior. The objectives of this work were to optimize the lamina orientations of a three-ply carbon fiber reinforced composite structure for the coupled solid mechanics and dynamics considerations of minimizing the maximum principal stress while maximizing the fundamental frequency. Two approaches were investigated: Pareto set optimization (PSO) and a multi-objective genetic algorithm (MOGA). In PSO, a single objective function is constructed as a weighted sum of multiple objective terms. Multiple weighting sets are evaluated to determine a Pareto set of solutions. MOGA mimics evolutionary principles, where the best design points populate subsequent generations. Instead of weight factors, MOGA uses a domination count that ranks population members. Results showed both methods converged to solutions along the same Pareto front. The PSO method required fewer function evaluations, but provided many fewer final data points. Beyond a certain threshold, MOGA provides more solutions with fewer calculations. The PSO method requires more user intervention, which may introduce bias, but can largely be run in parallel. In contrast, MOGA generations are evaluated in series. The Pareto front showed the trend of increasing frequency with increasing stress. At the low stress and frequency extreme, the stacking sequence tended toward (45°/90°/45°) with the maximum principal stress located in the inner ply in the hoop direction. At high stress and frequency, the stacking sequences (90°/∗/90°) indicated that the middle ply orientation was less significant. A mesh convergence study and dynamic validation experiments gave confidence to the computational model. Future work will include an uncertainty quantification about selected solutions. The final selected solution will be fabricated and experimental validation testing will be conducted.
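The weighted-sum scalarization used in the PSO-style approach can be sketched generically as below; the two analytic objectives are hypothetical stand-ins for the much more expensive finite element stress and frequency evaluations, and the ply-angle bounds are illustrative only.

```python
# Generic weighted-sum sweep over two conflicting objectives to trace an
# approximate Pareto front. The analytic objectives are made-up surrogates,
# not the composite finite element model from the study.
import numpy as np
from scipy.optimize import minimize

def f_stress(theta):      # hypothetical "max principal stress" surrogate
    return (theta[0] - 45.0)**2 + 0.5 * (theta[1] - 90.0)**2

def f_neg_freq(theta):    # hypothetical "-fundamental frequency" surrogate
    return -(0.02 * theta[0]**2 + 0.01 * theta[1]**2)

pareto = []
for w in np.linspace(0.05, 0.95, 10):            # one scalarized problem per weight
    combined = lambda th, w=w: w * f_stress(th) + (1.0 - w) * f_neg_freq(th)
    res = minimize(combined, x0=[30.0, 60.0], bounds=[(0.0, 90.0), (0.0, 90.0)])
    pareto.append((w, res.x, f_stress(res.x), -f_neg_freq(res.x)))

for w, th, s, f in pareto:
    print(f"w={w:4.2f}  plies=({th[0]:5.1f},{th[1]:5.1f})  stress={s:8.1f}  freq={f:7.1f}")
```

Each weight produces one candidate point; the scalarized problems are independent, which is why this style of sweep parallelizes easily while a genetic algorithm advances generation by generation.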
A new approach to denoising Time-Resolved Particle Image Velocimetry data is proposed by incorporating measurement uncertainties estimated using the correlation statistics method. The denoising algorithm of Oxlade et al (Experiments in Fluids, 2012) has been modified to add the frequency dependence of PIV noise by obtaining it from the uncertainty estimates, including the correlated term between velocity and uncertainty that is zero only if white noise is assumed. Although the present approach was only partially effective in denoising the 400-kHz “postage-stamp PIV” data, important and novel insights were obtained into the behavior of PIV uncertainty. The belief that PIV noise is white noise has been shown to be inaccurate, though it may serve as a reasonable approximation for measurements with a high dynamic range. Noise spectra take a similar shape to the velocity spectra because increased velocity fluctuations correspond to higher shear and therefore increased uncertainty. Coherence functions show that correlation between velocity fluctuations and uncertainty is strongest at low and mid frequencies, tapering to a much weaker correlation at high frequencies where turbulent scales are small with lower shear magnitudes.
Optimization problems under uncertainty involve making decisions without the full knowledge of the impact the decisions will have and before all the facts relevant to those decisions are known. These problems are common, for example, in process synthesis and design, planning and scheduling, supply chain management, and generation and distribution of electric power. The sources of uncertainty in optimization problems fall into two broad categories: endogenous and exogenous. Exogenous uncertain parameters are realized at a known stage (e.g., time period or decision point) in the problem irrespective of the values of the decision variables. For example, demand is generally considered to be independent of any capacity expansion decisions in process industries, and hence, is regarded as an exogenous uncertain parameter. In contrast, decisions impact endogenous uncertain parameters. The impact can either be in the resolution or in the distribution of the uncertain parameter. The realized values of a Type-I endogenous uncertain parameter are affected by the decisions. An example of this type of uncertainty would be a facility protection problem, where the likelihood of a facility failing to deliver goods or services after a disruptive event depends on the level of resources allocated as protection to that facility. On the other hand, only the realization times of Type-II endogenous uncertain parameters are affected by decisions. For example, in a clinical trial planning problem, whether a trial is successful is only realized after the trial has been completed, and that outcome is not affected by when the trial is started. There are numerous approaches to modelling and solving optimization problems with exogenous and/or endogenous uncertainty, including (adjustable) robust optimization, (approximate) dynamic programming, model predictive control, and stochastic programming. Stochastic programming is a particularly attractive approach, as there is a straightforward translation from the deterministic model to the stochastic equivalent. The challenge with stochastic programming arises through the rapid, sometimes exponential, growth in the program size as we sample the uncertainty space or increase the number of recourse stages. In this talk, we will give an overview of our research activities developing practical stochastic programming approaches to problems with exogenous and/or endogenous uncertainty. We will highlight several examples from power systems planning and operations, process modelling, synthesis and design optimization, artificial lift infrastructure planning for shale gas production, and clinical trial planning. We will begin by discussing the straightforward case of exogenous uncertainty. In this situation, the stochastic program can be expressed completely by a deterministic model, a scenario tree, and the scenario-specific parameterizations of the deterministic model. Beginning with the deterministic model, modelers create instances of the deterministic model for each scenario using the scenario-specific data. Coupling the scenario models occurs through the addition of nonanticipativity constraints, equating the stage decision variables across all scenarios that pass through the same stage node in the scenario tree.
Modelling tools like PySP (Watson, 2012) greatly simplify the process of composing large stochastic programs by beginning either with an abstract representation of the deterministic model written in Pyomo (Hart et al., 2017) and scenario data, or a function that will return the deterministic Pyomo model for a specific scenario. PySP can automatically create the extensive form (deterministic equivalent) model from a general representation of the scenario tree. The challenge with large-scale stochastic programs with exogenous uncertainty arises through managing the growth of the problem size. Fortunately, there are several well-known approaches to decomposing the problem, both stage-wise (e.g., Benders’ decomposition) and scenario-based (e.g., Lagrangian relaxation or Progressive Hedging), enabling the direct solution of stochastic programs with hundreds or thousands of scenarios. We will then discuss developments in modelling and solving stochastic programs with endogenous uncertainty. These problems are significantly more challenging to both pose and solve, due to the exponential growth in scenarios required to cover the decision-dependent uncertainties relative to the number of stages in the problem. In this situation, standardized frameworks for expressing stochastic programs do not exist, requiring a modeler to explicitly generate the representations and nonanticipativity constraints. Further, the size of the resulting scenario space (frequently exceeding millions of scenarios) precludes the direct solution of the resulting program. In this case, numerous decomposition algorithms and heuristics have been developed (e.g., Lagrangean decomposition-based algorithms (Tarhan et al., 2013) or knapsack-based decomposition algorithms (Christian and Cremaschi, 2015)).
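A tiny extensive-form example of the nonanticipativity coupling described above, written directly in Pyomo, is sketched below; the scenario data and costs are invented, and in practice tools like PySP construct this form automatically from the scenario tree.

```python
# Tiny extensive-form two-stage stochastic program in Pyomo illustrating
# explicit nonanticipativity constraints. Scenario data are made up.
from pyomo.environ import (ConcreteModel, Set, Var, Constraint, Objective,
                           NonNegativeReals, minimize)

# scenario name -> (demand, probability)
scenarios = {"low": (80.0, 1.0 / 3.0), "med": (100.0, 1.0 / 3.0), "high": (130.0, 1.0 / 3.0)}

m = ConcreteModel()
m.S = Set(initialize=list(scenarios.keys()))

# First-stage decision (capacity to build), copied per scenario
m.capacity = Var(m.S, within=NonNegativeReals)
# Second-stage recourse: shortfall purchased on a spot market
m.shortfall = Var(m.S, within=NonNegativeReals)

def demand_rule(m, s):
    return m.capacity[s] + m.shortfall[s] >= scenarios[s][0]
m.meet_demand = Constraint(m.S, rule=demand_rule)

# Nonanticipativity: the first-stage decision must agree across scenarios
ref = next(iter(scenarios))
def nonant_rule(m, s):
    return Constraint.Skip if s == ref else m.capacity[s] == m.capacity[ref]
m.nonanticipativity = Constraint(m.S, rule=nonant_rule)

build_cost, spot_cost = 10.0, 25.0
m.obj = Objective(
    expr=sum(scenarios[s][1] * (build_cost * m.capacity[s] + spot_cost * m.shortfall[s])
             for s in m.S),
    sense=minimize)
# SolverFactory('glpk').solve(m)   # any LP solver
```

The same pattern, one model copy per scenario plus equality constraints on the shared stage variables, is what decomposition methods such as Progressive Hedging relax and re-enforce iteratively.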
The Separation and Safeguards Performance Model (SSPM) uses MATLAB/Simulink to provide a tool for safeguards analysis of bulk handling nuclear processing facilities. Models of aqueous and electrochemical reprocessing, enrichment, fuel fabrication, and molten salt reactor facilities have been developed to date. These models are used for designing the overall safeguards system, examining new safeguards approaches, virtually testing new measurement instrumentation, and analyzing diversion scenarios. The key metrics generated by the models include overall measurement uncertainty and detection probability for various material diversion or facility misuse scenarios. Safeguards modeling allows for rapid and cost-effective analysis for Safeguards by Design. The models are currently being used to explore alternative safeguards approaches, including more reliance on process monitoring data to reduce the need for destructive analysis that adds considerable burden to international safeguards. Machine learning techniques are being applied, but these techniques need large amounts of data for training and testing the algorithms. The SSPM can provide that training data. This paper will describe the SSPM and its use for applying both traditional nuclear material accountancy and newer machine learning options.
We describe the three electron-transport algorithms that have been implemented in the ITS Monte Carlo codes. While the underlying cross-section data is similar, each uses a fundamentally unique method, which at a high level are best characterized as condensed history, multigroup, and single scatter. Through a set of comparisons with experimental data and some comparisons of purely numerical results, we discuss various attributes of each of the algorithms and show some of the defects that can affect results.
Anomaly detection is an important problem in various fields of complex systems research including image processing, data analysis, physical security and cybersecurity. In image processing, it is used for removing noise while preserving image quality, and in data analysis, physical security and cybersecurity, it is used to find interesting data points, objects or events in a vast sea of information. Anomaly detection will continue to be an important problem in domains intersecting with “Big Data”. In this paper we provide a novel algorithm for anomaly detection that uses phase-coded spiking neurons as basic computational elements.
Direct kinetic and product studies of Criegee Intermediates reveal insertion and addition mechanisms for multiple co-reactant species. Observation of these highly oxygenated, low-volatility products indicates the potential role of Criegee Intermediate chemistry in molecular weight growth and, subsequently, secondary organic aerosol formation.
Hybrid composites allow designers to develop efficient structures, which strategically exploit a material's strengths while mitigating possible weaknesses. However, elevated temperature curing processes and exposure to thermally-extreme service environments lead to the development of residual stresses. These stresses form at the hybrid composite's bi-material interfaces, significantly impacting the stress state at the crack tip of any pre-existing flaw within the structure and affecting the probability that small defects will grow into large-scale delaminations. Therefore, in this study, a carbon fiber reinforced composite (CFRP) is co-cured with a glass fiber reinforced composite (GFRP), and the mixed-mode fracture toughness is measured across a wide temperature range (-54°C to +71°C). Upon completion of the testing, the measured results and observations are used to develop high-fidelity finite element models simulating both the formation of residual stresses throughout the composite manufacturing process, as well as the mixed-mode testing of the hybrid composite. The stress fields predicted through simulation assist in understanding the trends observed during the completed experiments. Furthermore, the modeled predictions indicate that failure to account for residual stress effects during the analysis of composite structures could lead to non-conservative structural designs and premature failure.
Stone, Daniel; Au, Kendrew; Sime, Samantha; Medeiros, Diogo J.; Blitz, Mark; Seakins, Paul W.; Decker, Zachary; Sheps, Leonid S.
Decomposition kinetics of stabilised CH2OO and CD2OO Criegee intermediates have been investigated as a function of temperature (450-650 K) and pressure (2-350 Torr) using flash photolysis coupled with time-resolved cavity-enhanced broadband UV absorption spectroscopy. Decomposition of CD2OO was observed to be faster than CH2OO under equivalent conditions. Production of OH radicals following CH2OO decomposition was also monitored using flash photolysis with laser-induced fluorescence (LIF), with results indicating direct production of OH in the v = 0 and v = 1 states in low yields. Master equation calculations performed using the Master Equation Solver for Multi-Energy well Reactions (MESMER) enabled fitting of the barriers for the decomposition of CH2OO and CD2OO to the experimental data. Parameterisations of the decomposition rate coefficients, calculated by MESMER, are provided for use in atmospheric models and implications of the results are discussed. For CH2OO, the MESMER fits require an increase in the calculated barrier height from 78.2 kJ mol^-1 to 81.8 kJ mol^-1 using a temperature-dependent exponential down model for collisional energy transfer with ⟨ΔE⟩_down = 32.6 (T/298 K)^1.7 cm^-1 in He. The low- and high-pressure limit rate coefficients are k1,0 = 3.2 × 10^-4 (T/298)^-5.81 exp(-12770/T) cm^3 s^-1 and k1,∞ = 1.4 × 10^13 (T/298)^0.06 exp(-10010/T) s^-1, with median uncertainty of ∼12% over the range of experimental conditions used here. Extrapolation to atmospheric conditions yields k1(298 K, 760 Torr) = 1.1 (+1.5/-1.1) × 10^-3 s^-1. For CD2OO, MESMER calculations result in ⟨ΔE⟩_down = 39.6 (T/298 K)^1.3 cm^-1 in He and a small decrease in the calculated barrier to decomposition from 81.0 kJ mol^-1 to 80.1 kJ mol^-1. The fitted rate coefficients for CD2OO are k2,0 = 5.2 × 10^-5 (T/298)^-5.28 exp(-11610/T) cm^3 s^-1 and k2,∞ = 1.2 × 10^13 (T/298)^0.06 exp(-9800/T) s^-1, with overall error of ∼6% over the present range of temperature and pressure. The extrapolated k2(298 K, 760 Torr) = 5.5 (+9.2/-5.5) × 10^-3 s^-1. The master equation calculations for CH2OO indicate decomposition yields of 63.7% for H2 + CO2, 36.0% for H2O + CO and 0.3% for OH + HCO with no significant dependence on temperature between 400 and 1200 K or pressure between 1 and 3000 Torr.
Multi-site fatigue damage, hidden cracks in hard-to-reach locations, disbonded joints, erosion, impact, and corrosion are among the major flaws encountered in today's extensive fleet of aging aircraft. The use of in-situ sensors for real-time health monitoring of aircraft structures, coupled with remote interrogation, provides a viable option to overcome inspection impediments stemming from accessibility limitations, complex geometries, and the location and depth of hidden damage. Reliable Structural Health Monitoring (SHM) systems can automatically process data, assess structural condition, and signal the need for human intervention. Prevention of unexpected flaw growth and structural failure can be improved if on-board health monitoring systems are used to continuously assess structural integrity. Such systems can detect incipient damage before catastrophic failure occurs. Other advantages of on-board distributed sensor systems are that they can eliminate costly and potentially damaging disassembly, improve sensitivity through optimum placement of sensors, and decrease maintenance costs by eliminating more time-consuming manual inspections. This paper presents the results from successful SHM technology validation efforts that established the performance of sensor systems for aircraft fatigue crack detection. Validation tasks were designed to address the SHM equipment, the health monitoring task, the resolution required, the sensor interrogation procedures, the conditions under which the monitoring will occur, and the potential inspector population. All factors that affect SHM sensitivity were included in this program, including flaw size, shape, orientation and location relative to the sensors, operational and environmental variables, and issues related to the presence of multiple flaws within a sensor network. This paper will also present the formal certification tasks, including formal adoption of SHM systems into aircraft manuals and the release of an Alternate Means of Compliance and a modified Service Bulletin to allow for routine use of SHM sensors on commercial aircraft. This program also established a regulatory approval process that includes FAR Part 25 (Transport Category Aircraft) and shows compliance with 25.571 (fatigue) and 25.1529 (Instructions for Continued Airworthiness).
Charged particles present some unique challenges for radiation transport codes. This is because charged particles have cross sections that are extremely forward peaked, become very large in the limit of small energy transfer, and are highly scattering, which causes slow convergence of the source iterations. The primary application of SCEPTRE is modeling radiation-driven electrical effects, so substantial effort has been invested in SCEPTRE for the efficient modeling of electron transport. This paper will summarize recent and ongoing activities involving the accurate deterministic-transport modeling of charged particles and methods implemented to improve iterative convergence.
Compressive sensing shows promise for sensors that collect fewer samples than required by traditional Shannon-Nyquist sampling theory. Recent sensor designs for hyperspectral imaging encode light using spectral modulators such as spatial light modulators, liquid crystal phase retarders, and Fabry-Perot resonators. The hyperspectral imager consists of a filter array followed by a detector array. It encodes spectra with fewer measurements than the number of bands in the signal, making reconstruction an underdetermined problem. We propose a reconstruction algorithm for hyperspectral images encoded through spectral modulators. Our approach constrains pixels to be similar to their neighbors in space and wavelength, as natural images tend to vary smoothly, and it increases robustness to noise. It combines L1 minimization in the wavelet domain to enforce sparsity and total variation in the image domain for smoothness. The alternating direction method of multipliers (ADMM) simplifies the optimization procedure. Our algorithm constrains encoded, compressed hyperspectral images to be smooth in their reconstruction, and we present simulation results to illustrate our technique. This work improves the reconstruction of hyperspectral images from encoded, multiplexed, and sparse measurements.
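As a simplified illustration of the ADMM machinery, the sketch below solves an L1-only compressive reconstruction on a random synthetic system; it omits the total-variation term and the wavelet transform used in the actual algorithm and is not the authors' implementation.

```python
# Simplified ADMM for an L1-regularized compressive reconstruction:
#   minimize 0.5*||A x - b||^2 + lam*||x||_1
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_meas, sparsity = 128, 48, 8

x_true = np.zeros(n_bands)
x_true[rng.choice(n_bands, sparsity, replace=False)] = rng.normal(size=sparsity)
A = rng.normal(size=(n_meas, n_bands)) / np.sqrt(n_meas)   # toy spectral modulator
b = A @ x_true + 0.01 * rng.normal(size=n_meas)

lam, rho = 0.01, 1.0
x = np.zeros(n_bands); z = np.zeros(n_bands); u = np.zeros(n_bands)
AtA, Atb = A.T @ A, A.T @ b
factor = np.linalg.inv(AtA + rho * np.eye(n_bands))        # small problem: direct inverse

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

for _ in range(200):
    x = factor @ (Atb + rho * (z - u))     # quadratic subproblem
    z = soft_threshold(x + u, lam / rho)   # L1 proximal step
    u = u + x - z                          # dual update

print("relative error:", np.linalg.norm(z - x_true) / np.linalg.norm(x_true))
```

Adding the total-variation term changes only the proximal step on the auxiliary variable; the overall splitting structure stays the same.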
One of the most efficient methods for supplying gaseous hydrogen over long distances is by using steel pipelines. However, steel pipelines exhibit accelerated fatigue crack growth rates in gaseous hydrogen relative to air. Despite conventional expectations that higher strength steels would be more susceptible to hydrogen embrittlement, recent testing on a variety of pipeline steel grades has shown a notable independence between strength and hydrogen-assisted fatigue crack growth rate. It is thought that microstructure may play a more defining role than strength in determining the hydrogen susceptibility. Among the many factors that could affect hydrogen-accelerated fatigue crack growth rates, this study was conducted with an emphasis on orientation dependence. The orientation dependence of toughness in hot rolled steels is a well-researched area; however, few studies have been conducted to reveal the relationship between fatigue crack growth rate in hydrogen and orientation. In this work, fatigue crack growth rates were measured in hydrogen for high strength steel pipeline with different orientations. A significant reduction in fatigue crack growth rates was measured when cracks propagated perpendicular to the rolling direction. A detailed microstructural investigation was performed in an effort to understand the orientation dependence of fatigue crack growth rate performance of pipeline steels in hydrogen environments.
Supercomputing hardware is undergoing a period of significant change. In order to cope with the rapid pace of hardware and, in many cases, programming model innovation, we have developed the Kokkos Programming Model – a C++-based abstraction that permits performance portability across diverse architectures. Our experience has shown that the abstractions developed can significantly frustrate debugging and profiling activities because they break expected code proximity and layout assumptions. In this paper we present the Kokkos Profiling interface, a lightweight suite of hooks to which debugging and profiling tools can attach to gain deep insights into the execution and data structure behaviors of parallel programs written to the Kokkos interface.
Sandia National Laboratories has developed a method that applies machine learning methods to high-energy spectral X-ray computed tomography data to identify material composition for every reconstructed voxel in the field-of-view. While initial experiments led by Koundinyan et al. demonstrated that supervised machine learning techniques perform well in identifying a variety of classes of materials, this work presents an unsupervised approach that differentiates isolated materials with highly similar properties and can be applied to spectral computed tomography data to identify materials more accurately than traditional approaches. Additionally, if regions of the spectrum for multiple voxels become unusable due to artifacts, this method can still reliably perform material identification. This enhanced capability can tremendously impact fields in security, industry, and medicine that leverage non-destructive evaluation for detection, verification, and validation applications.
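A generic unsupervised baseline for this kind of task is clustering of per-voxel spectra; the sketch below applies k-means to synthetic spectra and is illustrative only, not the specific method developed at Sandia.

```python
# Illustrative unsupervised grouping of per-voxel spectra with k-means.
# Synthetic data stand in for reconstructed spectral CT voxels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_voxels, n_channels, n_materials = 3000, 64, 3

# Synthetic "spectra": each material is a smooth template plus noise
templates = np.cumsum(rng.normal(size=(n_materials, n_channels)), axis=1)
labels_true = rng.integers(n_materials, size=n_voxels)
spectra = templates[labels_true] + 0.3 * rng.normal(size=(n_voxels, n_channels))

X = StandardScaler().fit_transform(spectra)
labels_pred = KMeans(n_clusters=n_materials, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels_pred))
```

Because the grouping uses the full spectral signature of each voxel, corrupted channels can simply be dropped from the feature vector, which mirrors the robustness-to-artifacts behavior described above.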
A new approach is presented for conducting and extrapolating combined environment (radiation plus thermal) accelerated aging experiments. The method involves a novel way of applying the time-temperature-dose rate (t-T-R) approach derived many years ago, which assumes that by simultaneously accelerating the thermal-initiation rate (from Arrhenius T-only analysis) and the radiation dose rate R by the same factor x, the overall degradation rate will increase by the factor x. The dose rate assumption implies that equal dose yields equal damage, which is equivalent to assuming the absence of dose-rate effects (DRE). A plot of inverse absolute temperature versus the log of the dose rate is used to indicate experimental conditions consistent with the model assumptions, which can be derived along lines encompassing so-called matched accelerated conditions (MAC lines). Aging trends taken along MAC lines for several elastomers confirm the underlying model assumption and therefore indicate, contrary to many past published results, that DRE are typically not present. In addition, the MAC approach easily accommodates the observation that substantial degradation chemistry changes occur as aging conditions transition across R-T space from radiation domination (high R, low T) to temperature domination (low R, high T). The MAC-line approach also suggests an avenue for gaining more confidence in extrapolations of accelerated MAC-line data to ambient aging conditions by using ultrasensitive oxygen consumption (UOC) measurements taken along the MAC line both under the accelerated conditions and at ambient. From UOC data generated under combined R-T conditions, this approach is tested and quantitatively confirmed for one of the materials. In analogy to the wear-out approach developed previously for thermo-oxidative aging, the MAC-line concept can also be used to predict the remaining lifetimes of samples extracted periodically from ambient environments.
Securing the rapidly expanding Internet of Things (IoT) is critical. Many of these "things" are vulnerable bare-metal embedded systems where the application executes directly on hardware without an operating system. Unfortunately, the integrity of current systems may be compromised by a single vulnerability, as recently shown by Google's P0 team against Broadcom's WiFi SoC. We present ACES (Automatic Compartments for Embedded Systems), an LLVM-based compiler that automatically infers and enforces inter-component isolation on bare-metal systems, thus applying the principle of least privilege. ACES takes a developer-specified compartmentalization policy and then automatically creates an instrumented binary that isolates compartments at runtime, while handling the hardware limitations of bare-metal embedded devices. We demonstrate ACES' ability to implement arbitrary compartmentalization policies by implementing three policies and comparing the compartment isolation, runtime overhead, and memory overhead. Our results show that ACES' compartments can have low runtime overheads (13% on our largest test application), while using 59% less Flash, and 84% less RAM than the Mbed μVisor, the current state-of-the-art compartmentalization technique for bare-metal systems. ACES' compartments protect the integrity of privileged data, provide control-flow integrity between compartments, and reduce exposure to ROP attacks by 94.3% compared to μVisor.
Advancements in machine learning (ML) and deep learning (DL) have enabled imaging systems to perform complex classification tasks, opening numerous problem domains to solutions driven by high quality imagers coupled with algorithmic elements. However, current ML and DL methods for target classification typically rely upon algorithms applied to data measured by traditional imagers. This design paradigm fails to enable the ML and DL algorithms to influence the sensing device itself, and treats the optimization of the sensor and algorithm as separate sequential elements. Additionally, this current paradigm narrowly investigates traditional images, and therefore traditional imaging hardware, as the primary means of data collection. We investigate alternative architectures for computational imaging systems optimized for specific classification tasks, such as digit classification. This involves a holistic approach to the design of the system from the imaging hardware to algorithms. Techniques to find optimal compressive representations of training data are discussed, and most-useful object-space information is evaluated. Methods to translate task-specific compressed data representations into non-traditional computational imaging hardware are described, followed by simulations of such imaging devices coupled with algorithmic classification using ML and DL techniques. Our approach allows for inexpensive, efficient sensing systems. Reduced storage and bandwidth are achievable as well since data representations are compressed measurements which is especially important for high data volume systems.
GaN is an attractive material for high-power electronics due to its wide bandgap and large breakdown field. Vertical-geometry devices are of interest due to their high blocking voltage and small form factor. One challenge for realizing complex vertical devices is the regrowth of low-leakage-current p-n junctions within selectively defined regions of the wafer. Presently, regrown p-n junctions exhibit higher leakage current than continuously grown p-n junctions, possibly due to impurity incorporation at the regrowth interfaces, which consist of c-plane and non-basal planes. Here, we study the interfacial impurity incorporation induced by various growth interruptions and regrowth conditions on m-plane p-n junctions on free-standing GaN substrates. The following interruption types were investigated: (1) sample in the main MOCVD chamber for 10 min, (2) sample in the MOCVD load lock for 10 min, (3) sample outside the MOCVD for 10 min, and (4) sample outside the MOCVD for one week. Regrowth after the interruptions was performed on two different samples under n-GaN and p-GaN growth conditions, respectively. Secondary ion mass spectrometry (SIMS) analysis indicated interfacial silicon spikes with concentrations ranging from 5×10^16 cm^-3 to 2×10^18 cm^-3 for the n-GaN growth conditions and 2×10^16 cm^-3 to 5×10^18 cm^-3 for the p-GaN growth conditions. Oxygen spikes with concentrations of ∼1×10^17 cm^-3 were observed at the regrowth interfaces. Carbon impurity levels did not spike at the regrowth interfaces under either set of growth conditions. We have correlated the effects of these interfacial impurities with the reverse leakage current and breakdown voltage of regrown m-plane p-n junctions.
MPI usage patterns are changing as applications move towards fully-multithreaded runtimes. However, the impact of these patterns on MPI message matching is not well-studied. In particular, MPI’s mechanism for receiver-side data placement, message matching, can be impacted by increased message volume and nondeterminism incurred by multithreading. While there has been significant developer interest and work to provide an efficient MPI interface for multithreaded access, there has not been a study showing how multithreading affects messaging patterns and matching behavior. In this paper, we present a framework for studying the effects of multithreading on MPI message matching. This framework allows us to explore the implications of different common communication patterns and thread-level decompositions. We present a study of these impacts on the architecture of two of the Top 10 supercomputers (NERSC’s Cori and LANL’s Trinity). This data provides a baseline to evaluate reasonable matching engine queue lengths, search depths, and queue drain times under the multithreaded model. Furthermore, the study highlights surprising results on the challenge posed by message matching for multithreaded application performance.
As research in superconducting electronics matures, it is necessary to have failure analysis techniques to identify parameters that impact yield and failure modes in the fabricated product. However, there has been significant skepticism regarding the ability of laser-based failure analysis techniques to detect defects at room temperature in superconducting electronics designed to operate at cryogenic temperatures. In this paper, we describe preliminary data showing the use of Thermally Induced Voltage Alteration (TIVA) [1] at ambient temperature to locate defects in known defective circuits fabricated using state-of-the-art techniques for superconducting electronics.
The design of satellites usually includes the objective of minimizing mass due to high launch costs, which is complicated by the need to protect sensitive electronics from the space radiation environment. There is growing interest in automated design optimization techniques to help achieve that objective. Traditional optimization approaches that rely exclusively on response functions (e.g. dose calculations) can be quite expensive when applied to transport problems. Previously we showed how adjoint-based transport sensitivities used in conjunction with gradient-based optimization algorithms can be quite effective in designing mass-efficient electron/proton shields in one-dimensional slab geometries. In this paper we extend that work to two-dimensional Cartesian geometries. This consists primarily of deriving the sensitivities to geometric changes, given a particular prescription for parametrizing the shield geometry. We incorporate these sensitivities into our optimization process and demonstrate their effectiveness in such design calculations.
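The general pattern of feeding externally computed (e.g., adjoint) sensitivities to a gradient-based optimizer can be sketched as below; the exponential-attenuation dose surrogate and material data are made up and stand in for the deterministic transport calculation and its adjoint sensitivities.

```python
# Sketch of gradient-based shield sizing: minimize shield mass subject to a
# dose constraint, supplying analytic gradients in place of adjoint output.
# The dose model is a made-up attenuation surrogate, not a transport code.
import numpy as np
from scipy.optimize import minimize

rho = np.array([2.70, 11.35])      # hypothetical layer densities (Al, Pb), g/cm^3
mu = np.array([0.8, 2.5])          # hypothetical attenuation coefficients, 1/cm
dose_limit = 1.0e-2                # allowed relative dose

def mass(t):                       # objective: areal mass of the two layers
    return float(rho @ t)

def mass_grad(t):                  # exact gradient of the objective
    return rho.copy()

def dose(t):                       # surrogate response function
    return float(np.exp(-mu @ t))

def dose_grad(t):                  # adjoint-style sensitivity of the response
    return -mu * np.exp(-mu @ t)

res = minimize(mass, x0=[1.0, 1.0], jac=mass_grad,
               bounds=[(0.0, 10.0)] * 2,
               constraints=[{"type": "ineq",
                             "fun": lambda t: dose_limit - dose(t),
                             "jac": lambda t: -dose_grad(t)}],
               method="SLSQP")
print("layer thicknesses (cm):", res.x, " areal mass (g/cm^2):", mass(res.x))
```

In the real workflow the two gradient callbacks would return sensitivities assembled from an adjoint transport solve rather than closed-form derivatives.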
Many candidate power system architectures are being evaluated for the Navy’s next generation all-electric warship. One proposed power system concept involves the use of dual-wound generators to power both the Port and Starboard side buses using different 3-phase sets from the same machine (Doerry, 2015). This offers the benefit of improved efficiency through reduced engine light-loading and improved dispatch flexibility, but the approach couples the two buses through a common generator, making one bus vulnerable to faults and other dynamic events on the other bus. Thus, understanding the dynamics of cross-bus coupling is imperative to the successful implementation of a dual-wound generator system. In (Rashkin, 2017), a kilowatt-scale system was analysed that considered the use of a dual-wound permanent magnet machine, two passive rectifiers, and two DC buses with resistive loads. For this system, dc voltage variation on one bus was evaluated in the time domain as a function of load changes on the other bus. Therein, substantive cross-bus coupling was demonstrated in simulation and hardware experiments. The voltage disturbances were attributed to electromechanical (i.e. speed disturbances) as well as electromagnetic coupling mechanisms. In this work, a 25 MVA dual-wound generator was considered, and active rectifier models were implemented in Matlab using both average-value modelling and switching (space vector modulation) simulation models. The frequency dynamics of the system between the load on one side and the dc voltage on the other side were studied. The coupling is depicted in the frequency domain as a transfer function with amplitude and phase and is shown to have distinct characteristics (i.e. frequency regimes) associated with physical coupling mechanisms such as electromechanical and electromagnetic coupling as well as response characteristics associated with control action by the active rectifiers. In addition, based on requirements outlined in draft Military Standard 1399-MVDC, an approach to derive specifications will be discussed and presented. This method will aid in quantifying the allowable coupling of energy from one bus to another in various frequency regimes as a function of other power system parameters. Finally, design and control strategies will be discussed to mitigate cross-bus coupling. The findings of this work will inform the design, control, and operation of future naval warship power systems.
Today’s computational, experimental, and observational sciences rely on computations that involve many related tasks. The success of a scientific mission often hinges on the computer automation of these workflows. In April 2015, the US Department of Energy (DOE) invited a diverse group of domain and computer scientists from national laboratories supported by the Office of Science, the National Nuclear Security Administration, from industry, and from academia to review the workflow requirements of DOE’s science and national security missions, to assess the current state of the art in science workflows, to understand the impact of emerging extreme-scale computing systems on those workflows, and to develop requirements for automated workflow management in future and existing environments. This article is a summary of the opinions of over 50 leading researchers attending this workshop. We highlight use cases, computing systems, workflow needs and conclude by summarizing the remaining challenges this community sees that inhibit large-scale scientific workflows from becoming a mainstream tool for extreme-scale science.
The rapidly growing penetration levels of distributed photovoltaic (PV) systems require more comprehensive studies to understand their impact on distribution feeders. IEEE P1547 highlights the need for Quasi-Static Time Series (QSTS) simulation in conducting distribution impact studies for distributed resource interconnection. Unlike conventional scenario-based simulation, the time series simulation can realistically assess time-dependent impacts such as the operation of various controllable elements (e.g. voltage regulating tap changers) or impacts of power fluctuations. However, QSTS simulations are still not widely used in the industry because of the computational burden associated with running yearlong simulations at a 1-s granularity, which is needed to capture device controller effects responding to PV variability. This paper presents a novel algorithm that reduces the number of times that the non-linear 3-phase unbalanced AC power flow must be solved by storing and reassigning power flow solutions as it progresses through the simulation. Each unique power flow solution is defined by a set of factors affecting the solution that can easily be queried. We demonstrate a computational time reduction of 98.9% for a yearlong simulation at 1-s resolution with minimal errors for metrics including the number of tap changes, capacitor actions, the highest and lowest voltage on the feeder, line losses, and ANSI voltage violations. The key contribution of this work is the formulation of an algorithm capable of: (i) drastically reducing the computational time of QSTS simulations, (ii) accurately modeling distribution system voltage-control elements with hysteresis, and (iii) efficiently compressing result time series data for post-simulation analysis.
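The solution-reuse idea can be sketched as a cache keyed by the quantized factors that determine each power flow solution; the key contents, quantization steps, and placeholder solver call below are illustrative rather than the paper's implementation.

```python
# Sketch of reusing power flow solutions in a QSTS loop: quantize the factors
# that determine the solution, and only call the solver on a cache miss.

def quantize(value, step):
    """Round a continuous factor so nearby operating points share a key."""
    return round(value / step) * step

def run_power_flow(key):
    """Placeholder for the expensive unbalanced 3-phase power flow solve."""
    total_load_kw, total_pv_kw, tap_positions, cap_states = key
    return {"feeder_head_kw": total_load_kw - total_pv_kw}   # dummy result

cache = {}

def qsts_step(load_kw, pv_kw, tap_positions, cap_states):
    key = (quantize(load_kw, 10.0), quantize(pv_kw, 10.0),
           tuple(tap_positions), tuple(cap_states))
    if key not in cache:
        cache[key] = run_power_flow(key)       # only solve on a cache miss
    return cache[key]

# A one-second yearlong QSTS loop calls qsts_step() ~31.5 million times;
# repeated operating states hit the cache instead of the solver.
result = qsts_step(1520.3, 412.7, tap_positions=[2, 1, 0], cap_states=[1, 0])
print(result, "cache size:", len(cache))
```

Including the discrete controller states (taps, capacitors) in the key is what allows voltage-control elements with hysteresis to be tracked correctly while still reusing solutions.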
Aluminized ammonium perchlorate composite propellants can form large molten agglomerated particles that may result in poor combustion performance, slag accumulation, and increased two-phase flow losses. Quantifying agglomerate size distributions is needed to gain an understanding of agglomeration dynamics and ultimately design new propellants for improved performance. Due to complexities of the reacting multiphase environment, agglomerate size diagnostics are difficult and measurement accuracies are poorly understood. To address this, the current work compares three agglomerate sizing techniques applied to two propellant formulations. Particle collection on a quench plate and backlit videography are two relatively common techniques, whereas digital inline holography is an emerging alternative for three-dimensional measurements. Atmospheric pressure combustion results show that all three techniques are able to capture the qualitative trends; however, significant differences exist in the quantitative size distributions and mean diameters. For digital inline holography, methods are proposed that combine temporally resolved high-speed recording with lower-speed but higher spatial resolution measurements to correct for size-velocity correlation biases while extending the measurable size dynamic range. The results from this work provide new guidance for improved agglomerate size measurements along with statistically resolved datasets for validation of agglomerate models.
Miller, Elizabeth C.; Kasse, Robert M.; Heath, Khloe N.; Perdue, Brian R.; Toney, Michael F.
In this study, a novel cross-sectional battery cell was developed to characterize lithium-sulfur batteries using X-ray spectromicroscopy. Chemically sensitive X-ray maps were collected operando at energies relevant to the expected sulfur species and were used to correlate changes in sulfur species with electrochemistry. Significant changes in the sulfur/carbon composite electrode were observed from cycle to cycle including rearrangement of the elemental sulfur matrix and PEO10LiTFSI binder. Polysulfide concentration and area of spatial diffusion increased with cycling, indicating that some polysulfide dissolution is irreversible, leading to polysulfide shuttle. Fitting of the maps using standard sulfur and polysulfide XANES spectra indicated that upon subsequent discharge/charge cycles, the initial sulfur concentration was not fully recovered; polysulfides and lithium sulfide remained at the cathodes with higher order polysulfides as the primary species in the region of interest. Quantification of the polysulfide concentration across the electrolyte and electrode interfaces shows that the polysulfide concentration before the first discharge and after the third charge is constant within the electrolyte, but while cycling, a significant increase in polysulfides and a gradient toward the lithium metal anode forms. This chemically and spatially sensitive characterization and analysis provides a foundation for further operando spectromicroscopy of lithium-sulfur batteries.
Proceedings of ISMA 2018 - International Conference on Noise and Vibration Engineering and USD 2018 - International Conference on Uncertainty in Structural Dynamics
Structural dynamic models of mechanical, aerospace, and civil structures often involve connections of multiple subcomponents with rivets, bolts, press fits, or other joining processes. Recent model order reduction advances have been made for jointed structures using appropriately defined whole joint models in combination with linear substructuring techniques. A whole joint model condenses the interface nodes onto a single node with multi-point constraints resulting in drastic increases in computational speeds to predict transient responses. One drawback to this strategy is that the whole joint models are empirical and require calibration with test or high-fidelity model data. A new framework is proposed to calibrate whole joint models by computing global responses from high-fidelity finite element models and utilizing global optimization to determine the optimal joint parameters. The method matches the amplitude dependent damping and natural frequencies predicted for each vibration mode using quasi-static modal analysis.
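A sketch of such a calibration loop using a global optimizer is shown below; the amplitude-dependent targets and the toy joint response model are hypothetical placeholders for the quasi-static modal analysis results extracted from a high-fidelity model.

```python
# Sketch of calibrating whole-joint parameters by matching amplitude-dependent
# modal frequency and damping trends. All data and the joint model are made up.
import numpy as np
from scipy.optimize import differential_evolution

amplitudes = np.logspace(-3, -1, 8)
freq_target = 120.0 - 3.0 * np.log10(amplitudes / amplitudes[0] + 1.0)  # made up
damp_target = 0.002 + 0.01 * (amplitudes / amplitudes[-1])              # made up

def joint_model(params, amp):
    """Toy trend: frequency softens and damping grows with response amplitude."""
    k, chi, beta = params
    freq = np.sqrt(k) * (1.0 - beta * amp**(1.0 + chi))
    damp = 0.001 + beta * amp**(1.0 + chi)
    return freq, damp

def misfit(params):
    freq, damp = joint_model(params, amplitudes)
    return (np.sum(((freq - freq_target) / freq_target)**2)
            + np.sum(((damp - damp_target) / damp_target)**2))

res = differential_evolution(misfit,
                             bounds=[(1.0e3, 1.0e5), (-0.9, 0.0), (0.0, 5.0)],
                             seed=0, tol=1e-8)
print("calibrated joint parameters:", res.x, " misfit:", res.fun)
```

In the framework described above, joint_model would be replaced by the reduced-order jointed-structure model and the targets by the quasi-static modal analysis of the high-fidelity finite element model.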
Nano-scale spatial confinement can alter chemistry at mineral-water interfaces. These nano-scale confinement effects can lead to anomalous fate and transport behavior of aqueous metal species. When a fluid resides in a nano-porous environment (pore size under 100 nm), the observed density, surface tension, and dielectric constant diverge from those measured in the bulk. To evaluate the impact of nano-scale confinement on the adsorption of copper (Cu2+), we performed batch adsorption studies using mesoporous silica. Mesoporous silica with a narrow distribution of pore diameters (SBA-15; 8, 6, and 4 nm pore diameters) was chosen since the silanol functional groups are typical of surface environments. Batch adsorption isotherms were fit with adsorption models (Langmuir, Freundlich, and Dubinin-Radushkevich) and adsorption kinetic data were fit to a pseudo-first-order reaction model. We found that with decreasing pore size, the maximum surface area-normalized uptake of Cu2+ increased. The pseudo-first-order kinetic model demonstrates that the adsorption is faster as the pore size decreases from 8 to 4 nm. We attribute these effects to the deviations in fundamental water properties as pore diameter decreases. In particular, these effects are most notable in SBA-15 with a 4-nm pore where the changes in water properties may be responsible for the enhanced Cu mobility, and therefore, faster Cu adsorption kinetics.
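For reference, a Langmuir isotherm fit of the kind used here can be performed with a standard nonlinear least-squares routine; the concentrations and uptakes in the sketch below are illustrative, not the SBA-15 measurements from the study.

```python
# Langmuir isotherm fit, q = q_max * K_L * Ce / (1 + K_L * Ce), to
# hypothetical batch adsorption data using scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, k_l):
    return q_max * k_l * ce / (1.0 + k_l * ce)

ce = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0])   # equilibrium conc., mmol/L (hypothetical)
qe = np.array([0.8, 1.4, 2.5, 3.4, 4.1, 4.6, 4.8])      # uptake, umol/m^2 (hypothetical)

popt, pcov = curve_fit(langmuir, ce, qe, p0=[5.0, 2.0])
perr = np.sqrt(np.diag(pcov))
print(f"q_max = {popt[0]:.2f} ± {perr[0]:.2f},  K_L = {popt[1]:.2f} ± {perr[1]:.2f}")
```

Comparing the fitted q_max across the 8, 6, and 4 nm materials (each normalized by surface area) is the kind of comparison used to argue for a confinement effect on uptake.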
Proceedings of ISMA 2018 - International Conference on Noise and Vibration Engineering and USD 2018 - International Conference on Uncertainty in Structural Dynamics
Complex mechanical structures are often subjected to random vibration environments. One strategy to analyze these nonlinear structures numerically is to use finite element analysis with an explicit solver to resolve interactions in the time domain. However, this approach is impractical because the solver is conditionally stable and requires thousands of iterations to resolve the contact algorithms. As a result, only short runs can be performed practically because of the extremely long runtime needed to obtain sufficient sampling for long-time statistics. The proposed approach uses a machine learning algorithm known as the Long Short-Term Memory (LSTM) network to model the response of the nonlinear system to random input. The LSTM extends the capability of the explicit solver approach by taking short samples and extending them to arbitrarily long signals. The efficient LSTM algorithm enables the capability to perform Monte Carlo simulations to quantify model-form and aleatoric uncertainty due to the random input.
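A minimal version of such an LSTM surrogate is sketched below using Keras; the synthetic input/response data and the network size are illustrative and are not the explicit finite element results or the architecture used in the study.

```python
# Minimal LSTM surrogate mapping a window of random input to the next response
# sample; synthetic data stand in for short explicit-dynamics training runs.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
window, n_train = 64, 5000

# Synthetic "nonlinear system": filtered random input with a cubic term
u = rng.normal(size=n_train + window)
y = np.zeros_like(u)
for i in range(1, len(u)):
    y[i] = 0.95 * y[i - 1] + 0.1 * u[i] - 0.05 * y[i - 1] ** 3

X = np.stack([u[i:i + window] for i in range(n_train)])[..., None]   # (n, window, 1)
T = y[window:window + n_train]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, T, epochs=5, batch_size=128, verbose=0)

# Once trained, the surrogate can be run over arbitrarily long input records,
# enabling Monte Carlo sampling that the explicit solver cannot afford.
print("training-set MSE:", float(model.evaluate(X[:500], T[:500], verbose=0)))
```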
Recent theoretical predictions indicate that functional groups and additives could have a favorable impact on the hydrogen adsorption characteristics of sorbents; however, no definite evidence has been obtained to date and little is known about the impact of such modifications on the thermodynamics of hydrogen uptake and overall capacity. In this work, we investigate the effect of two types of additives on the cryoadsorption of hydrogen to mesoporous silica. First, Lewis and Brønsted acid sites were evaluated by grafting aluminum to the surface of mesoporous silica (MCF-17) and characterizing the resulting silicate materials' surface area and the concentration of Brønsted and Lewis acid sites created. Heat of adsorption measurements found little influence of surface acidity on the enthalpy of hydrogen cryoadsorption. Second, platinum nanoparticles of 1.5 nm and 7.1 nm in diameter were loaded into MCF-17 and characterized by TEM. Hydrogen adsorption measurements revealed that the addition of small amounts of metallic platinum nanoparticles increases the amount of hydrogen adsorbed at liquid nitrogen temperature by up to two-fold. Moreover, we found a direct correlation between the size of platinum particles and the amount of hydrogen stored, in favor of smaller particles.
Chen, Ming W.; Rotavera, Brandon; Chao, Wen; Zador, Judit; Taatjes, Craig A.
Formation of the key general radical chain carriers, OH and HO2, during pulsed-photolytic Cl-initiated oxidation of tetrahydropyran and cyclohexane are measured with time-resolved infrared absorption in a temperature-controlled Herriott multipass cell in the temperature range of 500-750 K at 20 Torr. The experiments show two distinct timescales for HO2 and OH formation in the oxidation of both fuels. Analysis of the timescales reveals striking differences in behavior between the two fuels. In both cyclohexane and tetrahydropyran oxidation, a faster timescale is strongly related to the "well-skipping" (R + O2 → alkene + HO2 or cyclic ether + OH) mechanism and is expected to have, at most, a weak temperature dependence. Indeed, the fast HO2 formation timescale is nearly temperature independent both for cyclohexyl + O2 and for tetrahydropyranyl + O2 below 700 K. A slower HO2 formation timescale in cyclohexane oxidation is shown to be linked to the sequential R + O2 → ROO → alkene + HO2 pathway, and displays a strong temperature dependence mainly from the final step (with energy barrier ∼32.5 kcal mol-1). In contrast, the slower HO2 formation timescale in tetrahydropyran oxidation is surprisingly temperature insensitive across all measured temperatures. Although the OH formation timescales in tetrahydropyran oxidation show a temperature dependence similar to the cyclohexane oxidation, the temperature dependence of OH yield is opposite in both cases. This significant difference of HO2 formation kinetics and OH formation yield for the tetrahydropyran oxidation can arise from contributions related to ring-opening pathways in the tetrahydropyranyl + O2 system that compete with the typical R + O2 reaction scheme. This comparison of two similar fuels demonstrates the consequences of differing chemical mechanisms on OH and HO2 formation and shows that they can be highlighted by analysis of the eigenvalues of a system of simplified kinetic equations for the alkylperoxy-centered R + O2 reaction pathways. We suggest that such analysis can be more generally applied to complex or poorly known oxidation systems.
We review the physical foundations of Landauer’s Principle, which relates the loss of information from a computational process to an increase in thermodynamic entropy. Despite the long history of the Principle, its fundamental rationale and proper interpretation remain frequently misunderstood. Contrary to some misinterpretations of the Principle, the mere transfer of entropy between computational and non-computational subsystems can occur in a thermodynamically reversible way without increasing total entropy. However, Landauer’s Principle is not about general entropy transfers; rather, it more specifically concerns the ejection of (all or part of) some correlated information from a controlled, digital form (e.g., a computed bit) to an uncontrolled, non-computational form, i.e., as part of a thermal environment. Any uncontrolled thermal system will, by definition, continually re-randomize the physical information in its thermal state, from our perspective as observers who cannot predict the exact dynamical evolution of the microstates of such environments. Thus, any correlations involving information that is ejected into and subsequently thermalized by the environment will be lost from our perspective, resulting directly in an irreversible increase in thermodynamic entropy. Avoiding the ejection and thermalization of correlated computational information motivates the reversible computing paradigm, although the requirements for computations to be thermodynamically reversible are less restrictive than frequently described, particularly in the case of stochastic computational operations. There are interesting possibilities for the design of computational processes that utilize stochastic, many-to-one computational operations while nevertheless avoiding net entropy increase that remain to be fully explored.
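For reference, the standard quantitative statement of the bound discussed above can be written as follows (textbook form, not a new result of this work):

```latex
% Textbook statement of Landauer's bound for erasing one bit of correlated
% information into a thermal environment at temperature T:
\Delta S_{\mathrm{env}} \ge k_B \ln 2,
\qquad
E_{\mathrm{diss}} \ge k_B T \ln 2 \approx 2.9\times 10^{-21}\ \mathrm{J}
\quad (T = 300\ \mathrm{K})
```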
Simultaneous pressure sensitive paint (PSP) and stereo digital image correlation (DIC) measurements on a jointed beam structure are presented. Tests are conducted in a shock tube, providing an impulsive starting condition followed by approximately uniform high-speed flow conditions for 5.0 msec. The unsteady pressure loading generated by shock waves and vortex shedding results in the excitation of various structural modes in the beam. The combined data characterize the structural loading input (pressure) and the resulting structural behavior output (deformation). Time-series filtering is used to remove external bias errors such as shock tube motion, and proper orthogonal decomposition (POD) is used to extract mode shapes from the deformation data. This demonstrates the utility of combining fast-response PSP with stereo DIC, a valuable capability for validating structural dynamics simulations.
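A minimal sketch of the POD step, assuming the filtered DIC displacement fields are arranged as snapshot columns (the data here are random placeholders):

```python
# Proper orthogonal decomposition of deformation snapshots via the SVD.
import numpy as np

n_points, n_times = 500, 200
snapshots = np.random.randn(n_points, n_times)       # placeholder DIC data

mean_field = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_field

# Columns of U are the POD (spatial) modes; S**2 gives their energy content.
U, S, Vt = np.linalg.svd(fluctuations, full_matrices=False)
energy_fraction = S**2 / np.sum(S**2)
dominant_modes = U[:, :3]                             # first few mode shapes
```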
A BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a deluxe type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced. Under the assumption that the subdomains are all built from elements of a coarse triangulation of the given domain, that the meshes of each subdomain are quasi uniform and that the material parameters are constant in each subdomain, a bound is obtained for the condition number of the preconditioned linear system which is independent of the values and the jumps of these parameters across the interface between the subdomains as well as the number of subdomains. Numerical experiments, using the PETSc library, are also presented which support the theory and show the effectiveness of the algorithms even for problems not covered by the theory. Included are also experiments with Brezzi-Douglas-Marini finite element approximations.
This letter demonstrates the design of continuously graded elastic cylinders to achieve passive cloaking from harmonic acoustic excitation, both at single frequencies and over extended bandwidths. The constitutive parameters in a multilayered, constant-density cylinder are selected in a partial differential equation-constrained optimization problem, such that the residual between the pressure field from an unobstructed spreading wave in a fluid and the pressure field produced by the cylindrical inclusion is minimized. The radial variation in bulk modulus appears fundamental to the cloaking behavior, while the shear modulus distribution plays a secondary role. Such structures could be realized with functionally-graded elastic materials.
A closed-form solution is described here for the equilibrium configurations of the magnetic field in a simple heterogeneous domain. This problem and its solution are used for rigorous assessment of the accuracy of the ALEGRA code in the quasistatic limit. By equilibrium configuration we mean the static condition, i.e., stationary states without macroscopic current. The analysis includes a quite general class of 2D solutions in which a linear isotropic metallic matrix is placed inside a stationary magnetic field approaching a constant value at infinity. The evolution of the magnetic fields inside and outside the inclusion, and the parameters for which the quasi-static approach yields self-consistent results, are also explored. It is demonstrated that under spatial mesh refinement, ALEGRA converges to the analytic solution for the interior of the inclusion at the expected rate, for both body-fitted and regular rectangular meshes.
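A convergence check of the kind described above can be summarized by the observed order of accuracy computed from errors at two mesh resolutions; the sketch below uses placeholder values rather than ALEGRA output.

```python
# Observed convergence rate from errors against the analytic interior solution
# at two mesh resolutions (values are placeholders).
import numpy as np

h = np.array([0.02, 0.01])          # characteristic mesh sizes
err = np.array([3.2e-3, 8.1e-4])    # e.g. L2 error of the interior field

observed_order = np.log(err[0] / err[1]) / np.log(h[0] / h[1])
print(observed_order)               # ~2 indicates second-order convergence
```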
The corrosion of pulsed-laser deposited Fe thin films by aqueous acetic acid solution was explored in real time by performing dynamic microfluidic experiments in situ in a transmission electron microscope. The films were examined in both the as-deposited condition and after annealing. In the as-deposited films, discrete events featuring the localized dissolution of grains were observed with the dissolved volumes ranging in size from ~1.5 x 10-5 μm3 to 3.4 x 10-7 μm3. The annealed samples had larger grains than the as-deposited samples, were more resistant to corrosion, and did not show similar discrete dissolution events. The electron beam was observed to accelerate the corrosion, especially on the as-deposited samples. The effects of grain surface energy, grain boundary energy and the electron beam-specimen interactions are discussed in relation to the observed behavior.
Simulation models can improve decisions meant to control the consequences of disruptions to critical infrastructures. We describe a dynamic flow model on networks purposed to inform analyses by those concerned about consequences of disruptions to infrastructures and to help policy makers design robust mitigations. We conceptualize the adaptive responses of infrastructure networks to perturbations as market transactions and business decisions of operators. We approximate commodity flows in these networks by a diffusion equation, with nonlinearities introduced to model capacity limits. To illustrate the behavior and scalability of the model, we show its application first on two simple networks, then on petroleum infrastructure in the United States, where we analyze the effects of a hypothesized earthquake.
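As a toy illustration of the modeling idea, the sketch below diffuses a commodity on a four-node network while clipping edge fluxes at a capacity limit; the topology, capacities, and initial imbalance are hypothetical and far simpler than the petroleum-infrastructure application.

```python
# Explicit-Euler diffusion of a commodity on a small network with hard
# edge-capacity limits (the nonlinearity mentioned above).
import numpy as np

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
capacity = 0.05 * adj                       # max flow per edge per step
state = np.array([1.0, 0.2, 0.2, 0.0])      # commodity stored at each node
dt, k = 1.0, 0.1                            # time step, conductance

for _ in range(100):
    grad = state[:, None] - state[None, :]               # pairwise imbalance
    flow = np.clip(k * adj * grad, -capacity, capacity)  # capacity-limited flux
    state = state - dt * flow.sum(axis=1)                # net outflow at each node
print(state)                                             # total commodity is conserved
```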
We consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error [A. Mugler and H.-J. Starkloff, ESAIM Math. Model. Numer. Anal., 47 (2013), pp. 1237-1263]. As a remedy for this, we propose a novel stochastic least-squares Petrov-Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted ℓ2-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted ℓ2-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
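In schematic form, and under the assumption that the approximation is written as u(ξ) ≈ Φ(ξ)y for a chosen basis Φ, the LSPG idea described above amounts to a weighted residual minimization of the following type (a sketch, not the paper's exact formulation):

```latex
% Schematic stochastic LSPG statement, assuming u(\xi) \approx \Phi(\xi)\, y:
\min_{y}\;
\mathbb{E}\!\left[\, w(\xi)\,\bigl\| b(\xi) - A(\xi)\,\Phi(\xi)\, y \bigr\|_2^2 \,\right]
% Different choices of the weight w(\xi) target the plain residual, the solution
% error in a weighted norm, or a goal-oriented seminorm tied to an output functional.
```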
In this work, a Pitzer model is developed for the K+(Na+)-Am(OH)4−-Cl−-OH− system based on Am(OH)3(s) solubility data in highly alkaline KOH solutions. Under highly alkaline conditions, the solubility reaction of Am(OH)3(s) is expressed as Am(OH)3(s) + OH− ⇌ Am(OH)4−. Solubilities of Am(OH)3(s) based on this reaction are modeled as a function of KOH concentration. The stability constant for Am(OH)4− is evaluated using Am(OH)3(s) solubility data in KOH solutions up to 12 mol•kg-1 taken from the literature. The Pitzer interaction parameters for Al(OH)4− are used as analogs for the interaction parameters involving Am(OH)4− in obtaining its stability constant. The equilibrium constant obtained for the reaction is log10 K° = -11.34 ± 0.15 (2σ).
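Under the reaction written above, the solubility relation used to fit the data takes the following schematic form (a hedged sketch; the full Pitzer expressions for the activity coefficients are not reproduced here):

```latex
% Hedged sketch of the solubility relation implied by
% Am(OH)3(s) + OH- = Am(OH)4- :
\log_{10} m_{\mathrm{Am(OH)_4^-}}
  = \log_{10} K^{\circ}
  + \log_{10} a_{\mathrm{OH^-}}
  - \log_{10} \gamma_{\mathrm{Am(OH)_4^-}}
% with the activity coefficient \gamma evaluated from the Pitzer model using the
% Al(OH)_4^- parameters as analogs.
```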
In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. We conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
The effect of a linear accelerator's (LINAC's) microstructure (i.e., train of narrow pulses) on devices and the associated transient photocurrent models are investigated. The data indicate that the photocurrent response of Si-based RF bipolar junction transistors and RF p-i-n diodes is considerably higher when the microstructure effects are taken into account. Similarly, the response of diamond, SiO2, and GaAs photoconductive detectors (standard radiation diagnostics) is higher when the microstructure is taken into account. This has obvious hardness assurance implications when assessing the transient response of devices because the measured photocurrent and dose rate levels could be underestimated if microstructure effects are not captured. Indeed, the rate at which energy is deposited in a material during the microstructure peaks is much higher than the filtered rate that is traditionally measured. In addition, photocurrent models developed with filtered LINAC data may be inherently inaccurate if a device is able to respond to the microstructure.
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. Finally, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
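One common way to score a saliency map against eye-tracking data, used here purely as an illustration and not necessarily the metric adopted in the paper, is the Pearson correlation coefficient (CC) between the model map and a fixation-density map:

```python
# Pearson correlation (CC) between a model saliency map and an empirical
# fixation-density map. Maps here are random placeholders.
import numpy as np

saliency_map = np.random.rand(64, 64)        # model output for a visualization
fixation_map = np.random.rand(64, 64)        # blurred human fixation density

def cc(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

print(cc(saliency_map, fixation_map))        # 1.0 = perfect linear agreement
```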
We present the design, characterization, and testing of a laboratory prototype radiological search and localization system. The system, based on time-encoded imaging, uses the attenuation signature of neutrons in time, induced by the geometrical layout and motion of the system. We have demonstrated the ability to detect a ∼1 mCi 252Cf radiological source at 100 m standoff with 90% detection efficiency and 10% false positives against background in 12 min. This same detection efficiency is met in 15 s at a 40 m standoff, and in 1.2 s at a 20 m standoff.
Efficient design of wave energy converters requires an accurate understanding of expected loads and responses during the deployment lifetime of a device. A study has been conducted to better understand best practices for prediction of design responses in a wave energy converter. A case study was performed in which a simplified wave energy converter was analyzed to predict several important device design responses. The application and performance of a full long-term analysis, in which numerical simulations were used to predict the device response for a large number of distinct sea states, was studied. Environmental characterization and selection of sea states for this analysis at the intended deployment site were performed using principal components analysis. The full long-term analysis applied here was shown to be stable when implemented with a relatively low number of sea states and convergent with an increasing number of sea states. As the number of sea states utilized in the analysis was increased, predicted response levels did not change appreciably; however, uncertainty in the response levels was reduced as more sea states were utilized.
This work extends recent methods to calculate dynamic substructuring predictions of a weakly nonlinear structure using nonlinear pseudo-modal models. In previous works, constitutive joint models (such as the modal Iwan element) were used to capture the nonlinearity of each subcomponent on a mode-by-mode basis. This work uses simpler polynomial stiffness and damping elements to capture nonlinear dynamics from more diverse jointed connections including large continuous interfaces. The proposed method requires that the modes of the system remain distinct and uncoupled in the amplitude range of interest. A windowed sinusoidal loading is used to excite each experimental subcomponent mode in order to identify the nonlinear pseudo-modal models. This allows for a higher modal amplitude to be achieved when fitting these models and extends the applicable amplitude range of this method. Once subcomponent modal models have been experimentally extracted for each mode, the Transmission Simulator method is implemented to assemble the subcomponent models into a nonlinear assembled prediction. Numerical integration methods are used to evaluate this prediction compared to a truth test of the nonlinear assembly.
We develop a procedure to utilize error estimates for samples of a surrogate model to compute robust upper and lower bounds on estimates of probabilities of events. We show that these error estimates can also be used in an adaptive algorithm to simultaneously reduce the computational cost and increase the accuracy in estimating probabilities of events using computationally expensive high-fidelity models. Specifically, we introduce the notion of reliability of a sample of a surrogate model, and we prove that utilizing the surrogate model for the reliable samples and the high-fidelity model for the unreliable samples gives precisely the same estimate of the probability of the output event as would be obtained by evaluation of the original model for each sample. The adaptive algorithm uses the additional evaluations of the high-fidelity model for the unreliable samples to locally improve the surrogate model near the limit state, which significantly reduces the number of high-fidelity model evaluations as the limit state is resolved. Numerical results based on a recently developed adjoint-based approach for estimating the error in samples of a surrogate are provided to demonstrate (1) the robustness of the bounds on the probability of an event, and (2) that the adaptive enhancement algorithm provides a more accurate estimate of the probability of the QoI event than standard response surface approximation methods at a lower computational cost.
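The reliability idea can be illustrated with a toy limit-state problem: a sample is treated as reliable when its surrogate error bound cannot flip it across the threshold, and only the remaining samples are sent to the (here, stand-in) high-fidelity model. The functions, threshold, and error bound below are hypothetical.

```python
# Toy illustration of reliable/unreliable sample classification for estimating
# P(g(x) > threshold) with a surrogate plus per-sample error bounds.
import numpy as np

rng = np.random.default_rng(0)

def high_fidelity(x):                      # stand-in for the expensive model
    return x[:, 0] ** 2 + x[:, 1] + 0.02 * np.sin(8.0 * x[:, 0])

def surrogate(x):                          # cheap approximation of g(x)
    return x[:, 0] ** 2 + x[:, 1]

x = rng.uniform(-1.0, 1.0, size=(10_000, 2))
threshold = 0.8
g_s = surrogate(x)
err_bound = 0.05                           # per-sample error estimate (uniform here)

reliable = np.abs(g_s - threshold) > err_bound
indicator = np.empty(len(x), dtype=bool)
indicator[reliable] = g_s[reliable] > threshold
indicator[~reliable] = high_fidelity(x[~reliable]) > threshold

print(indicator.mean(), (~reliable).sum())   # event probability, HF calls used
```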
When very few samples of a random quantity are available from a source distribution or probability density function (PDF) of unknown shape, it is usually not possible to accurately infer the PDF from which the data samples come. A significant component of epistemic uncertainty then exists concerning the source distribution of random or aleatory variability. For many engineering purposes, including design and risk analysis, one would normally want to avoid inference-related underestimation of important quantities such as response variance and failure probabilities. Recent research has established the practicality and effectiveness of a class of simple and inexpensive UQ methods for reasonably conservative estimation of such quantities when only sparse samples of a random quantity are available. This class of UQ methods is explained, demonstrated, and analyzed in this paper within the context of the Sandia Cantilever Beam End-to-End UQ Problem, Part A.1. Several sets of sparse replicate data are involved, and several representative uncertainty quantities are to be estimated: A) beam deflection variability, in particular the 2.5 to 97.5 percentile "central 95%" range of the sparsely sampled PDF of deflection; and B) a small exceedance probability associated with a tail of the PDF integrated beyond a specified deflection tolerance.
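For orientation only, and not the specific conservative-estimation method analyzed in the paper, the sketch below computes the central-95% range from a sparse sample and uses a bootstrap to show how scattered that estimate is when only about ten replicates are available:

```python
# Plain estimate of the central-95% range from a sparse sample, with a bootstrap
# spread showing the epistemic scatter of that estimate. Data are placeholders.
import numpy as np

rng = np.random.default_rng(1)
deflection = rng.normal(1.0, 0.1, size=10)        # sparse replicate data

p025, p975 = np.percentile(deflection, [2.5, 97.5])

boot = np.array([
    np.percentile(rng.choice(deflection, size=deflection.size, replace=True),
                  [2.5, 97.5])
    for _ in range(2000)
])
print((p025, p975), boot.std(axis=0))             # point estimate and its scatter
```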
Duan, Lian; Choudhari, Meelan M.; Chou, Amanda; Munoz, Federico; Ali, Syed R.C.; Radespiel, Rolf; Schilden, Thomas; Schroder, Wolfgang; Marineau, Eric C.; Casper, Katya M.; Chaudhry, Ross S.; Candler, Graham V.; Gray, Kathryn G.; Sweeney, Cameron J.; Schneider, Steven P.
While low disturbance (“quiet”) hypersonic wind tunnels are believed to provide more reliable extrapolation of boundary layer transition behavior from ground to flight, the presently available quiet facilities are limited to Mach 6, moderate Reynolds numbers, low freestream enthalpy, and subscale models. As a result, only conventional (“noisy”) wind tunnels can reproduce both Reynolds numbers and enthalpies of hypersonic flight configurations, and must therefore be used for flight vehicle test and evaluation involving high Mach number, high enthalpy, and larger models. This article outlines the recent progress and achievements in the characterization of tunnel noise that have resulted from the coordinated effort within the AVT-240 specialists group on hypersonic boundary layer transition prediction. New Direct Numerical Simulation (DNS) datasets elucidate the physics of noise generation inside the turbulent nozzle wall boundary layer, characterize the spatiotemporal structure of the freestream noise, and account for the propagation and transfer of the freestream disturbances to a pitot-mounted sensor. The new experimental measurements cover a range of conventional wind tunnels with different sizes and Mach numbers from 6 to 14 and extend the database of freestream fluctuations within the spectral range of boundary layer instability waves over commonly tested models. Prospects for applying the computational and measurement datasets for developing mechanism-based transition prediction models are discussed.
Al-based Al-Cu alloys have a very high strength-to-density ratio and are therefore important materials for transportation systems including vehicles and aircraft. These alloys also appear to have a high resistance to hydrogen embrittlement, and as a result, are being explored for hydrogen-related applications. To enable fundamental studies of the mechanical behavior of Al-Cu alloys under hydrogen environments, we have developed an Al-Cu-H bond-order potential according to the formalism implemented in the molecular dynamics code LAMMPS. Our potential not only fits well to properties of a variety of elemental and compound configurations (with coordination varying from 1 to 12) including small clusters, bulk lattices, defects, and surfaces, but also passes stringent molecular dynamics simulation tests that sample chaotic configurations. Careful studies verified that this Al-Cu-H potential predicts structural property trends close to experimental results and quantum-mechanical calculations; in addition, it properly captures the Al-Cu, Al-H, and Cu-H phase diagrams and enables simulations of H2 dissociation, chemisorption, and absorption on Al-Cu surfaces.
This study details a complementary testing and finite element analysis effort to model threaded fasteners subjected to multiple loadings and loading rates while identifying modeling sensitivities that impact this process. NAS1352-06-6P fasteners were tested in tension at quasistatic loading rates and in tension and shear at dynamic loading rates. The quasistatic tension tests provided calibration and validation data for constitutive model fitting, but this process was complicated by the difference between the conventional (global) and novel (local) displacement measurements. The consequences of these differences are investigated in detail by obtaining calibrated models from both displacement measurements and assessing their performance when extended to the dynamic tension and shear applications. Common quantities of interest are explored, including failure load, time-to-failure, and displacement-at-failure. Finally, the mesh sensitivities of both dynamic analysis models are investigated to assess robustness and inform modeling fidelity. This study is performed in the context of applying these fastener models in large-scale, full-system finite element analyses of complex structures, and therefore the models chosen are relatively basic to accommodate this goal and reflect typical modeling approaches. The quasistatic tension results reveal the sensitivity and importance of displacement measurement techniques in the testing procedure, especially when performing experiments involving multiple components that inhibit local specimen measurements. Additional compliance from test fixturing and load frames has an increasingly significant effect on displacement data as the measurement becomes more global, and models must capture these effects to accurately reproduce the test data. Analysis difficulties were also discovered in the modeling of shear loadings, as the results were very sensitive to mesh discretization, further complicating the ability to analyze joints subjected to diverse loadings. These variables can contribute significantly to the error and uncertainty associated with the model, and this study begins to quantify this behavior and provide guidance on mitigating these effects. When attempting to capture multiple loadings and loading rates in fasteners through simulation, it becomes necessary to thoroughly exercise and explore test and analysis procedures to ensure the final model is appropriate for the desired application.
Park, Michael A.; Barral, Nicolas; Ibanez-Granados, Daniel A.; Kamenetskiy, Dmitry S.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien
Unstructured grid adaptation is a tool to control Computational Fluid Dynamics (CFD) discretization error. However, adaptive grid techniques have made limited impact on production analysis workflows where the control of discretization error is critical to obtaining reliable simulation results. Issues that prevent the use of adaptive grid methods are identified by applying unstructured grid adaptation methods to a series of benchmark cases. Once identified, these challenges to existing adaptive workflows can be addressed. Unstructured grid adaptation is evaluated for test cases described on the Turbulence Modeling Resource (TMR) web site, which documents uniform grid refinement of multiple schemes. The cases are turbulent flow over a Hemisphere Cylinder and an ONERA M6 Wing. Adaptive grid force and moment trajectories are shown for three integrated grid adaptation processes with Mach interpolation control and output error based metrics. The integrated grid adaptation process with a finite element (FE) discretization produced results consistent with uniform grid refinement of fixed grids. The integrated grid adaptation processes with finite volume schemes were slower to converge to the reference solution than the FE method. Metric conformity is documented on grid/metric snapshots for five grid adaptation mechanics implementations. These tools produce anisotropic boundary conforming grids requested by the adaptation process.
The International Monitoring System (IMS) infrasound network has been designed to acquire the necessary data to detect and locate explosions in the atmosphere with a yield equivalent to 1 kiloton of TNT anywhere on Earth. A major associated challenge is the task of automatically processing data from all IMS infrasound stations to identify possible nuclear tests for subsequent review by analysts. This paper is the first attempt to quantify the false alarm rate (FAR) of the IMS network, and in particular to assess how the FAR is affected by the numbers and distributions of detections at each infrasound station. To ensure that the results are sufficiently general, and not dependent entirely on one detection algorithm, the assessment is based on two detection algorithms that can be thought of as end members in their approach to the trade-off between missed detections and false alarms. The results show that the FAR for events formed at only two arrays is extremely high (ranging from 10s to 100s of false events per day across the IMS network, depending on the detector tuning). It is further shown that the FAR for events formed at three or more IMS arrays is driven by ocean-generated waves (microbaroms), despite efforts within both detection algorithms for avoiding these signals, indicating that further research into this issue is merited. Overall, the results highlight the challenge of processing data from a globally sparse network of stations to detect and form events. The results suggest that more work is required to reduce false alarms caused by the detection of microbarom signals.
Desirable outcomes for geologic carbon storage include maximizing storage efficiency, preserving injectivity, and avoiding unwanted consequences such as caprock or wellbore leakage or induced seismicity during and after injection. To achieve these outcomes, three control measures are evident: pore pressure, injectate chemistry, and knowledge and prudent use of geologic heterogeneity. Field, experimental, and modeling examples are presented that demonstrate controllable GCS via these three measures. Observed changes in reservoir response accompanying CO2 injection at the Cranfield (Mississippi, USA) site, along with lab testing, show potential for use of injectate chemistry as a means to alter fracture permeability (with concomitant improvements for sweep and storage efficiency). Further control of reservoir sweep attends brine extraction from reservoirs, with benefits for pressure control, mitigation of reservoir and wellbore damage, and water use. State-of-the-art validated models predict the extent of damage and deformation associated with pore pressure hazards in reservoirs, the timing and location of networks of fractures, and the development of localized leakage pathways. Experimentally validated geomechanics models show where wellbore failure is likely to occur during injection and the efficiency of repair methods. Use of heterogeneity as a control measure includes choosing where best to inject and where to avoid attempts at storage. An example is the use of waste zones or leaky seals to both reduce pore pressure hazards and enhance residual CO2 trapping.
To reduce the levelized cost of wind energy, wind plant controllers are being developed to improve overall performance by increasing energy capture. Previous work has shown that increased energy capture is possible by steering the wake around downstream turbines; however, the impact this steering action has on the loading of the turbines requires further investigation with operational data to determine the overall benefit. In this work, rotor loading data from a wind turbine operating under a wake steering wind plant controller at the DOE/Sandia National Laboratories Scaled Wind Farm Technology (SWiFT) Facility are evaluated. Rotor loading was estimated from fiber optic strain sensors acquired with a state-of-the-art Micron Optics Hyperion interrogator mounted within the rotor and synchronized to the open-source SWiFT controller. A variety of ground and operational calibrations were performed to produce accurate measurements of rotor blade root strains. Time- and rotational-domain signal processing methods were used to estimate the bending moment at the root of the rotor blade. Results indicate a correlation of wake steering angle with one-per-revolution thrust moment amplitude, two-per-revolution torque phase, and three-per-revolution torque amplitude and phase. Future work is needed to fully explain the correlations observed in this work and to study additional multi-variable relationships that may also exist.
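The per-revolution harmonics mentioned above can be extracted from azimuth-resolved load signals; the sketch below recovers 1P/2P/3P amplitude and phase from a synthetic blade-root moment and is not the processing chain used with the SWiFT data.

```python
# Amplitude and phase of the 1P/2P/3P rotational harmonics of a blade-root
# bending moment, from Fourier coefficients over rotor azimuth (synthetic data).
import numpy as np

azimuth = np.linspace(0.0, 20 * 2 * np.pi, 20_000, endpoint=False)   # 20 revolutions
moment = 5.0 + 2.0 * np.cos(azimuth - 0.3) + 0.5 * np.cos(3 * azimuth + 1.1)

for p in (1, 2, 3):                                # per-revolution harmonic order
    c = 2.0 * np.mean(moment * np.exp(-1j * p * azimuth))
    print(p, np.abs(c), np.angle(c))               # amplitude, phase (rad)
```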
Fluid-structure interactions were studied on a 7° half-angle cone in the Sandia Hypersonic Wind Tunnel at Mach 5 and 8 and in the Purdue Boeing/AFOSR Mach 6 Quiet Tunnel. A thin composite panel was integrated into the cone and the response to boundary-layer disturbances was characterized by accelerometers on the backside of the panel. Under quiet-flow conditions at Mach 6, the cone boundary layer remained laminar. Artificially generated turbulent spots excited a directionally dependent panel response which would last much longer than the spot duration. When the spot generation frequency matched a structural natural frequency of the panel, resonance would occur and responses over 200 times greater than under a laminar boundary layer were obtained. At Mach 5 and 8 under noisy flow conditions, natural transition driven by the wind-tunnel acoustic noise dominated the panel response. An elevated vibrational response was observed during transition at frequencies corresponding to the distribution of turbulent spots in the transitional flow. Once turbulent flow developed, the structural response dropped because the intermittent forcing from the spots no longer drove panel vibration.
Wang, Peng; Sternberg, Andrew L.; Kozub, John A.; Zhang, En X.; Dodds, Nathaniel A.; Jordan, Scott L.; Fleetwood, Daniel M.; Reed, Robert A.; Schrimpf, Ronald D.
Two-photon absorption (TPA) pulsed-laser testing is used to analyze the TPA-induced single-event latchup sensitive-area of a specially designed test structure. This method takes into account the existence of an onset region in which the probability of triggering latchup transits between 0 and 1 as the laser pulse energy increases. This variability is attributed to a combination of laser pulse-to-pulse variability and variations in local carrier density and temperature. For each spatial position, the latchup probability associated with a given energy is calculated. Calculation of latchup cross section at lower laser energies, relative to onset, is improved significantly by taking into account the full probability distribution. The transition from low probability of latchup to high probability is more abrupt near the source contacts than for surrounding areas.
Wind resource assessments are used to estimate a wind farm's power production during the planning process. It is important that these estimates are accurate, as they can impact financing agreements, transmission planning, and environmental targets. Here, we analyze the challenges in wind power estimation for onshore farms. Turbine wake effects are a strong determinant of farm power production. For given input wind conditions, wake losses typically cause downstream turbines to produce significantly less power than upstream turbines. These losses have been modeled extensively and are well understood under certain conditions. Most notably, validation of different model types has favored offshore farms. Models that capture the dynamics of offshore wind conditions do not necessarily perform equally well for onshore wind farms. We analyze the capabilities of several different methods for estimating wind farm power production in two onshore farms with non-uniform layouts. We compare the Jensen model to a number of statistical models, to meteorological downscaling techniques, and to using no model at all. We show that the complexities of some onshore farms result in wind conditions that are not accurately modeled by the Jensen wake decay techniques and that statistical methods have some strong advantages in practice.
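For reference, the Jensen (top-hat) wake model compared above reduces to a simple velocity-deficit formula; the parameter values in this sketch are illustrative only.

```python
# Jensen top-hat wake model: velocity deficit behind a turbine as a function of
# downstream distance. Parameter values are placeholders.
import numpy as np

u0 = 8.0       # freestream wind speed, m/s
ct = 0.8       # thrust coefficient
r0 = 60.0      # rotor radius, m
k = 0.075      # wake decay constant (a commonly used onshore value)

x = np.linspace(2 * r0, 20 * r0, 5)                       # downstream distance, m
deficit = (1.0 - np.sqrt(1.0 - ct)) * (r0 / (r0 + k * x)) ** 2
u_wake = u0 * (1.0 - deficit)
print(np.round(u_wake, 2))
```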
In this study, we have performed direct molecular dynamics (MD) simulations of heteroepitaxial vapor deposition of InxGa1-xN films on nonpolar wurtzite-GaN surfaces to investigate strain relaxation by misfit-dislocation formation. The simulated growth is conducted on an atypically large scale by sequentially injecting nearly a million individual vapor-phase atoms towards a fixed GaN substrate. We apply time-and-position-dependent boundary constraints to affect the appropriate environments for the vapor phase, the near-surface solid phase, and the bulklike regions of the growing layer. The simulations employ a newly optimized Stillinger-Weber In-Ga-N system interatomic potential wherein multiple binary and ternary structures are included in the underlying density-functional theory and experimental training sets to improve the treatment of the In-Ga-N related interactions. To examine the effect of growth conditions, we study a matrix of 63 different MD-growth simulations spanning seven InxGa1-xN-alloy compositions ranging from x = 0.0 to x = 0.8 and nine growth temperatures above half the simulated melt temperature. We found a composition dependent temperature range where all kinetically trapped defects were eliminated, leaving only quasiequilibrium misfit and threading dislocations present in the simulated films. Based on the MD results obtained in this temperature range, we observe the formation of interfacial misfit and threading dislocation arrays with morphologies strikingly close to those seen in experiments. In addition, we compare the MD-observed thickness-dependent onset of misfit-dislocation formation to continuum-elasticity-theory models of the critical thickness and find reasonably good agreement. Lastly, we use the three-dimensional atomistic details uniquely available in the MD-growth histories to directly observe the nucleation of dislocations at surface pits in the evolving free surface.
Silicon is a promising material candidate for qubits due to the combination of worldwide infrastructure in silicon microelectronics fabrication and the capability to drastically reduce decohering noise channels via chemical purification and isotopic enhancement. However, a variety of challenges in fabrication, control, and measurement leaves unclear the best strategy for fully realizing this material’s future potential. In this article, we survey three basic qubit types: those based on substitutional donors, on metal-oxide-semiconductor (MOS) structures, and on Si/SiGe heterostructures. We also discuss the multiple schema used to define and control Si qubits, which may exploit the manipulation and detection of a single electron charge, the state of a single electron spin, or the collective states of multiple spins. Far from being comprehensive, this article provides a brief orientation to the rapidly evolving field of silicon qubit technology and is intended as an approachable entry point for a researcher new to this field.
The development of multi-dimensional statistical methods has been demonstrated on variable contact time (VCT) 29Si{1H} cross-polarization magic angle spinning (CP/MAS) data sets collected using Carr-Purcell-Meiboom-Gill (CPMG) type acquisition. These methods utilize the transformation of the collected 2D VCT data set into a 3D data set and use tensor-rank decomposition to extract the spectral components that vary as a function of transverse relaxation time (T2) and CP contact time. The result is a data-dense spectral set that can be used to reconstruct CP/MAS spectra at any contact time with a high signal-to-noise ratio and with excellent agreement with 29Si{1H} CP/MAS spectra collected using conventional acquisition. These CPMG data can be collected in a fraction of the time that would be required to collect a conventional VCT data set. We demonstrate the method on samples of functionalized mesoporous silica materials and show that it can provide valuable surface-specific information about their functional chemistry.
The hydrogen absorption properties of metal closo-borate/metal hydride composites, M2B10H10-8MH and M2B12H12-10MH, M = Li or Na, are studied under high hydrogen pressures to understand the formation mechanism of metal borohydrides. The hydrogen storage properties of the composites have been investigated by in situ synchrotron radiation powder X-ray diffraction at p(H2) = 400 bar and by ex situ hydrogen absorption measurements at p(H2) = 526 to 998 bar. The in situ experiments reveal the formation of crystalline intermediates before metal borohydrides (MBH4) are formed. On the contrary, the M2B12H12-10MH (M = Li and Na) systems show no formation of the metal borohydride at T = 400 °C and p(H2) = 537 to 970 bar. 11B MAS NMR of the M2B10H10-8MH composites reveals that the molar ratio of LiBH4 or NaBH4 to the remaining B species is 1:0.63 and 1:0.21, respectively. Solution and solid-state 11B NMR spectra reveal new intermediates with a B:H ratio close to 1:1. Our results indicate that the M2B10H10 (M = Li, Na) salts display a higher reactivity towards hydrogen in the presence of metal hydrides compared to the corresponding [B12H12]2- composites, which represents an important step towards understanding the factors that determine the stability and reversibility of high hydrogen capacity metal borohydrides for hydrogen storage.
At a low depth of discharge, the performance of rechargeable alkaline Zn/MnO2 batteries is determined by the concomitant processes of hydrogen ion insertion and electro-reduction in the solid phase of γ-MnO2. Ab initio computational methods based on density functional theory (DFT) were applied to study the mechanism of hydrogen ion insertion into the pyrolusite (β), ramsdellite (R), and nsutite (γ) MnO2 polymorphs. It was found that hydrogen ion insertion induced significant distortion in the crystal structures of MnO2 polymorphs. Calculations demonstrated that the hydrogen ions inserted into γ-MnO2 initially occupied the larger 2×1 ramsdellite tunnels. The protonated form of γ-MnO2 was found to be stable over the discharge range during which up to two hydrogen ions were inserted into each 2×1 tunnel. At the same time, the study showed that the insertion of hydrogen ions into the 1×1 pyrolusite tunnels of γ-MnO2 created instability leading to the structural breakdown of γ-MnO2. The results of this study explain the presence of groutite (α-MnOOH) and the absence of manganite (γ-MnOOH) among the reaction products of partially reduced γ-MnO2.
Alexander, Christopher L.; Liu, Chao; Kelly, Robert G.; Carpenter, Jacob; Bryan, Charles
In this work, a rotating disk electrode was used to measure the cathodic kinetics on stainless steel as a function of diffusion layer thickness (6 to 60 μm) and chloride concentration (0.6 to 5.3 M NaCl). It was found that, while the cathodic kinetics followed the Levich equation for large diffusion layer thicknesses, the Levich equation overpredicts the mass-transfer limited current density for diffusion layer thicknesses less than 20 μm. Also, an unusual transitory response between the activation and mass-transfer controlled regions was observed for small diffusion layer thicknesses that was more apparent in lower concentration solutions. The presence and reduction of an oxide film and a transition in the oxygen reduction mechanism were identified as possible reasons for this response. The implications of these results for atmospheric corrosion kinetics under thin electrolyte layers are discussed.
The surface area dependence of the decomposition reaction between lithiated graphites and electrolytes at temperatures above 100 °C up to ~200 °C is explored through comparison of model predictions to published calorimetry data. The initial rate of the reaction is found to scale super-linearly with the particle surface area. Initial reaction rates are suggested to scale with edge area, which has also been measured to scale super-linearly with particle area. As in previous modeling studies, this work assumes that electron tunneling through the solid electrolyte interphase (SEI) limits the rate of the reaction between lithium and electrolyte. Comparison of model predictions to calorimetry data indicates that the development of the tunneling barrier is not linear with BET surface area; rather, the tunneling barrier correlates best with the square root of specific surface area. This result suggests that tunneling through the SEI may be controlled by defects with linear characteristics. The effect of activation energy on the tunneling-limited reaction is also investigated. The modified area dependence results in a model that predicts with reasonable accuracy the range of observed heat-release rates in the important temperature range from 100 °C to 200 °C where the transition to thermal runaway typically occurs at the cell level.
The plan is based on various implementation plans and strategies that define ongoing activities. If significant updates, such as changes or additions to goals and messaging occur, Sandia will submit an updated plan to the Contracting Officer for approval.
The photovoltaics industry is in dire need of a cheap, robust, reliable arc fault detector that is sensitive enough to detect arc faults before they can develop into a fire while remaining robust enough to noise to limit unwanted tripping. Management Sciences has developed an arc fault detector that is housed in a standard PV connector and will disconnect the PV array when it detects the surge current from an arc fault. Sandia National Labs, an industry leader in the detection, characterization, and mitigation of arc faults in PV arrays, will work with Management Sciences to characterize, demonstrate, and develop their arc fault detection/connector technology.
This Sandia National Laboratories, New Mexico Environmental Restoration Operations (ER) Consolidated Quarterly Report (ER Quarterly Report) fulfills all quarterly reporting requirements set forth in the Compliance Order on Consent. The 12 sites in the corrective action process are listed in Table I-1. This ER Quarterly Report presents activities and data.
Bacterial pathogens have numerous processes by which their genomic DNA is acquired or rearranged as part of their normal physiology (e.g., exchange of plasmids through conjugation) or by bacteriophage that parasitize bacteria and often insert into the bacterial genome as prophages. These processes occur with relatively high probability/frequency and may lead to sudden changes in virulence, as new genetic material is added to the chromosome or structural changes in the chromosome affect gene expression. We set out to devise methods to measure the rates of these processes in bacteria using next-generation DNA sequencing. Using very deep sequencing of genomes we had assembled, together with library preparation methods and bioinformatics tools designed to find mobile elements and signs of their insertion, we found numerous examples of attempted novel genome arrangements, yielding data that can be used to calculate rates of different mechanisms of genome change.
Trichloroethene (TCE) and nitrate have been identified as constituents of concern in groundwater at the Sandia National Laboratories, New Mexico (SNL/NM) Technical Area (TA)-V Groundwater (TAVG) Area of Concern (AOC) based on detections above the U.S. Environmental Protection Agency (EPA) maximum contaminant level (MCL) in samples collected from monitoring wells. The EPA MCLs and the State of New Mexico drinking water standards for TCE and nitrate are 5 micrograms per liter and 10 milligrams per liter (as nitrogen), respectively. A phased Treatability Study/Interim Measure (TS/IM) of in-situ bioremediation (ISB) will be implemented to evaluate the effectiveness of ISB as a potential technology to treat the groundwater contamination at TAVG AOC (New Mexico Environment Department [NMED] April 2016). The NMED Hazardous Waste Bureau (HWB) approved the Revised Treatability Study Work Plan (TSWP) (SNL/NM March 2016) in May 2016 (NMED May 2016). The SNL/NM Environmental Restoration Operations (ER) personnel are responsible for implementing the TS/IM of ISB at TAVG AOC in accordance with the Revised TSWP.
Sandia National Laboratories is a multimission laboratory managed by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration. Sandia has a long history of pioneering AM technology development. In the mid-1990s, Sandia developed laser-engineered net shaping (LENS), one of the first direct-metal AM technologies, which was commercialized by Optomec. Robocast, an extrusion-based direct-write process for 3D ceramic parts, is another technology developed at Sandia; it was commercialized by Robocast Enterprises, LLC. As of January 2019, Sandia was conducting AM R&D projects valued at about $25 million, with an emphasis on: 1) analysis-driven design, 2) materials reliability, and 3) multi-material AM.
This document outlines the test plans for the Hill AFB Mk 84 aging studies. The goal of the test series is to measure early case expansion velocities, sample the fragment field at various locations, and measure the overall shockwave and large fragment trajectories. This will be accomplished with three imaging systems, as outlined in the sections below.
The US currently has about 80,000 metric tons of uranium in 279,000 used fuel assemblies from commercial spent nuclear fuel (SNF), most of which is stored “on-site” at or near power plants where it was produced. On-site storage facilities were not designed to provide a permanent solution for the disposal of spent nuclear fuel, and building a permanent disposal facility will likely take decades. Therefore, it is important to consider how the US public views options for constructing one or more storage facilities for safely consolidating and storing SNF in the interim. The 2017 iteration of the Energy and Environment survey (EE17) by the Center for Energy, Security, & Society (CES&S) included a battery of questions that measure public views about SNF storage and disposal options. The questions gauge general support for continued on-site storage, interim storage, and permanent disposal. EE17 also measured support for several of the specific sites under consideration, including the two private initiatives for interim storage of SNF in New Mexico and Texas. In addition, EE17 respondents provide insight into the factors likely to affect broader public support for these initiatives as the siting process unfolds, including public views about the importance of support for a prospective facility by host communities and host state residents.
Thermochemical energy storage (TCES) offers the potential for greatly increased storage density relative to sensible-only energy storage. Moreover, with TCES heat may be stored indefinitely in the form of chemical bonds, accessed upon demand, and delivered at temperatures significantly higher than those of current solar thermal electricity production technology, making it well suited to more efficient high-temperature power cycles. However, this potential has yet to be realized because no current TCES system satisfies all requirements. This project involves the design, development, and demonstration of a robust and innovative storage cycle based on redox-active metal oxides that are mixed ionic-electronic conductors (MIECs). We will develop, characterize, and demonstrate a first-of-its-kind 100 kWth particle-based TCES system for direct integration with a combined-cycle Air Brayton system, based on the endothermic reduction and exothermic reoxidation of MIECs. Air Brayton cycles require temperatures in the range of 1000-1230 °C for smaller axial-flow turbines and are therefore inaccessible to all but the most robust storage solutions, such as metal oxides. The choice of MIECs, with exceptional tunability and stability over the specified operating conditions, allows us to optimally target this high-impact cycle and to introduce the innovation of directly driving the turbine with the reacting/heat-recovery fluid. The potential for high-temperature thermal storage has direct bearing on next-generation CSP and makes this an appropriate investment for SETO.
The collapse or merging of individual plumes of direct-injection gasoline injectors is of fundamental importance to engine performance because of its impact on fuel-air mixing. However, the mechanisms of spray collapse are not fully understood. The purpose of this work is to study the effects of injection duration and multiple injections on the interaction and/or collapse of multiplume gasoline direct injection sprays. High-speed (100 kHz) particle image velocimetry is applied along a plane between plumes to observe the full temporal evolution of plume interaction and potential collapse, resolved for individual injection events. Supporting information along a line of sight is obtained using diffused back illumination. Experiments are performed under simulated engine conditions using a symmetric 8-hole injector in a high-temperature, high-pressure vessel at the "Spray G" operating conditions of the Engine Combustion Network. Longer injection duration is found to promote plume collapse, while staging fuel delivery with multiple, shorter injections is resistant to plume collapse.
Our goal is to develop a general theoretical basis for quantifying uncertainty in supervised machine learning models. Current machine learning accuracy-based validation metrics indicate how well a classifier performs on a given data set as a whole. However, these metrics do not tell us a model's efficacy in predicting particular samples. We quantify uncertainty by constructing probability distributions of the predictions made by an ensemble of classifiers. This report details our initial investigations into uncertainty quantification for supervised machine learning. We apply an uncertainty analysis to the problem of malicious website detection. Machine learning models can be trained to find suspicious characteristics in the text of a website's Uniform Resource Locator (URL). However, given the vast numbers of URLs and the ever-changing tactics of malicious actors, it will always be possible to find sets of websites that are outliers with respect to a model's hypothesis. Therefore, we seek to understand a model's per-sample reliability when classifying URL data. This work was funded by the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program.
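A minimal sketch of the ensemble-based uncertainty idea, with random features standing in for URL-derived features and scikit-learn classifiers standing in for the models actually used:

```python
# Per-sample prediction distributions from an ensemble of classifiers trained on
# bootstrap resamples. Features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                       # placeholder URL features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

ensemble = []
for seed in range(30):
    Xb, yb = resample(X, y, random_state=seed)        # bootstrap resample
    ensemble.append(LogisticRegression(max_iter=1000).fit(Xb, yb))

# Each column is one sample's distribution of predicted malicious-probability.
probs = np.stack([m.predict_proba(X)[:, 1] for m in ensemble])
per_sample_mean = probs.mean(axis=0)
per_sample_std = probs.std(axis=0)     # wide spread flags low per-sample reliability
```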
Faegh, Ehsan; Omasta, Travis; Hull, Matthew; Ferrin, Sean; Shrestha, Sujan; Lechman, Jeremy B.; Bolintineanu, Dan S.; Zuraw, Michael; Mustain, William E.
The leading cause of safety vent rupture in alkaline batteries is the intrinsic instability of Zn in the highly alkaline reacting environment. Zn and aqueous KOH react in a parasitic process to generate hydrogen gas, which can rupture the seal and vent the hydrogen along with small amounts of electrolyte, and thus damage consumer devices. Abusive conditions, particularly deep discharge, are known to accelerate this "gassing" phenomenon. In order to understand the fundamental drivers and mechanisms of such gassing behavior, results from multiphysics modeling, ex situ microscopy, and operando measurements of cell potential, pressure, and visualization have been combined. Operando measurements were enabled by the development of a new research platform that provides a cross-sectional view of a cylindrical Zn-MnO2 primary alkaline battery throughout its discharge and recovery. A second version of this cell can actively measure the in-cell pressure during discharge. It is shown that steep concentration gradients emerge during cell discharge through a redox electrolyte mechanism, leading to the formation of high-surface-area Zn deposits that experience rapid corrosion when the cell rests at its open circuit voltage. Such corrosion is paired with the release of hydrogen and high cell pressure, eventually leading to cell rupture.
Molten salt electrolytes show promise as safe, effective elements of emerging low to intermediate temperature molten sodium batteries. Here we investigate the NaI-AlCl3 molten salt system for its electrochemical and physical properties at 150 and 180°C, temperatures recently used to demonstrate a new NaI battery using this molten salt system. Molten salt compositions ranging from 20–75% NaI were prepared and electrochemically interrogated with carbon fiber ultramicroelectrodes utilizing cyclic voltammetry, chronoamperometry, and differential pulse voltammetry. Results indicate that at very high or very low NaI concentrations, secondary phases present hinder diffusion of the redox-active species, potentially impacting the current density of the system. Furthermore, a concentration-independent chronoamperometric analysis technique was leveraged to determine effective diffusion coefficients of active I− in the melt phase. Collectively, the physical characterization and electrochemical properties of the tested salts indicate that the catholyte composition can significantly affect the physical state, current density, ionic diffusion, and voltage window of these promising NaI-AlCl3 molten salt battery catholytes.
The initial oxidation products of methyl butyrate (MB) and ethyl butyrate (EB) are studied using a time- and energy-resolved photoionization mass spectrometer. Reactions are initiated with Cl radicals in an excess of oxygen at a temperature of 550 K and a pressure of 6 Torr. Ethyl crotonate is the sole isomeric product that is observed from concerted HO2-elimination from initial alkylperoxy radicals formed in the oxidation of EB. Analysis of the potential energy surface of each possible alkylperoxy radical shows that the CH3CH(OO)CH2C(O)OCH2CH3 (RγO2) and CH3CH2CH(OO)C(O)OCH2CH3 (RβO2) radicals are the isomers that could undergo this concerted HO2-elimination. Two lower-mass products (formaldehyde and acetaldehyde) are observed in both methyl and ethyl butyrate reactions. Secondary reactions of alkylperoxy radicals with HO2 radicals can decompose into the aforementioned products and smaller radicals. These pathways are the likely explanation for the formation of formaldehyde and acetaldehyde.
The Sandia Cable Tester is an automated cable testing solution capable of quickly testing connectivity and isolation of arbitrary cables. This manual describes how to operate the unit including system setup and how to read out results. This manual also goes into detail on the design and theory behind the system.
ASME 2018 12th International Conference on Energy Sustainability, ES 2018, collocated with the ASME 2018 Power Conference and the ASME 2018 Nuclear Forum
Prior research at Sandia National Laboratories showed the potential advantages of light-trapping features that are not currently used in direct tubular receivers. A horizontal bladed receiver arrangement showed the best potential for increasing the effective solar absorptance by increasing the ratio of effective surface area to aperture footprint. Previous test results and models of the bladed receiver showed a receiver efficiency increase of ~5-7% over a flat receiver panel across a range of average irradiances, while demonstrating that the receiver tubes can withstand temperatures above 800 °C without issue. The bladed receiver is being tested at peak heat fluxes ranging from 75 to 150 kW/m2 under transient conditions, using air as the heat transfer fluid at an inlet pressure of ~250 kPa (~36 psi) supplied through a regulating flow loop. The flow loop was designed and tested to maintain a steady mass flow rate for ~15 minutes using pressurized bottles as the gas supply. Because of the limited flow time available, a novel transient methodology for evaluating the thermal efficiencies is presented in this work. Computational fluid dynamics (CFD) models are used to predict the temperature distribution and the resulting transient receiver efficiencies. The CFD simulation results using air as the heat transfer fluid have been validated experimentally at the National Solar Thermal Test Facility at Sandia National Laboratories.
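The paper's transient efficiency methodology is more involved than a simple energy balance; as a minimal sketch of the underlying idea, the instantaneous thermal efficiency can be estimated from measured mass flow, inlet/outlet temperatures, and the incident flux on the aperture. All numbers below are hypothetical placeholders.

```python
import numpy as np

# Hypothetical transient test data for an air-cooled receiver.
t = np.linspace(0, 900, 901)                         # s (~15 min of regulated flow)
m_dot = np.full_like(t, 0.3)                         # kg/s, regulated mass flow (assumed)
T_in = np.full_like(t, 300.0)                        # K
T_out = 300.0 + 250.0 * (1 - np.exp(-t / 120.0))     # K, outlet warming toward steady state
cp_air = 1005.0                                      # J/(kg K), near-ambient value
q_avg = 100e3                                        # W/m^2, average irradiance on the aperture
A_aperture = 1.0                                     # m^2, assumed aperture area

# Instantaneous thermal efficiency: enthalpy rise of the air / incident solar power.
eta = m_dot * cp_air * (T_out - T_in) / (q_avg * A_aperture)
print(f"Quasi-steady efficiency near the end of the test: {eta[-1]:.2f}")
```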
Architecture simulation can aid in predicting and understanding application performance, particularly for proposed hardware or large system designs that do not yet exist. In network design studies for high-performance computing, most simulators focus on the dominant Message Passing Interface (MPI) model. Currently, many simulators build and maintain their own simulator-specific implementations of MPI. This approach has several drawbacks. Rather than reusing an existing MPI library, simulator developers must implement all semantics, collectives, and protocols. Additionally, alternative runtimes like GASNet cannot be simulated without again building a simulator-specific version. It would be far more sustainable and flexible to maintain lower-level layers like uGNI or IB-verbs and reuse the production runtime code. Directly building and running production communication runtimes inside a simulator poses technical challenges, however. We discuss these challenges and show how they are overcome via the macroscale components of the Structural Simulation Toolkit (SST), leveraging a basic source-to-source tool to automatically adapt production code for simulation. SST is able to encapsulate and virtualize thousands of MPI ranks in a single simulator process, providing a "supercomputer in a laptop" environment. We demonstrate the approach for the production GASNet runtime over uGNI running inside SST. We then discuss the capabilities enabled, including investigating performance with tunable delays, deterministic debugging of race conditions, and distributed debugging with serial debuggers.
Directional drilling can be used to enable multi-lateral completions from a single well pad, improving well productivity and decreasing environmental impact. Downhole rotation is typically developed with a motor in the Bottom Hole Assembly (BHA) that provides the drilling power necessary to rotate the bit independently of the rotation developed by the surface rig. Historically, wellbore deviation has been introduced by a "bent sub" that imposes a small angular deviation, allowing the bit to drill off-axis with the orientation of the BHA controlled via surface rotation. The geothermal drilling industry has not realized the benefit of Rotary Steerable Systems and struggles with conventional downhole rotation systems that use bent subs for directional control because of shortcomings with downhole motors. Commercially available Positive Displacement Motors are limited to approximately 350 °F (177 °C) and introduce lateral vibration to the bottom hole assembly, contributing to hardware failures and compromising directional drilling objectives. Mud turbines operate at higher temperatures but do not have the low-speed, high-torque performance envelope needed for conventional geothermal drill bits. Development of a fit-for-purpose downhole motor would enable geothermal directional drilling. Sandia National Laboratories is developing technology for a downhole piston motor to enable directional drilling in high-temperature, high-strength rock. The application of conventional hydraulic piston motor power cycles using drilling fluids is detailed. Work is described on the conception of downhole piston motor power sections, the modeling and analysis of potential solutions, and the development and laboratory testing of prototype hardware. These developments will lead to more reliable access to geothermal resources and allow preferential wellbore trajectories, resulting in improved resource recovery, decreased environmental impact, and enhanced well construction economics.
This study evaluates the applicability of the Octane Index (OI) framework under conventional spark ignition (SI) and "beyond Research Octane Number (RON)" conditions using nine fuels operated under stoichiometric, knock-limited conditions in a direct injection spark ignition (DISI) engine, supported by Monte Carlo-type simulations that interrogate the effects of measurement uncertainty. Of the nine tested fuels, three are "Tier III" fuel blends, meaning that they are blends of molecules that have passed two levels of screening and have been evaluated as ready for tests in research engines. These molecules were blended into a four-component gasoline surrogate at varying volume fractions in order to achieve a RON rating of 98. The molecules under consideration are isobutanol, 2-butanol, and diisobutylene (a mixture of two isomers of octene). The remaining six fuels were research-grade gasolines of varying formulations. The DISI research engine was used to measure knock limits at heated and unheated intake temperature conditions, as well as throttled and boosted intake pressures, all at an engine speed of 1400 rpm. The tested knock-limited operating conditions conceptually lie both between the Motor Octane Number (MON) and RON conditions and "beyond RON" (conditions conceptually at lower temperatures, higher pressures, or longer residence times than the RON condition). In addition to directly assessing the performance of the Tier III blends relative to the other gasolines, the OI framework was evaluated with consideration of the experimental uncertainty in the knock-limited combustion phasing (KL-CA50) measurements, as well as RON and MON test uncertainties. The OI was found to hold to first order, explaining more than 80% of the knock-limited behavior, although the remaining variation in fuel performance from OI behavior was found to be beyond the likely experimental uncertainties. This indicates that specific fuel components have effects on knock that are not captured by RON and MON ratings, complicating the assessment of a given fuel by RON and MON alone.
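The OI framework is commonly written OI = RON - K·S with sensitivity S = RON - MON. As a hedged illustration of the Monte Carlo-type uncertainty interrogation described above, the sketch below perturbs RON/MON with an assumed measurement uncertainty and checks how much of the knock-limited phasing a linear OI fit explains. The fuel data, K value, and uncertainty levels are synthetic placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fuel set: (RON, MON) pairs and "measured" knock-limited CA50 (deg aTDC).
RON = np.array([98.0, 98.2, 97.8, 100.1, 95.5, 99.0, 96.8, 98.5, 97.2])
MON = np.array([88.0, 89.5, 87.0, 90.2, 85.1, 88.8, 86.5, 89.0, 87.5])
K = -0.5                                   # assumed K for a "beyond-RON" condition
OI_true = RON - K * (RON - MON)            # Octane Index
KL_CA50 = 2.0 + 0.8 * (OI_true - OI_true.mean()) + rng.normal(0, 0.3, RON.size)

# Monte Carlo over RON/MON measurement uncertainty (assumed 1-sigma of 0.3 ON).
r2 = []
for _ in range(2000):
    ron = RON + rng.normal(0, 0.3, RON.size)
    mon = MON + rng.normal(0, 0.3, MON.size)
    oi = ron - K * (ron - mon)
    slope, intercept = np.polyfit(oi, KL_CA50, 1)
    resid = KL_CA50 - (slope * oi + intercept)
    r2.append(1 - resid.var() / KL_CA50.var())
print(f"Median R^2 of the OI fit under measurement noise: {np.median(r2):.2f}")
```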
Methyl vinyl ketone (MVK) and methacrolein (MACR) are important intermediate products in the atmospheric degradation of volatile organic compounds, especially of isoprene. This work investigates the reactions of the smallest Criegee intermediate, CH2OO, with its co-products from isoprene ozonolysis, MVK and MACR, using multiplexed photoionization mass spectrometry (MPIMS), with either tunable synchrotron radiation from the Advanced Light Source or Lyman-α (10.2 eV) radiation for photoionization. CH2OO was produced via pulsed laser photolysis of CH2I2 in the presence of excess O2. Time-resolved measurements of reactant disappearance and of product formation were performed to monitor reaction progress; first-order rate coefficients were obtained from exponential fits to the CH2OO decays. The bimolecular reaction rate coefficients at 300 K and 4 Torr are k(CH2OO + MVK) = (5.0 ± 0.4) × 10⁻¹³ cm³ s⁻¹ and k(CH2OO + MACR) = (4.4 ± 1.0) × 10⁻¹³ cm³ s⁻¹, where the stated ±2σ uncertainties are statistical. Adduct formation is observed for both reactions and is attributed to the formation of secondary ozonides (1,2,4-trioxolanes), supported by master equation calculations of the kinetics and by the agreement between measured and calculated adiabatic ionization energies. Kinetics measurements were also performed for a possible bimolecular CH2OO + CO reaction and for the reaction of CH2OO with CF3CHCH2 at 300 K and 4 Torr. For CH2OO + CO, no reaction is observed and an upper limit is determined: k(CH2OO + CO) < 2 × 10⁻¹⁶ cm³ s⁻¹. For CH2OO + CF3CHCH2, an upper limit of k(CH2OO + CF3CHCH2) < 2 × 10⁻¹⁴ cm³ s⁻¹ is obtained.
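As a hedged sketch of the pseudo-first-order analysis described (exponential fits to the CH2OO decays, then a bimolecular rate coefficient from the dependence of the observed decay rate on co-reactant concentration), the toy example below fits synthetic decays; the signal levels, background loss rate, and concentrations are assumptions, not the experimental values.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def decay(t, s0, k_obs):
    """Pseudo-first-order decay of the CH2OO signal."""
    return s0 * np.exp(-k_obs * t)

k_bi_true = 5.0e-13                                 # cm^3 s^-1, assumed for the synthetic data
conc_MVK = np.array([0.5, 1.0, 2.0, 4.0]) * 1e14    # molecules cm^-3 (hypothetical)
t = np.linspace(0, 0.02, 200)                       # s

k_obs = []
for c in conc_MVK:
    signal = decay(t, 1.0, k_bi_true * c + 50.0) + rng.normal(0, 0.01, t.size)
    popt, _ = curve_fit(decay, t, signal, p0=[1.0, 200.0])
    k_obs.append(popt[1])

# Bimolecular rate coefficient: slope of k_obs versus [MVK]; the intercept is the background loss.
k_bi, k0 = np.polyfit(conc_MVK, k_obs, 1)
print(f"k(CH2OO + MVK) ~ {k_bi:.2e} cm^3 s^-1")
```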
We use machine learning (ML) to infer stress and plastic flow rules from data generated by representative polycrystalline simulations. In particular, we use deep (multilayer) neural networks (NNs) to represent the two response functions. The ML process does not itself choose appropriate inputs or outputs; rather, it is trained on selected inputs and outputs, and its discrimination of features is crucially connected to the chosen input-output map. Hence, we draw upon classical constitutive modeling to select inputs and to enforce well-accepted symmetries and other properties. In the context of results from numerous simulations, we discuss the design, stability, and accuracy of constitutive NNs trained on typical experimental data. These developments enable rapid model building in real time alongside experiments and help guide data collection and feature discovery.
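One way the classical-modeling guidance mentioned above is often realized is by feeding the network invariants of the strain tensor rather than raw components, so frame invariance is built in by construction. The toy forward pass below illustrates only that idea; the layer sizes, weights, and scalar output are placeholders, not the trained networks of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def strain_invariants(eps):
    """Isotropic invariants of a symmetric 3x3 strain tensor."""
    I1 = np.trace(eps)
    I2 = 0.5 * (np.trace(eps)**2 - np.trace(eps @ eps))
    I3 = np.linalg.det(eps)
    return np.array([I1, I2, I3])

def mlp(x, weights, biases):
    """Tiny fully connected network with tanh hidden layers."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)
    return weights[-1] @ x + biases[-1]

# Toy network mapping three invariants to an effective (scalar) flow stress.
sizes = [3, 16, 16, 1]
weights = [rng.normal(0, 0.3, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

eps = np.array([[0.010, 0.002, 0.000],
                [0.002, -0.003, 0.000],
                [0.000, 0.000, -0.004]])
print("Predicted flow stress (arbitrary units):", mlp(strain_invariants(eps), weights, biases))
```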
Inter-area oscillation is one of the main concerns in power system small-signal stability. Because such oscillations involve wide areas of the power system, identifying their causes and damping them is challenging. Undamped inter-area oscillations may cause severe problems in power systems, including large-scale blackouts. Designing a proper controller for power systems is also challenging due to the complexity of the system. Moreover, for a large-scale system it is impractical to collect all system information in one location to design a centralized controller. A decentralized controller is more desirable for large-scale systems, minimizing inter-area oscillations by using local information. In this paper, we consider a large-scale power system consisting of three areas. After decomposing the system into three subsystems, each subsystem is modeled with a lower-order system. Finally, a decentralized controller is designed for each subsystem to maintain the large-scale system frequency at the desired level even in the presence of disturbances.
Three-dimensional (3D) seismic wave propagation is simulated in the newly developed Marmousi2 elastic model using both acoustic and elastic finite-difference (FD) algorithms. Although acoustic and elastic ocean-bottom particle velocity seismograms display distinct differences, only subtle variations are discernible in pressure seismograms recorded in the marine water layer.
A monumental shift from conventional lighting technologies (incandescent, fluorescent, high intensity discharge) to LED lighting is currently transpiring. The primary driver for this shift has been energy and associated cost savings. LED lighting is now more efficacious than any of the conventional lighting technologies, with room still to improve. Near term, phosphor-converted LED packages have the potential for efficacy improvement from 160 lm/W to 255 lm/W. Longer term, color-mixed LED packages have the potential for efficacy levels conceivably as high as 330 lm/W, though reaching these performance levels requires breakthroughs in green and amber LED efficiency. LED package efficacy sets the upper limit on luminaire efficacy, with the luminaire containing its own efficacy loss channels. In this paper, based on analyses performed through the U.S. Department of Energy Solid State Lighting Program, various LED and luminaire loss channels are elucidated and critical areas for improvement are identified. Beyond massive energy savings, LED technology enables a host of new applications and added value not possible or economical with previous lighting technologies. These include connected lighting, lighting tailored for human physiological responses, horticultural lighting, and ecologically conscious lighting. None of these new applications would be viable without the high efficacies that have been achieved, and they are themselves just the beginning of what LED lighting can do.
Using molecular dynamics simulation, we studied density fluctuations and cavity formation probabilities in aqueous solutions and their effect on the hydration of CO2. With increasing salt concentration, we report an increased probability of observing a larger-than-average number of species in the probe volume. Our energetic analyses indicate that the van der Waals and electrostatic interactions between CO2 and the aqueous solution become more favorable with increasing salt concentration, favoring the solubility of CO2 (salting in). However, because fewer cavities form as the salt concentration increases, the solubility of CO2 decreases. The formation of cavities was found to be the primary control on the dissolution of gas and is responsible for the observed CO2 salting-out effect. Our results provide a fundamental understanding of density fluctuations in aqueous solutions and of the molecular origin of the salting-out effect for real gases.
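A schematic of the kind of cavity statistics referred to above: the probability p0 that a probe sphere inserted at random contains no solvent heavy-atom centers yields a cavity-formation free energy of -kT ln p0. In the sketch below, random coordinates stand in for MD frames, and the probe radius and box size are assumptions rather than values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 0.593                      # kcal/mol at ~298 K

# Stand-in for MD frames: random heavy-atom positions in a cubic box (nm).
L_box, n_atoms, n_frames = 3.0, 900, 50
frames = rng.uniform(0, L_box, size=(n_frames, n_atoms, 3))

r_probe = 0.33                  # nm, probe-sphere radius (assumed)
n_insertions = 2000
empty = 0
for frame in frames:
    centers = rng.uniform(0, L_box, size=(n_insertions, 3))
    for c in centers:
        d = frame - c
        d -= L_box * np.round(d / L_box)                    # minimum-image convention
        if np.all(np.einsum('ij,ij->i', d, d) > r_probe**2):
            empty += 1

p0 = empty / (n_frames * n_insertions)
print(f"p0 = {p0:.3e}, cavity formation free energy ~ {-kT * np.log(p0):.2f} kcal/mol")
```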
Glick, Joseph A.; Edwards, Samuel; Korucu, Demet; Aguilar, Victor; Niedzielski, Bethany M.; Loloee, Reza; Pratt, W.P.; Birge, Norman O.; Kotula, Paul G.; Missert, Nancy A.
We present measurements of Josephson junctions containing three magnetic layers with noncollinear magnetizations. The junctions are of the form S/F′/N/F/N/F″/S, where S is superconducting Nb, F′ is either a thin Ni or Permalloy layer with in-plane magnetization, N is the normal metal Cu, F is a synthetic antiferromagnet with magnetization perpendicular to the plane, composed of Pd/Co multilayers on either side of a thin Ru spacer, and F″ is a thin Ni layer with in-plane magnetization. The supercurrent in these junctions decays more slowly as a function of the F-layer thickness than for similar spin-singlet junctions not containing the F′ and F″ layers. The slower decay is the prime signature that the supercurrent in the central part of these junctions is carried by spin-triplet pairs. The junctions containing F′= Permalloy are suitable for future experiments where either the amplitude of the critical current or the ground-state phase difference across the junction is controlled by changing the relative orientations of the magnetizations of the F′ and F″ layers.
High-fidelity detection of iodine species is of utmost importance to the safety of the population in cases of nuclear accidents or advanced nuclear fuel reprocessing. Herein, we describe the successful use of impedance spectroscopy to directly detect the real-time adsorption of I2 by a metal-organic framework zeolitic imidazolate framework (ZIF)-8-based sensor. Methanolic suspensions of ZIF-8 were dropcast onto platinum interdigitated electrodes, dried, and exposed to gaseous I2 at 25, 40, or 70 °C. Using an unoptimized sensor geometry, I2 was readily detected at 25 °C in air within 720 s of exposure. The specific response is attributed to the chemical selectivity of ZIF-8 toward I2. Furthermore, equivalent circuit modeling of the impedance data indicates a >10⁵× decrease in ZIF-8 resistance when 116 wt % I2 is adsorbed by ZIF-8 at 70 °C in air. This irreversible decrease in resistance is accompanied by an irreversible loss of long-range crystallinity, as evidenced by X-ray diffraction and infrared spectroscopy. Air, argon, methanol, and water were found to produce minimal changes in ZIF-8 impedance. This report demonstrates how selective I2 adsorption by ZIF-8 can be leveraged to create a highly selective sensor using >10⁵× changes in impedance response to enable the direct electrical detection of environmentally relevant gaseous toxins.
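The equivalent circuit used in the study is not reproduced here; as a generic illustration of equivalent-circuit fitting of impedance data, the sketch below fits the magnitude of a simple parallel resistor-capacitor element to synthetic, noisy measurements. The R and C values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def z_parallel_rc(f, R, C):
    """Impedance magnitude of a resistor R in parallel with a capacitor C."""
    omega = 2 * np.pi * f
    return R / np.sqrt(1 + (omega * R * C)**2)

rng = np.random.default_rng(4)
f = np.logspace(0, 6, 60)                      # Hz
R_true, C_true = 2.0e7, 1.0e-10                # ohm, F (hypothetical sensor values)
z_meas = z_parallel_rc(f, R_true, C_true) * (1 + rng.normal(0, 0.02, f.size))

popt, _ = curve_fit(z_parallel_rc, f, z_meas, p0=[1e7, 1e-10])
print(f"Fitted R = {popt[0]:.2e} ohm, C = {popt[1]:.2e} F")
```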
Containerization, or OS-level virtualization, has taken root within the computing industry. However, container utilization and its impact on performance and functionality within High Performance Computing (HPC) is still relatively undefined. This paper investigates the use of containers with advanced supercomputing and HPC system software. We define a model for parallel MPI application DevOps and deployment using containers to enhance development effort and provide container portability from laptop to clouds or supercomputers. In this endeavor, we extend the use of Singularity containers to a Cray XC-series supercomputer. We use the HPCG and IMB benchmarks to investigate potential points of overhead and scalability with containers on a Cray XC30 testbed system. Furthermore, we also deploy the same containers with Docker on Amazon's Elastic Compute Cloud (EC2) and compare against our Cray supercomputer testbed. Our results indicate that Singularity containers operate at native performance when dynamically linking Cray's MPI libraries on a Cray supercomputer testbed, and that while Amazon EC2 may be useful for initial DevOps and testing, scaling HPC applications is better suited to supercomputing resources such as a Cray.
Here, the first metal–organic framework exhibiting thermally activated delayed fluorescence (TADF) was developed. The zirconium-based framework (UiO-68-dpa) uses a newly designed linker composed of a terphenyl backbone, an electron-accepting carboxyl group, and an electron-donating diphenylamine and exhibits green TADF emission with a photoluminescence quantum yield of 30% and high thermal stability.
Optimizing thermal generation commitments and dispatch in the presence of high penetrations of renewable resources such as solar energy requires a characterization of their stochastic properties. In this study, we describe novel methods designed to create day-ahead, wide-area probabilistic solar power scenarios based only on historical forecasts and associated observations of solar power production. Each scenario represents a possible trajectory for solar power in next-day operations with an associated probability computed by algorithms that use historical forecast errors. Scenarios are created by segmentation of historic data, fitting non-parametric error distributions using epi-splines, and then computing specific quantiles from these distributions. Additionally, we address the challenge of establishing an upper bound on solar power output. Our specific application driver is for use in stochastic variants of core power systems operations optimization problems, e.g., unit commitment and economic dispatch. These problems require as input a range of possible future realizations of renewables production. However, the utility of such probabilistic scenarios extends to other contexts, e.g., operator and trader situational awareness. Finally, we compare the performance of our approach to a recently proposed method based on quantile regression, and demonstrate that our method performs comparably to this approach in terms of two widely used methods for assessing the quality of probabilistic scenarios: the Energy score and the Variogram score.
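For reference, the Energy score mentioned above compares a weighted scenario ensemble against the realized trajectory as ES = Σᵢ wᵢ‖Xᵢ − y‖ − ½ Σᵢⱼ wᵢwⱼ‖Xᵢ − Xⱼ‖ (lower is better). The sketch below evaluates it on synthetic day-ahead solar scenarios; the scenario count, noise level, and equal weights are assumptions for illustration only.

```python
import numpy as np

def energy_score(scenarios, weights, observed):
    """Energy score for weighted scenarios (rows) against one observed trajectory."""
    term1 = np.sum(weights * np.linalg.norm(scenarios - observed, axis=1))
    diffs = np.linalg.norm(scenarios[:, None, :] - scenarios[None, :, :], axis=2)
    term2 = 0.5 * weights @ diffs @ weights
    return term1 - term2

rng = np.random.default_rng(5)
hours = 24
observed = np.clip(np.sin(np.linspace(0, np.pi, hours)), 0, None) * 100   # MW, synthetic solar day
scenarios = observed + rng.normal(0, 10, size=(20, hours))                # 20 day-ahead scenarios
weights = np.full(20, 1 / 20)                                             # equal scenario probabilities

print(f"Energy score: {energy_score(scenarios, weights, observed):.2f}")
```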
Wind applications require the ability to simulate rotating blades. To support this use case, a novel design-order sliding-mesh algorithm has been developed and deployed. The hybrid method combines the control volume finite element methodology (CVFEM) with concepts found within a discontinuous Galerkin (DG) finite element method (FEM) to manage a sliding mesh. The method has been demonstrated to be design-order for the tested polynomial bases (P=1 and P=2) and has been deployed to provide production simulation capability for a Vestas V27 (225 kW) wind turbine. Other stationary and canonical rotating flow simulations are also presented. As the majority of wind-energy applications drive extensive usage of hybrid meshes, a foundational study that outlines near-wall numerical behavior for a variety of element topologies is presented. Results indicate that the proposed nonlinear stabilization operator (NSO) is an effective stabilization methodology to control Gibbs phenomena at large cell Peclet numbers. The study also provides practical mesh resolution guidelines for future analysis efforts. Application-driven performance and algorithmic improvements have been carried out to increase robustness of the scheme on hybrid production wind energy meshes. Specifically, the Kokkos-based Nalu Kernel construct outlined in the FY17/Q4 ExaWind milestone has been transitioned to the hybrid mesh regime. This code base is exercised within a full V27 production run. Simulation timings for parallel search and custom ghosting are presented. As the low-Mach application space requires implicit matrix solves, the cost of matrix reinitialization has been evaluated on a variety of production meshes. Results indicate that at low element counts, i.e., fewer than 100 million elements, matrix graph initialization and preconditioner setup times are small. However, as mesh sizes increase, e.g., to 500 million elements, simulation time associated with setup costs can increase to nearly 50% of overall simulation time when using the full Tpetra solver stack and nearly 35% when using a mixed Tpetra-Hypre-based solver stack. The report also highlights the project achievement of surpassing the 1 billion element mesh scale for a production V27 hybrid mesh. A detailed timing breakdown is presented that again suggests work to be done in the setup events associated with the linear system. In order to mitigate these initialization costs, several application paths have been explored, all of which are designed to reduce the frequency of matrix reinitialization. Methods such as removing Jacobian entries on the dynamic matrix columns (in concert with increased inner equation iterations) and lagging of Jacobian entries have reduced setup times at the cost of numerical stability. Artificially increasing, or bloating, the matrix stencil to ensure that full Jacobians are included has also been developed, with results suggesting that this methodology is useful in decreasing reinitialization events without loss of matrix contributions. With the above foundational advances in computational capability, the project is well positioned to begin scientific inquiry on a variety of wind-farm physics such as turbine/turbine wake interactions.
A wave energy converter must be designed to survive and function efficiently, often in highly energetic ocean environments. This represents a challenging engineering problem, comprising systematic failure mode analysis, environmental characterization, modeling, experimental testing, and fatigue and extreme response analysis. While there is relatively little experience in wave energy converter design compared with other ocean systems such as ships and offshore platforms, a great deal of recent work has been done within these various areas. This article summarizes the general stages and workflow for wave energy converter design, relying on supporting articles to provide insight. By surveying published work on wave energy converter survival and design response analyses, this paper seeks to provide the reader with an understanding of the different components of this process and the range of methodologies that can be brought to bear. In this way, the reader is provided with a large set of tools to perform design response analyses on wave energy converters.
Inherent advantages of wide bandgap materials make GaN-based devices attractive for power electronics and applications in radiation environments. Recent advances in the availability of wafer-scale, bulk GaN substrates have enabled the production of high-quality, low-defect-density GaN devices, but fundamental studies of carrier transport and radiation hardness in such devices are lacking. Here, we report measurements of the hole diffusion length in low threading dislocation density (TDD), homoepitaxial n-GaN and high-TDD heteroepitaxial n-GaN Schottky diodes before and after irradiation with 2.5 MeV protons at fluences of 4-6 × 10¹³ protons/cm². We also characterize the specimens before and after irradiation using electron-beam-induced-current (EBIC) imaging, cathodoluminescence, deep level optical spectroscopy (DLOS), steady-state photocapacitance, and lighted capacitance-voltage (LCV) techniques. We observe a substantial reduction in the hole diffusion length following irradiation (50%-55%) and the introduction of electrically active defects that can be attributed to gallium vacancies and associated complexes (VGa-related), carbon impurities (C-related), and gallium interstitials (Gai). EBIC imaging suggests long-range migration and clustering of radiation-induced point defects over distances of ∼500 nm, consistent with mobile Gai. Following irradiation, DLOS and LCV reveal the introduction of a prominent optical energy level at 1.9 eV below the conduction band edge, consistent with the introduction of Gai.
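In many EBIC geometries the collected current decays approximately as exp(-x/L_d) with distance x from the junction, so a log-linear fit recovers the minority-carrier diffusion length. The sketch below illustrates that extraction on synthetic data; the true diffusion length, scan range, and noise level are assumed, not the measured GaN values.

```python
import numpy as np

rng = np.random.default_rng(6)

x = np.linspace(0.2, 3.0, 40)                # um, beam distance from the junction
L_true = 0.85                                # um, assumed hole diffusion length
ebic = np.exp(-x / L_true) * (1 + rng.normal(0, 0.03, x.size))

# ln(I) is linear in x with slope -1/L_d.
slope, _ = np.polyfit(x, np.log(ebic), 1)
print(f"Extracted diffusion length: {-1.0 / slope:.2f} um")
```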
A Stillinger-Weber potential is computationally very efficient for molecular dynamics simulations. Despite its simple mathematical form, the Stillinger-Weber potential can be easily parameterized to ensure that crystal structures with tetrahedral bond angles (e.g., diamond-cubic, zinc-blende, and wurtzite) are stable and have the lowest energy. As a result, the Stillinger-Weber potential has been widely used to study a variety of semiconductor elements and alloys. When studying an A-B binary system, however, the Stillinger-Weber potential is associated with two major drawbacks. First, it significantly overestimates the elastic constants of elements A and B, limiting its use for systems involving both compounds and elements (e.g., an A/AB multilayer). Second, it prescribes equal energy for zinc-blende and wurtzite crystals, limiting its use for compounds with large stacking fault energies. Here, we utilize the polymorphic potential style recently implemented in LAMMPS to develop a modified Stillinger-Weber potential for InGaN that overcomes these two problems.
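For reference, the standard two- and three-body terms of the Stillinger-Weber potential (the "simple mathematical form" referred to above, as documented for the LAMMPS sw style) can be written schematically as follows; the modified InGaN parameterization in the study changes the parameters and the polymorphic treatment, not this basic structure.

```latex
E = \sum_{i<j} \phi_2(r_{ij}) \;+\; \sum_{i}\sum_{j<k} \phi_3(r_{ij}, r_{ik}, \theta_{jik}),
\qquad
\phi_2(r) = A\,\varepsilon\!\left[B\!\left(\tfrac{\sigma}{r}\right)^{p} - \left(\tfrac{\sigma}{r}\right)^{q}\right]
\exp\!\left(\tfrac{\sigma}{r - a\sigma}\right),
\qquad
\phi_3 = \lambda\,\varepsilon\,[\cos\theta_{jik} - \cos\theta_0]^{2}
\exp\!\left(\tfrac{\gamma\sigma}{r_{ij} - a\sigma}\right)\exp\!\left(\tfrac{\gamma\sigma}{r_{ik} - a\sigma}\right).
```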
We report real-time observations of a phase transition in the ionic solid CaF2, a model AB2 structure in high-pressure physics. Synchrotron x-ray diffraction coupled with dynamic loading to 27.7 GPa, and separately with static compression, follows, in situ, the fluorite to cotunnite structural phase transition, both on nanosecond and on minute time scales. Using Rietveld refinement techniques, we examine the kinetics and hysteresis of the transition. Our results give insight into the kinetic time scale of the fluorite-cotunnite phase transition under shock compression, which is relevant to a number of isomorphic compounds.
The defense community desires low-power sensors deployed around critical assets for intrusion detection. A piezoelectric microelectromechanical accelerometer is coupled with a complementary metal-oxide-semiconductor comparator to create a near-zero power wakeup system. The accelerometer is designed to operate at resonance and employs aluminum nitride for piezoelectric transduction. At a target frequency of 160 Hz, the accelerometer achieves sensitivities as large as 26 V/g. The system is shown to require only 5.4 nW of power before and after latching. The combined system is shown to wake up to a target frequency signature of a generator while rejecting background noise as well as non-target frequency signatures.
A solubility model is presented for ferrous iron hydroxide (Fe(OH)2(s)), hibbingite (Fe2Cl(OH)3(s)), siderite (FeCO3(s)), and chukanovite (Fe2CO3(OH)2(s)). The Pitzer activity coefficient equation was utilized in developing the model to account for the excess free energies of aqueous species in background solutions of high ionic strength. Solubility-limiting minerals were analyzed before and after the experiments using X-ray diffraction. Formation of Fe(OH)2(s) was observed in the experiments initiated with Fe2Cl(OH)3(s) in Na2SO4 solution. Coexistence of siderite and chukanovite was observed in the experiments in Na2CO3 + NaCl solutions. Two equilibrium constants that we had previously reported for the dissolution of Fe(OH)2(s) and Fe2Cl(OH)3(s) (Nemer et al.) were rederived in this paper using newer thermodynamic data selected from the literature, in order to maintain internal consistency across our series of data analyses in preparation, including this paper. Three additional equilibrium constants were determined in this paper for the following reactions: dissolution of siderite, dissolution of chukanovite, and dissociation of the aqueous species Fe(CO3)2^2-. Five Pitzer interaction parameters were derived in this paper: the β(0), β(1), and Cφ parameters for the species pair Fe^2+/SO4^2-, and the β(0) and β(1) parameters for the species pair Na+/Fe(CO3)2^2-. Our model predicts that, among the four inorganic ferrous iron minerals, siderite is the stable mineral in two WIPP-related brines (WIPP: Waste Isolation Pilot Plant), i.e., GWB and ERDA6 (Brush and Domski), and that the electrochemical equilibrium between elemental iron and siderite provides a low oxygen fugacity (10^-91.2 atm) that can keep the actinides at their lowest oxidation states (Nemer et al., Brush and Domski; references 1 and 2 in the main text).
We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
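As a minimal sketch of the modeling pattern described (a ContinuousSet, a DerivativeVar, and an automatic discretization transformation), the example below expresses a simple first-order ODE and discretizes it; the particular ODE, discretization options, and downstream solver choice are illustrative assumptions rather than content of the paper.

```python
from pyomo.environ import ConcreteModel, Constraint, TransformationFactory, Var
from pyomo.dae import ContinuousSet, DerivativeVar

# dx/dt = -0.5 x on t in [0, 10], with x(0) = 1.
m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 10))
m.x = Var(m.t, initialize=1.0)
m.dxdt = DerivativeVar(m.x, wrt=m.t)

def _ode(m, t):
    return m.dxdt[t] == -0.5 * m.x[t]
m.ode = Constraint(m.t, rule=_ode)
m.x[0].fix(1.0)

# Automatic transformation of the abstract DAE model into a finite-dimensional algebraic problem.
TransformationFactory('dae.finite_difference').apply_to(m, nfe=50, scheme='BACKWARD')
# The discretized model can now be handed to an off-the-shelf solver via SolverFactory.
```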
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. These data are then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design be robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs; rather, we formulate the problem as a distributionally robust optimization problem in which the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
We derive an intrinsically temperature-dependent approximation to the correlation grand potential for many-electron systems in thermodynamical equilibrium in the context of finite-temperature reduced-density-matrix-functional theory (FT-RDMFT). We demonstrate its accuracy by calculating the magnetic phase diagram of the homogeneous electron gas. We compare it to known limits from highly accurate quantum Monte Carlo calculations as well as to phase diagrams obtained within existing exchange-correlation approximations from density functional theory and zero-temperature RDMFT.
Zhou, Xiaowang Z.; Cho, Eun S.; Ruminski, Anne M.; Liu, Yi S.; Shea, Patrick T.; Kang, Shin Y.; Zaia, Edmond W.; De Chuang, Yi; Heo, Tae W.; Guo, Jinghua; Wood, Brandon C.; Urban, Jeffrey J.
Demand for pragmatic alternatives to carbon-intensive fossil fuels is growing more strident. Hydrogen represents an ideal zero-carbon clean energy carrier with high energy density. For hydrogen fuel to compete with alternatives, safe, high-capacity storage materials that are readily cycled are imperative. Here, the development of such a material, comprised of nickel-doped Mg nanocrystals encapsulated by molecular-sieving reduced graphene oxide (rGO) layers, is reported. While most work to date on advanced hydrogen storage composites has explored either nanosizing or the addition of carbon materials as secondary additives individually, methods that enable both are pioneered here: "dual-channel" doping combines the benefits of two different modalities of enhancement. Specifically, both external (rGO strain) and internal (Ni doping) mechanisms are used to efficiently promote both the hydriding and dehydriding processes of Mg nanocrystals, simultaneously achieving high hydrogen storage capacity (6.5 wt% in the total composite) and excellent kinetics while maintaining robustness. Furthermore, hydrogen uptake is remarkably accomplished at room temperature and under 1 bar, as observed during in situ measurements, which is a substantial advance for a reversible metal hydride material. The realization of three complementary functional components in one material breaks new ground in metal hydrides and makes solid-state materials viable candidates for hydrogen-fueled applications.
Ordering nanoparticles into a desired super-structure is often crucial for their technological applications. We use molecular dynamics simulations to study the assembly of nanoparticles in a polymer brush randomly grafted to a planar surface as the solvent evaporates. Initially, the nanoparticles are dispersed in a solvent that wets the polymer brush. After the solvent evaporates, the nanoparticles are either inside the brush or adsorbed at the surface of the brush, depending on the strength of the nanoparticle-polymer interaction. For strong nanoparticle-polymer interactions, a 2-dimensional ordered array is only formed when the brush density is finely tuned to accommodate a single layer of nanoparticles. When the brush density is higher or lower than this optimal value, the distribution of nanoparticles shows large fluctuations in space and the packing order diminishes. For weak nanoparticle-polymer interactions, the nanoparticles order into a hexagonal array on top of the polymer brush as long as the grafting density is high enough to yield a dense brush. An interesting healing effect is observed for a low-grafting-density polymer brush that can become more uniform in the presence of weakly adsorbed nanoparticles.
We outline ideas on desired properties for a new generation of effective core potentials (ECPs) that will allow valence-only calculations to reach the full potential offered by recent advances in many-body wave function methods. The key improvements include consistent use of correlated methods throughout ECP constructions and improved transferability as required for an accurate description of molecular systems over a range of geometries. The guiding principle is the isospectrality of all-electron and ECP Hamiltonians for a subset of valence states. We illustrate these concepts on a few first- and second-row atoms (B, C, N, O, S), and we obtain higher accuracy in transferability than previous constructions while using semi-local ECPs with a small number of parameters. In addition, the constructed ECPs enable many-body calculations of valence properties with higher (or same) accuracy than their all-electron counterparts with uncorrelated cores. This implies that the ECPs include also some of the impacts of core-core and core-valence correlations on valence properties. The results open further prospects for ECP improvements and refinements.
We present DAGSENS, a new approach to parametric transient sensitivity analysis of Differential Algebraic Equation (DAE) systems, such as SPICE-level circuits. The key ideas behind DAGSENS are (1) to represent the entire sequence of computations from DAE parameters to the objective function (whose sensitivity is needed) as a Directed Acyclic Graph (DAG) called the 'sensitivity DAG', and (2) to compute the required sensitivities efficiently by using dynamic programming techniques to traverse the DAG. DAGSENS is simple, elegant, and easy to understand compared to previous approaches; for example, in DAGSENS, one can switch between direct and adjoint sensitivities simply by reversing the direction of DAG traversal. DAGSENS is also more powerful than previous approaches because it works for a more general class of objective functions, including those based on 'events' that occur during a transient simulation (e.g., a node voltage crossing a threshold, a phase-locked loop (PLL) achieving lock, a circuit signal reaching its maximum/minimum value, etc.). In this paper, we demonstrate DAGSENS on several electronic and biological applications, including high-speed communication, statistical cell library characterization, and gene expression.
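This is not the DAGSENS algorithm itself, but a minimal illustration of the underlying idea: record the computation as a DAG with local partial derivatives on the edges, then accumulate sensitivities by a single reverse (adjoint) traversal; traversing forward instead would give direct sensitivities. The toy objective and node names are hypothetical.

```python
# Minimal reverse-mode sensitivity accumulation over an explicit DAG.
# Each node stores its value and the local partial derivatives with respect to its parents.

def build_dag(p):
    """Toy computation y = (2 p + 3) * p, recorded as a DAG."""
    nodes = {
        'p': {'value': p,           'parents': {}},
        'a': {'value': 2 * p,       'parents': {'p': 2.0}},            # a = 2 p
        'b': {'value': 2 * p + 3,   'parents': {'a': 1.0}},            # b = a + 3
        'y': {'value': (2 * p + 3) * p,
              'parents': {'b': p, 'p': 2 * p + 3}},                    # y = b * p
    }
    order = ['p', 'a', 'b', 'y']          # topological order
    return nodes, order

def adjoint_sensitivity(nodes, order, objective='y'):
    """Traverse the DAG in reverse, accumulating d(objective)/d(node)."""
    adj = {name: 0.0 for name in order}
    adj[objective] = 1.0
    for name in reversed(order):
        for parent, partial in nodes[name]['parents'].items():
            adj[parent] += adj[name] * partial
    return adj

nodes, order = build_dag(p=1.5)
print("dy/dp =", adjoint_sensitivity(nodes, order)['p'])   # analytic answer: 4 p + 3 = 9.0
```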
Bilayer van der Waals (vdW) heterostructures such as MoS2/WS2 and MoSe2/WSe2 have attracted much attention recently, particularly because of their type II band alignments and the formation of the interlayer exciton as the lowest-energy excitonic state. In this work, we calculate the electronic and optical properties of such heterostructures with the first-principles GW+Bethe-Salpeter Equation (BSE) method and reveal the important role of interlayer coupling in determining the excited-state properties, including the band alignment and excitonic properties. Our calculation shows that, due to the interlayer coupling, the low-energy excitons can be widely tuned by a vertical gate field. In particular, the dipole oscillator strength and radiative lifetime of the lowest-energy exciton in these bilayer heterostructures are varied by over an order of magnitude within a practical external gate field. We also build a simple model that captures the essential physics behind this tunability and allows the extension of the ab initio results to a large range of electric fields. Our work clarifies the physical picture of interlayer excitons in bilayer vdW heterostructures and predicts a wide range of gate-tunable excited-state properties of 2D optoelectronic devices.
Hammond, Glenn E.; Bisht, Gautam; Huang, Maoyi; Zhou, Tian; Chen, Xingyuan; Dai, Heng; Riley, William J.; Downs, Janelle L.; Liu, Ying; Zachara, John M.
A fully coupled three-dimensional surface and subsurface land model is developed and applied to a site along the Columbia River to simulate three-way interactions among river water, groundwater, and land surface processes. The model features the coupling of the Community Land Model version 4.5 (CLM4.5) and a massively parallel multiphysics reactive transport model (PFLOTRAN). The coupled model, named CP v1.0, is applied to a 400 m × 400 m study domain instrumented with groundwater monitoring wells along the Columbia River shoreline. CP v1.0 simulations are performed at three spatial resolutions (i.e., 2, 10, and 20 m) over a 5-year period to evaluate the impact of hydroclimatic conditions and spatial resolution on simulated variables. Results show that the coupled model is capable of simulating groundwater-river-water interactions driven by river stage variability along managed river reaches, which are of global significance as a result of the more than 30,000 dams constructed worldwide during the past half-century. Our numerical experiments suggest that the land-surface energy partitioning is strongly modulated by groundwater-river-water interactions through expansion of the periodically inundated fraction of the riparian zone and through enhanced moisture availability in the vadose zone via capillary rise in response to river stage changes. Meanwhile, CLM4.5 fails to capture the key hydrologic process (i.e., groundwater-river-water exchange) at the site and consequently simulates drastically different water and energy budgets. Furthermore, spatial resolution is found to significantly impact the accuracy of the estimated mass exchange rates at the boundaries of the aquifer, and it becomes critical when the surface and subsurface are more tightly coupled, with the groundwater table within 6 to 7 meters of the land surface. Inclusion of lateral subsurface flow influenced both the surface energy budget and subsurface transport processes as a result of river-water intrusion into the subsurface in response to an elevated river stage, which increased soil moisture for evapotranspiration and suppressed the energy available for sensible heat in the warm season. The coupled model developed in this study can be used to improve mechanistic understanding of ecosystem functioning and biogeochemical cycling along river corridors under historical and future hydroclimatic changes. The dataset presented in this study can also serve as a good benchmarking case for testing other integrated models.
Channeled spectropolarimetry measures the spectrally resolved Stokes parameters. A key aspect of this technique is to accurately reconstruct the Stokes parameters from a modulated measurement of the channeled spectropolarimeter. The state-of-the-art reconstruction algorithm uses the Fourier transform to extract the Stokes parameters from channels in the Fourier domain. While this approach is straightforward, it can be sensitive to noise and channel cross-talk, and it imposes bandwidth limitations that cut off high frequency details. To overcome these drawbacks, we present a reconstruction method called compressed channeled spectropolarimetry. In our proposed framework, reconstruction in channeled spectropolarimetry is an underdetermined problem, where we take N measurements and solve for 3N unknown Stokes parameters. We formulate an optimization problem by creating a mathematical model of the channeled spectropolarimeter with inspiration from compressed sensing. We show that our approach offers greater noise robustness and reconstruction accuracy compared with the Fourier transform technique in simulations and experimental measurements. By demonstrating more accurate reconstructions, we push performance to the native resolution of the sensor, allowing more information to be recovered from a single measurement of a channeled spectropolarimeter.
With the rapid spread in use of Digital Image Correlation (DIC) globally, it is important there be some standard methods of verifying and validating DIC codes. To this end, the DIC Challenge board was formed and is maintained under the auspices of the Society for Experimental Mechanics (SEM) and the international DIC society (iDICs). The goal of the DIC Board and the 2D–DIC Challenge is to supply a set of well-vetted sample images and a set of analysis guidelines for standardized reporting of 2D–DIC results from these sample images, as well as for comparing the inherent accuracy of different approaches and for providing users with a means of assessing their proper implementation. This document will outline the goals of the challenge, describe the image sets that are available, and give a comparison between 12 commercial and academic 2D–DIC codes using two of the challenge image sets.
AlGaN-channel high electron mobility transistors (HEMTs) are among a class of ultra-wide-bandgap transistors that are promising candidates for RF and power applications. Long-channel AlxGa1-xN HEMTs with x = 0.7 in the channel have been built and evaluated across the -50 °C to +200 °C temperature range. These devices achieved room-temperature drain current as high as 46 mA/mm and showed no gate leakage until the gate diode forward-bias turn-on at ~2.8 V, with a modest -2.2 V threshold voltage. A very large Ion/Ioff current ratio of 8 × 10⁹ was demonstrated. A near-ideal subthreshold slope, just 35% higher than the theoretical limit across the temperature range, was characterized. The ohmic contact characteristics were rectifying from -50 °C to +50 °C and became nearly linear at temperatures above 100 °C. An activation energy of 0.55 eV dictates the temperature dependence of the off-state leakage.
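As a quick back-of-envelope check on the figures quoted above: the thermionic limit of the subthreshold swing is (kT/q)·ln 10, roughly 60 mV/decade at 300 K, so a swing 35% above that limit is about 81 mV/decade, and off-state leakage with a 0.55 eV activation energy follows an Arrhenius law. The short sketch below evaluates both relations; it is arithmetic on the reported numbers, not the device model used in the paper.

```python
import numpy as np

k_B = 8.617e-5                                   # Boltzmann constant, eV/K
ln10 = np.log(10)

for T in (223.0, 300.0, 473.0):                  # -50 C, room temperature, +200 C
    ss_limit = k_B * T * ln10 * 1e3              # thermionic subthreshold-swing limit, mV/decade
    print(f"T = {T:6.1f} K: SS limit = {ss_limit:5.1f} mV/dec, "
          f"35% above the limit -> {1.35 * ss_limit:5.1f} mV/dec")

# Arrhenius scaling of off-state leakage with the reported 0.55 eV activation energy.
Ea = 0.55                                        # eV
ratio = np.exp(-Ea / (k_B * 473.0)) / np.exp(-Ea / (k_B * 300.0))
print(f"Leakage increase from 300 K to 473 K (Ea = 0.55 eV): ~{ratio:.0f}x")
```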
Atomic motion at grain boundaries is essential to microstructure development and to the growth and stability of catalysts and other nanostructured materials. However, boundary atomic motion is often too fast to observe in a conventional transmission electron microscope (TEM) and too slow for ultrafast electron microscopy. We report on the entire transformation process of strained Pt icosahedral nanoparticles (ICNPs) into larger FCC crystals, captured at 2.5 ms time resolution using a fast electron camera. Results show slow diffusive dislocation motion at nm/s inside the ICNPs and fast surface transformation at μm/s. By characterizing nanoparticle strain, we show that the fast transformation is driven by inhomogeneous surface stress, while interaction with pre-existing defects slowed the transformation front inside the nanoparticles. Particle coalescence, assisted by oxygen-induced surface migration at T ≥ 300 °C, also played a critical role. By studying the transformation of Pt ICNPs at high temporal and spatial resolution, we thus obtain critical insights into the transformation mechanisms of strained Pt nanoparticles.
A power converter capable of converting the 48 V DC output of a photovoltaic panel into 120 V AC at up to 400 W has been demonstrated in a 40 cu. cm. (2.4 cu. in.) module, for a power density of greater than 160 W/cu. in. The module is enabled by the use of GaN and AlGaN field effect transistors (FETs) and diodes operating at higher power densities and higher switching frequencies than conventional silicon power devices. Typical photovoltaic panel converter/inverters have power densities ranging from 3.5-5.0 W/cu. in. and often make use of bulky, low frequency transformers. By using wide- and ultra-wide-bandgap switching devices, the operating frequency has been increased to 150 kHz, eliminating the need to use low frequency charge and current storage elements. The resulting size reduction demonstrates the significant possibilities that the adoption of GaN and AlGaN devices housed in small, 3D printed packages offers in the field of power electronics.
In order to determine how material characteristics percolate up to system-level improvements in power dissipation for different material systems and device types, we have developed an optimization tool for power diodes. This tool minimizes power dissipation in a diode for a given system operational regime (reverse voltage, forward current density, frequency, duty cycle, and temperature) for a variety of device types and materials. We have carried out diode optimizations for a wide range of system operating points to determine the regimes for which certain power diode materials/devices are favored. In this work, we present results comparing state-of-the-art Si and SiC merged PiN Schottky (MPS) diodes to vertical GaN (v-GaN) PiN diodes and as-yet undeveloped v-GaN Schottky barrier diodes (SBDs). The results show that, for all conditions tested, SiC MPS and v-GaN PiN diodes are preferred over Si MPS diodes. v-GaN PiN diodes are preferred over SiC MPS diodes for high-voltage, moderate-frequency operation, with the limits of the v-GaN PiN preferred regime increasing with increasing forward current density. If a v-GaN SBD were available, it would be preferred over all other devices at low to moderate voltages, at all frequencies from 100 Hz to 1 MHz.
Sophisticated cyber attacks by state-sponsored and criminal actors continue to plague government and industrial infrastructure. Intuitively, partitioning cyber systems into survivable, intrusion tolerant compartments is a good idea. This prevents witting and unwitting insiders from moving laterally and reaching back to their command and control (C2) servers. However, there is a lack of artifacts that can predict the effectiveness of this approach in a realistic setting. We extend earlier work by relaxing simplifying assumptions and providing a new attacker-facing metric. In this article, we propose new closed-form mathematical models and a discrete time simulation to predict three critical statistics: probability of compromise, probability of external host compromise and probability of reachback. The results of our new artifacts agree with one another and with previous work, which suggests they are internally valid and a viable method to evaluate the effectiveness of cyber zone defense.
Extensive all-atom molecular dynamics calculations on the water–squalane interface for nine different loadings with sorbitan monooleate (SPAN80), at T = 300 K, are analyzed for the surface tension equation of state, desorption free-energy profiles as they depend on loading, and to evaluate escape times for adsorbed SPAN80 into the bulk phases. These results suggest that loading only weakly affects accommodation of a SPAN80 molecule by this squalane–water interface. Specifically, the surface tension equation of state is simple through the range of high tension to high loading studied, and the desorption free-energy profiles are weakly dependent on loading here. The perpendicular motion of the centroid of the SPAN80 headgroup ring is well-described by a diffusional model near the minimum of the desorption free-energy profile. Lateral diffusional motion is weakly dependent on loading. Escape times evaluated on the basis of a diffusional model and the desorption free energies are 7 × 10⁻² s (into the squalane) and 3 × 10² h (into the water). Finally, the latter value is consistent with desorption times of related lab-scale experimental work.
This chapter describes the Sandia National Laboratories (SNL) processes for feedback and improvement of radiological operations, including triennial self-assessment (TSA) and radiological process improvement reports (RPIR). This chapter applies to all radiological activities performed by Members of the Workforce (MOW). Certain operations are outside the scope of this manual and do not require self-assessments or use of the RPIR process. See the "Introduction" for exemptions to the requirements of this manual.
This chapter presents requirements and guidance for radiological posting and labeling at Sandia National Laboratories (SNL). This chapter applies to all radiological posting and labeling practices at SNL.
The role of an external field on capillary waves at the liquid-vapor interface of a dipolar fluid is investigated using molecular dynamics simulations. For fields parallel to the interface, the interfacial width squared increases linearly with respect to the logarithm of the size of the interface across all field strengths tested. The value of the slope decreases with increasing field strength, indicating that the field dampens the capillary waves. With the inclusion of the parallel field, the surface stiffness increases with increasing field strength faster than the surface tension. For fields perpendicular to the interface, the interfacial width squared is linear with respect to the logarithm of the size of the interface for small field strengths, and the surface stiffness is less than the surface tension. Above a critical field strength that decreases as the size of the interface increases, the interface becomes unstable due to the increased amplitude of the capillary waves.
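For context, the logarithmic growth of the squared interfacial width referred to above is the standard capillary-wave-theory scaling, written schematically as

```latex
w^2 \;=\; w_0^2 \;+\; \frac{k_B T}{2\pi\,\tilde{\gamma}}\,\ln\!\left(\frac{L}{\xi_b}\right),
```

where w0 is an intrinsic width, γ̃ is the surface stiffness (reducing to the surface tension for a field-free, isotropic interface), L is the lateral size of the interface, and ξ_b is a bulk correlation length acting as a short-wavelength cutoff. The slope of w² versus ln L discussed above corresponds to the prefactor k_BT/(2πγ̃).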
Deep learning techniques have demonstrated the ability to perform a variety of object recognition tasks using visible imager data; however, deep learning has not been implemented as a means to autonomously detect and assess targets of interest in a physical security system. We demonstrate the use of transfer learning on a convolutional neural network (CNN) to significantly reduce training time while keeping detection accuracy high for targets relevant to physical security. Unlike many detection algorithms employed by video analytics within physical security systems, this method does not rely on temporal data to construct a background scene; targets of interest can halt motion indefinitely and still be detected by the implemented CNN. A key advantage of using deep learning is the ability of a network to improve over time; periodic retraining can lead to better detection and higher confidence rates. We investigate training data size versus CNN test accuracy using physical security video data. Because of the large number of visible imagers, the significant volume of data collected daily, and the human-in-the-loop ground truth data already being generated, physical security systems present a unique environment that is well suited for analysis via CNNs. This could lead to the creation of an algorithmic element that reduces human burden and decreases the number of human-analyzed nuisance alarms.
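The paper does not specify its network or framework; as one common realization of CNN transfer learning, the PyTorch sketch below freezes an ImageNet-pretrained backbone and retrains only a new classification head. The class count, class definitions (target versus nuisance), and dummy batch are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone and freeze its convolutional features.
# (Newer torchvision versions use the weights= argument instead of pretrained=True.)
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a head for the assumed classes (target vs. nuisance).
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of video frames (3x224x224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.3f}")
```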
There is a desire in numerous fields of security to detect and assess unmanned aerial systems (UAS) with a high probability of detection and low nuisance alarm rates. Currently available solutions rely upon exploiting electronic signals emitted from the UAS. While these methods may enable some degree of security, they fail to address the emerging domain of autonomous UAS that do not transmit or receive information during the course of a mission. We examine frequency analysis of pixel fluctuation over time to exploit the temporal frequency signature present in imagery of UAS. This signature is present for both autonomous and controlled multirotor UAS and allows detection with fewer pixels on target. The methodology also serves as a means of assessment, because the frequency signatures of UAS are distinct from those of standard nuisance alarms such as birds or non-UAS electronic signal emitters. The temporal frequency analysis method is paired with machine learning algorithms to demonstrate a UAS detection and assessment method that requires minimal human interaction. The machine learning algorithm allows each necessary human assessment to increase the likelihood of future autonomous assessment, improving system performance over time.
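A minimal sketch of per-pixel temporal frequency analysis: take the FFT of each pixel's intensity time series and compare the energy in an assumed multirotor blade-flicker band against the total. The frame rate, signature band, flicker frequency, and detection threshold below are placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(7)

fps = 240.0                              # camera frame rate (assumed)
n_frames, h, w = 256, 32, 32
t = np.arange(n_frames) / fps

# Synthetic clip: background noise plus a small patch flickering at ~94 Hz (rotor-harmonic stand-in).
video = rng.normal(0, 1.0, size=(n_frames, h, w))
video[:, 10:14, 10:14] += 10.0 * np.sin(2 * np.pi * 93.75 * t)[:, None, None]

# Per-pixel temporal FFT magnitude.
spectrum = np.abs(np.fft.rfft(video, axis=0))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)

# Energy in an assumed UAS signature band (80-110 Hz) relative to total, per pixel (DC bin skipped).
band = (freqs > 80.0) & (freqs < 110.0)
band_ratio = spectrum[band].sum(axis=0) / spectrum[1:].sum(axis=0)
detections = band_ratio > 0.45
print(f"pixels flagged as UAS-like: {detections.sum()}")
```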
Conventional cyber defenses require continual maintenance: virus, firmware, and software updates; costly functional impact tests; and dedicated staff within a security operations center. These conventional defenses require access to external sources for the latest updates. A whitelisted system, however, is ideally a system that can sustain itself without external inputs. Cyber-Physical Systems (CPS) have the following unique traits: digital commands are physically observable and verifiable, and the possible combinations of commands are limited and finite. These CPS traits, combined with a trust anchor to secure an unclonable digital identity (i.e., digitally unclonable function [DUF] - Patent Application #15/183,454; CodeLock), offer an excellent opportunity to explore defenses built on a whitelisting approach called the 'Trustworthy Design Architecture (TDA).' Significant research challenges remain in defining what constitutes a physically verifiable whitelist, as well as the criteria for cyber-physical traits that can be used as the unclonable identity. One goal of the project is to identify a set of physical and/or digital characteristics that can uniquely identify an endpoint. The measurements must be reliable, reproducible, and trustworthy. Given that adversaries naturally evolve with any defense, the adversary will aim to disrupt or spoof this process. To protect against such disruptions, we provide a unique system engineering technique that, when applied to CPSs (e.g., nuclear processing facilities, critical infrastructures), will sustain a secure operational state without ever needing external information or active inputs from cybersecurity subject-matter experts (i.e., virus updates, IDS scans, patch management, vulnerability updates). We do this by eliminating system dependencies on external sources for protection. Instead, all internal communication is actively sealed and protected with integrity, authenticity, and assurance checks that only cyber identities bound to the physical component can deliver. As CPSs continue to advance (i.e., IoTs, drones, ICSs), resilient, maintenance-free solutions are needed to neutralize or reduce cyber risks. TDA is a conceptual system engineering framework specifically designed to address cyber-physical systems that can potentially be maintained and operated without the persistent need for vulnerability or security patch updates.
Physical unclonable functions (PUFs) are devices that are easily probed but difficult to predict. Optical PUFs have been discussed within the literature, with traditional optical PUFs typically using spatial light modulators, coherent illumination, and scattering volumes; however, these systems can be large, expensive, and difficult to keep aligned in practical conditions. We propose and demonstrate a new kind of optical PUF based on computational imaging and compressive sensing to address these challenges with traditional optical PUFs. This work describes the design, simulation, and prototyping of this computational optical PUF (COPUF), which utilizes incoherent polychromatic illumination passing through an additively manufactured refracting optical polymer element. We demonstrate the ability to pass information through a COPUF using a variety of sampling methods, including compressive sensing, and we explore the sensitivity of the COPUF system. We also explore non-traditional PUF configurations enabled by the COPUF architecture. The double COPUF system, which employs two serially connected COPUFs, is proposed and analyzed as a means to authenticate and communicate between two entities that have previously agreed to communicate. This configuration enables estimation of a message inversion key without the calculation of individual COPUF inversion keys at any point in the PUF life cycle. Our results show that it is possible to construct inexpensive optical PUFs using computational imaging. This could lead to new uses of PUFs in places where electrical PUFs cannot be utilized effectively, as low-cost tags and seals, and potentially as authenticating and communicating devices.
Counterfeiting or surreptitious modification of electronic systems is of increasing concern, particularly for critical infrastructure and national security systems. Such systems include avionics, medical devices, military systems, and utility infrastructure. We present experimental results from an approach that uniquely identifies printed circuit boards (PCBs) on the basis of device-unique variations in surface mount passive components and wire trace patterns. We also present an innovative approach for combining measurements of each of these quantities to create unique, random identifiers for each PCB, and we report the observed entropy, reliability, and uniqueness of the signatures. These unique signatures can be used directly for verifying the integrity and authenticity of the PCBs, or can serve as the basis for generating cryptographic keys for more secure authentication of the devices during system acquisition or field deployment. Our results indicate that the proposed approaches for measuring and combining these quantities are capable of generating high-entropy, unique signatures for PCBs. The techniques explored do not require system designers to utilize specialized manufacturing processes, and implementation is low-cost.
In the California Industrial General Permit (IGP) 2014-0057-DWQ for storm water monitoring, effective July 1, 2015, there are 21 contaminants that have been assigned NAL (Numeric Action Level) values, both annual and instantaneous.