Publications


Infrared absorption spectroscopy of dynamically compressed water

Proceedings of SPIE - The International Society for Optical Engineering

Ramsey, Darrell; Mance, Jason; La Lone, Brandon; Dolan, Dan

Streaked visible spectroscopy is well established in dynamic compression research. Infrared measurements remain problematic, however, due to the diminishing sensitivity of streak camera photocathodes beyond 800 nm. Time-stretch techniques offer an alternative method for probing infrared features during single-event experiments. This paper discusses the development of a time-stretch spectroscopy diagnostic using dispersed supercontinuum laser pulses. The technique is applied to near-infrared measurements of liquid water during multiple shock compression.

The emergence of small-scale self-affine surface roughness from deformation

Science Advances

Hinkle, Adam; Nöhring, Wolfram G.; Leute, Richard; Junge, Till; Pastewka, Lars

Most natural and man-made surfaces appear to be rough on many length scales. There is presently no unifying theory of the origin of roughness or the self-affine nature of surface topography. One likely contributor to the formation of roughness is deformation, which underlies many processes that shape surfaces such as machining, fracture, and wear. Using molecular dynamics, we simulate the biaxial compression of single-crystal Au, the high-entropy alloy Ni36.67Co30Fe16.67Ti16.67, and amorphous Cu50Zr50 and show that even surfaces of homogeneous materials develop a self-affine structure. By characterizing subsurface deformation, we connect the self-affinity of the surface to the spatial correlation of deformation events occurring within the bulk and present scaling relations for the evolution of roughness with strain. These results open routes toward interpreting and engineering roughness profiles.

Neural Network-Based Classification of String-Level IV Curves from Physically-Induced Failures of Photovoltaic Modules

IEEE Access

Hopwood, Michael W.; Gunda, Thushara; Seigneur, Hubert; Walters, Joseph

Accurate diagnosis of failures is critical for meeting photovoltaic (PV) performance objectives and avoiding safety concerns. This analysis focuses on the classification of field-collected string-level current-voltage (IV) curves representing baseline, partial soiling, and cracked failure modes. Specifically, multiple neural network-based architectures (including convolutional and long short-term memory) are evaluated using domain-informed parameters across different portions of the IV curve and a range of irradiance thresholds. The analysis identified two models that were able to classify the relatively small dataset (400 samples) with high accuracy (99%+). Findings also indicate optimal irradiance thresholds and opportunities for improvements in classification activities by focusing on portions of the IV curve. Such advancements are critical for expanding accurate classification of PV faults, especially for those with low power loss (e.g., cracked cells) or visibly similar IV curve profiles.

Quasi-simultaneous system modeling in ADAPT

Proceedings of the 30th European Safety and Reliability Conference and the 15th Probabilistic Safety Assessment and Management Conference

Cohn, Brian; Noel, Todd; Haskin, Troy C.; Osborn, Douglas; Aldemir, Tunc

Risk assessment of nuclear power plants (NPPs) is commonly driven by computer modeling that tracks the evolution of NPP events over time. To capture interactions between nuclear safety and nuclear security, multiple system codes, each specializing in one of these domains, may need to be linked, with information transfer among the codes. A systems analysis based on fixed-length time blocks is proposed to allow such linking within the ADAPT framework without needing to predetermine the order in which the safety/security codes interact. A case study using two instances of the Scribe3D code demonstrates the concept and shows agreement with results from a direct solution.

Ducted fuel injection vs. conventional diesel combustion: Extending the load range in an optical engine with a four-orifice fuel injector

SAE International Journal of Engines

Nilsen, Christopher W.; Biles, Drummond E.; Yraguen, Boni F.; Mueller, Charles J.

Ducted fuel injection (DFI) is a technique to attenuate soot formation in compression ignition engines relative to conventional diesel combustion (CDC). The concept is to inject fuel through a small tube inside the combustion chamber to reduce equivalence ratios in the autoignition zone relative to CDC. DFI has been studied at loads as high as 8.5 bar gross indicated mean effective pressure (IMEPg) and as low as 2.5 bar IMEPg using a four-orifice fuel injector. Across previous studies, DFI has been shown to attenuate soot emissions, increase NOx emissions (at constant charge dilution), and slightly decrease fuel conversion efficiencies for most tested points. This study expands on the previous work by testing 1.1 bar IMEPg (low-load/idle) conditions and 10 bar IMEPg (higher-load) conditions with the same four-orifice fuel injector, as well as examining potential causes of the degradations in NOx emissions and fuel conversion efficiencies. DFI and CDC are directly compared at each operating point in the study. At the low-load condition, the intake charge dilution was swept to elucidate the soot and NOx performance of DFI. The low-load range is important because it is the target of impending, more-stringent emissions regulations, and DFI is shown to be a potentially effective approach for helping to meet these regulations. The results also indicate that DFI likely has slightly decreased fuel conversion efficiencies relative to CDC. The increase in NOx emissions with DFI is likely due to longer charge gas residence times at higher temperatures, which arise from shorter combustion durations and advanced combustion phasing relative to CDC.

Estimating higher-order moments using symmetric tensor decomposition

SIAM Journal on Matrix Analysis and Applications

Sherman, Samantha; Kolda, Tamara G.

We consider the problem of decomposing higher-order moment tensors, i.e., the sum of symmetric outer products of data vectors. Such a decomposition can be used to estimate the means in a Gaussian mixture model and for other applications in machine learning. The d-th-order empirical moment tensor of a set of p observations of n variables is a symmetric d-way tensor. Our goal is to find a low-rank tensor approximation comprising r < p symmetric outer products. The challenge is that forming the empirical moment tensor costs O(pn^d) operations and O(n^d) storage, which may be prohibitively expensive; additionally, the algorithm to compute the low-rank approximation costs O(n^d) per iteration. Our contribution is avoiding formation of the moment tensor, instead computing the low-rank tensor approximation of the moment tensor implicitly using O(pnr) operations per iteration and no extra memory. This advance opens the door to more applications of higher-order moments since they can now be computed efficiently. We present numerical evidence of the computational savings and show an example of estimating the means for higher-order moments.
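
The computational idea behind the abstract — contracting the moment tensor with vectors without ever forming it — can be illustrated in a few lines. The following NumPy sketch is hypothetical (not the authors' implementation): contracting the d-th order empirical moment tensor T = (1/p) Σᵢ xᵢ^⊗d with a vector v in d−1 modes reduces to p inner products, so the cost is O(pn) rather than O(n^d).

```python
import numpy as np

def implicit_moment_apply(X, v, d):
    """Contract T = (1/p) * sum_i x_i^{(outer)d} with v in d-1 modes,
    returning (1/p) * sum_i <x_i, v>^(d-1) * x_i, without forming
    the O(n^d) moment tensor.  X is a p-by-n data matrix."""
    inner = X @ v                        # p inner products <x_i, v>: O(pn)
    weights = inner ** (d - 1)           # <x_i, v>^(d-1) per observation
    return (X.T @ weights) / X.shape[0]  # weighted sum of rows: O(pn)
```

For two orthonormal observations and d = 3, contracting with the all-ones vector returns the average of the observations, matching the explicit tensor contraction.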

Fourier analyses of high-order continuous and discontinuous Galerkin methods

SIAM Journal on Numerical Analysis

Le Roux, Daniel Y.; Eldred, Christopher; Taylor, Mark A.

We present a Fourier analysis of wave propagation problems subject to a class of continuous and discontinuous discretizations using high-degree Lagrange polynomials. This allows us to obtain explicit analytical formulas for the dispersion relation and group velocity and, for the first time to our knowledge, characterize analytically the emergence of gaps in the dispersion relation at specific wavenumbers, when they exist, and compute their specific locations. Wave packets with energy at these wavenumbers will fail to propagate correctly, leading to significant numerical dispersion. We also show that the Fourier analysis generates mathematical artifacts, and we explain how to remove them through a branch selection procedure conducted by analysis of eigenvectors and associated reconstructed solutions. The higher frequency eigenmodes, named erratic in this study, are also investigated analytically and numerically.

Effect of Teflon Wrapping on the Interaction Position Reconstruction Resolution in Long, Thin Plastic Scintillator Pillars

2020 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2020

Moustafa, Ahmed; Galindo-Tellez, Aline; Sweany, Melinda D.; Brubaker, E.; Mattingly, John

An optically-segmented single-volume scatter camera is being developed to image MeV-energy neutron sources. The design employs long, thin, optically isolated organic scintillator pillars with 5 mm × 5 mm × 200 mm dimensions (i.e., an aspect-ratio of 1:1:40). Teflon reflector is used to achieve optical isolation and improve light collection. The effect of Teflon on the ability to resolve the radiation interaction locations along such high aspect-ratio pillars is investigated. It was found that reconstruction based on the amplitude of signals collected on both ends of a bare pillar is less precise than reconstruction based on their arrival times. However, this observation is reversed after wrapping in Teflon, such that there is little to no improvement in reconstruction resolution calculated by combining both methods. It may be possible to use another means of optical isolation that does not require wrapping each individual pillar of the camera.

Efficient Generalized Boundary Detection Using a Sliding Information Distance

IEEE Transactions on Signal Processing

Field, Richard; Quach, Tu T.; Ting, Christina

We present a general machine learning algorithm for boundary detection within general signals based on an efficient, accurate, and robust approximation of the universal normalized information distance. Our approach uses an adaptive sliding information distance (SLID) combined with a wavelet-based approach for peak identification to locate the boundaries. Special emphasis is placed on developing an adaptive formulation of SLID to handle general signals with multiple unknown and/or drifting section lengths. Although specialized algorithms may outperform SLID when domain knowledge is available, these algorithms are limited to specific applications and do not generalize. SLID excels in these cases. We demonstrate the versatility and efficacy of SLID on a variety of signal types, including synthetically generated sequences of tokens, binary executables for reverse engineering applications, and time series of seismic events.
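
The universal normalized information distance is uncomputable, but compression-based approximations such as the normalized compression distance (NCD) are a standard stand-in. The following is a much-simplified hypothetical sketch of the sliding idea using a fixed window and zlib as the compressor (the paper's SLID is adaptive and uses wavelet-based peak identification, neither of which is shown):

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    # Normalized compression distance: a computable approximation of the
    # (uncomputable) normalized information distance.
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

def sliding_ncd(data: bytes, w: int):
    # Distance between adjacent fixed-size windows; peaks in this score
    # suggest section boundaries in the signal.
    return [ncd(data[i - w:i], data[i:i + w])
            for i in range(w, len(data) - w + 1)]
```

Two windows drawn from the same homogeneous section compress well together and score near zero, while windows straddling a boundary between dissimilar sections score near one.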

Operational, gauge-free quantum tomography

Quantum

Di Matteo, Olivia; Gamble, John; Granade, Chris; Rudinger, Kenneth M.; Wiebe, Nathan

As increasingly impressive quantum information processors are realized in laboratories around the world, robust and reliable characterization of these devices is now more urgent than ever. These diagnostics can take many forms, but one of the most popular categories is tomography, where an underlying parameterized model is proposed for a device and inferred by experiments. Here, we introduce and implement efficient operational tomography, which uses experimental observables as these model parameters. This addresses a problem of ambiguity in representation that arises in current tomographic approaches (the gauge problem). Solving the gauge problem enables us to efficiently implement operational tomography in a Bayesian framework computationally, and hence gives us a natural way to include prior information and discuss uncertainty in fit parameters. We demonstrate this new tomography in a variety of different experimentally-relevant scenarios, including standard process tomography, Ramsey interferometry, randomized benchmarking, and gate set tomography.

Thermal-hydraulic investigations of a horizontal dry cask simulator

International Conference on Nuclear Engineering, Proceedings, ICONE

Pulido, Ramon J.; Lindgren, Eric; Durbin, S.; Salazar, Alex S.

Recent advances in horizontal cask designs for commercial spent nuclear fuel have significantly increased maximum thermal loading. This is due in part to greater efficiency in internal conduction pathways. Carefully measured data sets generated from testing of full-sized casks or smaller cask analogs are widely recognized as vital for validating thermal-hydraulic models of these storage cask designs. While several testing programs have been previously conducted, these earlier validation studies did not integrate all the physics or components important in a modern, horizontal dry cask system. The purpose of this investigation is to produce data sets that can be used to benchmark the codes and best practices presently used to calculate cladding temperatures and induced cooling air flows in modern, horizontal dry storage systems. The horizontal dry cask simulator (HDCS) has been designed to generate this benchmark data and complement the existing knowledge base. Transverse and axial temperature profiles along with induced-cooling air flow are measured using various backfills of gases for a wide range of decay powers and canister pressures. The data from the HDCS tests will be used to host a blind model validation effort.

A volumetric framework for quantum computer benchmarks

Quantum

Blume-Kohout, Robin; Young, Kevin

We propose a very large family of benchmarks for probing the performance of quantum computers. We call them volumetric benchmarks (VBs) because they generalize IBM's benchmark for measuring quantum volume [1]. The quantum volume benchmark defines a family of square circuits whose depth d and width w are the same. A volumetric benchmark defines a family of rectangular quantum circuits, for which d and w are uncoupled to allow the study of time/space performance trade-offs. Each VB defines a mapping from circuit shapes - (w, d) pairs - to test suites C(w, d). A test suite is an ensemble of test circuits that share a common structure. The test suite C for a given circuit shape may be a single circuit C, a specific list of circuits {C1, ..., CN} that must all be run, or a large set of possible circuits equipped with a distribution Pr(C). The circuits in a given VB share a structure, which is limited only by designers' creativity. We list some known benchmarks, and other circuit families, that fit into the VB framework: several families of random circuits, periodic circuits, and algorithm-inspired circuits. The last ingredient defining a benchmark is a success criterion that defines when a processor is judged to have “passed” a given test circuit. We discuss several options. Benchmark data can be analyzed in many ways to extract many properties, but we propose a simple, universal graphical summary of results that illustrates the Pareto frontier of the d vs w trade-off for the processor being benchmarked.
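
The proposed graphical summary can be illustrated with a toy computation. Assuming per-shape pass/fail results are already in hand (a hypothetical input, not tied to any particular processor or test suite), the Pareto frontier of the depth-vs-width trade-off is the set of passing shapes not dominated by any other passing shape:

```python
def pareto_frontier(passed):
    """passed: dict mapping (width, depth) -> bool, i.e. whether the
    processor passed the test suite for that circuit shape.  Returns
    the passing shapes not dominated by another passing shape (one
    that is >= in both dimensions and different)."""
    passing = [s for s, ok in passed.items() if ok]
    frontier = [(w, d) for (w, d) in passing
                if not any(w2 >= w and d2 >= d and (w2, d2) != (w, d)
                           for (w2, d2) in passing)]
    return sorted(frontier)
```

For example, a processor that passes shapes (2, 4) and (4, 2) but fails (4, 4) has both of those shapes on its frontier: neither dominates the other, reflecting the time/space trade-off the abstract describes.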

Experiments at Sandia to measure the effect of temperature on critical systems

Transactions of the American Nuclear Society

Harms, Gary A.; Foulk, James W.

Estimation of the uncertainty in a critical experiment attributable to uncertainties in the measured experiment temperature is done by calculating the variation of the eigenvalue of a benchmark configuration as a function of temperature. In the low-enriched water-moderated critical experiments performed at Sandia, this is done by 1) estimating the effects of changing the water temperature while holding the UO2 fuel temperature constant, 2) estimating the effects of changing the UO2 temperature while holding the water temperature constant, and 3) combining the two results. This assumes that the two effects are separable. The results of such an analysis are nonintuitive and need experimental verification. Critical experiments are being planned at Sandia National Laboratories (Sandia) to measure the effect of temperature on critical systems and will serve to test the methods used in estimating the temperature effects in critical experiments.

Understanding and mitigating process drift in aerosol jet printing

Flexible and Printed Electronics

Secor, Ethan B.; Tafoya, Rebecca R.

Aerosol jet printing offers a versatile, high-resolution prototyping capability for flexible and hybrid electronics. Despite its rapid growth in recent years, persistent problems such as process drift hinder the adoption of this technology in production environments. Here we explore underlying causes of process drift during aerosol jet printing and introduce an engineered solution to improve deposition stability. It is shown that the ink level within the cartridge is a critical factor in determining atomization efficiency, such that the reduction in ink volume resulting from printing itself can induce significant and systematic process drift. By integrating a custom 3D-printed cartridge with an ink recirculation system, ink composition and level within the cartridge are better maintained. This strategy allows extended duration printing with improved stability, as evidenced by 30 h of printing over 5 production runs. This provides an important tool for extending the duration and improving reliability for aerosol jet printing, a key factor for integration in practical manufacturing operations.

Measuring fatigue crack growth behavior of ferritic steels near threshold in high pressure hydrogen gas

American Society of Mechanical Engineers Pressure Vessels and Piping Division Publication PVP

Ronevich, Joseph; San Marchi, Chris; Nibur, Kevin A.; Bortot, Paolo; Bassanini, Gianluca; Sileo, Michele

Following the ASME codes, the design of pipelines and pressure vessels for transportation or storage of high-pressure hydrogen gas requires measurements of fatigue crack growth rates at design pressure. However, performing tests in high-pressure hydrogen gas can be very costly, as only a few laboratories have the unique capabilities. Recently, Code Case 2938 was accepted in ASME Boiler and Pressure Vessel Code (BPVC) VIII-3, allowing design curves to be used in lieu of performing fatigue crack growth rate (da/dN vs. ΔK) and fracture threshold (KIH) testing in hydrogen gas. The design curves were based on data generated at 100 MPa H2 on SA-372 and SA-723 grade steels; however, the data used to generate the design curves are limited to measurements at ΔK values greater than 6 MPa·m^1/2. The design curves can be extrapolated to lower ΔK (<6 MPa·m^1/2), but the threshold stress intensity factor (ΔKth) has not been measured in hydrogen gas. In this work, decreasing-ΔK tests were performed at select hydrogen pressures to explore the threshold (ΔKth) for ferritic structural steels (e.g., pipelines and pressure vessels). The results were compared to decreasing-ΔK tests in air, showing that the fatigue crack growth rates in hydrogen gas appear to yield similar or even slightly lower da/dN values compared to the curves in air at low ΔK values when tests were performed at stress ratios of 0.5 and 0.7. A correction for crack closure was implemented, which resulted in better agreement with the design curves; the design curves provide an upper bound throughout the entire ΔK range, even as the crack growth rates approach ΔKth. This work gives further evidence of the utility of the design curves described in Code Case 2938 of the ASME BPVC VIII-3 for construction of high-pressure hydrogen vessels.

Melt-Cast Organic Glass Scintillators for a Handheld Dual Particle Imager

2020 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2020

Giha, Nathan P.; Steinberger, William M.; Nguyen, Lucas; Carlson, Joseph; Feng, Patrick L.; Clarke, Shaun D.; Pozzi, Sara A.

The light output, time resolution, pulse shape discrimination (PSD), neutron light output, and interaction position reconstruction of melt-cast small-molecule organic glass bar scintillators were measured. The trans-stilbene organic scintillator detects fast neutrons and gamma rays with high efficiency and exhibits excellent PSD, but its manufacturing process is slow and expensive and its light output in response to neutrons is anisotropic. Small-molecule organic glass bars offer an easy-to-implement and cost-effective solution to these problems. These properties were characterized to evaluate the efficacy of constructing a compact, low-voltage neutron and gamma-ray imaging system using organic glass bars coupled to silicon photomultiplier arrays. A complete facility for melt-casting organic glass scintillators was set up at the University of Michigan. Glass bars measuring 6 × 6 × 50 mm³ were produced and the properties listed above were characterized. The first neutron image using organic glass was produced with simple backprojection.

Analysis of water retention in isothermal vacuum drying test

International Conference on Nuclear Engineering, Proceedings, ICONE

Salazar, Alex; Pulido, Ramon J.; Lindgren, Eric; Durbin, S.

Validation of the extent of water removal in a dry storage system using an industrial vacuum drying procedure is needed. Water remaining in casks upon completion of vacuum drying can lead to cladding corrosion, embrittlement, and breaching, as well as fuel degradation. In order to address the lack of time-dependent industrial drying data, this study employs a vacuum drying procedure to evaluate the efficiency of water removal over time in a scaled system. Isothermal conditions are imposed to generate baseline pressure and moisture data for comparison to future tests under heated conditions. A pressure vessel was constructed to allow for the emplacement of controlled quantities of water and connections to a pumping system and instrumentation. Measurements of pressure and moisture content were obtained over time during sequential vacuum hold points, where the vacuum flow rate was throttled to draw pressures from 100 torr down to 0.7 torr. The pressure rebound, dew point, and water content were observed to eventually diminish with increasingly lower hold points, indicating a reduction in retained water.

Fast computation of laser vibrometer alignment using photogrammetric techniques

Conference Proceedings of the Society for Experimental Mechanics Series

Rohe, Daniel P.; Witt, Bryan

Laser vibrometry has become a mature technology for structural dynamics testing, enabling many measurements to be obtained in a short amount of time without mass-loading the part. Recently, multi-point laser vibrometers consisting of 48 or more measurement channels have been introduced to overcome some of the limitations of scanning systems, namely the inability to measure multiple data points simultaneously. However, measuring or estimating the alignment (Euler angles) of the many laser beams in a given test setup remains tedious, can require a significant amount of time to complete, and adds an unquantified source of uncertainty to the measurement. This paper introduces an alignment technique for the multi-point vibrometer system that uses photogrammetry to triangulate laser spots, from which the Euler angles of each laser head relative to the test coordinate system can be determined. The generated laser beam vectors can be used to automatically create a test geometry and channel table. While the approach described was performed manually as a proof of concept, it could be automated using the scripting tools within the vibrometer system.

Testing and simulations of spatial and temporal temperature variations in a particle-based thermal energy storage bin

ASME 2020 14th International Conference on Energy Sustainability, ES 2020

Sment, Jeremy N.I.; Martinez, Mario J.; Albrecht, Kevin; Ho, Clifford K.

The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories is conducting research on a Generation 3 Particle Pilot Plant (G3P3) that uses falling sand-like particles as the heat transfer medium. The system will include a thermal energy storage (TES) bin with a capacity of 6 MWht, requiring ~120,000 kg of flowing particles. Testing and modeling were conducted to develop a validated modeling tool to understand temporal and spatial temperature distributions within the storage bin as it charges and discharges. Flow and energy transport in funnel flow were modeled using volume-averaged conservation equations coupled with level set interface tracking equations that prescribe the dynamic geometry of particle flow within the storage bin. A thin layer of particles on top of the particle bed was allowed to flow toward the center and into the flow channel above the outlet. Model results were validated using particle discharge temperatures taken from thermocouples mounted throughout a small steel bin. The model was then used to predict heat loss during charging, storing, and discharging operational modes at the G3P3 scale. Comparative results from the modeling and testing of the small bin indicate that the model captures many of the salient features of the transient particle outlet temperature over time.

MELCOR validation study on multi-room fire

International Conference on Nuclear Engineering, Proceedings, ICONE

Foulk, James W.; El-Darazi, Samir; Fyffe, Lyndsey M.; Clark, James L.

Estimation of radionuclide aerosol release to the environment from fire accident scenarios is one of the dominant accident evaluations at the U.S. Department of Energy's (DOE's) nuclear facilities. Of particular interest to safety analysts is estimating the radionuclide aerosol release, the source term (ST), based on aerosol transport from a fire room to a corridor and from the corridor to the environment. However, no existing literature has been found on estimating ST for this multi-room facility configuration. This paper contributes the following to the aerosol transport modeling body of work: a validation study on a multi-room fire experiment (including a code-to-code comparison between MELCOR and Consolidated Fire and Smoke Transport, a specialized fire code without radionuclide transport capabilities), a sensitivity study providing insight on the effect of smoke on ST, and a sensitivity study on the effect of aerosol entrainment in the atmosphere (puff and continuous rate) on ST.

Sodium fire analysis using a sodium chemistry package in MELCOR

International Conference on Nuclear Engineering, Proceedings, ICONE

Aoyagi, Mitsuhiro; Uchibori, Akihiro; Takata, Takashi; Foulk, James W.; Clark, Andrew

The Sodium Chemistry (NAC) package in MELCOR has been developed to extend its application to sodium-cooled fast reactors. The models in the NAC package have been assessed through benchmark analyses. The F7-1 pool fire experimental analysis is conducted within the framework of the U.S.-Japan Civil Nuclear Energy Research and Development Working Group. This study assesses the capability of the pool fire model in MELCOR and provides recommendations for future model improvements, because the physics of sodium pool fires are complex. Based on the preliminary results, analytical conditions, such as heat transfer at the floor catch pan, were modified. The current MELCOR analysis yields lower values than the experimental data for the pool combustion rate and the pool, catch pan, and gas temperatures at early times. The current treatment of heat transfer for the catch pan is the primary cause of this difference from the experimental data. After the sodium discharge stops, the calculated pool combustion rate and temperature become higher than the experimental data. This is caused by the absence of a model for pool fire suppression due to oxide layer buildup on the pool surface. Based on these results, recommendations for future work are made, such as modifying the heat transfer treatment of the catch pan and accounting for the effects of the oxide layer in both the MELCOR input model and the pool physics.

Subsurface airflow measurements before and after a small chemical explosion

54th U.S. Rock Mechanics/Geomechanics Symposium

Bauer, Stephen J.; Broome, Scott T.; Gardner, W.P.

To increase understanding of damage associated with underground explosions, a field test program was developed jointly by Sandia and Pacific Northwest National Laboratories at the EMRTC test range in Socorro, NM. The Blue Canyon Dome test site is underlain by a rhyolite that is fractured in places. The test system included deployment of a defined array of 64 probes in eight monitoring boreholes. The monitoring boreholes radially surround a central, near-vertical shot hole at horizontal distances of 4.6 m and 7.6 m in the cardinal directions and at 45-degree offsets to them, respectively. The probes are potted in coarse sand that contacts the rhyolite; each is individually accessed via nylon tubing and isolated from the others by epoxy and grout sequences. Pre- and post-explosion air flow rate measurements, conducted for ~30-45 minutes at each probe, were examined for potential change. The gas flow measurement is a function of the rock mass permeability near a probe. Much of the flow rate change occurred at depth station 8 (59.4 m) and in the SE quadrant. The flow rate changes are inferred to be caused by the chemical explosion, which may have opened pre-existing fractures, fractured the rock, and/or caused block displacements by rotations and translations. The air flow rate data acquired here may enable a relationship and/or calibration to rock damage to be developed.

An active learning method for the comparison of agent-based models

Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS

Thorve, Swapna; Hu, Zhihao; Lakkaraju, Kiran; Letchford, Joshua; Vullikanti, Anil; Marathe, Achla; Swarup, Samarth

We develop a methodology for comparing two or more agent-based models that are developed for the same domain, but may differ in the particular data sets (e.g., geographical regions) to which they are applied, and in the structure of the model. Our approach is to learn a response surface in the common parameter space of the models and compare the regions corresponding to qualitatively different behaviors in the models. As an example, we develop an active learning algorithm to learn phase transition boundaries in contagion processes in order to compare two agent-based models of rooftop solar panel adoption.
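
In one dimension, locating a phase-transition boundary reduces to root finding on the model's response. The following is a minimal caricature assuming a monotone response and a noiseless simulator (the paper's active learner handles multi-dimensional parameter spaces and stochastic agent-based models, which this sketch does not):

```python
def find_transition(response, lo, hi, tol=1e-4):
    """Bisection on a monotone 0-to-1 response function to locate the
    critical parameter value where qualitative behavior changes."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if response(mid) < 0.5:   # below threshold: no cascade/adoption
            lo = mid
        else:                      # above threshold: contagion takes off
            hi = mid
    return 0.5 * (lo + hi)
```

An active learner generalizes this idea: each query is placed where it most reduces uncertainty about the boundary, rather than at a fixed midpoint.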

Theoretical and experimental study of breakdown delay time in pulse discharge

Proceedings - International Symposium on Discharges and Electrical Insulation in Vacuum, ISDEIV

Schweiger, Irina; Hopkins, Matthew M.; Barnat, Edward; Keidar, Michael

Particle-in-cell Monte Carlo collision (PIC-MCC) simulation results on breakdown in a pulsed discharge in helium at a pressure of 100 Torr and a voltage of U = 3.25 kV are presented. The delay of breakdown development is studied for different initial densities of plasma and excited helium atoms, corresponding to various discharge operation frequencies. It is shown that for a high concentration of excited atoms, photoemission determines the breakdown delay time. In the opposite case of low excited-atom density, ion-electron emission plays the key role in breakdown development. The photoemission from the cathode is set by a flux of photons that are Doppler-shifted in frequency; these photons are generated in reactions between excited atoms and fast atoms. A wide distribution of breakdown delay times was observed across different runs and analyzed.

Residual core maximization: An efficient algorithm for maximizing the size of the k-core

Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020

Laishram, Ricky; Sariyuce, Ahmet E.; Eliassi-Rad, Tina; Pinar, Ali P.; Soundarajan, Sucheta

In many online social networking platforms, the participation of an individual is motivated by the participation of others. If an individual chooses to leave a platform, this may produce a cascade in which that person’s friends then choose to leave, causing their friends to leave, and so on. In some cases, it may be possible to incentivize key individuals to stay active within the network, thus preventing such a cascade. This problem is modeled using the anchored k-core of a network, which, for a network G and set of anchor nodes A, is the maximal subgraph of G in which every node has a total of at least k neighbors between the subgraph and anchors. In this work, we propose Residual Core Maximization (RCM), a novel algorithm for finding b anchor nodes so that the size of the anchored k-core is maximized. We perform a comprehensive experimental evaluation on numerous real-world networks and compare RCM to various baselines. We observe that RCM is more effective and efficient than the state-of-the-art methods: on average, RCM produces anchored k-cores that are 1.65 times larger than those produced by the baseline algorithm, and is approximately 500 times faster on average.
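The anchored k-core defined above can be sketched in a few lines (this is the definition only, not the paper's RCM algorithm): iteratively peel nodes with fewer than k surviving neighbors, except anchors, which are never removed.

```python
# Sketch of the anchored k-core by iterative peeling. Anchors always stay;
# non-anchors survive only if at least k of their neighbors survive.

def anchored_k_core(adj, k, anchors):
    """adj: dict node -> set of neighbors; returns the surviving node set."""
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if v in anchors:
                continue
            deg = sum(1 for u in adj[v] if u in alive)
            if deg < k:
                alive.remove(v)
                changed = True
    return alive

# A path a-b-c-d has an empty 2-core, but anchoring the two endpoints
# keeps the whole path in the anchored 2-core.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(sorted(anchored_k_core(adj, 2, set())))       # -> []
print(sorted(anchored_k_core(adj, 2, {"a", "d"})))  # -> ['a', 'b', 'c', 'd']
```

The example shows why anchoring matters: two well-chosen anchors prevent the unraveling cascade entirely.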

More Details

Phase-locked fiber interferometer with high frequency, low voltage fiber-stretcher and application to optical field reconstruction

Proceedings of SPIE - The International Society for Optical Engineering

Katzenmeyer, Aaron M.

Light carries a great deal of information in the form of amplitude, phase, and polarization, any, or most powerfully all, of which may be exploited for the characterization of materials or development of novel technologies. However, extracting the full set of information carried by light becomes increasingly difficult as sample feature sizes shrink and the footprint and cost of detection schemes must decrease as well. Here, a fiber-based interferometric scheme is deployed to extract this information from optical systems, which may be assessed three dimensionally down to the nanoscale and/or temporally up to the bandwidth of available electronic data acquisition. The setup utilizes a homemade fiber stretcher to achieve phase-locking of the reference arm and is compatible with heterodyning. More interestingly, a simplified and less expensive approach is demonstrated which employs the fiber stretcher for arbitrarily frequency up-converted (with respect to driving voltage frequency) phase modulation in addition to locking. This reduces the detection system's size, weight, power, and cost, eliminating the need for an acousto-optic modulator and reducing the drive power required by orders of magnitude. High performance is maintained, as evidenced by imaging amplitude and phase (and inherently polarization state) in micro- and nano-optical systems such as lensed fibers and focusing waveguide grating couplers previously imaged only for intensity distribution.

More Details

Stochastic Gradients for Large-Scale Tensor Decomposition

SIAM Journal on Mathematics of Data Science

Kolda, Tamara G.; Hong, David

Tensor decomposition is a well-known tool for multiway data analysis. This work proposes using stochastic gradients for efficient generalized canonical polyadic (GCP) tensor decomposition of large-scale tensors. GCP tensor decomposition is a recently proposed version of tensor decomposition that allows for a variety of loss functions such as Bernoulli loss for binary data or Huber loss for robust estimation. The stochastic gradient is formed from randomly sampled elements of the tensor and is efficient because it can be computed using the sparse matricized-tensor times Khatri-Rao product tensor kernel. For dense tensors, we simply use uniform sampling. For sparse tensors, we propose two types of stratified sampling that give precedence to sampling nonzeros. Numerical results demonstrate the advantages of the proposed approach and its scalability to large-scale problems.
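The element-sampling idea behind the stochastic gradient can be shown in a deliberately tiny sketch (an illustration, not the paper's GCP implementation): SGD for a rank-1 factorization of a matrix, where each step touches only one randomly sampled element.

```python
import random

# Sketch: stochastic gradient descent for a rank-1 factorization
# X[i][j] ~ a[i] * b[j], using one randomly sampled element per step,
# mirroring the element-sampling strategy of GCP-SGD (squared-error loss
# here; GCP generalizes the loss function).

def sgd_rank1(X, steps=20000, lr=0.05, seed=0):
    rng = random.Random(seed)
    m, n = len(X), len(X[0])
    a = [rng.uniform(0.5, 1.0) for _ in range(m)]
    b = [rng.uniform(0.5, 1.0) for _ in range(n)]
    for _ in range(steps):
        i, j = rng.randrange(m), rng.randrange(n)
        r = a[i] * b[j] - X[i][j]      # residual at the sampled entry
        ai = a[i]
        a[i] -= lr * r * b[j]          # gradient of 0.5 * r**2 w.r.t. a[i]
        b[j] -= lr * r * ai            # gradient w.r.t. b[j]
    return a, b

X = [[2.0, 4.0], [3.0, 6.0]]           # exactly rank 1
a, b = sgd_rank1(X)
err = max(abs(a[i] * b[j] - X[i][j]) for i in range(2) for j in range(2))
```

For a real sparse tensor, the key change is the stratified sampling the paper proposes: drawing nonzeros with higher probability than zeros.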

More Details

Wave data assimilation in support of wave energy converter power prediction: Yakutat, Alaska case study

Proceedings of the Annual Offshore Technology Conference

Dallman, Ann; Khalil, Mohammad; Raghukumar, Kaus; Jones, Craig; Kasper, Jeremy; Flanary, Christopher; Chang, Grace; Roberts, Jesse D.

Integration of renewable power sources into grids remains an active research and development area, particularly for less developed renewable energy technologies such as wave energy converters (WECs). WECs are projected to have strong early market penetration for remote communities, which serve as natural microgrids. Hence, accurate wave predictions to manage the interactions of a WEC array with microgrids are especially important. Recently developed, low-cost wave measurement buoys allow for operational assimilation of wave data at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. The availability of real-time wave spectral components from low-cost wave measurement buoys allows for operational data assimilation with the Ensemble Kalman filter technique, whereby measured wave conditions within the numerical wave forecast model domain are assimilated onto the combined set of internal and boundary grid points while taking into account model and observation error covariances. The updated model state and boundary conditions allow for more accurate wave characteristic predictions at the locations of interest. Initial deployment data indicated that measured wave data from one buoy that were assimilated into the wave modeling framework resulted in improved forecast skill for a case where a traditional numerical forecast model (e.g., Simulating WAves Nearshore; SWAN) did not represent the measured conditions well. On average, the wave power forecast error was reduced from 73% to 43% using the data assimilation modeling with real-time wave observations.
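The Ensemble Kalman filter update at the core of the assimilation step can be sketched for a single scalar state (purely illustrative; the paper assimilates full wave spectra over a SWAN model domain with spatial error covariances).

```python
import random
import statistics

# Toy scalar EnKF analysis step with perturbed observations: each ensemble
# member is nudged toward the observation by a gain that balances forecast
# spread against observation error.

def enkf_update(ensemble, obs, obs_var, seed=0):
    rng = random.Random(seed)
    var_f = statistics.variance(ensemble)   # forecast error variance
    gain = var_f / (var_f + obs_var)        # Kalman gain (observation operator H = 1)
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

forecast = [1.8, 2.2, 2.6, 2.0, 2.4]        # e.g. significant wave heights (m)
analysis = enkf_update(forecast, obs=3.0, obs_var=0.01)
```

With a small observation variance relative to the forecast spread, the analysis mean moves most of the way toward the measured value, which is exactly the mechanism that pulls the model toward the buoy data.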

More Details

Initial simulations of empty room collapse and reconsolidation at the waste isolation pilot plant

54th U.S. Rock Mechanics/Geomechanics Symposium

Reedlunn, Benjamin; Moutsanidis, Georgios; Baek, Jonghyuk; Huang, Tsung H.; Koester, Jacob K.; He, Xiaolong; Taneja, Karan; Wei, Haoyan; Bazilevs, Yuri; Chen, Jiun S.

Room ceilings and walls at the Waste Isolation Pilot Plant tend to collapse over time, causing rubble piles on floors of empty rooms. The surrounding rock formation will gradually compact these rubble piles until they eventually become solid salt, but the length of time for a rubble pile to reach a certain porosity and permeability is unknown. This paper details the initial model development to predict the porosity and fluid flow network of a closing empty room. Conventional geomechanical numerical methods would struggle to model empty room collapse and rubble pile consolidation, so three different meshless methods, the Immersed Isogeometric Analysis (IGA) Meshfree Method, Reproducing Kernel Particle Method (RKPM), and Conformal Reproducing Kernel (CRK) method, were assessed. First, each meshless method simulated gradual room closure, without ceiling or wall collapse. All methods produced equivalent predictions to a finite element method reference solution, with comparable computational speed. Second, the Immersed IGA Meshfree method and RKPM simulated two-dimensional empty room collapse and rubble pile consolidation. Both methods successfully simulated large viscoplastic deformations, fracture, and rubble pile rearrangement to produce qualitatively realistic results. Finally, the meshless simulation results helped identify a mechanism for empty room closure that had been previously overlooked.

More Details

Two-phase flow properties of a wellbore microannulus

54th U.S. Rock Mechanics/Geomechanics Symposium

Garcia Fernandez, S.; Anwar, I.; Reda Taha, M.M.; Stormont, J.C.; Matteo, Edward N.

The interface between the steel casing and cemented annulus of a typical wellbore may de-bond and become permeable; this flow path is commonly referred to as a microannulus. Because there are often multiple fluids associated with wellbores, understanding two-phase flow behavior in the microannulus is important when evaluating the risks and hazards associated with leaky wellbores. A microannulus was created in a mock wellbore specimen by thermal debonding, which is one of the possible mechanisms for microannulus creation in the field. The specimen was saturated with silicone oil, and the intrinsic permeability through the microannulus was measured. Nitrogen was then injected at progressively increasing pressures, first to find the breakthrough pressure, and secondly, to obtain the relation between capillary pressure and gas relative permeability. The nitrogen was injected through the bottom of the specimen, to simulate the field condition where the gas migrates upwards along the casing. The measured data was successfully fit to common functional forms, such as the models of Brooks-Corey and Van Genuchten, which relate capillary pressure, saturation, and relative permeability of the two phases. The results can be used in computational models of flow along a wellbore microannulus.
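The van Genuchten relation mentioned above has a simple closed form; a minimal sketch follows (parameter values are illustrative, not the paper's fitted ones): effective saturation Se = [1 + (alpha * Pc)^n]^(-m) with m = 1 - 1/n.

```python
# Sketch of the van Genuchten capillary pressure / saturation relation.
# alpha and n are fitting parameters; pc is capillary pressure (> 0).

def van_genuchten_se(pc, alpha=0.5, n=2.0):
    """Effective wetting-phase saturation at capillary pressure pc."""
    m = 1.0 - 1.0 / n
    return (1.0 + (alpha * pc) ** n) ** (-m)

for pc in (0.0, 2.0, 10.0):
    print(pc, round(van_genuchten_se(pc), 3))
# -> 0.0 1.0
# -> 2.0 0.707
# -> 10.0 0.196
```

Fitting alpha and n to the measured pressure-saturation data is what lets the curve, together with a relative permeability model, feed directly into computational flow models of the microannulus.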

More Details

High-temperature particle flow testing in parallel plates for particle-to-supercritical CO2 heat exchanger applications

ASME 2020 14th International Conference on Energy Sustainability, ES 2020

Laubscher, Hendrik F.; Albrecht, Kevin; Ho, Clifford K.

Realizing cost-effective, dispatchable, renewable energy production using concentrated solar power (CSP) relies on reaching high process temperatures to increase the thermal-to-electrical efficiency. Using ceramic-based particles as both the energy storage medium and the heat transfer fluid is a promising approach to increasing the operating temperature of next-generation CSP plants. The particle-to-supercritical CO2 (sCO2) heat exchanger is a critical component in the development of this technology for transferring thermal energy from the heated ceramic particles to the sCO2 working fluid of the power cycle. The leading design for the particle-to-sCO2 heat exchanger is a shell-and-plate configuration. Currently, design work is focused on optimizing the performance of the heat exchanger by reducing the plate spacing. However, the particle channel geometry is limited by the uniformity and reliability of particle flow in narrow vertical channels. Results of high-temperature experimental particle flow testing are presented in this paper.

More Details

Improved load modelling for emerging distribution system assessments

CIRED - Open Access Proceedings Journal

Siratarnsophon, Piyapath; Hernandez, Miguel; Peppanen, Jouni; Deboever, Jeremiah; Rylander, Matthew; Reno, Matthew J.

Distribution system modelling and analysis with growing penetration of distributed energy resources (DERs) requires more detailed and accurate distribution load modelling. Smart meters, DER monitors, and other distribution system sensors provide a new level of visibility to distribution system loads and DERs. However, there is a limited understanding of how to efficiently leverage the new information in distribution system load modelling. This study presents the assessment of 11 methods to leverage the emerging information for improved distribution system active and reactive power load modelling. The accuracy of these load modelling methods is assessed both at the primary and the secondary distribution levels by analysing over 2.7 billion data points of results of feeder node voltages and element phase currents obtained by performing annual quasi-static time series simulations on EPRI's Ckt5 feeder model.

More Details

Robust terrain classification of high spatial resolution remote sensing data employing probabilistic feature fusion and pixelwise voting

Proceedings of SPIE - The International Society for Optical Engineering

West, Roger D.; Redman, Brian J.; Yocky, David A.; Foulk, James W.; Anderson, Dylan Z.

There are several factors that should be considered for robust terrain classification. We address the issue of high pixel-wise variability within terrain classes from remote sensing modalities, when the spatial resolution is less than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, then makes a superpixel-level terrain classification decision by the majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
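The superpixel-level voting rule described above is simple to state in code. In this sketch the per-pixel class labels are given directly (standing in for the PFF classifier's pixelwise decisions, which are not reproduced here).

```python
from collections import Counter

# Sketch: reassign every pixel in a superpixel the majority class label
# of the pixels it contains.

def superpixel_vote(pixel_labels, superpixel_of):
    """pixel_labels[i]: class of pixel i; superpixel_of[i]: its segment id."""
    votes = {}
    for lab, sp in zip(pixel_labels, superpixel_of):
        votes.setdefault(sp, Counter())[lab] += 1
    winner = {sp: c.most_common(1)[0][0] for sp, c in votes.items()}
    return [winner[sp] for sp in superpixel_of]

labels = ["grass", "grass", "road", "grass", "road", "road", "road", "grass"]
segs   = [0, 0, 0, 0, 1, 1, 1, 1]
print(superpixel_vote(labels, segs))
# segment 0 votes grass (3 vs 1); segment 1 votes road (3 vs 1)
```

The single outlier pixel in each segment is overruled by the vote, which is how the method suppresses the high pixel-wise variability within terrain classes.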

More Details

Permeability prediction of porous media using convolutional neural networks with physical properties

CEUR Workshop Proceedings

Yoon, Hongkyu; Melander, Darryl; Verzi, Stephen J.

Permeability prediction of porous media systems is very important in many engineering and science domains, including earth materials, bio- and solid materials, and energy applications. In this work, we evaluated how machine learning can be used to predict the permeability of porous media from physical properties. An emerging challenge for machine learning/deep learning in engineering and scientific research is the ability to incorporate physics into the machine learning process. We used convolutional neural networks (CNNs) trained on image data of bead packings, and additional physical properties of the porous media, such as porosity and surface area, were used as training data, either by feeding them to the fully connected network directly or through a multilayer perceptron network. Our results clearly show that the optimal neural network architecture and the implementation of physics-informed constraints are important to properly improve model predictions of permeability. A comprehensive analysis of hyperparameters with different CNN architectures, and of the implementation scheme for the physical properties, needs to be performed to optimize our learning system for various porous media systems.

More Details

GMLS-Nets: A machine learning framework for unstructured data

CEUR Workshop Proceedings

Trask, Nathaniel A.; Patel, Ravi; Gross, Ben J.; Atzberger, Paul J.

Data fields sampled on irregularly spaced points arise in many science and engineering applications. For regular grids, Convolutional Neural Networks (CNNs) gain benefits from weight sharing and invariances. We generalize CNNs by introducing methods for data on unstructured point clouds using Generalized Moving Least Squares (GMLS). GMLS is a nonparametric meshfree technique for estimating linear bounded functionals from scattered data, and has emerged as an effective technique for solving partial differential equations (PDEs). By parameterizing the GMLS estimator, we obtain learning methods for linear and non-linear operators with unstructured stencils. The requisite calculations are local, embarrassingly parallelizable, and supported by a rigorous approximation theory. We show how the framework may be used for unstructured physical data sets to perform operator regression, develop predictive dynamical models, and obtain feature extractors for engineering quantities of interest. The results show the promise of these architectures as foundations for data-driven model development in scientific machine learning applications.

More Details

Probabilistic assessment of damage from same shock response spectra due to variations in damping

Shock and Vibration

Maji, Arup

Interpretation of field data from shock tests and subsequent assessment of product safety margins via laboratory testing are based on the shock response spectra (SRS). The SRS capture how a single degree of freedom (SDOF) structure responds to the shock at differing frequencies and, therefore, no longer contain the duration or other temporal parameters pertaining to the shock. A single duration can often be included in the technical specification or in the recreation of acceleration vs. time history from the specified SRS; however, there is little basis for that beyond technical judgment. The loss of such temporal information can result in the recreated SRS being the same while its effect on a system or component can be different. This paper attempts to quantify this deficiency as well as propose a simple method of capturing damping from shock waves that can allow the original waveform to be more accurately reconstructed from the SRS. In this study the decay rate associated with various frequencies that comprise the overall shock was varied. This variation in the decay rate leads to a variation in the acceleration vs. time history, which can be correlated to a “Damage Index” that captures the fatigue damage imparted to the object under shock. Several waveforms that have the same SRS but varying rates of decay for either high- or low-frequency components of the shock were investigated. The resulting variation in stress cycles and Damage Index is discussed in the context of the lognormal distribution of fatigue failure data. It is proposed that, along with the SRS, the decay rate also be captured to minimize the discrepancy between field data and representative laboratory tests.
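The SDOF response underlying an SRS can be sketched numerically (an illustrative toy, not the paper's procedure): integrate the relative-motion equation z'' + 2*zeta*w*z' + w^2*z = -a(t) for a base-excitation pulse and record the peak absolute acceleration, which is what an SRS stores per natural frequency.

```python
import math

# Sketch: peak absolute acceleration of an SDOF oscillator under a
# half-sine base acceleration pulse, via semi-implicit Euler integration.

def peak_response(fn, zeta, dt=1e-5, pulse_amp=100.0, pulse_dur=0.005):
    w = 2.0 * math.pi * fn                  # natural circular frequency
    z = v = peak = 0.0                      # relative disp., velocity, peak
    t, t_end = 0.0, 0.05
    while t < t_end:
        a_base = (pulse_amp * math.sin(math.pi * t / pulse_dur)
                  if t < pulse_dur else 0.0)
        acc = -a_base - 2.0 * zeta * w * v - w * w * z   # z''
        v += acc * dt
        z += v * dt
        abs_acc = -(2.0 * zeta * w * v + w * w * z)      # absolute accel.
        peak = max(peak, abs(abs_acc))
        t += dt
    return peak

# The paper's point in miniature: damping changes the response even though
# the excitation is identical.
light = peak_response(fn=100.0, zeta=0.01)
heavy = peak_response(fn=100.0, zeta=0.10)
print(light > heavy)
```

Sweeping `fn` over a set of natural frequencies and plotting the peaks would yield the SRS itself; the comparison above shows why discarding the decay (damping) information loses something the SRS alone cannot recover.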

More Details

Extended SWIR InGaAs/GaAsSb type-II superlattice photodetector on InP

Proceedings of SPIE - The International Society for Optical Engineering

Stephenson, Chad A.; Klem, John F.; Olesberg, Jonathon T.; Kadlec, Clark N.; Coon, Wesley; Weiner, Phillip H.

An InGaAs/GaAsSb Type-II superlattice is explored as an absorber material for extended short-wave infrared detection. A 10.5 nm period was grown with an InGaAs/GaAsSb thickness ratio of 2 with a target In composition of 46% and target Sb composition of 62%. Cutoff wavelengths near 2.8 μm were achieved with responsivity beyond 3 μm. Demonstrated dark current densities were as low as 1.4 mA/cm2 at 295K and 13 μA/cm2 at 235K at -1V bias. A significant barrier to hole extraction was identified in the detector design that severely limited the external quantum efficiency (EQE) of the detectors. A redesign of the detector that removes that barrier could make InGaAs/GaAsSb very competitive with current commercial HgCdTe and extended InGaAs technology.

More Details

Multipole-based cable braid electromagnetic penetration model: Magnetic penetration case

Progress In Electromagnetics Research C

Campione, Salvatore; Warne, Larry K.; Langston, William L.

The goal of this paper is to present, for the first time, calculations of the magnetic penetration case of a first principles multipole-based cable braid electromagnetic penetration model. As a first test case, a one-dimensional array of perfect electrically conducting wires, for which an analytical solution is known, is investigated: We compare both the self-inductance and the transfer inductance results from our first principles cable braid electromagnetic penetration model to those obtained using the analytical solution. These results are found in good agreement up to a radius to half spacing ratio of about 0.78, demonstrating a robustness needed for many commercial and non-commercial cables. We then analyze a second set of test cases of a square array of wires whose solution is the same as the one-dimensional array result and of a rhomboidal array whose solution can be estimated from Kley’s model. As a final test case, we consider two layers of one-dimensional arrays of wires to investigate porpoising effects analytically. We find good agreement with analytical and Kley’s results for these geometries, verifying our proposed multipole model. Note that only our multipole model accounts for the full dependence on the actual cable geometry which enables us to model more complicated cable geometries.

More Details

Thermal-hydraulic investigations of a horizontal dry cask simulator

International Conference on Nuclear Engineering, Proceedings, ICONE

Pulido, Ramon J.; Lindgren, Eric; Durbin, S.; Foulk, James W.

Recent advances in horizontal cask designs for commercial spent nuclear fuel have significantly increased maximum thermal loading. This is due in part to greater efficiency in internal conduction pathways. Carefully measured data sets generated from testing of full-sized casks or smaller cask analogs are widely recognized as vital for validating thermal-hydraulic models of these storage cask designs. While several testing programs have been previously conducted, these earlier validation studies did not integrate all the physics or components important in a modern, horizontal dry cask system. The purpose of this investigation is to produce data sets that can be used to benchmark the codes and best practices presently used to calculate cladding temperatures and induced cooling air flows in modern, horizontal dry storage systems. The horizontal dry cask simulator (HDCS) has been designed to generate this benchmark data and complement the existing knowledge base. Transverse and axial temperature profiles along with induced-cooling air flow are measured using various backfills of gases for a wide range of decay powers and canister pressures. The data from the HDCS tests will be used to host a blind model validation effort.

More Details

Model reduction for hypersonic aerodynamics via conservative LSPG projection and hyper-reduction

AIAA Scitech 2020 Forum

Blonigan, Patrick J.; Rizzi, Francesco; Howard, Micah; Fike, Jeffrey; Carlberg, Kevin T.

High-speed aerospace engineering applications rely heavily on computational fluid dynamics (CFD) models for design and analysis due to the expense and difficulty of flight tests and experiments. This reliance on CFD models necessitates performing accurate and reliable uncertainty quantification (UQ) of the CFD models. However, it is very computationally expensive to run CFD for hypersonic flows due to the fine grid resolution required to capture the strong shocks and large gradients that are typically present. Additionally, UQ approaches are “many-query” problems requiring many runs with a wide range of input parameters. One way to enable computationally expensive models to be used in such many-query problems is to employ projection-based reduced-order models (ROMs) in lieu of the (high-fidelity) full-order model. In particular, the least-squares Petrov–Galerkin (LSPG) ROM (equipped with hyper-reduction) has demonstrated the ability to significantly reduce simulation costs while retaining high levels of accuracy on a range of problems including subsonic CFD applications [1, 2]. This allows computationally inexpensive LSPG ROM simulations to replace the full-order model simulations in UQ studies, which makes this many-query task tractable, even for large-scale CFD models. This work presents the first application of LSPG to a hypersonic CFD application. In particular, we present results for LSPG ROMs of the HIFiRE-1 in a three-dimensional, turbulent Mach 7.1 flow, showcasing the ability of the ROM to significantly reduce computational costs while maintaining high levels of accuracy in computed quantities of interest.

More Details

Computational modeling of terry turbine airflow testing to support the expansion of operating band in beyond design basis conditions

International Conference on Nuclear Engineering, Proceedings, ICONE

Gilkey, Lindsay N.; Andrews, Nathan C.; Ross, Kyle; Solom, Matthew

The performance of the Reactor Core Isolation Cooling (RCIC) system under beyond design basis event (BDBE) conditions is not well-characterized. The operating band of the RCIC system is currently specified utilizing conservative assumptions, with restrictive operational guidelines not allowing for an adequate credit of the true capability of the system. For example, it is assumed that battery power is needed for RCIC operation to maintain the reactor pressure vessel (RPV) water level—a loss of battery power is conservatively assumed to result in failure of the RCIC turbopump system in a range of safety and risk assessments. However, the accidents at Fukushima Daiichi Nuclear Power Station (FDNPS) showed that the Unit 2 RCIC did not cease to operate following loss of battery power. In fact, it continued to inject water into the RPV for nearly 3 days following the earthquake. Improved understanding of Terry turbopump operations under BDBE conditions can support enhancement of accident management procedures and guidelines, promoting more robust severe accident prevention. Therefore, the U.S. Department of Energy (DOE), U.S. nuclear industry, and international stakeholders have funded the Terry Turbine Expanded Operating Band (TTEXOB) program. This program aims to better understand RCIC operations during BDBE conditions through combined experimental and modeling efforts. As part of the TTEXOB, airflow testing was performed at Texas A&M University (TAMU) of a small-scale ZS-1 and a full-scale GS-2 Terry turbine. This paper presents the corresponding efforts to model operation of the TAMU ZS-1 and GS-2 Terry turbines with Sandia National Laboratories’ (SNL) MELCOR code. The current MELCOR modeling approach represents the Terry turbine with a system of equations expressing the conservation of angular momentum. 
The joint analysis and experimental program identified that a) it is possible for the Terry turbine to develop the same power at different speeds, and b) turbine losses appear to be insensitive to the size of the turbine. As part of this program, further study of Terry turbine modeling unknowns and uncertainties is planned to support more extensive application of modeling and simulation to the enhancement of plant-specific operational and accident procedures.

More Details

Determining airborne release fraction from DOT 7A drums exposed to a thermal insult

International Conference on Nuclear Engineering, Proceedings, ICONE

Mendoza, Hector; Figueroa Faria, Victor G.; Gill, Walter; Sanborn, Scott E.

Fire suppression systems for transuranic (TRU) waste facilities are designed to minimize radioactive material release to the public and to facility employees in the event of a fire. Currently, facilities with Department of Transportation (DOT) 7A drums filled with TRU waste follow guidelines that assume a fraction of the drums experience lid ejection in case of a fire. This lid loss is assumed to result in significant TRU waste material from the drum experiencing an unconfined burn during the fire, and fire suppression systems are thus designed to respond and mitigate potential radioactive material release. However, recent preliminary tests where the standard lid filters of 7A drums were replaced with a UT-9424S filter suggest that the drums could retain their lid if equipped with this filter. The retention of the drum lid could thus result in a very different airborne release fraction (ARF) of a 7A drum's contents when exposed to a pool fire than what is assumed in current safety basis documents. This potentially different ARF is currently unknown because, while studies have been performed in the past to quantify ARF for 7A drums in a fire, no comprehensive measurements have been performed for drums equipped with a UT-9424S filter. If the ARF is lower than what is currently assumed, it could change the way TRU waste facilities operate. Sandia National Laboratories has thus developed a set of tests and techniques to help determine an ARF value for 7A drums filled with TRU waste and equipped with a UT-9424S filter when exposed to the hypothetical accident conditions (HAC) of a 30-minute hydrocarbon pool fire. 
In this multi-phase test series, SNL has accomplished the following: (1) performed a thermogravimetric analysis (TGA) on various combustible materials typically found in 7A drums in order to identify a conservative load for 7A drums in a pool fire; (2) performed a 30-minute pool fire test to (a) determine if lid ejection is possible under extreme conditions despite the UT-9424S filter, and (b) to measure key parameters in order to replicate the fire environment using a radiant heat setup; and (3) designed a radiant heat setup to demonstrate capability of reproducing the fire environment with a system that would facilitate measurements of ARF. This manuscript thus discusses the techniques, approach, and unique capabilities SNL has developed to help determine an ARF value for DOT 7A drums exposed to a 30-minute fully engulfing pool fire while equipped with a UT-9424S filter on the drum lid.

More Details

Full function sampling of uncertain correlations

ASME 2020 Verification and Validation Symposium, VVS 2020

Irick, Kevin W.; Engerer, Jeffrey D.; Lance, Blake; Roberts, Scott A.; Schroeder, Benjamin B.

Empirically-based correlations are commonly used in modeling and simulation but rarely have rigorous uncertainty quantification that captures the nature of the underlying data. In many applications, a mathematical description for a parameter response to some input stimulus is often either unknown, unable to be measured, or both. Likewise, the data used to observe a parameter response is often noisy, and correlations are derived to approximate the bulk response. Practitioners frequently treat the chosen correlation, sometimes referred to as the "surrogate" or "reduced-order" model of the response, as a constant mathematical description of the relationship between input and output. This assumption, as with any model, is incorrect to some degree, and the uncertainty in the correlation can potentially have significant impacts on system responses. Thus, proper treatment of correlation uncertainty is necessary. In this paper, a method is proposed for high-level abstract sampling of uncertain data correlations. Whereas uncertainty characterization is often assigned to scalar values for direct sampling, functional uncertainty is not always straightforward. A systematic approach for sampling univariable uncertain correlations was developed to perform more rigorous uncertainty analyses and more reliably sample the correlation space. This procedure implements pseudo-random sampling of a correlation with a bounded input range to maintain the correlation form, to respect variable uncertainty across the range, and to ensure function continuity with respect to the input variable.
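One simple way to draw continuous realizations of an uncertain correlation (an assumption for illustration; the paper's procedure is more elaborate) is to perturb a nominal curve by a single random factor applied to a smooth uncertainty envelope, so that each sampled realization remains a continuous function of the input.

```python
import random

# Sketch: sample a whole correlation function at once. One random draw per
# realization preserves continuity in x; the envelope bounds the variation.

def sample_correlation(nominal, envelope, rng):
    xi = rng.uniform(-1.0, 1.0)            # one draw per realization
    return lambda x: nominal(x) + xi * envelope(x)

rng = random.Random(42)
nominal = lambda x: 2.0 * x + 1.0          # e.g. a fitted linear correlation
envelope = lambda x: 0.1 * (1.0 + x)       # illustrative uncertainty band

f = sample_correlation(nominal, envelope, rng)
# Every realization stays inside the envelope across the input range.
inside = all(abs(f(x) - nominal(x)) <= envelope(x) + 1e-12
             for x in [i / 10 for i in range(11)])
print(inside)
```

Sampling the function as a whole, rather than sampling each output point independently, is what keeps the realization physically plausible as a correlation.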

More Details

Self-updating models with error remediation

Proceedings of SPIE - The International Society for Optical Engineering

Doak, Justin E.; Smith, Michael R.; Ingram, Joe B.

Many environments currently employ machine learning models for data processing and analytics that were built using a limited number of training data points. Once deployed, the models are exposed to significant amounts of previously-unseen data, not all of which is representative of the original, limited training data. However, updating these deployed models can be difficult due to logistical, bandwidth, time, hardware, and/or data sensitivity constraints. We propose a framework, Self-Updating Models with Error Remediation (SUMER), in which a deployed model updates itself as new data becomes available. SUMER uses techniques from semi-supervised learning and noise remediation to iteratively retrain a deployed model using intelligently-chosen predictions from the model as the labels for new training iterations. A key component of SUMER is the notion of error remediation as self-labeled data can be susceptible to the propagation of errors. We investigate the use of SUMER across various data sets and iterations. We find that self-updating models (SUMs) generally perform better than models that do not attempt to self-update when presented with additional previously-unseen data. This performance gap is accentuated in cases where there is only limited amounts of initial training data. We also find that the performance of SUMER is generally better than the performance of SUMs, demonstrating a benefit in applying error remediation. Consequently, SUMER can autonomously enhance the operational capabilities of existing data processing systems by intelligently updating models in dynamic environments.
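The self-labeling loop with error remediation can be sketched as follows. The toy nearest-centroid classifier and its margin-based confidence rule are illustrative assumptions, not the paper's implementation; the point is that ambiguous self-labels are withheld so errors cannot propagate into later retraining iterations.

```python
# Minimal self-training loop in the spirit of SUMER (toy classifier and
# confidence rule are illustrative, not the paper's implementation).

def centroid(xs):
    return sum(xs) / len(xs)

labeled = {0.0: "a", 1.0: "a", 9.0: "b", 10.0: "b"}  # tiny initial training set
unlabeled = [0.5, 1.5, 8.5, 9.5, 5.2]                # previously-unseen data

for _ in range(3):  # a few self-update iterations
    ca = centroid([x for x, y in labeled.items() if y == "a"])
    cb = centroid([x for x, y in labeled.items() if y == "b"])
    still_unlabeled = []
    for x in unlabeled:
        da, db = abs(x - ca), abs(x - cb)
        # Error remediation: only accept self-labels with a clear margin,
        # so ambiguous points cannot propagate labeling errors.
        if abs(da - db) > 2.0:
            labeled[x] = "a" if da < db else "b"
        else:
            still_unlabeled.append(x)
    unlabeled = still_unlabeled

assert labeled[0.5] == "a" and labeled[9.5] == "b"
assert 5.2 in unlabeled  # the ambiguous midpoint is never self-labeled
```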

More Details

An Energy Consistent Discretization of the Nonhydrostatic Equations in Primitive Variables

Journal of Advances in Modeling Earth Systems

Taylor, Mark A.; Guba, Oksana; Steyer, Andrew J.; Ullrich, Paul A.; Hall; Eldred, Christopher

We derive a formulation of the nonhydrostatic equations in spherical geometry with a Lorenz staggered vertical discretization. The combination conserves a discrete energy in exact time integration when coupled with a mimetic horizontal discretization. The formulation is a version of Dubos and Tort (2014, https://doi.org/10.1175/MWR-D-14-00069.1) rewritten in terms of primitive variables. It is valid for terrain following mass or height coordinates and for both Eulerian and vertically Lagrangian discretizations. The discretization relies on an extension to Simmons and Burridge (1981, https://doi.org/10.1175/1520-0493(1981)109<0758:AEAAMC>2.0.CO;2) vertical differencing, which we show obeys a discrete derivative product rule. This product rule allows us to simplify the treatment of the vertical transport terms. Energy conservation is obtained via a term-by-term balance in the kinetic, internal, and potential energy budgets, ensuring an energy-consistent discretization up to time truncation error with no spurious sources of energy. We demonstrate convergence with respect to time truncation error in a spectral element code with a horizontally explicit, vertically implicit (HEVI) implicit-explicit time stepping algorithm.

More Details

Group Formation Theory at Multiple Scales

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Doyle, Casey L.; Naugle, Asmeret B.; Bernard, Michael; Lakkaraju, Kiran; Kittinger, Robert; Sweitzer, Matthew D.; Rothganger, Fredrick R.

There is a wealth of psychological theory regarding the drive for individuals to congregate and form social groups, positing that people may organize out of fear, social pressure, or even to manage their self-esteem. We evaluate three such theories for multi-scale validity by studying them not only at the individual scale for which they were originally developed, but also for applicability to group interactions and behavior. We implement this multi-scale analysis using a dataset of communications and group membership derived from a long-running online game, matching the intent behind the theories to quantitative measures that describe players’ behavior. Once we establish that the theories hold for the dataset, we increase the scope to test the theories at the higher scale of group interactions. Despite being formulated to describe individual cognition and motivation, we show that some group dynamics theories hold at the higher level of group cognition and can effectively describe the behavior of joint decision making and higher-level interactions.

More Details

Finite element analysis of a moving packed-bed particle-to-sCO2 heat exchanger testing and performance

ASME 2020 14th International Conference on Energy Sustainability, ES 2020

Delovato, Nicolas; Albrecht, Kevin; Ho, Clifford K.

A focus in the development of the next generation of concentrating solar power (CSP) plants is the integration of high temperature particle receivers with improved efficiency supercritical carbon dioxide (sCO2) power cycles. The feasibility of this type of system depends on the design of a particle-to-sCO2 heat exchanger. This work presents a finite element analysis (FEA) model to analyze the thermal performance of a particle-to-sCO2 heat exchanger for potential use in a CSP plant. The heat exchanger design utilizes a moving packed bed of particles in crossflow with sCO2 which flows in a serpentine pattern through banks of microchannel plates. The model contains a thermal analysis to determine the heat exchanger's performance in transferring thermal energy from the particle bed to the sCO2. Test data from a prototype heat exchanger was used to verify the performance predictions of the model. The verification of the model required a multitude of sensitivity tests to identify where fidelity needed to be added to reach agreement between the experimental and simulated results. For each sensitivity test in the model, the effect on the performance is discussed. The model was shown to be in good agreement on the overall heat transfer coefficient of the heat exchanger with the experimental results for a low temperature set of conditions with a combination of added sensitivities. A set of key factors with a major impact on the performance of the heat exchanger are discussed.

More Details

Evaluating the effective solar absorptance of dilute particle configurations

ASME 2020 14th International Conference on Energy Sustainability, ES 2020

Ho, Clifford K.; Gonzalez-Portillo, Luis F.; Albrecht, Kevin J.

Ray-tracing and heat-transfer simulations of discrete particles in a representative elementary volume were performed to determine the effective particle-cloud absorptance and temperature profiles as a function of intrinsic particle absorptance values (0 - 1) for dilute solids volume fractions (1 - 3%) representative of falling particle receivers used in concentrating solar power applications. Results showed that the average particle-cloud absorptance is increased above intrinsic particle absorptance values as a result of reflections and subsequent reabsorption (light trapping). The relative increase in effective particle-cloud absorptance was greater for lower values of intrinsic particle absorptance and could be as high as a factor of two. Higher values of intrinsic particle absorptance led to higher simulated steady-state particle temperatures. Significant temperature gradients within the particle cloud and within the particles themselves were also observed in the simulations. Findings indicate that dilute particle-cloud configurations within falling particle receivers can significantly enhance the apparent effective absorptance of the particle curtain, and materials with higher values of intrinsic particle absorptance will yield greater radiative absorptance and temperatures.
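The light-trapping effect described above can be captured by a much simpler analytical model than the paper's ray-tracing: if a ray reflected from one particle is re-intercepted by another particle with some probability, summing the geometric series of bounce events gives the effective cloud absorptance. The re-interception probability below is a made-up parameter, not a value from the paper.

```python
def effective_absorptance(a, p):
    """Effective cloud absorptance for intrinsic particle absorptance ``a``
    when a reflected ray is re-intercepted by another particle with
    probability ``p`` (a simplified light-trapping model, not the paper's
    ray-tracing).  Summing the geometric series of absorb-or-scatter events:
        A = a * sum_k ((1 - a) * p)**k = a / (1 - (1 - a) * p)
    """
    return a / (1.0 - (1.0 - a) * p)

p = 0.5  # hypothetical re-interception probability for a dilute curtain

# The relative enhancement is larger for less absorptive particles ...
low, high = 0.3, 0.9
assert effective_absorptance(low, p) / low > effective_absorptance(high, p) / high

# ... and for low intrinsic absorptance it approaches a factor of two,
# consistent with the trend reported in the abstract.
assert 1.5 < effective_absorptance(0.1, p) / 0.1 < 2.0
```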

More Details

Testing and simulations of spatial and temporal temperature variations in a particle-based thermal energy storage bin

ASME 2020 14th International Conference on Energy Sustainability, ES 2020

Sment, Jeremy N.I.; Martinez, Mario J.; Albrecht, Kevin; Ho, Clifford K.

The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories is conducting research on a Generation 3 Particle Pilot Plant (G3P3) that uses falling sand-like particles as the heat transfer medium. The system will include a thermal energy storage (TES) bin with a capacity of 6 MWht requiring ~120,000 kg of flowing particles. Testing and modeling were conducted to develop a validated modeling tool to understand temporal and spatial temperature distributions within the storage bin as it charges and discharges. Flow and energy transport in funnel-flow was modeled using volume averaged conservation equations coupled with level set interface tracking equations that prescribe the dynamic geometry of particle flow within the storage bin. A thin layer of particles on top of the particle bed was allowed to flow toward the center and into the flow channel above the outlet. Model results were validated using particle discharge temperatures taken from thermocouples mounted throughout a small steel bin. The model was then used to predict heat loss during charging, storing, and discharging operational modes at the G3P3 scale. Comparative results from the modeling and testing of the small bin indicate that the model captures many of the salient features of the transient particle outlet temperature over time.

More Details

A nonlocal feature-driven exemplar-based approach for image inpainting

SIAM Journal on Imaging Sciences

Trageser, Jeremy; Reshniak, Viktor; Webster, Clayton G.

We present a nonlocal variational image completion technique which admits simultaneous inpainting of multiple structures and textures in a unified framework. The recovery of geometric structures is achieved by using general convolution operators as a measure of behavior within an image. These are combined with a nonlocal exemplar-based approach to exploit the self-similarity of an image in the selected feature domains and to ensure the inpainting of textures. We also introduce an anisotropic patch distance metric to allow for better control of the feature selection within an image and present a nonlocal energy functional based on this metric. Finally, we derive an optimization algorithm for the proposed variational model and examine its validity experimentally with various test images.

More Details

A self-tuning WEC controller for changing sea states

IFAC-PapersOnLine

Forbush, Dominic; Bacelli, Giorgio; Spencer, Steven J.; Coe, Ryan G.

A self-tuning proportional-integral control law prescribing motor torques was tested in experiment on a three degree-of-freedom wave energy converter. The control objective was to maximize electrical power. The control law relied upon an identified model of device intrinsic impedance to generate a frequency-domain estimate of the wave-induced excitation force and measurements of device velocities. The control law was tested in irregular sea-states that evolved over hours (a rapid, but realistic time-scale) and that changed instantly (an unrealistic scenario to evaluate controller response). For both cases, the controller converges to gains that closely approximate the post-calculated optimal gains for all degrees of freedom. Convergence to near-optimal gains occurred reliably over a sufficiently short time for realistic sea states. In addition, electrical power was found to be relatively insensitive to gain tuning over a broad range of gains, implying that an imperfectly tuned controller does not result in a large penalty to electrical power capture. An extension of this control law that allows for adaptation to a changing device impedance model over time is proposed for long-term deployments, as well as an approach to explicitly handle constraints within this architecture.
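The "post-calculated optimal gains" that the self-tuning controller converges toward can be illustrated with the standard impedance-matching result: at a given frequency, the power-maximizing PI gains make the power take-off impedance equal the complex conjugate of the device's intrinsic impedance. The frequency and impedance values below are hypothetical, and this single-frequency match is a textbook benchmark, not the paper's tuning algorithm.

```python
def matched_pi_gains(Zi, omega):
    """Proportional-integral gains that match the PTO impedance to the
    complex conjugate of the device's intrinsic impedance Zi at frequency
    omega (rad/s).  With Z_pto(w) = Kp + Ki/(jw) = Kp - j*Ki/w, setting
    Z_pto = conj(Zi) gives Kp = Re(Zi) and Ki = omega * Im(Zi).
    """
    return Zi.real, omega * Zi.imag

# Hypothetical intrinsic impedance at the sea state's peak frequency.
omega = 2.0
Zi = complex(3.0, 4.0)
Kp, Ki = matched_pi_gains(Zi, omega)

# Verify the match: Z_pto equals the conjugate of Zi at that frequency.
Z_pto = Kp + Ki / (1j * omega)
assert abs(Z_pto - Zi.conjugate()) < 1e-12
```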

More Details

A frequency-shaped controller for damping inter-area oscillations in power systems

IFAC-PapersOnLine

Wilches-Bernal, Felipe; Schoenwald, David A.; Pierre, Brian J.; Byrne, Raymond H.

This paper discusses how to design an inter-area oscillations damping controller using a frequency-shaped optimal output feedback control approach. This control approach was chosen because inter-area oscillations occur in a particular frequency range, from 0.2 to 1 Hz, which is the interval in which the control action must be prioritized. This paper shows that using only the filter for the system states can sufficiently damp the system modes. In addition, the paper shows that the filter for the input can be adjusted to provide primary frequency regulation to the system with no effect on the desired damping control action. Time domain simulations of a power system with a set of controllable power injection devices are presented to show the effectiveness of the designed controller.

More Details

Particle flow testing of a multistage falling particle receiver concept: Staggered angle iron receiver (stair)

ASME 2020 14th International Conference on Energy Sustainability, ES 2020

Yue, Lindsey; Schroeder, Nathaniel R.; Ho, Clifford K.

Falling particle receivers are an emerging technology for use in concentrating solar power systems. In this work, a staggered angle iron receiver concept is investigated, with the goals of increasing particle curtain stability and opacity in a receiver. The concept consists of angle iron-shaped troughs placed in line with a falling particle curtain in order to collect particles and re-release them, decreasing the downward velocity of the particles and the curtain spread. A particle flow test apparatus has been fabricated. The effect of staggered angle iron trough geometry, orientation, and position on the opacity and uniformity of a falling particle curtain for different particle linear mass flow rates is investigated using the particle flow test apparatus. For the baseline free falling curtain and for different trough configurations, particle curtain transmissivity is measured, and profile images of the particle curtain are taken. Particle mass flow rate and trough position affect curtain transmissivity more than trough orientation and geometry. Optimal trough position for a given particle mass flow rate can result in improved curtain stability and decreased transmissivity. The case with a slot depth of 1/4”, hybrid trough geometry at 36” below the slot resulted in the largest improvement over the baseline curtain: 0.40 transmissivity for the baseline and 0.14 transmissivity with the trough. However, some trough configurations have a detrimental effect on curtain stability and result in increased curtain transmissivity and/or substantial particle bouncing.

More Details

Dealing with measurement uncertainties as nuisance parameters in Bayesian model calibration

SIAM-ASA Journal on Uncertainty Quantification

Rumsey, Kelin; Huerta, Jose G.; Brown, Justin L.; Hund, Lauren

In the presence of model discrepancy, the calibration of physics-based models for physical parameter inference is a challenging problem. Lack of identifiability between calibration parameters and model discrepancy requires additional identifiability constraints to be placed on the model discrepancy to obtain unique physical parameter estimates. If these assumptions are violated, the inference for the calibration parameters can be systematically biased. In many applications, such as in dynamic material property experiments, many of the calibration inputs refer to measurement uncertainties. In this setting, we develop a metric for identifying overfitting of these measurement uncertainties, propose a prior capable of reducing this overfitting, and show how this leads to a diagnostic tool for validation of physical parameter inference. The approach is demonstrated for a benchmark example and applied for a material property application to perform inference on the equation of state parameters of tantalum.

More Details

KKT preconditioners for PDE-constrained optimization with the Helmholtz equation

SIAM Journal on Scientific Computing

Kouri, Drew P.; Ridzal, Denis; Tuminaro, Raymond S.

This paper considers preconditioners for the linear systems that arise from optimal control and inverse problems involving the Helmholtz equation. Specifically, we explore an all-at-once approach. The main contribution centers on the analysis of two block preconditioners. Variations of these preconditioners have been proposed and analyzed in prior works for optimal control problems where the underlying partial differential equation is a Laplace-like operator. In this paper, we extend some of the prior convergence results to Helmholtz-based optimization applications. Our analysis examines situations where control variables and observations are restricted to subregions of the computational domain. We prove that solver convergence rates do not deteriorate as the mesh is refined or as the wavenumber increases. More specifically, for one of the preconditioners we prove accelerated convergence as the wavenumber increases. Additionally, in situations where the control and observation subregions are disjoint, we observe that solver convergence rates have a weak dependence on the regularization parameter. We give a partial analysis of this behavior. We illustrate the performance of the preconditioners on control problems motivated by acoustic testing.

More Details

MELCOR demonstration analysis of accident scenarios at a spent fuel nuclear reprocessing plant

International Conference on Nuclear Engineering, Proceedings, ICONE

Wagner, Kenneth C.; Foulk, James W.

The work presented in this paper applies the MELCOR code developed at Sandia National Laboratories to evaluate the source terms from potential accidents in non-reactor nuclear facilities. The present approach provides an integrated source term approach that would be well-suited for uncertainty analysis and probabilistic risk assessments. MELCOR is used to predict the thermal-hydraulic conditions during fires or explosions that include a release of radionuclides. The radionuclides are tracked throughout the facility from the initiating event to predict the time-dependent source term to the environment for subsequent dose or consequence evaluations. In this paper, we discuss the MELCOR input model development and the evaluation of the potential source terms from the dominant fire and explosion scenarios for a spent fuel nuclear reprocessing plant.

More Details

srMO-BO-3GP: A sequential regularized multi-objective constrained Bayesian optimization for design applications

Proceedings of the ASME Design Engineering Technical Conference

Foulk, James W.; Eldred, Michael; Mccann, Scott; Wang, Yan

Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To leverage the capability of the classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective (MO) extension, called srMO-BO-3GP, to solve the MO optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each GP is assigned a different task: the first GP is used to approximate a single objective computed from the MO definition, the second GP is used to learn the unknown constraints, and the third GP is used to learn the uncertain Pareto frontier. At each iteration, an augmented Tchebycheff function converting the MO problem to a single objective is adopted and extended with a regularized ridge term, where the regularization is introduced to smooth the single-objective function. Finally, we couple the third GP along with the classical BO framework to explore the richness and diversity of the Pareto frontier by the exploitation and exploration acquisition function. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
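The regularized augmented Tchebycheff scalarization can be sketched directly: a weighted max over objective deviations from a reference point, an augmentation sum, and a ridge term that smooths the scalarized function. The weights, reference point, and coefficients below are illustrative assumptions, not values from the paper.

```python
# Sketch of a regularized augmented Tchebycheff scalarization in the spirit of
# srMO-BO-3GP (weights, reference point, and coefficients are illustrative).

def scalarize(f, z_star, w, rho=0.05, mu=0.01, x=None):
    """Convert a multi-objective vector f into a single objective:
        max_i w_i*(f_i - z_i)  +  rho * sum_i w_i*(f_i - z_i)  +  mu*||x||^2
    where z_star is a utopia (reference) point, rho weights the augmentation
    sum, and the ridge term mu*||x||^2 smooths the scalarized function.
    """
    d = [wi * (fi - zi) for fi, zi, wi in zip(f, z_star, w)]
    ridge = mu * sum(xi * xi for xi in x) if x is not None else 0.0
    return max(d) + rho * sum(d) + ridge

# max(1.0, 1.5) + 0.1 * (1.0 + 1.5) = 1.75 with the ridge term switched off.
val = scalarize(f=[2.0, 3.0], z_star=[0.0, 0.0], w=[0.5, 0.5], rho=0.1, mu=0.0)
assert abs(val - 1.75) < 1e-12
```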

More Details

WearGP: A UQ/ML wear prediction framework for slurry pump impellers and casings

American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FEDSM

Foulk, James W.; Visintainer, Robert; Furlan, John; Pagalthivarthi, Krishnan V.; Garman, Mohamed; Cutright, Aaron; Wang, Yan

Wear prediction is important in designing reliable machinery for the slurry industry. It usually relies on multi-phase computational fluid dynamics, which is accurate but computationally expensive. Each run of the simulations can take hours or days even on a high-performance computing platform. The high computational cost prohibits a large number of simulations in the process of design optimization. In contrast to physics-based simulations, data-driven approaches such as machine learning are capable of providing accurate wear predictions at a small fraction of computational costs, if the models are trained properly. In this paper, a recently developed WearGP framework [1] is extended to predict the global wear quantities of interest by constructing Gaussian process surrogates. The effects of different operating conditions are investigated. The advantages of the WearGP framework are demonstrated by its high accuracy and low computational cost in predicting wear rates.

More Details

Generalized Canonical Polyadic Tensor Decomposition

SIAM Review

Hong, David; Kolda, Tamara G.

Tensor decomposition is a fundamental unsupervised machine learning method in data science, with applications including network analysis and sensor data processing. This work develops a generalized canonical polyadic (GCP) low-rank tensor decomposition that allows other loss functions besides squared error. For instance, we can use logistic loss or Kullback-Leibler divergence, enabling tensor decomposition for binary or count data. We present a variety of statistically motivated loss functions for various scenarios. We provide a generalized framework for computing gradients and handling missing data that enables the use of standard optimization methods for fitting the model. We demonstrate the flexibility of the GCP decomposition on several real-world examples including interactions in a social network, neural activity in a mouse, and monthly rainfall measurements in India.
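The loss-function menu that distinguishes GCP from standard CP can be sketched elementwise: each loss measures how well a model value m fits a data entry x under a different statistical assumption. This is a sketch of the per-entry losses only, not the full decomposition solver.

```python
import math

# Elementwise loss functions in the style of generalized CP (GCP): the model
# value m approximates data entry x under different statistical assumptions.

def gaussian_loss(x, m):         # standard CP: squared error
    return (x - m) ** 2

def bernoulli_logit_loss(x, m):  # binary data; m is a log-odds
    return math.log(1.0 + math.exp(m)) - x * m

def poisson_loss(x, m):          # count data (Kullback-Leibler flavor), m > 0
    return m - x * math.log(m)

# For binary data with x = 1, larger m (probability -> 1) lowers the loss.
assert bernoulli_logit_loss(1, 2.0) < bernoulli_logit_loss(1, 0.0)
# For counts, the Poisson loss is minimized at m = x.
assert poisson_loss(3, 3.0) < poisson_loss(3, 2.0) < poisson_loss(3, 1.0)
```

Fitting then proceeds by minimizing the chosen loss summed over observed entries with respect to the low-rank factor matrices, which is what makes standard gradient-based optimizers applicable.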

More Details

Thermal-hydraulic investigations of a horizontal dry cask simulator

International Conference on Nuclear Engineering, Proceedings, ICONE

Pulido, Ramon J.; Lindgren, Eric; Durbin, S.; Foulk, James W.

Recent advances in horizontal cask designs for commercial spent nuclear fuel have significantly increased maximum thermal loading. This is due in part to greater efficiency in internal conduction pathways. Carefully measured data sets generated from testing of full-sized casks or smaller cask analogs are widely recognized as vital for validating thermal-hydraulic models of these storage cask designs. While several testing programs have been previously conducted, these earlier validation studies did not integrate all the physics or components important in a modern, horizontal dry cask system. The purpose of this investigation is to produce data sets that can be used to benchmark the codes and best practices presently used to calculate cladding temperatures and induced cooling air flows in modern, horizontal dry storage systems. The horizontal dry cask simulator (HDCS) has been designed to generate this benchmark data and complement the existing knowledge base. Transverse and axial temperature profiles along with induced-cooling air flow are measured using various backfills of gases for a wide range of decay powers and canister pressures. The data from the HDCS tests will be used to host a blind model validation effort.

More Details

Experiments at Sandia to measure the effect of temperature on critical systems

Transactions of the American Nuclear Society

Harms, Gary A.; Foulk, James W.

Estimation of the uncertainty in a critical experiment attributable to uncertainties in the measured experiment temperature is done by calculating the variation of the eigenvalue of a benchmark configuration as a function of temperature. In the low-enriched water-moderated critical experiments performed at Sandia, this is done by 1) estimating the effects of changing the water temperature while holding the UO2 fuel temperature constant, 2) estimating the effects of changing the UO2 temperature while holding the water temperature constant, and 3) combining the two results. This assumes that the two effects are separable. The results of such an analysis are nonintuitive and need experimental verification. Critical experiments are being planned at Sandia National Laboratories (Sandia) to measure the effect of temperature on critical systems and will serve to test the methods used in estimating the temperature effects in critical experiments.

More Details

Least cost microgrid resource planning for the Natural Energy Laboratory of Hawaii Authority research park

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Headley, Alexander; Schenkman, Benjamin L.; Olson, Keith; Sombardier, Laurence

The Natural Energy Laboratory of Hawaii Authority's (NELHA) campus on The Island of Hawai'i supplies resources for a number of renewable energy and aquaculture research projects. There is a growing interest at NELHA to convert the research campus to a 100% renewable, islanded microgrid to improve the resiliency of the campus for critical ocean water pumping loads and to limit the increase in the long-term cost of operations. Currently, the campus has a solar array to cover some electricity needs, but scaling up this system to fully meet the needs of the entire research campus will require significant changes and careful planning to minimize costs. This study will investigate least-cost solar and energy storage system sizes capable of meeting the needs of the campus. The campus is split into two major load centers that are electrically isolated and have different amounts of available land for solar installations. The value of adding an electrical transmission line if NELHA converts to a self-contained microgrid is explored by estimating the cost of resources for each load center individually and combined. Energy storage using lithium-ion and hydrogen-based technologies is investigated. For the hydrogen-based storage system, a variable efficiency and a fixed efficiency representation of the electrolysis and fuel cell systems are used. Results using these two models show the importance of considering the changing performance of hydrogen systems for sizing algorithms.

More Details

Evaluating the efficiency of OpenMP tasking for unbalanced computation on diverse CPU architectures

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Olivier, Stephen L.

In the decade since support for task parallelism was incorporated into OpenMP, its use has remained limited in part due to concerns about its performance and scalability. This paper revisits a study from the early days of OpenMP tasking that used the Unbalanced Tree Search (UTS) benchmark as a stress test to gauge implementation efficiency. The present UTS study includes both Clang/LLVM and vendor OpenMP implementations on four different architectures. We measure parallel efficiency to examine each implementation’s performance in response to varying task granularity. We find that most implementations achieve over 90% efficiency using all available cores for tasks of O(100k) instructions, and the best even manage tasks of O(10k) instructions well.
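Parallel efficiency as used in the study is simply how close a run on p cores comes to a perfect p-fold speedup. A minimal sketch (the timings below are made-up numbers, not results from the paper):

```python
def parallel_efficiency(t_serial, t_parallel, cores):
    """Fraction of ideal speedup achieved: E = T_serial / (p * T_parallel).
    E = 1.0 means perfect linear scaling; the paper's 90% threshold
    corresponds to E >= 0.9.
    """
    return t_serial / (cores * t_parallel)

# A hypothetical run taking 100 s serially and 3.2 s on 32 cores ...
eff = parallel_efficiency(100.0, 3.2, 32)
assert abs(eff - 0.9765625) < 1e-9  # ... is about 98% efficient.
assert eff > 0.9                    # above the 90% bar used in the study
```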

More Details

HALucinator: Firmware re-hosting through abstraction layer emulation

Proceedings of the 29th USENIX Security Symposium

Clements, Abraham; Gustafson, Eric; Grosen, Paul; Fritz, David; Kruegel, Christopher; Vigna, Giovanni; Bagchi, Saurabh; Payer, Mathias

Given the increasing ubiquity of online embedded devices, analyzing their firmware is important to security, privacy, and safety. The tight coupling between hardware and firmware and the diversity found in embedded systems makes it hard to perform dynamic analysis on firmware. However, firmware developers regularly develop code using abstractions, such as Hardware Abstraction Layers (HALs), to simplify their job. We leverage such abstractions as the basis for the re-hosting and analysis of firmware. By providing high-level replacements for HAL functions (a process termed High-Level Emulation - HLE), we decouple the hardware from the firmware. This approach works by first locating the library functions in a firmware sample, through binary analysis, and then providing generic implementations of these functions in a full-system emulator. We present these ideas in a prototype system, HALucinator, able to re-host firmware, and allow the virtual device to be used normally. First, we introduce extensions to existing library matching techniques that are needed to identify library functions in binary firmware, to reduce collisions, and for inferring additional function names. Next, we demonstrate the re-hosting process, through the use of simplified handlers and peripheral models, which make the process fast, flexible, and portable between firmware samples and chip vendors. Finally, we demonstrate the practicality of HLE for security analysis, by supplementing HALucinator with the American Fuzzy Lop fuzzer, to locate multiple previously-unknown vulnerabilities in firmware middleware libraries.

More Details

Development and validation of passive yaw in the open-source wec-sim code

Proceedings of the International Conference on Offshore Mechanics and Arctic Engineering - OMAE

Forbush, Dominic; Ruehl, Kelley M.; Ogden, David; Van Rij, Jennifer; Yu, Yi H.; Tom, Nathan

A passive yaw implementation is developed, validated, and explored for the WEC-Sim, an open-source wave energy converter modeling tool that works within MATLAB/Simulink. The Reference Model 5 (RM5) is selected for this investigation, and a WEC-Sim model of the device is modified to allow yaw motion. A boundary element method (BEM) code was used to calculate the excitation force coefficients for a range of wave headings. An algorithm was implemented in WEC-Sim to determine the equivalent wave heading from a body's instantaneous yaw angle and interpolate the appropriate excitation coefficients to ensure the correct time-domain excitation force. This approach is able to determine excitation force for a body undergoing large yaw displacement. For the mathematically simple case of regular wave excitation, the dynamic equation was integrated numerically and found to closely approximate the results from this implementation in WEC-Sim. A case study is presented for the same device in irregular waves. In this case, computation time is increased by 32x when this interpolation is performed at every time step. To reduce this expense, a threshold yaw displacement can be set to reduce the number of interpolations performed. A threshold of 0.01° was found to increase computation time by only 22x without significantly affecting time domain results. Similar amplitude spectra for yaw force and displacements are observed for all threshold values less than 1°, for which computation time is only increased by 2.2x.
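The threshold-gating idea above, re-interpolating the excitation coefficient only when yaw has moved past a set threshold, can be sketched as follows (a minimal illustration with a 1-D coefficient table; the function, names, and data layout are hypothetical, not WEC-Sim's):

```python
import numpy as np

def excitation_coeff(yaw, headings, coeffs, state, threshold_deg=0.01):
    """Look up the excitation coefficient at the instantaneous yaw angle,
    interpolating in the BEM table only when yaw has moved more than
    `threshold_deg` since the last interpolation (sketch of the paper's
    thresholding idea; the 1-D table layout is illustrative)."""
    last_yaw, last_val = state
    if last_val is None or abs(yaw - last_yaw) > threshold_deg:
        val = float(np.interp(yaw, headings, coeffs))  # equivalent-heading lookup
        return val, (yaw, val)
    return last_val, state                             # reuse cached value

# illustrative table: 5-degree BEM heading grid, cosine stand-in coefficients
headings = np.linspace(-180.0, 180.0, 73)
coeffs = np.cos(np.radians(headings))
state = (0.0, None)
v0, state = excitation_coeff(0.0, headings, coeffs, state)
v1, state = excitation_coeff(0.005, headings, coeffs, state)  # below threshold
v2, state = excitation_coeff(1.0, headings, coeffs, state)    # above threshold
```

A larger threshold skips more interpolations (cheaper) at the cost of holding a slightly stale coefficient between updates, which is the accuracy/speed trade studied in the paper.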

More Details

Data-driven Compact Models for Circuit Design and Analysis

Proceedings of Machine Learning Research

Aadithya, Karthik V.; Kuberry, Paul; Paskaleva, Biliana S.; Bochev, Pavel B.; Leeson, Kenneth M.; Mar, Alan; Mei, Ting; Keiter, Eric R.

Compact semiconductor device models are essential for efficiently designing and analyzing large circuits. However, traditional compact model development requires a large amount of manual effort and can span many years. Moreover, inclusion of new physics (e.g., radiation effects) into an existing model is not trivial and may require redevelopment from scratch. Machine Learning (ML) techniques have the potential to automate and significantly speed up the development of compact models. In addition, ML provides a range of modeling options that can be used to develop hierarchies of compact models tailored to specific circuit design stages. In this paper, we explore three such options: (1) table-based interpolation, (2) Generalized Moving Least-Squares, and (3) feed-forward Deep Neural Networks, to develop compact models for a p-n junction diode. We evaluate the performance of these “data-driven” compact models by (1) comparing their voltage-current characteristics against laboratory data, and (2) building a bridge rectifier circuit using these devices, predicting the circuit's behavior using SPICE-like circuit simulations, and then comparing these predictions against laboratory measurements of the same circuit.

More Details

Robust Training and Initialization of Deep Neural Networks: An Adaptive Basis Viewpoint

Proceedings of Machine Learning Research

Cyr, Eric C.; Gulian, Mamikon; Patel, Ravi; Perego, Mauro; Trask, Nathaniel A.

Motivated by the gap between theoretical optimal approximation rates of deep neural networks (DNNs) and the accuracy realized in practice, we seek to improve the training of DNNs. The adoption of an adaptive basis viewpoint of DNNs leads to novel initializations and a hybrid least squares/gradient descent optimizer. We provide analysis of these techniques and illustrate via numerical examples dramatic increases in accuracy and convergence rate for benchmarks characterizing scientific applications where DNNs are currently used, including regression problems and physics-informed neural networks for the solution of partial differential equations.

More Details

The single-volume scatter camera

Proceedings of SPIE - The International Society for Optical Engineering

Manfredi, Juan J.; Adamek, Evan; Brown, Joshua A.; Brubaker, E.; Cabrera-Palmer, B.; Cates, Joshua; Dorrill, Ryan; Druetzler, Andrew; Elam, Jeff; Feng, Patrick L.; Folsom, Micah; Galindo-Tellez, Aline; Goldblum, Bethany L.; Hausladen, Paul; Kaneshige, Nathan; Keefe, Kevin P.; Laplace, Thibault A.; Learned, John G.; Mane, Anil; Marleau, P.; Mattingly, John; Mishra, Mudit; Moustafa, Ahmed; Nattress, Jason; Steele, J.; Sweany, Melinda D.; Weinfurther, Kyle J.; Ziock, Klaus P.

The multi-institution Single-Volume Scatter Camera (SVSC) collaboration led by Sandia National Laboratories (SNL) is developing a compact, high-efficiency double-scatter neutron imaging system. Kinematic emission imaging of fission-energy neutrons can be used to detect, locate, and spatially characterize special nuclear material. Neutron-scatter cameras, analogous to Compton imagers for gamma ray detection, have a wide field of view, good event-by-event angular resolution, and spectral sensitivity. Existing systems, however, suffer from large size and/or poor efficiency. We are developing high-efficiency scatter cameras with small form factors by detecting both neutron scatters in a compact active volume. This effort requires development and characterization of individual system components, namely fast organic scintillators, photodetectors, electronics, and reconstruction algorithms. In this presentation, we will focus on characterization measurements of several SVSC candidate scintillators. The SVSC collaboration is investigating two system concepts: the monolithic design in which isotropically emitted photons are detected on the sides of the volume, and the optically segmented design in which scintillation light is channeled along scintillator bars to segmented photodetector readout. For each of these approaches, we will describe the construction and performance of prototype systems. We will conclude by summarizing lessons learned, comparing and contrasting the two system designs, and outlining plans for the next iteration of prototype design and construction.

More Details

Formal verification of run-to-completion style statecharts using event-b

Communications in Computer and Information Science

Foulk, James W.; Snook, Colin; Hoang, Thai S.; Hulette, Geoffrey C.; Armstrong, Robert C.; Butler, Michael

Although popular in industry, state-chart notations with ‘run to completion’ semantics lack formal refinement and rigorous verification methods. State-chart models are typically used to design complex control systems that respond to environmental triggers with a sequential process. The model is usually constructed at a concrete level and verified and validated using animation techniques relying on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. We introduce a notion of refinement into a ‘run to completion’ statechart modelling notation, and leverage Event-B ’s tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how critical (e.g. safety) invariant properties can be verified by proof despite the reactive nature of the system. We also show how behavioural aspects of the system can be verified by testing the expected reactions using a temporal logic model checking approach.

More Details

Automatic detection of defects in high reliability as-built parts using x-ray CT

Proceedings of SPIE - The International Society for Optical Engineering

Potter, Kevin M.; Donohoe, Brendan D.; Greene, Benjamin; Pribisova, Abigail L.; Donahue, Emily

Automatic detection of defects in as-built parts is a challenging task due to the large number of potential manufacturing flaws that can occur. X-Ray computed tomography (CT) can produce high-quality images of the parts in a non-destructive manner. The images, however, are grayscale valued, often have artifacts and noise, and require expert interpretation to spot flaws. In order for anomaly detection to be reproducible and cost-effective, an automated method is needed to find potential defects. Traditional supervised machine learning techniques fail in the high reliability parts regime due to large class imbalance: there are often many more examples of well-built parts than there are defective parts. This, coupled with the time expense of obtaining labeled data, motivates research into unsupervised techniques. In particular, we built upon the AnoGAN and f-AnoGAN work of T. Schlegl et al. and created a new architecture called PandaNet. PandaNet learns an encoding function to a latent space of defect-free components and a decoding function to reconstruct the original image. We restrict the training data to defect-free components so that the encode-decode operation cannot learn to reproduce defects well. The difference between the reconstruction and the original image highlights anomalies that can be used for defect detection. In our work with CT images, PandaNet successfully identifies cracks, voids, and high-Z inclusions. Beyond CT, we demonstrate PandaNet working successfully with little to no modifications on a variety of common 2-D defect datasets in both color and grayscale.
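The encode-decode anomaly-scoring principle can be illustrated with a linear stand-in: fit a reconstruction on defect-free data only, then flag samples with large reconstruction error (a PCA sketch of the idea, not PandaNet's neural architecture; all names are illustrative):

```python
import numpy as np

def fit_normal_subspace(X, k):
    """Learn a linear encode/decode pair from defect-free samples X (m x d):
    the top-k principal directions of the training set."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                      # decoder rows = principal axes

def anomaly_score(x, mu, V):
    """Reconstruction error: large when x contains structure (a 'defect')
    absent from the normal training data."""
    z = V @ (x - mu)                       # encode to latent space
    xhat = mu + V.T @ z                    # decode back
    return float(np.linalg.norm(x - xhat))

# toy check: normal data lie near a 2-D subspace of R^10
rng = np.random.default_rng(0)
basis = rng.standard_normal((2, 10))
X = rng.standard_normal((200, 2)) @ basis
mu, V = fit_normal_subspace(X, k=2)
normal = rng.standard_normal(2) @ basis            # in-subspace sample
defect = normal + 5.0 * rng.standard_normal(10)    # off-subspace perturbation
s_norm = anomaly_score(normal, mu, V)
s_def = anomaly_score(defect, mu, V)
```

Because the model is trained only on defect-free examples, it reconstructs them well and fails on defective ones, which is exactly what makes the residual a usable anomaly map.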

More Details

Substructure interface reduction techniques to capture nonlinearities in bolted structures

Proceedings of the ASME Design Engineering Technical Conference

Singh, Aabhas; Allen, Matthew S.; Kuether, Robert J.

Structural dynamic finite element models typically use multipoint constraints (MPC) to condense the degrees of freedom (DOF) near bolted joints down to a single node, which can then be joined to neighboring structures with linear springs or nonlinear elements. Scalability becomes an issue when multiple joints are present in a system, because each requires its own model to capture the nonlinear behavior. While this increases the computational cost, the larger problem is that the parameters of the joint models are not known, and so one must solve a nonlinear model updating problem with potentially hundreds of unknown variables to fit the model to measurements. Furthermore, traditional MPC approaches are limited in how the flexibility of the interface is treated (i.e. with rigid bar elements the interface has no flexibility). To resolve this shortcoming, this work presents an alternative approach where the contact interface is reduced to a set of modal DOF which retain the flexibility of the interface and are capable of modeling multiple joints simultaneously. Specifically, system-level characteristic constraint (S-CC) reduction is used to reduce the motion at the contact interface to a small number of shapes. To capture the hysteresis and energy dissipation that is present during microslip of joints, a hysteretic element is applied to a small number of the S-CC shapes. This method is compared against a traditional MPC method (using rigid bar elements) on a two-dimensional finite element model of a cantilever beam with a single joint near the free end. For all methods, a four-parameter Iwan element is applied to the interface DOF to capture how the amplitude-dependent modal frequency and damping change with vibration amplitude.

More Details

Minimizing residual stress in brazed joints by optimizing the brazing thermal profile

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Mann, Benjamin; Ford, Kurtis; Neilsen, Michael K.; Kammler, Daniel

Ceramic to metal brazing is a common bonding process used in many advanced systems such as automotive engines, aircraft engines, and electronics. In this study, we use optimization techniques and finite element analysis utilizing viscoplastic and thermo-elastic material models to find an optimum thermal profile for a Kovar® washer bonded to an alumina button that is typical of a tension pull test. Several active braze filler materials are included in this work. Cooling rates, annealing times, aging, and thermal profile shapes are related to specific material behaviors. Viscoplastic material models are used to represent the creep and plasticity behavior in the Kovar® and braze materials, while a thermo-elastic material model is used for the alumina. The Kovar® is particularly interesting because it has a Curie point at 435°C that creates a nonlinearity in its thermal strain and stiffness profiles. This complex behavior incentivizes the optimizer to maximize the stress above the Curie point with a fast cooling rate and then favors slow cooling rates below the Curie point to anneal the material. It is assumed that if failure occurs in these joints, it will occur in the ceramic material. Consequently, the maximum principal stress in the ceramic is minimized in the objective function. Specific details of the stress state are considered and discussed.

More Details

Bounding uncertainty in functional data: A case study

Quality Engineering

Tucker, J.D.; King, Caleb; Martin, Nevin

Functional data are fast becoming a preeminent source of information across a wide range of industries. A particularly challenging aspect of functional data is bounding uncertainty. In this unique case study, we present our attempts at creating bounding functions for selected applications at Sandia National Laboratories (SNL). The first attempt involved a simple extension of functional principal component analysis (fPCA) to incorporate covariates. Though this method was straightforward, the extension was plagued by poor coverage accuracy for the bounding curve. This led to a second attempt utilizing elastic methodology which yielded more accurate coverage at the cost of more complexity.
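The fPCA-based bounding idea can be sketched as a pointwise band built from the leading principal modes of a sample of curves (a plain fPCA band, without the covariate or elastic extensions discussed in the paper; names and settings are illustrative):

```python
import numpy as np

def fpca_band(curves, k=3, z=3.0):
    """Pointwise bounding band: mean curve plus/minus z standard deviations
    reconstructed from the top-k principal modes of the sample covariance
    (simple fPCA band; no covariates, no elastic alignment)."""
    mu = curves.mean(axis=0)
    C = np.cov(curves, rowvar=False)
    vals, vecs = np.linalg.eigh(C)                  # eigenvalues ascending
    vals = np.clip(vals[::-1][:k], 0, None)         # top-k mode variances
    vecs = vecs[:, ::-1][:, :k]
    sigma = np.sqrt((vecs**2 * vals).sum(axis=1))   # pointwise std from modes
    return mu - z * sigma, mu + z * sigma

# toy check: sinusoids with random amplitude; the band should cover
# roughly 99% of the sample points
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 50)
curves = (1.0 + 0.1 * rng.standard_normal((100, 1))) * np.sin(t)
lo, hi = fpca_band(curves, k=2)
inside = float(np.mean((curves >= lo - 1e-9) & (curves <= hi + 1e-9)))
```

Coverage accuracy of such bands is exactly the issue the case study raises: a truncated-mode band can under-cover when the retained modes miss variation in the data.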

More Details

Heterogeneous integration of silicon electronics and compound semiconductor optoelectronics for miniature rf photonic transceivers

ECS Transactions

Nordquist, Christopher D.; Skogen, Erik J.; Fortuna, Seth A.; Hollowell, Andrew E.; Hemmady, Caroline S.; Foulk, James W.; Forbes, Travis; Wood, Michael G.; Jordan, Matthew; Mcclain, Jaime; Lepkowski, Stefan; Alford, Charles; Peake, Gregory M.; Pomerene, Andrew; Long, Christopher M.; Serkland, Darwin K.; Dean, Kenneth A.

Heterogeneous Integration (HI) may enable optoelectronic transceivers for short-range and long-range radio frequency (RF) photonic interconnect using wavelength-division multiplexing (WDM) to aggregate signals, provide galvanic isolation, and reduce crosstalk and interference. Integration of silicon Complementary Metal-Oxide-Semiconductor (CMOS) electronics with InGaAsP compound semiconductor photonics provides the potential for high-performance microsystems that combine complex electronic functions with optoelectronic capabilities from rich bandgap engineering opportunities, and intimate integration allows short interconnects for lower power and latency. The dominant pure-play foundry model plus the differences in materials and processes between these technologies dictate separate fabrication of the devices followed by integration of individual die, presenting unique challenges in die preparation, metallization, and bumping, especially as interconnect densities increase. In this paper, we describe progress towards realizing an S-band WDM RF photonic link combining 180 nm silicon CMOS electronics with InGaAsP integrated optoelectronics, using HI processes and approaches that scale into microwave and millimeter-wave frequencies.

More Details

Randomized projection for rank-revealing matrix factorizations and low-rank approximations

SIAM Review

Duersch, Jed A.; Gu, Ming

Rank-revealing matrix decompositions provide an essential tool in spectral analysis of matrices, including the Singular Value Decomposition (SVD) and related low-rank approximation techniques. QR with Column Pivoting (QRCP) is usually suitable for these purposes, but it can be much slower than the unpivoted QR algorithm. For large matrices, the difference in performance is due to increased communication between the processor and slow memory, which QRCP needs in order to choose pivots during decomposition. Our main algorithm, Randomized QR with Column Pivoting (RQRCP), uses randomized projection to make pivot decisions from a much smaller sample matrix, which we can construct to reside in a faster level of memory than the original matrix. This technique may be understood as trading vastly reduced communication for a controlled increase in uncertainty during the decision process. For rank-revealing purposes, the selection mechanism in RQRCP produces results that are the same quality as the standard algorithm, but with performance near that of unpivoted QR (often an order of magnitude faster for large matrices). We also propose two formulas that facilitate further performance improvements. The first efficiently updates sample matrices to avoid computing new randomized projections. The second avoids large trailing updates during the decomposition in truncated low-rank approximations. Our truncated version of RQRCP also provides a key initial step in our truncated SVD approximation, TUXV. These advances open up a new performance domain for large matrix factorizations that will support efficient problem-solving techniques for challenging applications in science, engineering, and data analysis.
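The core RQRCP idea, choosing pivots from a small random sketch of the matrix rather than from the matrix itself, can be sketched in a few lines (a simplified, non-blocked illustration; the paper's algorithm additionally updates the sample matrix in place rather than re-sketching, and uses blocked kernels):

```python
import numpy as np

def qrcp_pivots(B, k):
    """Greedy column-pivoted Gram-Schmidt on B: return the first k pivots."""
    B = B.astype(float).copy()
    piv = []
    for _ in range(k):
        norms = np.linalg.norm(B, axis=0)
        j = int(np.argmax(norms))          # pivot = largest residual column
        piv.append(j)
        q = B[:, j] / norms[j]
        B -= np.outer(q, q @ B)            # deflate the chosen direction
    return piv

def rqrcp_pivots(A, k, oversample=8, rng=None):
    """RQRCP-style selection: pivot the small sketch Omega @ A, not A,
    so the pivoting data fits in fast memory."""
    rng = np.random.default_rng(rng)
    ell = min(A.shape[0], k + oversample)
    Omega = rng.standard_normal((ell, A.shape[0]))
    return qrcp_pivots(Omega @ A, k)

# usage: on an exactly rank-6 matrix, the 6 pivot columns span the range of A
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 6)) @ rng.standard_normal((6, 50))
piv = rqrcp_pivots(A, 6, rng=1)
Q, _ = np.linalg.qr(A[:, piv])
resid = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
```

The sketch has far fewer rows than A, so pivot decisions need no passes over the full matrix, which is the communication saving the abstract describes.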

More Details

Automated drilling of high aspect ratio, small diameter holes in remote, confined spaces

ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)

Rittikaidachar, Michal; Hobart, Clinton; Slightam, Jonathon E.; Su, Jiann-Cherng; Buerger, Stephen P.

We describe the development and benchtop prototype performance characterization of a mechatronic system for automatically drilling small diameter holes of arbitrary depth, to enable monitoring the integrity of oil and gas wells in situ. The precise drilling of very small diameter, high aspect ratio holes, particularly in dimensionally constrained spaces, presents several challenges including bit buckling, limited torsional stiffness, chip clearing, and limited space for the bit and mechanism. We describe a compact mechanism that overcomes these issues by minimizing the unsupported drill bit length throughout the process, enabling the bit to be progressively fed from a chuck as depth increases. When used with flexible drill bits, holes of arbitrary depth and aspect ratio may be drilled orthogonal to the wellbore. The mechanism and a conventional drilling system are tested in deep hole drilling operation. The experimental results show that the system operates as intended and achieves holes with substantially greater aspect ratios than conventional methods with very long drill bits. The mechanism enabled successful drilling of a 1/16" diameter hole to a depth of 9", a ratio of 144:1. Dysfunctions prevented drilling of the same hole using conventional methods.

More Details

Determination of background doping type in type-II superlattice using capacitance-voltage measurements with double mesa structure

Proceedings of SPIE - The International Society for Optical Engineering

Fink, D.R.; Lee, S.; Kodati, S.H.; Rogers, V.; Ronningen, T.J.; Winslow, M.; Grein, C.H.; Jones, A.H.; Campbell, J.C.; Klem, John F.; Krishna, S.

We present a method of determining the background doping type in semiconductors using capacitance-voltage measurements on overetched double mesa p-i-n or n-i-p structures. Unlike Hall measurements, this method is not limited by the conductivity of the substrate. By measuring the capacitance of devices with varying top and bottom mesa sizes, we were able to conclusively determine which mesa contained the p-n junction, revealing the polarity of the intrinsic layer. This method, when demonstrated on GaSb p-i-n and n-i-p structures, determined that the material is residually doped p-type, which is well established by other sources. The method was then applied on a 10 monolayer InAs/10 monolayer AlSb superlattice, for which the doping polarity was unknown, and indicated that this material is also p-type.

More Details

Lateral inhibition in magnetic domain wall racetrack arrays for neuromorphic computing

Proceedings of SPIE - The International Society for Optical Engineering

Cui, Can; Akinola, Otitoaleke G.; Hassan, Naimul; Bennett, Christopher; Marinella, Matthew; Friedman, Joseph S.; Incorvia, Jean A.C.

Neuromorphic computing captures the quintessential neural behaviors of the brain and is a promising candidate for the beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. The neuronal lateral inhibition feature, closely associated with the biological receptive field, is crucial to neuronal competition in the nervous system as well as its neuromorphic hardware counterpart. The domain wall - magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. This work discusses lateral inhibition mechanism of the DW-MTJ neuron and shows by micromagnetic simulation that lateral inhibition is efficiently enhanced by the Dzyaloshinskii-Moriya interaction (DMI).

More Details

An Unbalanced Sinuous Antenna for Near-Surface Polarimetric Ground-Penetrating Radar

IEEE Open Journal of Antennas and Propagation

Crocker, Dylan A.; Scott, Waymond R.

Sinuous antennas are capable of producing ultra-wideband radiation with polarization diversity. This capability makes the sinuous antenna an attractive candidate for UWB polarimetric radar applications. Additionally, the ability of the sinuous antenna to be implemented as a planar structure makes it a good fit for close-in sensing applications such as ground-penetrating radar (GPR). In this work, each arm of a four-port sinuous antenna is operated independently to achieve a quasi-monostatic antenna system capable of polarimetry while separating transmit and receive channels, which is often desirable in GPR systems. The quasi-monostatic configuration of the sinuous antenna reduces system size as well as prevents extreme bistatic angles, which may significantly reduce sensitivity when attempting to detect near-surface targets. A prototype four-port sinuous antenna is fabricated and integrated into a GPR testbed. The polarimetric data obtained with the antenna is then used to distinguish between buried target symmetries.

More Details

Matrix completion for compressive sensing using consensus equilibrium

Proceedings of SPIE - The International Society for Optical Engineering

Lee, Dennis J.

We propose a technique for reconstruction from incomplete compressive measurements. Our approach combines compressive sensing and matrix completion using the consensus equilibrium framework. Consensus equilibrium breaks the reconstruction problem into subproblems to solve for the high-dimensional tensor. This framework allows us to apply two constraints on the statistical inversion problem. First, matrix completion enforces a low rank constraint on the compressed data. Second, the compressed tensor should be consistent with the uncompressed tensor when it is projected onto the low-dimensional subspace. We validate our method on the Indian Pines hyperspectral dataset with varying amounts of missing data. This work opens up new possibilities for data reduction, compression, and reconstruction.

More Details

Penetration through slots in cylindrical cavities operating at fundamental cavity modes in the presence of electromagnetic absorbers

Progress In Electromagnetics Research M

Campione, Salvatore; Warne, Larry K.; Reines, Isak C.; Gutierrez, Roy K.; Williams, Jeffery T.

Placing microwave absorbing materials into a high-quality factor resonant cavity may in general reduce the large interior electromagnetic fields excited under external illumination. In this paper, we aim to combine two analytical models we previously developed: 1) an unmatched formulation for frequencies below the slot resonance to model shielding effectiveness versus frequency; and 2) a perturbation model approach to estimate the quality factor of cavities in the presence of absorbers. The resulting model realizes a toolkit with which design guidelines of the absorber’s properties and location can be optimized over a frequency band. Analytic predictions of shielding effectiveness for three transverse magnetic modes for various locations of the absorber placed on the inside cavity wall show good agreement with both full-wave simulations and experiments, and validate the proposed model. This analysis opens new avenues for specialized ways to mitigate harmful fields within cavities.

More Details

The mean logarithm emerges with self-similar energy balance

Journal of Fluid Mechanics

Hwang, Yongyun; Lee, Myoungkyu

The attached eddy hypothesis of Townsend (The Structure of Turbulent Shear Flow, 1956, Cambridge University Press) states that the logarithmic mean velocity admits self-similar energy-containing eddies which scale with the distance from the wall. Over the past decade, there has been a significant amount of evidence supporting the hypothesis, making it the central platform for the statistical description of the general organisation of coherent structures in wall-bounded turbulent shear flows. Nevertheless, the most fundamental question, namely why the hypothesis has to be true, has remained unanswered over many decades. Under the assumption that the integral length scale is proportional to the distance from the wall y, in the present study we analytically demonstrate that the mean velocity is a logarithmic function of y if and only if the energy balance at the integral length scale is self-similar with respect to y, providing a theoretical basis for the attached eddy hypothesis. The analysis is subsequently verified with the data from a direct numerical simulation of incompressible channel flow at the friction Reynolds number Reτ ≃ 5200 (Lee & Moser, J. Fluid Mech., vol. 774, 2015, pp. 395-415).
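For reference, the inner-scaled logarithmic law at issue has the standard form (κ is the von Kármán constant and B an additive constant; the paper's result is that this form holds precisely when the energy balance at the integral length scale, taken proportional to y, is self-similar):

```latex
\frac{\mathrm{d}U^{+}}{\mathrm{d}y^{+}} \;=\; \frac{1}{\kappa\, y^{+}}
\qquad\Longrightarrow\qquad
U^{+} \;=\; \frac{1}{\kappa}\,\ln y^{+} \;+\; B .
```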

More Details

Plasticity-enhanced domain-wall MTJ neural networks for energy-efficient online learning

Proceedings - IEEE International Symposium on Circuits and Systems

Bennett, Christopher; Xiao, Tianyao P.; Cui, Can; Hassan, Naimul; Akinola, Otitoaleke G.; Incorvia, Jean A.C.; Velasquez, Alvaro; Friedman, Joseph S.; Marinella, Matthew

Machine learning implements backpropagation via abundant training samples. We demonstrate a multi-stage learning system realized by a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ). The system consists of unsupervised (clustering) as well as supervised sub-systems, and generalizes quickly (with few samples). We demonstrate interactions between physical properties of this device and optimal implementation of neuroscience-inspired plasticity learning rules, and highlight performance on a suite of tasks. Our energy analysis confirms the value of the approach, as the learning budget stays below 20µJ even for large tasks used typically in machine learning.

More Details

Measuring fatigue crack growth behavior of ferritic steels near threshold in high pressure hydrogen gas

American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP

Ronevich, Joseph; San Marchi, Chris; Nibur, Kevin A.; Bortot, Paolo; Bassanini, Gianluca; Sileo, Michele

Following the ASME codes, the design of pipelines and pressure vessels for transportation or storage of high-pressure hydrogen gas requires measurements of fatigue crack growth rates at design pressure. However, performing tests in high-pressure hydrogen gas can be very costly, as only a few laboratories have the unique capabilities. Recently, Code Case 2938 was accepted in ASME Boiler and Pressure Vessel Code (BPVC) VIII-3, allowing design curves to be used in lieu of performing fatigue crack growth rate (da/dN vs. ΔK) and fracture threshold (KIH) testing in hydrogen gas. The design curves were based on data generated at 100 MPa H2 on SA-372 and SA-723 grade steels; however, the data used to generate the design curves are limited to measurements at ΔK values greater than 6 MPa m^1/2. The design curves can be extrapolated to lower ΔK (<6 MPa m^1/2), but the threshold stress intensity factor (ΔKth) has not been measured in hydrogen gas. In this work, decreasing-ΔK tests were performed at select hydrogen pressures to explore the threshold (ΔKth) for ferritic-based structural steels (e.g. pipelines and pressure vessels). The results were compared to decreasing-ΔK tests in air, showing that the fatigue crack growth rates in hydrogen gas appear to yield similar or even slightly lower da/dN values compared to the curves in air at low ΔK values when tests were performed at stress ratios of 0.5 and 0.7. Correction for crack closure was implemented, which resulted in better agreement with the design curves and provided an upper bound throughout the entire ΔK range, even as the crack growth rates approach ΔKth. This work gives further evidence of the utility of the design curves described in Code Case 2938 of the ASME BPVC VIII-3 for the construction of high-pressure hydrogen vessels.

More Details

Progress in micron-scale field emission models based on nanoscale surface characterization for use in PIC-DSMC vacuum arc simulations

Proceedings - International Symposium on Discharges and Electrical Insulation in Vacuum, ISDEIV

Moore, Christopher H.; Jindal, Ashish K.; Bussmann, Ezra; Ohta, Taisuke; Berg, Morgann; Thomas, Cherrelle; Clem, Paul; Hopkins, Matthew M.

3D Particle-In-Cell Direct Simulation Monte Carlo (PIC-DSMC) simulations of cm-sized devices cannot resolve atomic-scale (nm) surface features and thus one must generate micron-scale models for an effective “local” work function, field enhancement factor, and emission area. Here we report on development of a stochastic effective model based on atomic-scale characterization of as-built electrode surfaces. Representative probability density distributions of the work function and geometric field enhancement factor (beta) for a sputter-deposited Pt surface are generated from atomic-scale surface characterization using Scanning Tunneling Microscopy (STM), Atomic Force Microscopy (AFM), and Photoemission Electron Microscopy (PEEM). In the micron-scale model every simulated PIC-DSMC surface element draws work functions and betas for many independent “atomic emitters”. During the simulation the field emitted current from an element is computed by summing each “atomic emitter's” current. This model has reasonable agreement with measured micron-scale emitted currents across a range of electric field values.
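The per-element summation over sampled "atomic emitters" can be sketched with the elementary Fowler-Nordheim expression (a simplified illustration using the standard textbook constants; the paper's emission model, units, sampling distributions, and per-emitter areas may differ):

```python
import numpy as np

# elementary Fowler-Nordheim constants (textbook values, illustrative use only)
A_FN = 1.541434e-6   # A eV / V^2
B_FN = 6.830890e9    # eV^-3/2 V / m

def element_current(E, phis, betas, area_per_emitter):
    """Field-emitted current of one macroscopic surface element: the sum of
    Fowler-Nordheim current densities over its sampled 'atomic emitters',
    each with its own drawn work function phi (eV) and geometric
    field-enhancement factor beta."""
    E_loc = betas * E                                  # locally enhanced field
    J = (A_FN * E_loc**2 / phis) * np.exp(-B_FN * phis**1.5 / E_loc)
    return float(np.sum(J) * area_per_emitter)

# usage: 100 identical emitters (illustrative values); current grows
# steeply with applied field, as field emission should
phis = np.full(100, 5.0)      # work functions, eV
betas = np.full(100, 50.0)    # enhancement factors
i_low = element_current(1e8, phis, betas, area_per_emitter=1e-18)
i_high = element_current(2e8, phis, betas, area_per_emitter=1e-18)
```

In the stochastic effective model, `phis` and `betas` would instead be drawn from the STM/AFM/PEEM-derived probability densities, so each surface element carries its own sampled emitter population.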

More Details

Input signal synthesis for open-loop multiple-input/multiple-output testing

Conference Proceedings of the Society for Experimental Mechanics Series

Schultz, Ryan; Nelson, Garrett

Many in the structural dynamics community are currently researching a range of multiple-input/multiple-output problems and largely rely on commercially available closed-loop controllers to execute their experiments. Generally, these commercially available control systems are robust and prove adequate for a wide variety of testing. However, with the development of new techniques in this field, researchers will want to exercise these new techniques in laboratory tests. For example, modifying the control or input estimation method can improve the accuracy of control or provide higher response for a given input. Modification of the control methods is not typically possible in commercially available control systems; therefore, it is desirable to have some methodology available which allows researchers to synthesize input signals for multiple-input/multiple-output experiments. Here, methods for synthesizing multiply-correlated time histories based on desired cross spectral densities are demonstrated and then explored to understand the effects of various parameters on the resulting signals, their statistics, and their relation to the specified cross spectral densities. This paper aims to provide researchers with a simple, step-by-step process which can be implemented to generate input signals for open-loop multiple-input/multiple-output experiments.
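The standard Cholesky/random-phase construction behind this kind of signal synthesis can be sketched as follows. Only the correlation structure is fixed here; the amplitude scaling convention (one- vs. two-sided PSD, windowing) is an assumption and varies between implementations.

```python
import numpy as np

def synthesize_signals(csd, fs, seed=None):
    """Synthesize real time histories whose cross spectral density matrix
    approximates `csd`, via the Cholesky / random-phase construction.

    csd : (n_freq, n_sig, n_sig) Hermitian positive-definite CSD matrices
          on the one-sided rfft frequency grid
    fs  : sample rate, Hz
    Returns an (n_sig, 2*(n_freq-1)) array of real signals.
    """
    rng = np.random.default_rng(seed)
    n_freq, n_sig, _ = csd.shape
    X = np.zeros((n_sig, n_freq), dtype=complex)
    for k in range(1, n_freq - 1):                 # skip DC and Nyquist bins
        L = np.linalg.cholesky(csd[k])             # csd[k] = L @ L^H
        X[:, k] = L @ np.exp(2j * np.pi * rng.random(n_sig))
    n_time = 2 * (n_freq - 1)
    # amplitude scaling conventions vary; this factor is one common choice
    return np.fft.irfft(X * np.sqrt(n_time * fs / 4.0), n=n_time, axis=1)

# Example: two signals with 0.9 cross-correlation at every frequency
S = np.tile([[1.0, 0.9], [0.9, 1.0]], (129, 1, 1))
x = synthesize_signals(S, fs=1024.0, seed=0)       # shape (2, 256)
```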

More Details

Multiresolution Localization with Temporal Scanning for Super-Resolution Diffuse Optical Imaging of Fluorescence

IEEE Transactions on Image Processing

Bentz, Brian Z.; Lin, Dergan; Patel, Justin A.; Webb, Kevin J.

A super-resolution optical imaging method is presented that relies on the distinct temporal information associated with each fluorescent optical reporter to determine its spatial position to high precision with measurements of heavily scattered light. This multiple-emitter localization approach uses a diffusion equation forward model in a cost function, and has the potential to achieve micron-scale spatial resolution through centimeters of tissue. Utilizing some degree of temporal separation for the reporter emissions, position and emission strength are determined using a computationally efficient temporal-scanning multiresolution algorithm. The approach circumvents the spatial resolution challenges faced by earlier optical imaging approaches by using a diffusion equation forward model, and is promising for in vivo applications. For example, in principle, the method could be used to localize individual neurons firing throughout a rodent brain, enabling the direct imaging of neural network activity.

More Details

Comparison of multi-axis testing of the BARC structure with varying boundary conditions

Conference Proceedings of the Society for Experimental Mechanics Series

Rohe, Daniel P.; Schultz, Ryan; Schoenherr, Tyler F.; Skousen, Troy J.; Jones, Richard J.

The Box Assembly with Removable Component (BARC) structure was developed as a challenge problem for those investigating boundary conditions and their effect on structural dynamic tests. To investigate the effects of boundary conditions on the dynamic response of the Removable Component, it was tested in three configurations, each with a different fixture and thus a different boundary condition. A “truth” configuration test with the component attached to its next-level assembly (the Box) was first performed to provide data that multi-axis tests of the component would aim to replicate. The following two tests aimed to reproduce the component responses of the first test through multi-axis testing. The first of these tests is a more “traditional” vibration test with the removable component attached to a “rigid” plate fixture. A second set of these tests replaces the fixture plate with flexible fixtures designed using topology optimization and created using additive manufacturing. These two test approaches are compared back to the truth test to determine how much improvement can be obtained in a laboratory test by using a fixture that is more representative of the compliance of the component’s assembly.

More Details

Defining component environments and margin through zemblanic consideration of function spaces

Conference Proceedings of the Society for Experimental Mechanics Series

Starr, Michael; Segalman, Daniel J.

Historically, the qualification process for vehicles carrying vulnerable components has centered around the Shock Response Spectrum (SRS), and qualification consisted of devising a collection of tests whose collective SRS enveloped the qualification SRS. This involves selecting whatever tests are convenient that will envelope the qualification SRS over at least part of its spectrum; this selection is without any consideration of the details of structural response or the nature of anticipated failure of its components. It is asserted that this approach often leads to over-testing; however, as has been pointed out several times in the literature, it may not even be conservative. Given the advances in computational and experimental technology in the last several decades, it would be appropriate to seek some strategy of test selection that does account for structural response and failure mechanism and that pushes against the vulnerabilities of that specific structure. A strategy for such a zemblanic approach is presented (zemblanity, the opposite of serendipity, is the faculty of making unhappy, unlucky, yet expected discoveries by design).

More Details

On Inexact Solvers for the Coarse Problem of BDDC

Lecture Notes in Computational Science and Engineering

Dohrmann, Clark R.; Pierson, Kendall H.; Widlund, Olof B.

In this study, we present Balancing Domain Decomposition by Constraints (BDDC) preconditioners for three-dimensional scalar elliptic and linear elasticity problems in which the direct solution of the coarse problem is replaced by a preconditioner based on a smaller vertex-based coarse space.

More Details

FROSch: A Fast And Robust Overlapping Schwarz Domain Decomposition Preconditioner Based on Xpetra in Trilinos

Lecture Notes in Computational Science and Engineering

Heinlein, Alexander; Klawonn, Axel; Rajamanickam, Sivasankaran; Rheinbach, Oliver

This article describes a parallel implementation of a two-level overlapping Schwarz preconditioner with the GDSW (Generalized Dryja–Smith–Widlund) coarse space described in previous work [12, 10, 15] into the Trilinos framework; cf. [16]. The software is a significant improvement of a previous implementation [12]; see Sec. 4 for results on the improved performance.

More Details

The diameter effect in Bullseye powder

Shock Waves

Miner, T.; Dalton, D.; Romero, D.; Heine, M.; Todd, S.

Detonation velocity as a function of charge diameter is reported for Alliant Bullseye powder. Results are compared to those of mixtures of ammonium nitrate with aluminum and of ammonium nitrate with fuel oil. Additionally, measurements of free surface velocity of flyers in contact with detonating Bullseye are presented, and results are compared to those of hydrocode calculations using a Jones–Wilkins–Lee equation of state generated in a thermochemical code. Comparison to the experimental results shows that both the free surface and terminal velocities were under-predicted.

More Details

Multifidelity uncertainty propagation for cardiovascular hemodynamics

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Schiavazzi, Daniele E.; Fleeter, Casey M.; Geraci, Gianluca; Marsden, Alison L.

Predictions from numerical hemodynamics are increasingly adopted and trusted in the diagnosis and treatment of cardiovascular disease. However, the predictive abilities of deterministic numerical models are limited due to the large number of possible sources of uncertainty including boundary conditions, vessel wall material properties, and patient specific model anatomy. Stochastic approaches have been proposed as a possible improvement, but are penalized by the large computational cost associated with repeated solutions of the underlying deterministic model. We propose a stochastic framework which leverages three cardiovascular model fidelities, i.e., three-, one- and zero-dimensional representations of cardiovascular blood flow. Specifically, we employ multilevel and multifidelity estimators from Sandia's open-source Dakota toolkit to reduce the variance in our estimated quantities of interest, while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for both global and local hemodynamic indicators.

More Details

Multifidelity optimization under uncertainty for a scramjet-inspired problem

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Menhorn, Friedrich M.; Geraci, Gianluca; Eldred, Michael; Marzouk, Youssef M.

SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) is a method for stochastic nonlinear constrained derivative-free optimization. For such problems, it extends the path-augmented constraints framework introduced by the deterministic optimization method NOWPAC and uses a noise-adapted trust region approach and Gaussian processes for noise reduction. As a recent development, SNOWPAC is available in the DAKOTA framework, which offers a highly flexible interface to couple the optimizer with different sampling strategies or surrogate models. In this paper we discuss details of SNOWPAC and demonstrate the coupling with DAKOTA. We showcase the approach by presenting design optimization results for a shape in a 2D supersonic duct. This simulation is intended to imitate the behavior of the flow in a SCRAMJET simulation but at a much lower computational cost; additionally, different mesh or model fidelities can be tested. Thus, it serves as a convenient test case before moving to costly SCRAMJET computations. Here, we study deterministic results and results obtained by introducing uncertainty on inflow parameters. As sampling strategies, we compare classical Monte Carlo sampling with multilevel Monte Carlo approaches, for which we developed new error estimators. All approaches show a reasonable optimization of the design over the objective while maintaining or seeking feasibility. Furthermore, we achieve significant reductions in computational cost by using multilevel approaches that combine solutions from different grid resolutions.

More Details

Krylov Smoothing for Fully-Coupled AMG Preconditioners for VMS Resistive MHD

Lecture Notes in Computational Science and Engineering

Lin, Paul T.; Shadid, John N.; Tsuji, Paul H.

This study explores the use of a Krylov iterative method (GMRES) as a smoother for an algebraic multigrid (AMG) preconditioned Newton–Krylov iterative solution approach for a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, in which the performance of the smoothers plays an essential role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. This brief study presents three time-dependent resistive MHD test cases to evaluate the method. The results demonstrate that the GMRES smoother can be faster, due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less for the global GMRES smoother) than the DD ILU smoother.

More Details

Optimization-based property-preserving solution recovery for fault-tolerant scalar transport

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Ridzal, Denis; Bochev, Pavel B.

As the mean time between failures on the future high-performance computing platforms is expected to decrease to just a few minutes, the development of “smart”, property-preserving checkpointing schemes becomes imperative to avoid dramatic decreases in application utilization. In this paper we formulate a generic optimization-based approach for fault-tolerant computations, which separates property preservation from the compression and recovery stages of the checkpointing processes. We then specialize the approach to obtain a fault recovery procedure for a model scalar transport equation, which preserves local solution bounds and total mass. Numerical examples showing solution recovery from a corrupted application state for three different failure modes illustrate the potential of the approach.
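A minimal sketch of bound- and mass-preserving recovery, using a simple slack-redistribution scheme as a stand-in for the paper's generic optimization formulation: corrupted values are clipped to the local bounds, and the remaining mass defect is shifted into cells in proportion to their remaining slack.

```python
import numpy as np

def recover(u_approx, lo, hi, target_mass, vol, tol=1e-12, max_iter=100):
    """Recover a cell-averaged field satisfying local bounds lo <= u <= hi
    and total-mass conservation sum(u * vol) == target_mass.

    Clips to the bounds, then repeatedly distributes the remaining mass
    defect among cells in proportion to their remaining slack, so each
    update both conserves the defect and respects the bounds.
    """
    u = np.clip(np.asarray(u_approx, dtype=float), lo, hi)
    for _ in range(max_iter):
        defect = target_mass - np.sum(u * vol)
        if abs(defect) < tol:
            break
        slack = (hi - u) * vol if defect > 0 else (u - lo) * vol
        total = slack.sum()
        if total <= 0:
            raise ValueError("bounds are incompatible with the target mass")
        u = u + np.sign(defect) * min(abs(defect), total) * slack / (vol * total)
    return u

# Corrupted state with two out-of-bounds cells; bounds [0, 1], unit volumes
u = recover(np.array([5.0, -3.0, 0.5, 0.5]),
            lo=np.zeros(4), hi=np.ones(4), target_mass=2.5, vol=np.ones(4))
```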

More Details

Multilevel uncertainty quantification of a wind turbine large eddy simulation model

Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018

Maniaci, David C.; Frankel, A.; Geraci, Gianluca; Blaylock, Myra L.; Eldred, Michael

Wind energy is stochastic in nature; the prediction of aerodynamic quantities and loads relevant to wind energy applications involves modeling the interaction of a range of physics over many scales for many different cases. These predictions require a range of model fidelity, as predictive models that include the interaction of atmospheric and wind turbine wake physics can take weeks to solve on institutional high performance computing systems. In order to quantify the uncertainty in predictions of wind energy quantities with multiple models, researchers at Sandia National Laboratories have applied Multilevel-Multifidelity methods. A demonstration study was completed using simulations of an NREL 5 MW rotor in an atmospheric boundary layer with wake interaction. The flow was simulated with two models of disparate fidelity: an actuator line wind plant large-eddy scale model, Nalu, using several mesh resolutions, in combination with a lower fidelity model, OpenFAST. Uncertainties in the flow conditions and actuator forces were propagated through the models using Monte Carlo sampling to estimate the velocity defect in the wake and forces on the rotor. Coarse-mesh simulations were leveraged along with the lower-fidelity flow model to reduce the variance of the estimator, and the resulting Multilevel-Multifidelity strategy demonstrated a substantial improvement in estimator efficiency compared to the standard Monte Carlo method.
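The variance-reduction idea behind such multifidelity estimators can be sketched with a two-fidelity control-variate Monte Carlo estimator. The two models below are cheap analytic stand-ins, not the Nalu/OpenFAST pair; they exist only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(1)

def hf_model(x):    # stand-in "high-fidelity" model (cheap here, for illustration)
    return np.sin(x) + 0.05 * x**2

def lf_model(x):    # correlated "low-fidelity" surrogate
    return np.sin(x)

# Two-fidelity control-variate estimate of E[hf_model(X)], X ~ N(0, 1)
x_hf = rng.normal(size=200)        # few "expensive" high-fidelity samples
x_lf = rng.normal(size=20000)      # many cheap low-fidelity samples

y_hf = hf_model(x_hf)
y_lf = lf_model(x_hf)              # paired low-fidelity evaluations
rho = np.corrcoef(y_hf, y_lf)[0, 1]
alpha = rho * np.std(y_hf) / np.std(y_lf)          # optimal CV weight

estimate = y_hf.mean() + alpha * (lf_model(x_lf).mean() - y_lf.mean())
# The correction term cancels most of the high-fidelity sampling noise,
# reducing estimator variance by roughly a factor of 1/(1 - rho**2).
```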

More Details

Material properties of ceramic slurries for applications in additive manufacturing using stereolithography

Solid Freeform Fabrication 2018: Proceedings of the 29th Annual International Solid Freeform Fabrication Symposium - An Additive Manufacturing Conference, SFF 2018

Maines, Erin; Bell, Nelson S.; Evans, Lindsey; Roach, Matthew; Tsui, Lok-Kun; Lavin, Judith M.; Keicher, David

Stereolithography (SL) is a process that uses photosensitive polymer solutions to create 3D parts in a layer by layer approach. Sandia National Labs is interested in using SL for the printing of ceramic loaded resins, namely alumina, that we are formulating here at the labs. One of the most important aspects for SL printing of ceramics is the properties of the slurry itself. The work presented here will focus on the use of a novel commercially available low viscosity resin provided by Colorado Photopolymer Solutions, CPS 2030, and a Hypermer KD1 dispersant from Croda. Two types of a commercially available alumina powder, Almatis A16 SG and Almatis A15 SG, are compared to determine the effects that the size and the distribution of the powder have on the loading of the solution using rheology. The choice of a low viscosity resin allows for a high particle loading, which is necessary for the printing of high density parts using a commercial SL printer. The Krieger-Dougherty equation was used to evaluate the maximum particle loading for the system. This study found that a bimodal distribution of micron sized powder (A15 SG) reduced the shear thickening effects caused by hydroclusters, and allows for the highest alumina powder loading. A final sintered density of 90% of the theoretical density of alumina was achieved based on the optimized formulation and printing conditions.
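The Krieger-Dougherty relation used to evaluate maximum particle loading can be sketched as follows; the hard-sphere defaults are textbook values, and the fitted φ_max for the actual slurries is not taken from the paper.

```python
def krieger_dougherty(phi, phi_max=0.64, intrinsic_visc=2.5, eta0=1.0):
    """Relative suspension viscosity from the Krieger-Dougherty equation:
        eta = eta0 * (1 - phi/phi_max) ** (-intrinsic_visc * phi_max)

    phi            : particle volume fraction
    phi_max        : maximum packing fraction (0.64 for random close packing
                     of monodisperse spheres; higher for bimodal powders)
    intrinsic_visc : intrinsic viscosity [eta], 2.5 for hard spheres
    eta0           : viscosity of the suspending resin
    """
    if not 0.0 <= phi < phi_max:
        raise ValueError("require 0 <= phi < phi_max")
    return eta0 * (1.0 - phi / phi_max) ** (-intrinsic_visc * phi_max)

# A bimodal size distribution raises phi_max, lowering viscosity at the same
# solids loading and so permitting a higher maximum particle loading.
```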

More Details

Optimization Based Particle-Mesh Algorithm for High-Order and Conservative Scalar Transport

Lecture Notes in Computational Science and Engineering

Maljaars, Jakob M.; Labeur, Robert J.; Trask, Nathaniel A.; Sulsky, Deborah L.

A particle-mesh strategy is presented for scalar transport problems which provides diffusion-free advection, conserves mass locally (i.e., cellwise), and exhibits optimal convergence on arbitrary polyhedral meshes. This is achieved by expressing the convective field naturally located on the Lagrangian particles as a mesh quantity via a dedicated particle-mesh projection formulated as a PDE-constrained optimization problem. Optimal convergence and local conservation are demonstrated for a benchmark test, and the application of the scheme to mass conservative density tracking is illustrated for the Rayleigh–Taylor instability.

More Details

An algebraic sparsified nested dissection algorithm using low-rank approximations

SIAM Journal on Matrix Analysis and Applications

Cambier, Leopold; Boman, Erik G.; Rajamanickam, Sivasankaran; Tuminaro, Raymond S.; Darve, Eric

We propose a new algorithm for the fast solution of large, sparse, symmetric positive-definite linear systems, spaND (sparsified Nested Dissection). It is based on nested dissection, sparsification, and low-rank compression. After eliminating all interiors at a given level of the elimination tree, the algorithm sparsifies all separators corresponding to the interiors. This operation reduces the size of the separators by eliminating some degrees of freedom but without introducing any fill-in. This is done at the expense of a small and controllable approximation error. The result is an approximate factorization that can be used as an efficient preconditioner. We then perform several numerical experiments to evaluate this algorithm. We demonstrate that a version using orthogonal factorization and block-diagonal scaling takes fewer CG iterations to converge than previous similar algorithms on various kinds of problems. Furthermore, this algorithm is provably guaranteed to never break down, and the matrix stays symmetric positive-definite throughout the process. We evaluate the algorithm on some large problems and show that it exhibits near-linear scaling. The factorization time is roughly O(N), and the number of iterations grows slowly with N.

More Details

Investigating Nonlinearity in a Bolted Structure Using Force Appropriation Techniques

Conference Proceedings of the Society for Experimental Mechanics Series

Pacini, Benjamin R.; Roettgen, Daniel R.; Rohe, Daniel P.

Understanding the dynamic response of a structure is critical to design. This is of extreme importance in high-consequence systems on which human life can depend. Historically, these structures have been modeled as linear, where response scales proportionally with excitation amplitude. However, most structures are nonlinear to the extent that linear models are no longer sufficient to adequately capture important dynamics. Sources of nonlinearity include, but are not limited to: large deflections (so-called geometric nonlinearities), complex materials, and frictional interfaces/joints in assemblies between subcomponents. Joint nonlinearities usually cause the natural frequency to decrease and the effective damping ratio to increase with response amplitude due to microslip effects. These characteristics can drastically alter the dynamics of a structure and, if not well understood, could lead to unforeseen failure or unnecessarily over-designed features. Nonlinear structural dynamics has been a subject of study for many years, and several works in the literature provide a summary of recent developments and discoveries in this field. One topic discussed in these papers is nonlinear normal modes (NNMs), which are periodic solutions of the underlying conservative system. They provide a theoretical framework for describing the energy-dependence of natural frequencies and mode shapes of nonlinear systems, and lead to a promising method to validate nonlinear models. In prior work, a force appropriation testing technique was developed which allowed for the experimental tracking of undamped NNMs by achieving phase quadrature between the excitation and response. These studies considered damping to be small to moderate, and constant. Nonlinear damping of an NNM was subsequently studied using power-based quantities for a structure with a discrete, single-bolt interface.
In this work, the force appropriation technique, in which phase quadrature is achieved between force and response, is applied to a target mode of a structure with two bolted joints, one of which comprises a large, continuous interface. This is a preliminary investigation which includes a study of nonlinear natural frequency, mode shape, and damping trends extracted from the measured data.

More Details

A method for determining impact force for single and tri axis resonant plate shock simulations

Conference Proceedings of the Society for Experimental Mechanics Series

Ferri, Brian; Hopkins, Ronald N.

In the past year, resonant plate tests designed to excite all three axes simultaneously have become increasingly popular at Sandia National Labs. Historically, only one axis was tested at a time, but unintended off-axis responses were generated. In order to control the off-axis motion so that the off-axis responses satisfy the appropriate test specifications, the test setup had to be iteratively modified until the coupling between axes was as desired. The iterative modifications were done with modeling and simulation. To model the resonant plate test, an accurate forcing function must be specified. For resonant plate shock experiments, the input force of the projectile impacting the plate is prohibitively difficult to measure in situ. To improve on current simulation results, a method to use contact forces from an explicit simulation as an input load was implemented. This work covers an overview and background of three-axis resonant plate shock tests, their design, their value in experiments, and the difficulties faced in simulating them. The work also covers a summary of contact force implementation in an explicit dynamics code and how it is used to evaluate an input force for a three-axis resonant plate simulation. The results from the work show 3D finite element projectile and impact block interactions as well as simulated shock response data compared to experimental shock response data.

More Details

Human performance differences between drawing-based and model-based reference materials

Advances in Intelligent Systems and Computing

Heiden, Siobhan M.; Moyer, Eric M.

The Sandia National Laboratories Human Factors team designed and executed an experiment to quantify the differences between 2D and 3D reference materials with respect to task performance and cognitive workload. A between-subjects design was used in which 27 participants were randomly assigned to either the 2D or 3D reference material condition (14 and 13 participants, respectively). The experimental tasks required participants to interpret, locate, and report dimensions on their assigned reference material. Performance was measured by accuracy of task completion and time-to-complete. After all experimental tasks were completed, cognitive workload data were collected. Response times were longer in the 3D condition than in the 2D condition. However, no differences were found between conditions with respect to response accuracy and cognitive workload, which may indicate no negative cognitive impacts from the sole use of 3D reference materials in the workplace. This paper concludes with possible future efforts to address the limitations of this experiment and to explore the mechanisms behind the findings of this work.

More Details

A Graphical Design Approach for Two-Input Single-Output Systems Exploiting Plant/Controller Alignment: Design and Application

Journal of Dynamic Systems, Measurement and Control, Transactions of the ASME

Weir, Nathan A.; Alleyne, Andrew G.

Due to the unique structure of two-input single-output (TISO) feedback systems, several closed-loop properties can be characterized using the concepts of plant and controller "directions" and "alignment." Poor plant/controller alignment indicates significant limitations in terms of closed-loop performance. In general, it is desirable to design a controller that is well aligned with the plant in order to minimize the size of the closed-loop sensitivity functions and closed-loop interactions. Although the concept of alignment can be a useful analysis tool for a given plant/controller pair, it is not obvious how a controller should be designed to achieve good alignment. We present a new controller design approach, based on the PQ method (Schroeck et al., 2001, "On Compensator Design for Linear Time-Invariant Dual-Input Single-Output Systems," IEEE/ASME Trans. Mechatronics, 6(1), pp. 50-57), which explicitly incorporates knowledge of alignment into the design process. This is accomplished by providing graphical information about the alignment angle on the Bode plot of the PQ frequency response. We show the utility of this approach through a design example.

More Details

Infrared absorption cross section of SiNx thin films

Journal of Vacuum Science and Technology A: Vacuum, Surfaces and Films

Digregorio, Sara; Habermehl, Scott D.

At the molecular level, resonant coupling of infrared radiation with oscillations of the electric dipole moment determines the absorption cross section, σ. The parameter σ relates the bond density to the total integrated absorption. In this work, σ was measured for the Si-N asymmetric stretch mode in SiNx thin films of varying composition and thickness. Thin films were deposited by low pressure chemical vapor deposition at 850 °C from mixtures of dichlorosilane and ammonia. σ for each film was determined from Fourier transform infrared spectroscopy and ellipsometric measurements. Increasing the silicon content from 0% to 25% volume fraction amorphous silicon led to increased optical absorption and a corresponding systematic increase in σ from 4.77 × 10⁻²⁰ to 6.95 × 10⁻²⁰ cm², which is consistent with literature values. The authors believe that this trend is related to charge transfer induced structural changes in the basal SiNx tetrahedron as the volume fraction of amorphous silicon increases. Experimental σ values were used to calculate the effective dipole oscillating charge, q, for four films of varying composition. The authors find that q increases with increasing amorphous silicon content, indicating that compositional factors contribute to modulation of the Si-N dipole moment. Additionally, in the composition range investigated, the authors found that σ agrees favorably with trends observed in films deposited by plasma enhanced chemical vapor deposition.
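In its simplest form, the cross section is the integrated absorption coefficient over the band divided by the bond density. A sketch with a synthetic Gaussian band; the band amplitude, width, and bond density below are illustrative, not the paper's measured values, and some definitions carry extra factors of the band-center wavenumber.

```python
import numpy as np

def absorption_cross_section(wavenumber, alpha, bond_density):
    """Cross section sigma (cm^2) = integrated absorption coefficient over
    the band, divided by the bond density.

    wavenumber   : cm^-1 grid spanning the absorption band
    alpha        : absorption coefficient on that grid, cm^-1
    bond_density : bonds per cm^3
    """
    dw = np.diff(wavenumber)
    # trapezoidal integration of alpha over the band
    return np.sum(0.5 * (alpha[1:] + alpha[:-1]) * dw) / bond_density

# Synthetic Gaussian band near the Si-N asymmetric stretch (~835 cm^-1)
nu = np.linspace(600.0, 1100.0, 2001)
alpha = 1e4 * np.exp(-((nu - 835.0) ** 2) / (2.0 * 30.0**2))
sigma = absorption_cross_section(nu, alpha, bond_density=1e22)
```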

More Details

Regular sensitivity computation avoiding chaotic effects in particle-in-cell plasma methods

Journal of Computational Physics

Chung, Seung W.; Bond, Stephen D.; Cyr, Eric C.; Freund, Jonathan B.

Particle-in-cell (PIC) simulation methods are attractive for representing species distribution functions in plasmas. However, as a model, they introduce uncertain parameters, and for quantifying their prediction uncertainty it is useful to be able to assess the sensitivity of a quantity-of-interest (QoI) to these parameters. Such sensitivity information is likewise useful for optimization. However, computing sensitivity for PIC methods is challenging due to the chaotic particle dynamics, and sensitivity techniques remain underdeveloped compared to those for Eulerian discretizations. This challenge is examined from a dual particle–continuum perspective that motivates a new sensitivity discretization. Two routes to sensitivity computation are presented and compared: a direct fully-Lagrangian particle-exact approach provides sensitivities of each particle trajectory, and a new particle-pdf discretization is formulated from a continuum perspective but discretized by particles to take advantage of the same type of Lagrangian particle description leveraged by PIC methods. Since the sensitivity particles in this approach are only indirectly linked to the plasma-PIC particles, they can be positioned and weighted independently for efficiency and accuracy. The corresponding numerical algorithms are presented in mathematical detail. The advantage of the particle-pdf approach in avoiding the spurious chaotic sensitivity of the particle-exact approach is demonstrated for Debye shielding and sheath configurations. In essence, the continuum perspective makes implicit the distinctness of the particles, which circumvents the Lyapunov instability of the N-body PIC system. The cost of the particle-pdf approach is comparable to the baseline PIC simulation.

More Details

Photocurrent from single collision 14-MeV neutrons in GaN and GaAs

IEEE Transactions on Nuclear Science

Jasica, M.J.; Wampler, William R.; Vizkelethy, Gyorgy; Hehr, Brian D.; Bielejec, Edward S.

Accurate predictions of device performance in 14-MeV neutron environments rely upon understanding the recoil cascades that may be produced. Recoils from 14-MeV neutrons impinging on both gallium nitride (GaN) and gallium arsenide (GaAs) devices were modeled and compared to the recoil spectra of devices exposed to 14-MeV neutrons. Recoil spectra were generated using nuclear reaction modeling programs and converted into an ionizing energy loss (IEL) spectrum. We measured the recoil IEL spectra by capturing the photocurrent pulses produced by single neutron interactions with the device. Good agreement, within a factor of two, was found between the model and the experiment under strongly depleted conditions. However, this agreement degraded significantly when the bias was removed, indicating partial energy deposition due to cascades that escape the active volume of the device, which is not captured by the model. Consistent event rates across multiple detectors confirm the reliability of our neutron recoil detection method.

More Details

Safeguards and process modeling for molten salt reactors

GLOBAL 2019 - International Nuclear Fuel Cycle Conference and TOP FUEL 2019 - Light Water Reactor Fuel Performance Conference

Shoman, Nathan; Cipiti, Benjamin B.; Betzler, Benjamin

Renewed interest in the development of molten salt reactors has created the need for analytical tools that can perform safeguards assessments on these advanced reactors. This work outlines a flexible framework to perform safeguards analyses on a wide range of advanced reactor designs. The framework consists of two parts, a process model and a safeguards tool. The process model, developed in MATLAB Simulink, simulates the flow of materials through a reactor facility. These models are linked to SCALE/TRITON and SCALE/ORIGEN to approximate depletion and decay of fuel salts but are flexible enough to accommodate higher fidelity tools if needed. The safeguards tool uses the process data to calculate common statistical quantities of interest such as material unaccounted for (MUF) and Page's trend test on the standardized independent transformed MUF (SITMUF). This paper documents the development of these tools.
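The statistical quantities named above can be illustrated with a minimal sketch. The allowance k and threshold h are illustrative values, and a real SITMUF analysis would first standardize and transform the MUF sequence through its covariance model; this sketch applies Page's test directly to an already-standardized sequence.

```python
import numpy as np

def muf_sequence(inputs, outputs, inventories):
    """Material unaccounted for at each balance period:
        MUF_t = I_{t-1} + inputs_t - outputs_t - I_t,
    where `inventories` has one more entry (the initial inventory) than
    `inputs`/`outputs`.
    """
    inv = np.asarray(inventories, dtype=float)
    return (inv[:-1] + np.asarray(inputs, dtype=float)
            - np.asarray(outputs, dtype=float) - inv[1:])

def pages_test(z, k=0.5, h=5.0):
    """One-sided Page (CUSUM) trend test on a standardized sequence z:
        S_t = max(0, S_{t-1} + z_t - k), alarm when S_t > h.
    k (allowance) and h (decision threshold) are illustrative values.
    Returns (statistic history, index of first alarm or None).
    """
    s, stats = 0.0, []
    for t, zt in enumerate(z):
        s = max(0.0, s + zt - k)
        stats.append(s)
        if s > h:
            return np.array(stats), t
    return np.array(stats), None
```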

More Details

Volume-averaged electrochemical performance modeling of 3D interpenetrating battery electrode architectures

Journal of the Electrochemical Society

Trembacki, Bradley L.; Vadakkepatt, Ajay; Roberts, Scott A.; Murthy, Jayathi Y.

Recent advancements in micro-scale additive manufacturing techniques have created opportunities for design of novel electrode geometries that improve battery performance by deviating from the traditional layered battery design. These 3D batteries typically exhibit interpenetrating anode and cathode materials throughout the design space, but the existing well-established porous electrode theory models assume only one type of electrode is present in each battery layer. We therefore develop and demonstrate a multielectrode volume-averaged electrochemical transport model to simulate transient discharge performance of these new interpenetrating electrode architectures. We implement the new reduced-order model in the PETSc framework and assess its accuracy by comparing predictions to corresponding mesoscale-resolved simulations that are orders of magnitude more computationally intensive. For simple electrode designs such as alternating plates or cylinders, the volume-averaged model predicts performance within ∼2% for electrode feature sizes comparable to traditional particle sizes (5-10μm) at discharge rates up to 3C. When considering more complex geometries such as minimal surface designs (i.e. gyroid, Schwarz P), we show that using calibrated characteristic diffusion lengths for each design results in errors below 3% for discharge rates up to 3C. These comparisons verify that this novel model has made reliable cell-scale simulations of interpenetrating electrode designs possible.

More Details

Three-Dimensional Additively Manufactured Microstructures and Their Mechanical Properties

JOM

Rodgers, Theron M.; Lim, Hojun; Brown, Judith A.

Metal additive manufacturing (AM) allows for the freeform creation of complex parts. However, AM microstructures are highly sensitive to the process parameters used. Resulting microstructures vary significantly from typical metal alloys in grain morphology distributions, defect populations and crystallographic texture. AM microstructures are often anisotropic and possess three-dimensional features. These microstructural features determine the mechanical properties of AM parts. Here, we reproduce three “canonical” AM microstructures from the literature and investigate their mechanical responses. Stochastic volume elements are generated with a kinetic Monte Carlo process simulation. A crystal plasticity-finite element model is then used to simulate plastic deformation of the AM microstructures and a reference equiaxed microstructure. Results demonstrate that AM microstructures possess significant variability in strength and plastic anisotropy compared with conventional equiaxed microstructures.

More Details

Reduced-order atomistic cascade method for simulating radiation damage in metals

Journal of Physics Condensed Matter

Chen, Elton Y.; Deo, Chaitanya; Dingreville, Remi

Atomistic modeling of radiation damage through displacement cascades is deceptively non-trivial. Due to the high energy and stochastic nature of atomic collisions, individual primary knock-on atom (PKA) cascade simulations are computationally expensive and ill-suited for length and dose upscaling. Here, we propose a reduced-order atomistic cascade model capable of predicting and replicating radiation events in metals across a wide range of recoil energies. Our methodology approximates cascade and displacement damage production by modeling the cascade as a core-shell atomic structure composed of two damage production estimators, namely an athermal recombination corrected displacements per atom (arc-dpa) in the shell and a replacements per atom (rpa) representing atomic mixing in the core. These estimators are calibrated from explicit PKA simulations and a standard displacement damage model that incorporates cascade defect production efficiency and mixing effects. We illustrate the predictability and accuracy of our reduced-order atomistic cascade method for the cases of copper and niobium by comparing its results with those from full PKA simulations in terms of defect production as well as the resulting cascade evolution and structure. We provide examples for simulating high energy cascade fragmentation and large dose ion-bombardment to demonstrate its possible applicability. Finally, we discuss the various practical considerations and challenges associated with this methodology especially when simulating subcascade formation and dose effects.

More Details

Scale-out edge storage systems with embedded storage nodes to get better availability and cost-efficiency at the same time

HotEdge 2020 - 3rd USENIX Workshop on Hot Topics in Edge Computing

Liu, Jianshen; Curry, Matthew L.; Maltzahn, Carlos; Kufeldt, Philip

In the resource-rich environment of data centers most failures can quickly failover to redundant resources. In contrast, failure in edge infrastructures with limited resources might require maintenance personnel to drive to the location in order to fix the problem. The operational cost of these "truck rolls" to locations at the edge infrastructure competes with the operational cost incurred by extra space and power needed for redundant resources at the edge. Computational storage devices with network interfaces can act as network-attached storage servers and offer a new design point for storage systems at the edge. In this paper we hypothesize that a system consisting of a larger number of such small "embedded" storage nodes provides higher availability due to a larger number of failure domains while also saving operational cost in terms of space and power. As evidence for our hypothesis, we compared the possibility of data loss between two different types of storage systems: one is constructed with general-purpose servers, and the other one is constructed with embedded storage nodes. Our results show that the storage system constructed with general-purpose servers has 7 to 20 times higher risk of losing data over the storage system constructed with embedded storage devices. We also compare the two alternatives in terms of power and space using the Media-Based Work Unit (MBWU) that we developed in an earlier paper as a reference point.
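The intuition that more, smaller failure domains lower the chance of data loss can be illustrated with a simple binomial model. This is a hedged sketch with hypothetical parameters, not the paper's MBWU-based analysis: both configurations devote the same fraction of devices to redundancy, and data is lost only when more devices fail than the redundancy tolerates.

```python
from math import comb

def p_data_loss(n, tolerate, p_fail):
    """Probability that more than `tolerate` of n independent nodes fail."""
    return sum(comb(n, k) * p_fail**k * (1 - p_fail)**(n - k)
               for k in range(tolerate + 1, n + 1))

# Same redundancy fraction (25%) and per-device failure probability:
# a few large servers versus many small embedded storage nodes.
servers = p_data_loss(n=4, tolerate=1, p_fail=0.01)
embedded = p_data_loss(n=16, tolerate=4, p_fail=0.01)
```

With more independent failure domains at the same redundancy fraction, simultaneous failures beyond the tolerance become far less likely, which is the qualitative effect the paper quantifies.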

More Details

Impact of divertor material on neutral recycling and discharge fueling in DIII-D

Physica Scripta

Bykov, I.; Rudakov, D.L.; Pigarov, A.Y.; Hollmann, E.M.; Guterl, J.; Boedo, J.A.; Chrobak, C.P.; Abrams, T.; Guo, H.Y.; Lasnier, C.J.; Mclean, A.G.; Wang, H.Q.; Watkins, Jonathan; Thomas, D.M.

Experiments with the lower divertor of DIII-D during the Metal Rings Campaign (MRC) show that the fraction F of atomic D in the total recycling flux is material-dependent and varies through the ELM cycle, which may affect divertor fueling. Between ELMs, FC ∼ 10% and FW ∼ 40%, consistent with expectations if all atomic recycling is due to reflections. During ELMs, FC increases to 50% and FW to 60%. In contrast, the total D recycling coefficient R, including atoms and molecules, stays close to unity near the strike point where the surface is saturated with D. During ELMs, R can deviate from unity, increasing during high-energy ELM-ion deposition (net D release) and decreasing at the end of the ELM, which reflects the ability of the target to trap the ELM-deposited D. The increase of R above unity in response to an increase in ion impact energy Ei has been studied with small divertor target samples using the Divertor Materials Evaluation System (DiMES). An electrostatic bias was applied to DiMES to change Ei by 90 eV. On all studied materials, including C, Mo, uncoated and W-coated TZM (>99% Mo, Ti, and Zr alloy), W, and W fuzz, an increase of Ei transiently increased the D yield (and R) by ∼10%. On C there was also an increase in the molecular D2 yield, probably due to ion-induced D2 desorption. Despite the measured increase in F on W compared to C, attached H-mode shots with the OSP on W during the MRC did not demonstrate a higher pedestal density. An increase of about 8% in the edge density could be seen only in attached L-mode scenarios. The difference can be explained by higher D trapping in the divertor and lower divertor fueling efficiency in H- versus L-mode.

More Details

Synchronous and concurrent multidomain computing method for cloud computing platforms

SIAM Journal on Scientific Computing

Anguiano, Marcelino; Kuberry, Paul; Bochev, Pavel B.; Masud, Arif

We present a numerical method for the synchronous and concurrent solution of transient elastodynamics problems in which the computational domain is divided into subdomains that may reside on separate computational platforms. This work employs the variational multiscale discontinuous Galerkin (VMDG) method to develop interdomain transmission conditions for transient problems. The fine-scale modeling concept leads to variationally consistent coupling terms at the common interfaces. The method admits a large class of time discretization schemes, and decoupling of the solution for each subdomain is achieved by selecting any explicit algorithm. Numerical tests with a manufactured solution problem show optimal convergence rates. The energy history in a free vibration problem is in agreement with that of the solution from a monolithic computational domain.

More Details

Tensor Basis Gaussian Process Models of Hyperelastic Materials

Journal of Machine Learning for Modeling and Computing

Frankel, A.; Jones, Reese E.; Swiler, Laura P.

In this work, we develop Gaussian process regression (GPR) models of isotropic hyperelastic material behavior. First, we consider the direct approach of modeling the components of the Cauchy stress tensor as a function of the components of the Finger stretch tensor in a Gaussian process. We then consider an improvement on this approach that embeds rotational invariance of the stress-stretch constitutive relation in the GPR representation. This approach requires fewer training examples and achieves higher accuracy while maintaining invariance to rotations exactly. Finally, we consider an approach that recovers the strain-energy density function and derives the stress tensor from this potential. Although the error of this model for predicting the stress tensor is higher, the strain-energy density is recovered with high accuracy from limited training data. The approaches presented here are examples of physics-informed machine learning. They go beyond purely data-driven approaches by embedding the physical system constraints directly into the Gaussian process representation of materials models.
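The invariance idea can be made concrete with a small sketch: regressing on the isotropic invariants of the Finger tensor, rather than its raw components, makes any scalar prediction exactly independent of rigid rotations. The GP below is a minimal from-scratch regressor with a stand-in strain-energy target, not the paper's trained models.

```python
import numpy as np

def invariants(B):
    """Isotropic invariants of the Finger tensor B = F F^T."""
    I1 = np.trace(B)
    I2 = 0.5 * (I1 ** 2 - np.trace(B @ B))
    I3 = np.linalg.det(B)
    return np.array([I1, I2, I3])

def rbf(X, Y, ell=2.0):
    """Squared-exponential kernel between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(Xtr, ytr, Xte, noise=1e-6):
    """GP posterior mean at test points Xte."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return rbf(Xte, Xtr) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(1)
Fs = [np.eye(3) + 0.1 * rng.standard_normal((3, 3)) for _ in range(30)]
X = np.array([invariants(F @ F.T) for F in Fs])
y = X[:, 0] - 3.0 - np.log(X[:, 2])  # stand-in strain-energy target

# A rigid rotation leaves the invariants, and hence the prediction, unchanged.
B = Fs[0] @ Fs[0].T
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
w_orig = gp_predict(X, y, invariants(B)[None, :])
w_rot = gp_predict(X, y, invariants(R @ B @ R.T)[None, :])
```

Because the rotation never enters the features, invariance holds exactly rather than being learned approximately from rotated training examples, which is why the invariant-based model needs fewer training points.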

More Details

Loop antennas for use on/off ground planes

IEEE Access

Borchardt, John

Many applications benefit from the ability of an RFID tag to operate both on and off a conducting ground plane. This paper presents an electrically small loop antenna at 433 MHz that passively maintains its free-space tune and match when located a certain distance away from a large conducting ground plane. The design achieves this using a single radiation mechanism (that of a loop) in both environments without the use of a ground plane or EBG/AMC structure. An equivalent circuit model is developed that explains the dual-environment behavior and shows that the geometry balances inductive and capacitive parasitics introduced by the ground plane such that the free-space loop reactance, and thus resonant frequency, does not change. A design equation for balancing the inductive and capacitive parasitic effects is derived. Finally, experimental data showing the design eliminates ground plane detuning in practice is presented. The design is suitable for active, 'hard' RFID tag applications.

More Details

Temporal dynamics of large-scale structures for turbulent Rayleigh-Bénard convection in a moderate aspect-ratio cylinder

Journal of Fluid Mechanics

Sakievich, Philip; Peet, Y.T.; Adrian, R.J.

We investigate the spatial organization and temporal dynamics of large-scale, coherent structures in turbulent Rayleigh-Bénard convection via direct numerical simulation of a 6.3 aspect-ratio cylinder with Rayleigh and Prandtl numbers of and, respectively. Fourier modal decomposition is performed to investigate the structural organization of the coherent turbulent motions by analysing the length scales, time scales and the underlying dynamical processes that are ultimately responsible for the large-scale structure formation and evolution. We observe a high level of rotational symmetry in the large-scale structure in this study and find that the structure is well described by the first four azimuthal Fourier modes. Two different large-scale organizations are observed over the duration of the simulation, and these patterns are dominated spatially and energetically by azimuthal Fourier modes with frequencies of 2 and 3. Studies of the transition between these two large-scale patterns, radial and vertical variations in the azimuthal energy spectra, as well as the spatial and modal variations in the system's correlation time are conducted. Rotational dynamics are observed for individual Fourier modes and the global structure, with strong similarities to the dynamics that have been reported for unit aspect-ratio domains in prior works. It is shown that the large-scale structures have very long correlation time scales, on the order of hundreds to thousands of free-fall time units, and that they are the primary source of horizontal inhomogeneity within the system that can be observed during a finite but very long simulation or experiment.
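The azimuthal modal decomposition described here amounts to a Fourier transform around the cylinder axis. A minimal illustration on a synthetic ring of samples (not the simulation data): a weak mode-2 signal superposed on a dominant mode-3 signal, with the per-mode energy recovered from the FFT coefficients.

```python
import numpy as np

ntheta = 128
theta = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
# Synthetic ring of samples: a weak m=2 mode plus a dominant m=3 mode.
ring = 0.2 * np.cos(2.0 * theta) + 1.0 * np.cos(3.0 * theta)

coeffs = np.fft.rfft(ring) / ntheta        # azimuthal Fourier coefficients
energy = np.abs(coeffs) ** 2               # energy per azimuthal mode
dominant = int(np.argmax(energy[1:]) + 1)  # skip the axisymmetric m=0 mode
```

Applying this decomposition ring-by-ring over radius and height gives the radial and vertical variations of the azimuthal energy spectra discussed in the abstract.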

More Details

Bayesian inference of stochastic reaction networks using multifidelity sequential tempered markov chain monte carlo

International Journal for Uncertainty Quantification

Catanach, Thomas A.; Vo, Huy D.; Munsky, Brian

Stochastic reaction network models are often used to explain and predict the dynamics of gene regulation in single cells. These models usually involve several parameters, such as the kinetic rates of chemical reactions, that are not directly measurable and must be inferred from experimental data. Bayesian inference provides a rigorous probabilistic framework for identifying these parameters by finding a posterior parameter distribution that captures their uncertainty. Traditional computational methods for solving inference problems such as Markov chain Monte Carlo methods based on the classical Metropolis-Hastings algorithm involve numerous serial evaluations of the likelihood function, which in turn requires expensive forward solutions of the chemical master equation (CME). We propose an alternate approach based on a multifidelity extension of the sequential tempered Markov chain Monte Carlo (ST-MCMC) sampler. This algorithm is built upon sequential Monte Carlo and solves the Bayesian inference problem by decomposing it into a sequence of efficiently solved subproblems that gradually increase both model fidelity and the influence of the observed data. We reformulate the finite state projection (FSP) algorithm, a well-known method for solving the CME, to produce a hierarchy of surrogate master equations to be used in this multifidelity scheme. To determine the appropriate fidelity, we introduce a novel information-theoretic criterion that seeks to extract the most information about the ultimate Bayesian posterior from each model in the hierarchy without inducing significant bias. This novel sampling scheme is tested with high-performance computing resources using biologically relevant problems.

More Details

Molecular dynamics discovery of an extraordinary ionic migration mechanism in dislocation-containing TlBr crystals

Physical Chemistry Chemical Physics

Zhou, Xiaowang; Doty, F.P.; Yang, Pin; Foster, Michael E.; Kim, H.; Cirignano, L.J.

TlBr can surpass CZT as the leading semiconductor for γ- and X-radiation detection. Unfortunately, the optimum properties of TlBr quickly decay when an operating electrical field is applied. Quantum mechanical studies indicated that if this property degradation comes from the conventional mechanism of ionic migration of vacancies, then an unrealistically high vacancy concentration is required to account for the rapid aging of TlBr seen in experiments. In this work, we have applied large scale molecular dynamics simulations to study the effects of dislocations on ionic migration of TlBr crystals under electrical fields. We found that electrical fields can drive the motion of edge dislocations in both slip and climb directions. These combined motions eject enormous numbers of vacancies in the dislocation trail. Both dislocation motion and a high vacancy concentration can account for the rapid aging of the TlBr detectors. These findings suggest that strengthening methods to pin dislocations should be explored to increase the lifetimes of TlBr crystals.

More Details

Theoretical study on the microscopic mechanism of lignin solubilization in Keggin-type polyoxometalate ionic liquids

Physical Chemistry Chemical Physics

Ju, Zhaoyang; Xiao, Weihua; Yao, Xiaoqian; Tan, Xin; Simmons, Blake A.; Sale, Kenneth L.; Sun, Ning

Keggin-type polyoxometalate derived ionic liquids (POM-ILs) have recently been presented as effective solvent systems for biomass delignification. To investigate the mechanism of lignin dissolution in POM-ILs, the system involving POM-IL ([C4C1Im]3[PW12O40]) and guaiacyl glycerol-β-guaiacyl ether (GGE), which contains a β-O-4 bond (the most dominant bond moiety in lignin), was studied using quantum mechanical calculations and molecular dynamics simulations. These studies show that more stable POM-IL structures are formed when [C4C1Im]+ is anchored in the region connecting the four terminal oxygen atoms of the [PW12O40]3- surface. The cations in POM-ILs appear to stabilize the geometry by offering strong and positively charged sites, and the POM anion is a good H-bond acceptor. Calculations of POM-IL interacting with GGE show that the POM anion interacts strongly with GGE through many H-bonds and π-π interactions, which are the main interactions between the POM-IL anion and GGE and are strong enough to force GGE into highly bent conformations. These simulations provide fundamental models of the dissolution mechanism of lignin by POM-IL, which is promoted by strong interactions of the POM-IL anion with lignin.

More Details

Detecting and imaging stress corrosion cracking in stainless steel, with application to inspecting storage canisters for spent nuclear fuel

NDT and E International

Remillieux, Marcel C.; Kaoumi, Djamel; Ohara, Yoshikazu; Stuber Geesey, Marcie A.; Xi, Li; Schoell, Ryan; Bryan, C.R.; Enos, David; Summa, Deborah A.; Ulrich, T.J.; Anderson, Brian E.; Shayer, Zeev

One of the primary concerns with the long-term performance of storage systems for spent nuclear fuel (SNF) is the potential for corrosion due to deliquescence of salts deposited as aerosols on the surface of the canister, which is typically made of austenitic stainless steel. In regions of high residual weld stresses, this may lead to localized stress-corrosion cracking (SCC). The ability to detect and image SCC at an early stage (long before the cracks can propagate through the thickness of the canister wall and leaks of radioactive material occur) is essential to the performance evaluation and licensing process of the storage systems. In this paper, we explore a number of nondestructive testing techniques to detect and image SCC in austenitic stainless steel. Our attention is focused on a small rectangular sample of 1 × 2 in² with two cracks of mm-scale size. The techniques explored in this paper include nonlinear resonant ultrasound spectroscopy (NRUS) for detection, Linear Elastodynamic Gradient Imaging Technique (LEGIT), ultrasonic C-scan, vibrothermography, and synchrotron X-ray diffraction for imaging. Results obtained from these techniques are compared. Cracks of mm-scale size can be detected and imaged with all the techniques explored in this study.

More Details

A Machine Learning Evaluation of Maintenance Records for Common Failure Modes in PV Inverters

IEEE Access

Gunda, Thushara; Hackett, Sean; Kraus, Laura; Downs, Christopher; Jones, Ryan; Mcnalley, Christopher; Bolen, Michael; Walker, Andy

Inverters are a leading source of hardware failures and contribute to significant energy losses at photovoltaic (PV) sites. An understanding of failure modes within inverters requires evaluation of a dataset that captures insights from multiple characterization techniques (including field diagnostics, production data analysis, and current-voltage curves). One readily available dataset that can be leveraged to support such an evaluation is maintenance records, which are used to log all site-related technician activities but vary in the structuring of information. Using machine learning, this analysis evaluated a database of 55,000 maintenance records across 800+ sites to identify inverter-related records and consistently categorize them to gain insight into common failure modes within this critical asset. Communications, ground faults, heat management systems, and insulated gate bipolar transistors emerge as the most frequently discussed inverter subsystems. Further evaluation of these failure modes identified distinct variations in failure frequencies over time and across inverter types, with communication failures occurring more frequently in early years. Increased understanding of these failure patterns can inform ongoing PV system reliability activities, including simulation analyses, spare parts inventory management, cost estimates for operations and maintenance, and development of standards for inverter testing. Advanced implementations of machine learning techniques coupled with standardization of asset labels and descriptions can extend these insights into actionable information that can support development of algorithms for condition-based maintenance, which could further reduce failures and associated energy losses at PV sites.
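Consistently categorizing free-text records is a standard text-classification task. As a hedged sketch (the labels and records below are hypothetical, and the paper's actual pipeline is not specified here), a minimal multinomial naive Bayes classifier over bag-of-words features with add-one smoothing:

```python
import math
from collections import Counter, defaultdict

def train(records):
    """Fit a multinomial naive Bayes model on (text, label) pairs."""
    class_words, class_counts, vocab = defaultdict(Counter), Counter(), set()
    for text, label in records:
        words = text.lower().split()
        class_words[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return class_words, class_counts, vocab

def classify(text, class_words, class_counts, vocab):
    """Return the most probable label under add-one (Laplace) smoothing."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, count in class_counts.items():
        lp = math.log(count / total)  # class prior
        denom = sum(class_words[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((class_words[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical maintenance records; real logs vary widely in structure.
model = train([
    ("lost communications with inverter network offline", "communications"),
    ("inverter network link down no comms", "communications"),
    ("ground fault alarm tripped", "ground fault"),
    ("ground fault detected on dc side", "ground fault"),
    ("cooling fan failure overheating", "heat management"),
    ("fan replaced due to overheating", "heat management"),
])
```

At the scale of 55,000 records, the same idea applies with richer features and models, but the core step is identical: map unstructured technician text onto a consistent set of subsystem labels.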

More Details

How transition metals enable electron transfer through the SEI: Part II. Redox-cycling mechanism model and experiment

Journal of the Electrochemical Society

Harris, Oliver C.; Lin, Yuxiao; Qi, Yue; Leung, Kevin; Tang, Maureen H.

At high operating voltages, metals like Mn, Ni, and Co dissolve from Li-ion cathodes, deposit at the anode, and interfere with the performance of the solid-electrolyte interphase (SEI) to cause constant Li loss. The mechanism by which these metals disrupt SEI processes at the anode remains poorly understood. Experiments from Part I of this work demonstrate that Mn, Ni, and Co all affect the electronic properties of the SEI much more than the morphology, and that Mn is the most aggressively disruptive of the three metals. In this work we determine how a proposed electrocatalytic mechanism can explain why Mn contamination is uniquely detrimental to SEI passivation. We develop a microkinetic model of the redox cycling mechanism and apply it to experiments from Part I. The results show that the thermodynamic metal reduction potential does not explain why Mn is the most active of the three metals. Instead, kinetic differences between the three metals are more likely to govern their reactivity in the SEI. Our results emphasize the importance of local coordination environment and proximity to the anode within the SEI for controlling electron transfer and resulting capacity fade.

More Details

Interface Engineered Room-Temperature Ferromagnetic Insulating State in Ultrathin Manganite Films

Advanced Science

Lu, Ping

Ultrathin epitaxial films of ferromagnetic insulators (FMIs) with Curie temperatures near room temperature are critically needed for use in dissipationless quantum computation and spintronic devices. However, such materials are extremely rare. Here, a room-temperature FMI is achieved in ultrathin La0.9Ba0.1MnO3 films grown on SrTiO3 substrates via an interface proximity effect. Detailed scanning transmission electron microscopy images clearly demonstrate that MnO6 octahedral rotations in La0.9Ba0.1MnO3 close to the interface are strongly suppressed. As determined from in situ X-ray photoemission spectroscopy, O K-edge X-ray absorption spectroscopy, and density functional theory, the realization of the FMI state arises from a reduction of Mn eg bandwidth caused by the quenched MnO6 octahedral rotations. The emerging FMI state in La0.9Ba0.1MnO3, together with the coherent interface achieved with the perovskite substrate, offers great potential for future high-performance electronic devices.

More Details

IRDFF-II: A New Neutron Metrology Library

Nuclear Data Sheets

Griffin, Patrick J.; Trkov, A.; Simakov, S.P.; Greenwood, L.R.; Zolotarev, K.I.; Capote, R.; Destouches, C.; Kahler, A.C.; Konno, C.; Kostal, M.; Aldama, D.L.; Chechev, V.; Majerle, M.; Malambu, E.; Ohta, M.; Pronyaev, V.G.; Yashima, H.; White, M.; Wagemans, J.; Vavtar, I.; Simeckova, E.; Radulovic, V.; Sato, S.

High quality nuclear data is the most fundamental underpinning for all neutron metrology applications. This paper describes the release of version II of the International Reactor Dosimetry and Fusion File (IRDFF-II) that contains a consistent set of nuclear data for fission and fusion neutron metrology applications up to 60 MeV neutron energy. The library is intended to support: a) applications in research reactors; b) safety and regulatory applications in nuclear power generation in commercial fission reactors; and c) material damage studies in support of the research and development of advanced fusion concepts. The paper describes the contents of the library, documents the thorough verification process used in its preparation, and provides an extensive set of validation data gathered from a wide range of neutron benchmark fields. The new IRDFF-II library includes 119 metrology reactions, four cover material reactions to support self-shielding corrections, five metrology metrics used by the dosimetry community, and cumulative fission products yields for seven fission products in three different neutron energy regions. In support of characterizing the measurement of the residual nuclei from the dosimetry reactions and the fission product decay modes, the present document lists the recommended decay data, particle emission energies and probabilities for 68 activation products. It also includes neutron spectral characterization data for 29 neutron benchmark fields for the validation of the library contents. An additional six reference fields were assessed (four from plutonium critical assemblies, two measured fields for thermal-neutron induced fission on 233U and 239Pu targets) but not used for validation due to systematic discrepancies in C/E reaction rate values or lack of reaction-rate experimental data.
Another ten analytical functions are included that can be useful for calculating average cross sections, average energy, thermal spectrum average cross sections and resonance integrals. The IRDFF-II library and comprehensive documentation is available online at www-nds.iaea.org/IRDFF/. Evaluated cross sections can be compared with experimental data and other evaluations at www-nds.iaea.org/exfor/endf.htm. The new library is expected to become the international reference in neutron metrology for multiple applications.
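The spectrum-averaged cross sections mentioned above follow the usual definition, the integral of σ(E)φ(E) over energy divided by the integral of φ(E). A minimal numerical sketch with an illustrative fission-like spectrum shape (not an IRDFF-II evaluation):

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule for samples y on grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def spectrum_average(E, sigma, phi):
    """Spectrum-averaged cross section: integral of sigma*phi / integral of phi."""
    return trapezoid(sigma * phi, E) / trapezoid(phi, E)

E = np.logspace(-4, 7.3, 500)   # energy grid in eV, up to ~20 MeV
phi = E * np.exp(-E / 1.4e6)    # illustrative fission-like spectrum shape
sigma = np.full_like(E, 2.0)    # constant 2-barn test cross section
avg = spectrum_average(E, sigma, phi)
```

A constant cross section recovers its own value regardless of the spectrum, a convenient sanity check; the library's analytical spectrum functions slot into this formula in place of the illustrative φ(E).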

More Details

Initial Results From the Super-Parameterized E3SM

Journal of Advances in Modeling Earth Systems

Hannah, W.M.; Jones, C.R.; Hillman, Benjamin R.; Norman, M.R.; Bader, D.C.; Taylor, Mark A.; Leung, L.R.; Pritchard, M.S.; Branson, M.D.; Lin, G.; Pressel, K.G.; Lee, J.M.

Results from the new Department of Energy super-parameterized (SP) Energy Exascale Earth System Model (SP-E3SM) are analyzed and compared to the traditionally parameterized E3SMv1 and previous studies using SP models. SP-E3SM is unique in that it utilizes Graphics Processing Unit hardware acceleration, cloud resolving model mean-state acceleration, and reduced radiation to dramatically increase the model throughput and allow decadal experiments at 100-km external resolution. It also differs from other SP models by using a spectral element dynamical core on a cubed-sphere grid and a finer vertical grid with a higher model top. Despite these differences, SP-E3SM generally reproduces the behavior of other SP models. Tropical wave variability is improved relative to E3SM, including the emergence of a Madden-Julian Oscillation and a realistic slowdown of Moist Kelvin Waves. However, the distribution of precipitation indicates an overly frequent occurrence of rain rates less than 1 mm day-1, and while the timing of diurnal rainfall shows modest improvements, the signal is not as coherent as in observations. A notable grid imprinting bias is identified in the precipitation field and attributed to a unique feedback associated with the interactions between the explicit cloud resolving model convection and the spectral element grid structure. Spurious zonal mean column water tendencies due to grid imprinting are quantified: while negligible for the conventionally parameterized E3SM, they become large with super-parameterization, approaching 10% of the physical tendencies. The implication is that finding a remedy to grid imprinting will become especially important as spectral element dynamical cores begin to be combined with explicitly resolved convection.

More Details

Generalizing information to the evolution of rational belief

Entropy

Duersch, Jed A.; Catanach, Thomas A.

Information theory provides a mathematical foundation to measure uncertainty in belief. Belief is represented by a probability distribution that captures our understanding of an outcome's plausibility. Information measures based on Shannon's concept of entropy include realization information, Kullback-Leibler divergence, Lindley's information in experiment, cross entropy, and mutual information. We derive a general theory of information from first principles that accounts for evolving belief and recovers all of these measures. Rather than simply gauging uncertainty, information is understood in this theory to measure change in belief. We may then regard entropy as the information we expect to gain upon realization of a discrete latent random variable. This theory of information is compatible with the Bayesian paradigm in which rational belief is updated as evidence becomes available. Furthermore, this theory admits novel measures of information with well-defined properties, which we explored in both analysis and experiment. This view of information illuminates the study of machine learning by allowing us to quantify information captured by a predictive model and distinguish it from residual information contained in training data. We gain related insights regarding feature selection, anomaly detection, and novel Bayesian approaches.
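The core quantities can be sketched directly from their definitions: entropy as the information expected upon realization, and Kullback-Leibler divergence as the information gained in moving belief from a prior to a posterior. The Bayesian-update example below is illustrative and not drawn from the paper.

```python
import numpy as np

def entropy(p):
    """Expected information gained upon realization of a discrete variable."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

def kl_divergence(p, q):
    """Information gained in moving belief from prior q to posterior p."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Bayesian update: a uniform prior over four hypotheses and a likelihood
# that favors the first two; the KL divergence from prior to posterior
# measures the information gained from the evidence.
prior = np.full(4, 0.25)
likelihood = np.array([0.4, 0.4, 0.1, 0.1])
posterior = prior * likelihood
posterior /= posterior.sum()
gained = kl_divergence(posterior, prior)
```

Reading KL divergence as change in belief, rather than entropy as static uncertainty, is the shift in viewpoint the paper builds its general theory around.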

More Details

The effects of atmospheric models on the estimation of infrasonic source functions at the source physics experiment

Bulletin of the Seismological Society of America

Poppeliers, Christian; Wheeler, Lauren B.; Preston, Leiph

We invert infrasound signals for an equivalent seismoacoustic source function using different atmospheric models to produce the necessary Green’s functions. The infrasound signals were produced by a series of underground chemical explosions as part of the Source Physics Experiment (SPE). In a previous study, we inverted the infrasound data using so-called predictive atmospheric models, which were based on historic, regional-scale, publicly available weather observations interpolated onto a 3D grid. For the work presented here, we invert the same infrasound data, but using atmospheric models based on weather data collected in a time window that includes the approximate time of the explosion experiments, which we term postdictive models. We build two versions of the postdictive models for each SPE event: one that is based solely on the regional-scale observations, and one that is based on regional-scale observations combined with on-site observations obtained by a weather sonde released at the time of the SPE. We then invert the observed data set three times, once for each atmospheric model type. We find that the estimated seismoacoustic source functions are relatively similar in waveform shape regardless of which atmospheric model we used to construct the Green’s functions. However, we find that the amplitude of the estimated source functions is systematically dependent on the atmospheric model type: using the predictive atmospheric models to invert the data generally yields estimated source functions that are larger in amplitude than those estimated using the postdictive models.

More Details

Experimental exploration of near-field radiative heat transfer

Annual Review of Heat Transfer

Ghashami, Mohammad; Jarzembski, Amun; Lim, Mikyung; Lee, Bong J.; Park, Keunhan

This paper presents an in-depth review of ongoing experimental research efforts to fundamentally understand the strong near-field enhancement of radiative heat transfer and make use of the underlying physics for various novel applications. Compared to theoretical studies on near-field radiative heat transfer (NFRHT), its experimental demonstration was, until recently, not explored as extensively due to technical challenges in precision gap control and heat transfer measurement. However, recent advances in micro-/nanofabrication and nanoscale instrumentation/control techniques, as well as unprecedented growth in materials science and engineering, have created remarkable opportunities to overcome the existing challenges in the measurement and engineering of NFRHT. Beginning with the pioneering works of the 1960s, this paper tracks past and current experimental efforts in NFRHT across three configurations (i.e., sphere-plane, plane-plane, and tip-plane). In addition, it briefly discusses how current challenges in experimental NFRHT research might be addressed.

More Details

An energy-based coupling approach to nonlocal interface problems

D'Elia, Marta; Capodaglio, Giacomo; Bochev, Pavel B.; Gunzburger, Max D.

Nonlocal models provide accurate representations of physical phenomena ranging from fracture mechanics to complex subsurface flows, where traditional partial differential equations fail to capture effects caused by long-range forces at the microscale and mesoscale. However, the application of nonlocal models to problems involving interfaces, such as multimaterial simulations and fluid-structure interaction, is hampered by the lack of a rigorous nonlocal interface theory needed to support numerical developments. In this paper, we use an energy-based approach to develop a mathematically rigorous nonlocal interface theory which provides a physically consistent extension of the classical perfect interface PDE formulation. Numerical examples validate the proposed framework and demonstrate the scope of our theory.

More Details

SYS645 Design for Reliability Maintainability and Supportability: H12 Universal Cartridge Carrier (circa 1952)

Foulk, James W.

The initial product specification for the H12 Universal Cartridge Carrier (UCC) was released in October 1952, making it the twelfth piece of H-Gear (sequentially numbered) ever developed. It is the oldest piece of H-Gear currently in use. To gain perspective on the amount of H-Gear designed since, the most recently developed and deployed H-Gear is the H1768 Inspection Stand. The UCC (commonly referred to as simply the "H12") has since been renamed the H12 Adjustable Hand Truck. It was developed to support various maintenance operations for ordnance assembly and disassembly. This paper provides evidence (where available) for the H12's current state of reliability, maintainability, and supportability. Where documented evidence is not available, conclusions are drawn from its continued effective use over its 67 years of service.

More Details

Arroyo Seco Improvement Program ( 2019 Annual Report)

Holland, Robert C.

The Arroyo Seco Improvement Program is being carried out at Sandia National Laboratories, California to address erosion and other streambed instability issues in the Arroyo Seco where it crosses the site. The work involves both repair of existing eroded areas and habitat enhancement. This work is being carried out under the requirements of Army Corps of Engineers permit 2006-400195S and California Regional Water Quality Control Board, San Francisco Bay Region Water Quality Certification Site No. 02-01-00987.

More Details

Momentary Cessation: Improving Dynamic Performance and Modeling of Utility-Scale Inverter Based Resources During Grid Disturbances

Guttromson, Ross; Behnke, Michael

Sandia National Laboratories worked with NERC staff to provide stakeholder guidance in responding to a May 2018 NERC alert regarding dynamic performance and modeling issues for utility-scale inverter-based resources. The NERC alert resulted from event analyses for grid disturbances that occurred in southern California in August 2016 and October 2017. Those disturbances revealed the use of momentary cessation by transmission-connected inverter-based generation: a short time period when the inverters ceased to inject current into the grid, counter to desired transmission operation. The event analyses concluded that, in many cases, the Western Interconnection system models used to determine planning and operating criteria do not reflect the actual behavior of solar plants, resulting in overly optimistic planning assessments and substandard operational responses. This technical report summarizes the gaps between the models and the actual performance observed at those times, and the guidance that Sandia and NERC provided to owners of solar PV power plants, transmission planners, transmission operators, and planning/reliability coordinators to modify existing models to reflect that actual performance.

More Details

Imaging Atomically Thin Semiconductors Beneath Dielectrics via Deep Ultraviolet Photoemission Electron Microscopy

Physical Review Applied

Ohta, Taisuke; Berg, Morgann; Liu, Fangze; Smith, Sean; Copeland, R.G.; Chan, Calvin K.; Mohite, Aditya D.; Beechem, Thomas E.

Imaging of fabricated nanostructures or nanomaterials covered by dielectrics is highly sought after for diagnostics of optoelectronic components. We show imaging of atomically thin MoS2 flakes grown on SiO2-covered Si substrates and buried beneath HfO2 overlayers up to 120 nm in thickness using photoemission electron microscopy with deep-UV photoexcitation. Comparison of photoemission yield (PEY) to modeled optical absorption evinced the formation of optical standing waves in the dielectric stacks (i.e., cavity resonances of HfO2 and SiO2 layers on Si). The presence of atomically thin MoS2 flakes modifies the optical properties of the dielectric stack locally. Accordingly, the cavity resonance condition varies between the sample locations over buried MoS2 and surrounding areas, resulting in image contrast with submicron lateral resolution. This subsurface sensitivity underscores the role of optical effects in photoemission imaging with low-energy photons. This approach can be extended to nondestructive imaging of buried interfaces and subsurface features needed for analysis of microelectronic circuits and nanomaterial integration into optoelectronic devices.

More Details

Simplified Approach for Scoping Assessment of Non-LWR Source Terms

Luxat, David L.

This report describes a structure to aid in evaluation of release mitigation strategies across a range of reactor technologies. The assessment performed for example reactor concepts utilizes previous studies of postulated accident sequences for each reactor concept. This simplified approach classifies release mitigation strategies based on a range of barriers, physical attenuation processes, and system performance. It is not, however, intended to develop quantitative estimates of radiological release magnitudes and compositions to the environment. Rather, this approach is intended to identify the characteristics of a reactor design concept's release mitigation strategies that are most important to different classes of accident scenarios. It uses a scoping methodology to provide an approximate, order-of-magnitude estimate of the radiological release to the environment and associated off-site consequences. This scoping method is applied to different reactor concepts, considering the performance of barriers to fission product release for these concepts under sample accident scenarios. The accident scenarios and sensitivity evaluations are selected in this report to evaluate the role of different fission product barriers in ameliorating the source term to the environment and the associated off-site consequences. This report applies this structure to characterize how release mitigation measures are integrated to define overall release mitigation strategies for High Temperature Gas Reactors (HTGRs), Sodium Fast Reactors (SFRs), and liquid-fueled Molten Salt Reactors (MSRs). To support this evaluation framework, factors defining a chain of release attenuation stages, and thus an overall mitigation strategy, must be established through mechanistic source term calculations. This has typically required the application of an integral plant analysis code such as MELCOR.
At present, there is insufficient evidence to support a priori evaluation of the effectiveness of a release mitigation strategy for advanced reactor concepts across the spectrum of events that could challenge the radiological containment function. While it is clear that these designs have significant margin to radiological release to the environment for the scenarios comprising the design basis, detailed studies have not yet been performed to assess the risk profile for these plants. Such studies would require extensive evaluation across a reasonably complete spectrum of accident scenarios that could lead to radiological release to the environment.

More Details

Multivariate Regression of Pyrotechnic Igniter Output

Guo, Shuyue; Cooper, Marcia

Multivariate multiple regression models are applied to simplified pyrotechnic igniters for the first time to understand how changes in manufactured parameters can affect the output gas dynamic response and the timing of ignition events. The statistical modeling technique is applied to quantify the effects of a set of independent variables measured in the as-fabricated igniters on a set of responses experimentally measured from the functioned igniters. Two independent process variables were intentionally varied following a full factorial experimental design, while several other independent variables varied within their normal manufacturing variability range. The four igniter performance responses consisted of the timing of sequential events during igniter function and the visual gas dynamic output in the form of shock wave strength observed with high-speed schlieren imaging. Linear regression models built from the measurements taken throughout the manufacturing processes and the measured output variance provide insight into the critical device parameters that dominate performance.
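The class of model the abstract describes, several responses regressed jointly on several predictors, can be sketched with ordinary least squares. The predictors and responses below are synthetic placeholders, not the paper's measured igniter parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# n observations, k predictors (stand-ins for as-fabricated parameters),
# m responses (stand-ins for event timing and shock-strength measures)
n, k, m = 40, 3, 2
X = rng.normal(size=(n, k))
B_true = np.array([[1.0, -0.5],
                   [0.0,  2.0],
                   [0.3,  0.0]])
Y = X @ B_true + 0.01 * rng.normal(size=(n, m))  # small measurement noise

# add an intercept column and solve the multivariate least-squares problem
# in one shot: each column of B_hat is the coefficient vector for one response
Xa = np.column_stack([np.ones(n), X])
B_hat, *_ = np.linalg.lstsq(Xa, Y, rcond=None)   # shape (k + 1, m)

print(np.round(B_hat[1:], 2))  # recovered coefficient matrix ≈ B_true
```

Rows of the recovered coefficient matrix with large entries flag the predictors that dominate the responses, which is the kind of insight the abstract attributes to the regression models.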

More Details

Treatment of Tilted Sonar Data for Salt Cavern Analysis

Hart, David; Roberts, Barry L.

Structural modeling and visualization of salt caverns requires three-dimensional representations. These representations are typically produced from sonar surveys conducted by companies that then produce a report of depths, distances, and volumes. There are multiple vendor-dependent formats, and, as technology has improved, surveys have evolved from horizontal-only shots, to inclined shots for ceilings and floors, to mid-cavern inclined shots. For geomechanical modeling, leaching predictions, and cavern stability visualizations, Sandia previously wrote in-house software, called SONAR8, that created a consistent geometry format from the processed sonar reports. However, the increasing need for mid-cavern inclined surveys led to the discovery of certain limitations in that code. This report describes methods used to process the multiple different formats so that inclined shots are handled in a consistent and accurate manner in our modeling efforts. A set of file formats and a database schema developed for this work are also documented in the appendices.

More Details

Software Requirements for a Consequence Management Sample Data Simulator for Training and Drills

Fournier, Sean D.; Leonard, Elliott

This document describes the requirements for a software tool that will enable FRMAC to simulate large sets of sample result data realistically based on simulated radionuclide deposition grids from NARAC. The users of this tool would be scientists involved in exercise and drill planning or members of the simulation cell of an exercise controller team. A key requirement is that this tool must be able to be run, with a reasonable amount of training and job aids, by any person within the Assessment, Laboratory Analysis, or Monitoring and Sampling divisions of the FRMAC to support any level of exercise, from a small IPX to a national-level full-scale exercise. The tool should be relatively lean and stand-alone so that the user can run it in the field with limited IT resources. This document describes the desired architecture, design characteristics, order of operations, and algorithms that can be given to a software development team to assist them in project scoping, costing, and, eventually, development.

More Details

Impact of Inverter Based Resource Negative Sequence Current Injection on Transmission System Protection

Behnke, Michael R.; Custer, Gary; Farantatos, Evangelos; Fischer, Normann; Guttromson, Ross; Isaacs, Andrew; Majumder, Rajat; Pant, Siddhart; Patel, Manish; Reddy-Konala, Venkat; Voloh, Ilia

This report documents the results of analysis performed to investigate the impact of inverter-based resource (IBR) response to unbalanced faults on transmission system protective relay dependability and security. Electromagnetic transient (EMT) simulations were performed to simulate IBR response to these faults using existing manufacturer-developed EMT models for four separate IBRs. The study team was composed of IBR manufacturers, relay manufacturers, transmission providers, reliability coordinators and industry consultants with experience in EMT simulation and system protection. The results indicate that under certain conditions, IBR response can result in protective relay misoperations if current protection practices, which were developed based on conventional power sources, are not adapted to the characteristics of IBRs.

More Details

Environmental Restorations Operations (Consolidated Quarterly Report Jul - Sep 2019)

Leigh, Christi D.

This Environmental Restoration Operations (ER) Consolidated Quarterly Report (ER Quarterly Report) provides the status of ongoing corrective action activities being implemented at Sandia National Laboratories, New Mexico (SNL/NM) during the July - September 2019 reporting period. Table I-1 lists the Solid Waste Management Units (SWMUs) and Areas of Concern (AOCs) currently identified for corrective action at SNL/NM. This section of the ER Quarterly Report summarizes the work completed during this quarterly reporting period at sites undergoing corrective action. Corrective action activities were conducted during this reporting period at the three groundwater AOCs (Burn Site Groundwater [BSG] AOC, Technical Area-V [TA-V] Groundwater [TAVG] AOC, and Tijeras Arroyo Groundwater [TAG] AOC). Corrective action activities are deferred at the Long Sled Track (SWMU 83), the Gun Facilities (SWMU 84), and the Short Sled Track (SWMU 240) because these three sites are active mission facilities. These three active mission sites are located in Technical Area-III. There were no SWMUs or AOCs in the corrective action complete regulatory process during this quarterly reporting period.

More Details

Unlimited Release of non-proprietary experimental data from the Scaled Wind Farm Technology (SWiFT) Facility

Riley, Timothy

Each year Wind Energy Technologies Dept. 08821 submits a memo through the Sandia National Labs Review and Approval (R&A) system to facilitate the release of the Scaled Wind Farm Technology (SWiFT) Facility raw logged data. This release of data explicitly does not cover specialized instruments or guest researcher instruments (e.g., SpiDAR, SpinnerLidar), nor processed data.

More Details

Criticality Control Overpack Fire Testing (Phase III)

Figueroa Faria, Victor G.; Ammerman, Douglas; Foulk, James W.; Gill, Walter

This report describes the single test conducted during Phase III of the Pipe Overpack Container (POC) test campaign, presents preliminary results from this test, and discusses implications for the Criticality Control Overpack (CCO). The goal of this test was to determine whether aerosol surrogate material inside the Criticality Control Container (CCC) is released when the drum lid of the CCO comes off during a thirty-minute, fully engulfing fire test. As expected from the POC tests conducted in Phases I and II of this campaign, the CCO drum lid was ejected about one minute after the drum was exposed to fully engulfing flames. The remaining pressure inside the drum was high enough to eject the top plywood dunnage a considerable distance from the drum. Subsequently, most of the bottom plywood dunnage supporting the CCC burned off during and after the fire. High pressure buildup inside the CCC and inside the two primary containers holding the surrogate powder also damaged the filter media of the CCC and the filter-housing thread attachment of the primary canisters. No discernible release of surrogate powder material was detected from the two primary containers when pre- and post-test average masses were compared. However, when the average masses are corrected to account for possible uncertainties in mass measurements, error overlap does not preclude the possibility that some surrogate powder mass was lost from these primary canisters. Still, the post-test condition of the secondary canisters enclosing these two primary canisters suggests it is very unlikely this mass loss would have escaped into the CCC.

More Details

Characterizing Dynamic Test Fixtures Through the Modal Projection Error

Schoenherr, Tyler F.; Rouse, Jerry W.; Harvie, Julie

Across many industries and engineering disciplines, physical components and systems of components are designed and deployed into their environments of intended use. The design agency wants to be able to predict whether its component or system will survive its physical environment or fail due to mechanical stresses. One method to determine whether the component will survive the environment is to expose it to a simulation of that environment in a laboratory. One difficulty in doing this is that the component may not have the same boundary condition in the laboratory as in the field configuration. This paper presents a novel method of quantifying, in the modal domain, the error that arises from the impedance difference between the laboratory test fixture and the next level of assembly in the field configuration. The error is calculated from the projection of one mode shape space onto the other, and it is expressed in terms of each mode of the field configuration. This provides insight into how effectively the test fixture recreates the mode shapes of the field configuration. A case study is presented to show that the error in the modal projection between the two configurations is a lower limit for the error that can be achieved by a laboratory test.

More Details

Scan of an Unpublished Report: "Fission Product Behavior During Severe LWR Accidents: Recommendations for the MELCOR Code System"

Powers, D.A.; Sprung, J.L.; Leigh, C.D.

This document provides a scanned version of a 1987 SAND report that was never formally published. However, this report was referenced within the MELCOR Reference Manual and, therefore, provides historical information and technical basis for the MELCOR code. This document is being made available to give users of the MELCOR code access to the information. The title page has been edited to prevent any confusion with regard to possible documentation identifiers, such as the SAND report number or the intended date of publication. Beyond these modifications, a cover, distribution, and back cover are prepended and appended to the document to conform to modern SAND report style guidelines. The first four chapters of this report were updated and released under the title "Fission Product Behavior During Severe LWR Accidents: Recommendations for the MELCOR Code System. Volume I" and were made available by the U.S. NRC through the Adams database under accession number ML19227A327. No prior release of the remaining content of this report has occurred.

More Details

An Assessment of the Potential for Utility-Scale Solar Energy Development on the Navajo Nation

Sneezer, Sherralyn

The Navajo Nation covers about 27,000 square miles in the Southwestern United States with approximately 270 sunny days a year. Therefore, the Navajo Nation has the potential to develop utility-scale solar photovoltaic (PV) energy for the Navajo people and export electricity to major cities to generate revenues. In April 2019, the Navajo Nation issued a proclamation to increase residential and utility-scale renewable energy development on the Navajo Nation. In response, this research assesses the potential for utility-scale solar energy development on the Navajo Nation using criteria such as access to roads, transmission lines, slope/terrain data, aspect/direction, and culturally sensitive sites. These datasets are applied as layers using ArcGIS to identify regions that have good potential for utility-scale solar PV installations. Land availability on the Navajo Nation has been an issue for developing utility-scale solar PV, so this study proposes potential locations for solar PV and estimates how much energy these potential sites could generate. Furthermore, two coal-fired power plants, the Navajo Generating Station (NGS) and the San Juan Generating Station (SJGS), will close soon and impact the Navajo Nation's energy supply and economy. This study seeks to answer two main questions: whether utility-scale solar energy could replace the energy generated by both coal-fired power plants, and what percentage of the Navajo Nation's energy demands can be met by utility-scale solar energy development. Economic development is a major concern; therefore, this study also examines what utility-scale solar development will mean for the Navajo Nation economy. The results of this study show that the Navajo Nation has a potential PV capacity of 45,729 MW to 91,459 MW. Even with the lowest calculated capacity, utility-scale solar PV has the potential to generate more than 11 times the power of the NGS and SJGS combined.

More Details

Preliminary Assessment of Potential for Wind Energy Technology on the Turtle Mountain Band of Chippewa Reservation

Lavallie, Sarah

Wind energy can provide renewable and sustainable electricity to Native American reservations, including rural homes, and power schools and businesses on reservations. It can also provide tribes with a source of income and economic development. The purpose of this paper is to determine the potential for deploying community and utility-scale wind renewable technologies on the Turtle Mountain Band of Chippewa tribal lands. Ideal areas for wind technology development were investigated based on annual wind resources, terrain, land usage, and other factors such as culturally sensitive sites. The result is a preliminary assessment of wind energy potential on Turtle Mountain lands, which can be used to justify further investigation and investment into determining the feasibility of future wind technology projects.

More Details

Sizing Small-Scale Renewable Energy Systems for the Navajo Nation and Rural Communities

Singer, Callie

The Navajo Nation consists of about 55,000 residential homes spread across 27,000 square miles of trust land in the Southwest region of the United States. The Navajo Tribal Utility Authority (NTUA) reports that approximately 15,000 homes on the reservation do not have electricity due to the high costs of connecting rural homes located miles from utility distribution lines. In order to give these rural homeowners access to electricity, NTUA and other Native-owned companies are examining small-scale renewable energy systems to provide power for necessary usage such as lighting and refrigeration. The goal of this study is to evaluate the current renewable deployment efforts and provide additional considerations for photovoltaic (PV) systems that will optimize performance and improve efficiency to reduce costs. Three case studies are presented for different locations on the Navajo Nation with varying solar resource and energy load requirements. For each location, an assessment is completed that includes environmental parameters of the site-specific landscape and a system performance analysis of an off-grid residential PV system. The technical process, repeated for each location, demonstrates how the variance and uniqueness of each household can impact the system requirements after optimizations are applied. Therefore, household variability and differences in location must be considered. The differing results of the case studies suggest that designing small-scale PV systems requires additional, home-land-family specific analysis to allow better efficiency and more flexibility for future solar innovations aimed at overall cost reductions.

More Details

SPND Sensitivity Calculations Using MCNP and Experimental Data from ACRR

King, Joseph; Miller, Aaron M.; Parma, Edward J.

The use of the Monte Carlo N-Particle Transport Code (MCNP) to calculate detector sensitivity for Self-Powered Neutron Detectors (SPNDs) in the Annular Core Research Reactor (ACRR) could be a vital tool in the effort to optimize the design of next-generation SPNDs. Next-generation SPND designs, which consider specific materials and geometry, may provide experimenters with capabilities for advanced mixed field dosimetry. These detectors will need to be optimized for configuration, materials, and geometries, and the ability to model and iterate must be available in order to decide on the ideal SPND design. SPNDs were modeled in MCNP to closely resemble the dimensions and location of actual detectors used in the ACRR. Tallies were used to calculate detector sensitivity. Using metrics from a previous report, oscilloscope data from pulses were processed in a MATLAB script to calculate experimental detector sensitivity. This report outlines the process by which experimental data from ACRR pulses verified results from tallies in an MCNP ACRR model. The sensitivity values from experiments and MCNP calculations agreed within one standard deviation. Parametric studies were also performed with MCNP to investigate the effects of materials and dimensions of different SPNDs.

More Details

Annular Core Research Reactor (ACRR) Pulse Curve Characterization

Saucier, David H.; Parma, Edward

Reactor pulse characterization at the Annular Core Research Reactor (ACRR) at Sandia Technical Area V (TA-V) is commonly done with photoconductive detection (PCD) and calorimeter detectors. Each of these offers a mode of analyzing a digital signal, with different advantages and methods for determining integrated dose or temporal metrology. This report outlines a method and code that takes the millions of data points from such detectors and delivers a characteristic pulse trendline through two main methods: digital signal filtration and machine learning, in particular support vector machines (SVMs). Each method's endpoint is to deliver a characteristic curve for the many bucket environments of the ACRR while considering other points of interest, including delayed-gamma removal for prompt dose metrics. This work draws on and adds to previous work detailing the delayed gamma fraction contributions from CINDER simulations of the ACRR. Results from this project show a method to determine characteristic curves in a way that has previously been limited by data set size.
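The signal-filtration side of such a workflow can be sketched with a minimal numpy example: a synthetic noisy pulse record smoothed with a zero-phase boxcar filter to recover a characteristic trendline. The pulse shape and parameters below are illustrative, not ACRR data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2001)
pulse = np.exp(-0.5 * ((t - 0.4) / 0.05) ** 2)   # idealized pulse shape
trace = pulse + 0.1 * rng.normal(size=t.size)    # noisy digitizer record

# zero-phase smoothing: convolve with a symmetric boxcar kernel in 'same'
# mode, so the filtered trendline introduces no time shift
width = 101
kernel = np.ones(width) / width
trend = np.convolve(trace, kernel, mode="same")

peak_time = t[np.argmax(trend)]
print(peak_time)  # ≈ 0.40, the true pulse peak time
```

Averaging over the 101-sample window cuts the noise standard deviation by roughly a factor of ten, so temporal metrics such as the peak time can be read off the trendline rather than the raw trace.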

More Details

Fatigue and Fracture Behavior of Additively Manufactured Austenitic Stainless Steel

Structural Integrity of Additive Manufactured Parts

San Marchi, Chris; Smith, Thale R.; Sugar, Joshua D.; Balch, Dorian K.

Additive manufacturing (AM) includes a diverse suite of innovative manufacturing processes for producing near-net shape components, typically from powder or wire feedstock. Reported mechanical properties of AM materials vary significantly depending on the details of the manufacturing process and the characteristics of the processing defects (namely, lack-of-fusion defects). However, an excellent combination of strength, ductility, and fracture resistance can be achieved in AM type 304L and 316L austenitic stainless steels by minimizing processing defects. It is also important to recognize that localized solidification processing during AM produces microstructures more analogous to weld microstructures than wrought microstructures. Consequently, the mechanical behavior of AM austenitic stainless steels in harsh environments can diverge from the performance of wrought materials. This report gives an overview of the fracture and fatigue response of type 304L materials from both directed energy deposition (DED) and powder bed fusion (PBF) techniques. In particular, the mechanical performance of these materials is considered for high-pressure hydrogen applications by evaluating fatigue and fracture resistance after thermal precharging of test specimens in high-pressure gaseous hydrogen. The mechanical behaviors are considered with respect to previous reports on hydrogen-assisted fracture of austenitic stainless steel welds and the unique characteristics of the AM microstructures. Fatigue crack growth can be relatively insensitive to processing defects, displaying similar behavior as wrought materials. Fracture resistance of dense AM austenitic stainless steel, on the other hand, is more consistent with weld metal than with compositionally similar wrought materials. Hydrogen effects in the AM materials are generally more severe than in wrought materials, but comparable to measurements on welded austenitic stainless steels in hydrogen environments.
While hydrogen-assisted fracture manifests differently in welded and AM austenitic stainless steel, the fracture process appears to have a common origin in the compositional microsegregation intrinsic to solidification processes.

More Details

Hyper-Differential Sensitivity Analysis of Uncertain Parameters in PDE-Constrained Optimization

International Journal for Uncertainty Quantification

Van Bloemen Waanders, Bart

Many problems in engineering and sciences require the solution of large scale optimization constrained by partial differential equations (PDEs). Though PDE-constrained optimization is itself challenging, most applications pose additional complexity, namely, uncertain parameters in the PDEs. Uncertainty quantification (UQ) is necessary to characterize, prioritize, and study the influence of these uncertain parameters. Sensitivity analysis, a classical tool in UQ, is frequently used to study the sensitivity of a model to uncertain parameters. In this article, we introduce "hyper-differential sensitivity analysis" which considers the sensitivity of the solution of a PDE-constrained optimization problem to uncertain parameters. Our approach is a goal-oriented analysis which may be viewed as a tool to complement other UQ methods in the service of decision making and robust design. We formally define hyper-differential sensitivity indices and highlight their relationship to the existing optimization and sensitivity analysis literatures. Assuming the presence of low rank structure in the parameter space, computational efficiency is achieved by leveraging a generalized singular value decomposition in conjunction with a randomized solver which converts the computational bottleneck of the algorithm into an embarrassingly parallel loop. Two multi-physics examples, consisting of nonlinear steady state control and transient linear inversion, demonstrate efficient identification of the uncertain parameters which have the greatest influence on the optimal solution.
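The randomized strategy the abstract describes, converting the dominant-subspace computation into independent random probes of the operator, can be sketched with a generic randomized SVD in the style of Halko et al. The explicit low-rank matrix below is a stand-in for the paper's GSVD-based sensitivity operator, not its actual formulation.

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, rng=None):
    # Sketch the range of A with random probe vectors: each column of Y is an
    # independent matrix-vector product, hence an embarrassingly parallel loop.
    rng = rng or np.random.default_rng(0)
    m, n = A.shape
    Omega = rng.normal(size=(n, rank + n_oversample))
    Y = A @ Omega                        # range sketch
    Q, _ = np.linalg.qr(Y)               # orthonormal basis for the sketch
    B = Q.T @ A                          # small (rank + p) x n reduced problem
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt                 # lift left vectors back to full space

# explicit rank-5 matrix standing in for a low-rank sensitivity operator
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 300))
U, s, Vt = randomized_svd(A, rank=5)
print(np.round(s[:6], 3))  # five significant singular values; the sixth is ~0
```

With low-rank structure present, only rank-plus-oversampling probes of the operator are needed, which is what makes the overall sensitivity computation tractable at PDE scale.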

More Details
Results 18801–19000 of 99,299