Accurate diagnosis of failures is critical for meeting photovoltaic (PV) performance objectives and avoiding safety concerns. This analysis focuses on the classification of field-collected string-level current-voltage (IV) curves representing baseline, partial soiling, and cracked failure modes. Specifically, multiple neural network-based architectures (including convolutional and long short-term memory) are evaluated using domain-informed parameters across different portions of the IV curve and a range of irradiance thresholds. The analysis identified two models that classified the relatively small dataset (400 samples) with high accuracy (99%+). Findings also indicate optimal irradiance thresholds and opportunities to improve classification by focusing on specific portions of the IV curve. Such advancements are critical for expanding accurate classification of PV faults, especially for those with low power loss (e.g., cracked cells) or visibly similar IV curve profiles.
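For readers unfamiliar with the model class evaluated above, the sketch below shows a minimal 1-D convolutional classifier for IV curves, assuming curves resampled to a fixed number of (voltage, current) points; the layer sizes, class count, and the `IVCurveCNN` name are illustrative assumptions, not the architectures from this study.

```python
import torch
import torch.nn as nn

class IVCurveCNN(nn.Module):
    """Minimal 1-D CNN for three-class IV-curve classification
    (baseline, partial soiling, cracked). Layer sizes are illustrative."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2),  # 2 channels: voltage and current
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):               # x: (batch, 2, n_points)
        return self.classifier(self.features(x).squeeze(-1))

# Classify a batch of 8 synthetic curves resampled to 200 points each.
model = IVCurveCNN()
logits = model(torch.randn(8, 2, 200))
print(logits.argmax(dim=1))
```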
A MELCOR severe accident nuclear reactor code study of alkaline carbonate cooling to mitigate an ex-vessel molten corium accident is described. This study is part of a 3-year laboratory directed research and development project funded by Sandia National Laboratories. It examines a novel method to provide an injectable mitigation system, capitalizing on the endothermic decomposition of alkaline carbonate to absorb the decay heat and cool the molten corium resulting from a reactor vessel failure accident. A simplified granular carbonate decomposition model has been developed and implemented in a MELCOR input model to simulate the cooling effect of the carbonate in both a spreading experiment and a full plant accident model. The results suggest the approach can stop corium spreading and delay accident progression by at least half a day, which may be enough time for additional accident management measures to alleviate the situation.
Risk assessment of nuclear power plants (NPPs) is commonly driven by computer modeling which tracks the evolution of NPP events over time. To capture interactions between nuclear safety and nuclear security, multiple system codes, each of which specializes in one of these domains, may need to be linked with information transfer among the codes. A systems analysis based on fixed-length time blocks is proposed to allow such linking within the ADAPT framework without needing to predetermine the order in which the safety/security codes interact. A case study using two instances of the Scribe3D code demonstrates the concept and shows agreement with results from a direct solution.
Sandia National Laboratories is committed to being an informed, compassionate and contributing neighbor in our local communities. This commitment has been demonstrated throughout Sandia's history and is an enduring part of our future. New Mexico faces many challenges, including one of the highest childhood poverty rates in the United States and one of the lowest educational proficiency rates. Lack of affordable housing and insufficient educational achievement are issues in the Bay Area near our Livermore site. To address the greatest challenges faced in Sandia's communities of Albuquerque, NM, and Livermore, CA, and other remote sites, Sandia's contributions leverage resources in three critical focus areas. In 2019, National Technology and Engineering Solutions of Sandia contributed $1.4M to our local communities, including $151K in the Livermore area.
Ducted fuel injection (DFI) is a technique to attenuate soot formation in compression ignition engines relative to conventional diesel combustion (CDC). The concept is to inject fuel through a small tube inside the combustion chamber to reduce equivalence ratios in the autoignition zone relative to CDC. DFI has been studied at loads as high as 8.5 bar gross indicated mean effective pressure (IMEPg) and as low as 2.5 bar IMEPg using a four-orifice fuel injector. Across previous studies, DFI has been shown to attenuate soot emissions, increase NOx emissions (at constant charge dilution), and slightly decrease fuel conversion efficiencies for most tested points. This study expands on the previous work by testing 1.1 bar IMEPg (low-load/idle) conditions and 10 bar IMEPg (higher-load) conditions with the same four-orifice fuel injector, as well as examining potential causes of the degradations in NOx emissions and fuel conversion efficiencies. DFI and CDC are directly compared at each operating point in the study. At the low-load condition, the intake charge dilution was swept to elucidate the soot and NOx performance of DFI. The low-load range is important because it is the target of impending, more-stringent emissions regulations, and DFI is shown to be a potentially effective approach for helping to meet these regulations. The results also indicate that DFI likely has slightly decreased fuel conversion efficiencies relative to CDC. The increase in NOx emissions with DFI is likely due to longer charge gas residence times at higher temperatures, which arise from shorter combustion durations and advanced combustion phasing relative to CDC.
We consider the problem of decomposing higher-order moment tensors, i.e., the sum of symmetric outer products of data vectors. Such a decomposition can be used to estimate the means in a Gaussian mixture model and for other applications in machine learning. The dth-order empirical moment tensor of a set of p observations of n variables is a symmetric d-way tensor. Our goal is to find a low-rank tensor approximation comprising r < p symmetric outer products. The challenge is that forming the empirical moment tensor costs O(pn^d) operations and O(n^d) storage, which may be prohibitively expensive; additionally, the algorithm to compute the low-rank approximation costs O(n^d) per iteration. Our contribution is to avoid forming the moment tensor, computing the low-rank tensor approximation of the moment tensor implicitly using O(pnr) operations per iteration and no extra memory. This advance opens the door to more applications of higher-order moments since they can now be efficiently computed. We present numerical evidence of the computational savings and show an example of estimating the means for higher-order moments.
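The key identity behind the implicit computation is that contracting the moment tensor with a vector never requires forming it. The sketch below is a minimal NumPy illustration of that identity for a third-order moment tensor, not the paper's full algorithm; the variable names are assumptions for illustration.

```python
import numpy as np

def implicit_moment_contraction(X, a, d):
    """Contract the d-th order empirical moment tensor
    T = (1/p) * sum_i x_i^(outer d) with the vector `a` along d-1 modes,
    without ever forming the O(n^d) tensor.  Cost: O(p*n) per call."""
    w = (X @ a) ** (d - 1)           # (x_i . a)^(d-1) for each observation
    return (X.T @ w) / X.shape[0]    # weighted sum of the observations

# Verify against the explicitly formed tensor for a tiny case (d = 3).
rng = np.random.default_rng(0)
p, n, d = 50, 4, 3
X = rng.standard_normal((p, n))
a = rng.standard_normal(n)

T = sum(np.einsum('i,j,k->ijk', x, x, x) for x in X) / p   # explicit: O(p*n^d)
explicit = np.einsum('ijk,j,k->i', T, a, a)
assert np.allclose(explicit, implicit_moment_contraction(X, a, d))
```

Repeating this contraction for each of the r components of the low-rank model gives the O(pnr) per-iteration cost quoted above.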
We present a Fourier analysis of wave propagation problems subject to a class of continuous and discontinuous discretizations using high-degree Lagrange polynomials. This allows us to obtain explicit analytical formulas for the dispersion relation and group velocity and, for the first time to our knowledge, characterize analytically the emergence of gaps in the dispersion relation at specific wavenumbers, when they exist, and compute their specific locations. Wave packets with energy at these wavenumbers will fail to propagate correctly, leading to significant numerical dispersion. We also show that the Fourier analysis generates mathematical artifacts, and we explain how to remove them through a branch selection procedure conducted by analysis of eigenvectors and associated reconstructed solutions. The higher frequency eigenmodes, named erratic in this study, are also investigated analytically and numerically.
A programmable logic controller (PLC) emulation methodology can dramatically reduce the cost of high-fidelity operational technology (OT) network emulation without compromising specific functionality. A PLC emulation methodology is developed as part of an ongoing effort at the University of New Mexico's Institute for Space and Nuclear Power Studies (UNM-ISNPS) in collaboration with Sandia National Laboratories (SNL) to develop an Emulytics™ platform to support cybersecurity analyses of the instrumentation and control (I&C) systems of pressurized water reactors (PWRs). This methodology identifies and characterizes key physical and digital signatures of interest. The obtained and displayed digital signatures include the network response, traffic, and software version, while the selected physical signatures include the actuation response time and sampling time. An extensive validation analysis is performed to characterize the signatures of the real, hardware-based PLC and the emulated PLC. These signatures are then compared to quantify differences and identify optimum settings for the emulation fidelity.
An optically-segmented single-volume scatter camera is being developed to image MeV-energy neutron sources. The design employs long, thin, optically isolated organic scintillator pillars with 5 mm × 5 mm × 200 mm dimensions (i.e., an aspect-ratio of 1:1:40). Teflon reflector is used to achieve optical isolation and improve light collection. The effect of Teflon on the ability to resolve the radiation interaction locations along such high aspect-ratio pillars is investigated. It was found that reconstruction based on the amplitude of signals collected on both ends of a bare pillar is less precise than reconstruction based on their arrival times. However, this observation is reversed after wrapping in Teflon, such that there is little to no improvement in reconstruction resolution calculated by combining both methods. It may be possible to use another means of optical isolation that does not require wrapping each individual pillar of the camera.
We present a general machine learning algorithm for boundary detection within general signals based on an efficient, accurate, and robust approximation of the universal normalized information distance. Our approach uses an adaptive sliding information distance (SLID) combined with a wavelet-based approach for peak identification to locate the boundaries. Special emphasis is placed on developing an adaptive formulation of SLID to handle general signals with multiple unknown and/or drifting section lengths. Although specialized algorithms may outperform SLID when domain knowledge is available, those algorithms are limited to specific applications and do not generalize; SLID excels precisely where such domain knowledge is unavailable. We demonstrate the versatility and efficacy of SLID on a variety of signal types, including synthetically generated sequences of tokens, binary executables for reverse engineering applications, and time series of seismic events.
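The normalized information distance is uncomputable, but it is commonly approximated with a compressor-based normalized compression distance. The sketch below shows that generic idea in a fixed-window sliding form; it is an assumed, simplified stand-in for SLID (which is adaptive and pairs the distance with wavelet-based peak finding), and the window, step, and test signal are illustrative.

```python
import zlib
import numpy as np

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a practical, compressor-based
    approximation of the (uncomputable) normalized information distance."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def sliding_distance(signal: bytes, window: int, step: int) -> np.ndarray:
    """Distance between adjacent windows; peaks suggest section boundaries."""
    scores = []
    for start in range(0, len(signal) - 2 * window, step):
        left = signal[start:start + window]
        right = signal[start + window:start + 2 * window]
        scores.append(ncd(left, right))
    return np.array(scores)

# Two sections with very different statistics joined at byte 4000:
# the adjacent-window distance peaks near the join.
rng = np.random.default_rng(0)
sig = bytes([65] * 4000) + rng.integers(0, 256, 4000, dtype=np.uint8).tobytes()
scores = sliding_distance(sig, window=512, step=128)
print("peak near byte", 128 * int(scores.argmax()) + 512)
```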
Di Matteo, Olivia; Gamble, John; Granade, Chris; Rudinger, Kenneth M.; Wiebe, Nathan
As increasingly impressive quantum information processors are realized in laboratories around the world, robust and reliable characterization of these devices is now more urgent than ever. These diagnostics can take many forms, but one of the most popular categories is tomography, where an underlying parameterized model is proposed for a device and inferred by experiments. Here, we introduce and implement efficient operational tomography, which uses experimental observables as these model parameters. This addresses a problem of ambiguity in representation that arises in current tomographic approaches (the gauge problem). Solving the gauge problem enables us to implement operational tomography efficiently in a computational Bayesian framework, and hence gives us a natural way to include prior information and quantify uncertainty in fit parameters. We demonstrate this new tomography in a variety of experimentally relevant scenarios, including standard process tomography, Ramsey interferometry, randomized benchmarking, and gate set tomography.
Recent advances in horizontal cask designs for commercial spent nuclear fuel have significantly increased maximum thermal loading. This is due in part to greater efficiency in internal conduction pathways. Carefully measured data sets generated from testing of full-sized casks or smaller cask analogs are widely recognized as vital for validating thermal-hydraulic models of these storage cask designs. While several testing programs have been previously conducted, these earlier validation studies did not integrate all the physics or components important in a modern, horizontal dry cask system. The purpose of this investigation is to produce data sets that can be used to benchmark the codes and best practices presently used to calculate cladding temperatures and induced cooling air flows in modern, horizontal dry storage systems. The horizontal dry cask simulator (HDCS) has been designed to generate this benchmark data and complement the existing knowledge base. Transverse and axial temperature profiles along with induced-cooling air flow are measured using various backfills of gases for a wide range of decay powers and canister pressures. The data from the HDCS tests will be used to host a blind model validation effort.
Digital Instrumentation and Control (I&C) systems in critical energy infrastructure, including nuclear power plants, raise cybersecurity concerns. Cyber-attack campaigns have targeted digital Programmable Logic Controllers (PLCs) used for monitoring and autonomous control. This paper describes the Nuclear Instrumentation and Control Simulation (NICSim) platform for emulating PLCs and investigating potential vulnerabilities of the I&C systems in nuclear power plants. It is being developed at the University of New Mexico's Institute for Space and Nuclear Power Studies (UNM-ISNPS), in collaboration with Sandia National Laboratories (SNL), with high-fidelity emulytics and modeling capabilities built around a physics-based, dynamic model of a PWR nuclear power plant. The NICSim platform will be linked to the SCEPTRE framework at SNL to emulate the response of the plant digital I&C systems during nominal operation and while under cyber-attack.
We propose a very large family of benchmarks for probing the performance of quantum computers. We call them volumetric benchmarks (VBs) because they generalize IBM's benchmark for measuring quantum volume [1]. The quantum volume benchmark defines a family of square circuits whose depth d and width w are the same. A volumetric benchmark defines a family of rectangular quantum circuits, for which d and w are uncoupled to allow the study of time/space performance trade-offs. Each VB defines a mapping from circuit shapes, i.e., (w, d) pairs, to test suites C(w, d). A test suite is an ensemble of test circuits that share a common structure. The test suite C for a given circuit shape may be a single circuit C, a specific list of circuits {C1, ..., CN} that must all be run, or a large set of possible circuits equipped with a distribution Pr(C). The circuits in a given VB share a structure, which is limited only by designers' creativity. We list some known benchmarks, and other circuit families, that fit into the VB framework: several families of random circuits, periodic circuits, and algorithm-inspired circuits. The last ingredient defining a benchmark is a success criterion that defines when a processor is judged to have “passed” a given test circuit. We discuss several options. Benchmark data can be analyzed in many ways to extract many properties, but we propose a simple, universal graphical summary of results that illustrates the Pareto frontier of the d vs. w trade-off for the processor being benchmarked.
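As a concrete (and deliberately simplified) illustration of the shape-to-suite mapping described above, the sketch below represents circuits as lists of gate-label layers and samples a test suite for each rectangular shape; the gate set, suite size, and function names are assumptions, not part of the VB definition.

```python
import random

GATE_SET = ("X90", "Y90", "CZ")   # illustrative gate labels only

def random_circuit(width: int, depth: int):
    """One sample from Pr(C) for shape (w, d): a depth-d list of layers,
    each layer assigning a gate label to each of the w qubits."""
    return [[random.choice(GATE_SET) for _ in range(width)] for _ in range(depth)]

def test_suite(width: int, depth: int, n_circuits: int = 20):
    """C(w, d): an ensemble of circuits sharing a common structure."""
    return [random_circuit(width, depth) for _ in range(n_circuits)]

# Unlike quantum volume's square circuits (w = d), a VB sweeps a grid of
# rectangular shapes so time/space trade-offs can be examined separately.
shapes = [(w, d) for w in (2, 4, 8) for d in (4, 16, 64)]
suites = {shape: test_suite(*shape) for shape in shapes}
print(len(suites[(4, 16)]), "circuits of depth", len(suites[(4, 16)][0]))
```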
Estimation of the uncertainty in a critical experiment attributable to uncertainties in the measured experiment temperature is done by calculating the variation of the eigenvalue of a benchmark configuration as a function of temperature. In the low-enriched water-moderated critical experiments performed at Sandia, this is done by 1) estimating the effects of changing the water temperature while holding the UO2 fuel temperature constant, 2) estimating the effects of changing the UO2 temperature while holding the water temperature constant, and 3) combining the two results. This assumes that the two effects are separable. The results of such an analysis are nonintuitive and need experimental verification. Critical experiments are being planned at Sandia National Laboratories (Sandia) to measure the effect of temperature on critical systems and will serve to test the methods used in estimating the temperature effects in critical experiments.
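A compact way to state the separability assumption described above is shown below, with k denoting the benchmark eigenvalue and T_w, T_f the water and fuel temperatures; the notation is ours, for illustration, and the actual combination rule used in a given evaluation may differ.

```latex
% Separability assumption: vary one temperature while holding the other fixed,
% then combine the two contributions.
\Delta k(\Delta T) \;\approx\;
  \underbrace{\Delta k\big|_{T_f\,\mathrm{const}}(\Delta T_w)}_{\text{water effect}}
  \;+\;
  \underbrace{\Delta k\big|_{T_w\,\mathrm{const}}(\Delta T_f)}_{\text{fuel effect}}
```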
Aging plants, efficiency goals, and safety needs are driving increased digitalization in nuclear power plants (NPP). Security has always been a key design consideration for NPP architectures, but increased digitalization and the emergence of malware such as Stuxnet, CRASHOVERRIDE, and TRITON that specifically target industrial control systems have heightened concerns about the susceptibility of NPPs to cyber attacks. The cyber security community has come to realize the impossibility of guaranteeing the security of these plants with 100% certainty, so demand for including resilience in NPP architectures is increasing. Whereas cyber security design features often focus on preventing access by cyber threats and ensuring confidentiality, integrity, and availability (CIA) of control systems, cyber resilience design features complement security features by limiting damage, enabling continued operations, and facilitating a rapid recovery from the attack in the event control systems are compromised. This paper introduces the REsilience VeRification UNit (RevRun) toolset, a software platform that was prototyped to support cyber resilience analysis of NPP architectures. Researchers at Sandia National Laboratories have recently developed models of NPP control and SCADA systems using the SCEPTRE platform. SCEPTRE integrates simulation, virtual hardware, software, and actual hardware to model the operation of cyber-physical systems. RevRun can be used to extract data from SCEPTRE experiments and to process that data to produce quantitative resilience metrics of the NPP architecture modeled in SCEPTRE. This paper details how RevRun calculates these metrics in a customizable, repeatable, and automated fashion that limits the burden placed upon the analyst. This paper describes RevRun's application and use in the context of a hypothetical attack on an NPP control system. The use case specifies the control system and a series of attacks and explores the resilience of the system to the attacks. The use case further shows how to configure RevRun to run experiments, how resilience metrics are calculated, and how the resilience metrics and RevRun tool can be used to conduct the related resilience analysis.
Aerosol jet printing offers a versatile, high-resolution prototyping capability for flexible and hybrid electronics. Despite its rapid growth in recent years, persistent problems such as process drift hinder the adoption of this technology in production environments. Here we explore underlying causes of process drift during aerosol jet printing and introduce an engineered solution to improve deposition stability. It is shown that the ink level within the cartridge is a critical factor in determining atomization efficiency, such that the reduction in ink volume resulting from printing itself can induce significant and systematic process drift. By integrating a custom 3D-printed cartridge with an ink recirculation system, ink composition and level within the cartridge are better maintained. This strategy allows extended duration printing with improved stability, as evidenced by 30 h of printing over 5 production runs. This provides an important tool for extending the duration and improving reliability for aerosol jet printing, a key factor for integration in practical manufacturing operations.
The light output, time resolution, pulse shape discrimination (PSD), neutron light output, and interaction position reconstruction of melt-cast small-molecule organic glass bar scintillators were measured. The trans-stilbene organic scintillator detects fast neutrons and gamma rays with high efficiency and exhibits excellent PSD, but the manufacturing process is slow and expensive and its light output in response to neutrons is anisotropic. Small-molecule organic glass bars offer an easy-to-implement and cost-effective solution to these problems. These properties were characterized to evaluate the efficacy of constructing a compact, low-voltage neutron and gamma-ray imaging system using organic glass bars coupled to silicon photomultiplier arrays. A complete facility for melt-casting organic glass scintillators was set up at the University of Michigan. 6×6×50 mm³ glass bars were produced and the properties listed above were characterized. The first neutron image using organic glass was produced using simple backprojection.
Validation of the extent of water removal in a dry storage system using an industrial vacuum drying procedure is needed. Water remaining in casks upon completion of vacuum drying can lead to cladding corrosion, embrittlement, and breaching, as well as fuel degradation. In order to address the lack of time-dependent industrial drying data, this study employs a vacuum drying procedure to evaluate the efficiency of water removal over time in a scaled system. Isothermal conditions are imposed to generate baseline pressure and moisture data for comparison to future tests under heated conditions. A pressure vessel was constructed to allow for the emplacement of controlled quantities of water and connections to a pumping system and instrumentation. Measurements of pressure and moisture content were obtained over time during sequential vacuum hold points, where the vacuum flow rate was throttled to draw pressures from 100 torr down to 0.7 torr. The pressure rebound, dew point, and water content were observed to eventually diminish with increasingly lower hold points, indicating a reduction in retained water.
Laser vibrometry has become a mature technology for structural dynamics testing, enabling many measurements to be obtained in a short amount of time without mass-loading the part. Recently, multi-point laser vibrometers consisting of 48 or more measurement channels have been introduced to overcome some of the limitations of scanning systems, namely the inability to measure multiple data points simultaneously. However, measuring or estimating the alignment (Euler angles) of many laser beams for a given test setup remains tedious, can require a significant amount of time to complete, and adds an unquantified source of uncertainty to the measurement. This paper introduces an alignment technique for the multi-point vibrometer system that utilizes photogrammetry to triangulate laser spots, from which the Euler angles of each laser head relative to the test coordinate system can be determined. The generated laser beam vectors can be used to automatically create a test geometry and channel table. While the approach described was performed manually for proof of concept, it could be automated using the scripting tools within the vibrometer system.
The National Solar Thermal Test Facility (NSTTF) at Sandia National Laboratories is conducting research on a Generation 3 Particle Pilot Plant (G3P3) that uses falling sand-like particles as the heat transfer medium. The system will include a thermal energy storage (TES) bin with a capacity of 6 MWht requiring ~120,000 kg of flowing particles. Testing and modeling were conducted to develop a validated modeling tool to understand temporal and spatial temperature distributions within the storage bin as it charges and discharges. Flow and energy transport in funnel flow were modeled using volume-averaged conservation equations coupled with level set interface tracking equations that prescribe the dynamic geometry of particle flow within the storage bin. A thin layer of particles on top of the particle bed was allowed to flow toward the center and into the flow channel above the outlet. Model results were validated using particle discharge temperatures taken from thermocouples mounted throughout a small steel bin. The model was then used to predict heat loss during charging, storing, and discharging operational modes at the G3P3 scale. Comparative results from the modeling and testing of the small bin indicate that the model captures many of the salient features of the transient particle outlet temperature over time.
International Conference on Nuclear Engineering, Proceedings, ICONE
Laros, James H.; El-Darazi, Samir; Fyffe, Lyndsey M.; Clark, James L.
Estimation of radionuclide aerosol release to the environment from fire accident scenarios is one of the most dominant accident evaluations at the U.S. Department of Energy's (DOE's) nuclear facilities. Of particular interest to safety analysts is estimating the radionuclide aerosol release, the Source Term (ST), based on aerosol transport from a fire room to a corridor and from the corridor to the environment. However, no existing literature has been found on estimating ST for this multi-room facility configuration. This paper contributes the following to the aerosol transport modeling body of work: a validation study on a multi-room fire experiment (including a code-to-code comparison between MELCOR and Consolidated Fire and Smoke Transport, a specialized fire code without radionuclide transport capabilities), a sensitivity study to provide insight on the effect of smoke on ST, and a sensitivity study on the effect of aerosol entrainment in the atmosphere (puff and continuous rate) on ST.
The Sodium Chemistry (NAC) package in MELCOR has been developed to enhance application to sodium-cooled fast reactors. The models in the NAC package have been assessed through benchmark analyses. The F7-1 pool fire experimental analysis is conducted within the framework of the U.S.-Japan collaboration under the Civil Nuclear Energy Research and Development Working Group. This study assesses the capability of the pool fire model in MELCOR and provides recommendations for future model improvements, because the physics of a sodium pool fire is complex. Based on the preliminary results, analytical conditions, such as heat transfer at the floor catch pan, were modified. The current MELCOR analysis yields lower values than the experimental data for the pool combustion rate and for the pool, catch pan, and gas temperatures at early times. The current treatment of heat transfer for the catch pan is the primary cause of the difference from the experimental data. After the sodium discharge stops, the pool combustion rate and temperatures become higher than the experimental data. This is caused by the absence of a model for pool fire suppression due to oxide layer buildup on the pool surface. Based on these results, future work is recommended, such as modification of the catch pan heat transfer treatment and consideration of oxide layer effects in both the MELCOR input model and the pool fire physics.
To increase understanding of damage associated with underground explosions, a field test program was developed jointly by Sandia and Pacific Northwest National Laboratories at the EMRTC test range in Socorro, NM. The Blue Canyon Dome test site is underlain by a rhyolite that is fractured in places. The test system included deployment of a defined array of 64 probes in eight monitoring boreholes. The monitoring boreholes radially surround a central, near-vertical shot hole at horizontal distances of 4.6 m and 7.6 m in the cardinal directions and in directions offset 45 degrees from cardinal, respectively. The probes are potted in coarse sand that contacts the rhyolite and are individually accessed via nylon tubing and isolated from each other by epoxy and grout sequences. Pre- and post-explosion air flow rate measurements, conducted for ~30-45 minutes at each probe, were compared for potential changes. The gas flow measurement is a function of the rock mass permeability near a probe. Much of the flow rate change occurred at depth station 8 (59.4 m) and in the SE quadrant. Flow rate changes are inferred to be caused by the chemical explosion, which may have opened pre-existing fractures, fractured the rock, and/or caused block displacements by rotations and translations. The air flow rate data acquired here may enable a relationship and/or calibration to rock damage to be developed.
We develop a methodology for comparing two or more agent-based models that are developed for the same domain, but may differ in the particular data sets (e.g., geographical regions) to which they are applied, and in the structure of the model. Our approach is to learn a response surface in the common parameter space of the models and compare the regions corresponding to qualitatively different behaviors in the models. As an example, we develop an active learning algorithm to learn phase transition boundaries in contagion processes in order to compare two agent-based models of rooftop solar panel adoption.
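The boundary-learning step described above can be illustrated with a generic uncertainty-sampling active learner over a shared two-parameter space; the sketch below uses a simple threshold rule as a stand-in for an actual agent-based-model run, so the oracle, parameter ranges, and classifier choice are all assumptions rather than the paper's algorithm.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier

def abm_adopts(theta):
    """Stand-in for one agent-based-model run at parameter vector theta:
    returns 1 if the contagion (e.g., rooftop solar adoption) takes off.
    Hypothetical threshold oracle, not either model from the paper."""
    return int(theta[0] + 2.0 * theta[1] > 1.0)

rng = np.random.default_rng(1)
pool = rng.uniform(0.0, 1.0, size=(500, 2))      # candidates in the shared parameter space
X = np.array([[0.1, 0.1], [0.9, 0.9]])           # seed runs on both sides of the boundary
y = np.array([abm_adopts(t) for t in X])

for _ in range(30):                              # active-learning loop (uncertainty sampling)
    clf = GaussianProcessClassifier().fit(X, y)
    p = clf.predict_proba(pool)[:, 1]
    i = int(np.argmin(np.abs(p - 0.5)))          # query the most uncertain candidate
    X = np.vstack([X, pool[i]])
    y = np.append(y, abm_adopts(pool[i]))
    pool = np.delete(pool, i, axis=0)

near = np.abs(clf.predict_proba(pool)[:, 1] - 0.5) < 0.05
print("candidates still near the estimated phase boundary:", int(near.sum()))
```

Comparing two models then amounts to comparing the parameter-space regions that their learned boundaries carve out.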
Particle-in-cell Monte Carlo collision (PIC-MCC) simulation results on breakdown in a pulsed discharge in helium at a pressure of 100 Torr and a voltage of U = 3.25 kV are presented. The delay of breakdown development is studied for different initial densities of plasma and excited helium atoms, which correspond to various discharge operating frequencies. It is shown that for a high concentration of excited atoms, photoemission determines the breakdown delay time. In the opposite case of a low excited-atom density, ion-electron emission plays the key role in breakdown development. Photoemission from the cathode is set by a flux of photons that are Doppler-shifted in frequency; these photons are generated in reactions between excited atoms and fast atoms. A wide distribution of breakdown delay times was observed across different runs and analyzed.
Proceedings of the 2020 SIAM International Conference on Data Mining, SDM 2020
Laishram, Ricky; Sariyuce, Ahmet E.; Eliassi-Rad, Tina; Pinar, Ali P.; Soundarajan, Sucheta
In many online social networking platforms, the participation of an individual is motivated by the participation of others. If an individual chooses to leave a platform, this may produce a cascade in which that person’s friends then choose to leave, causing their friends to leave, and so on. In some cases, it may be possible to incentivize key individuals to stay active within the network, thus preventing such a cascade. This problem is modeled using the anchored k-core of a network, which, for a network G and set of anchor nodes A, is the maximal subgraph of G in which every node has a total of at least k neighbors between the subgraph and anchors. In this work, we propose Residual Core Maximization (RCM), a novel algorithm for finding b anchor nodes so that the size of the anchored k-core is maximized. We perform a comprehensive experimental evaluation on numerous real-world networks and compare RCM to various baselines. We observe that RCM is more effective and efficient than the state-of-the-art methods: on average, RCM produces anchored k-cores that are 1.65 times larger than those produced by the baseline algorithm, and it is approximately 500 times faster.
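For readers unfamiliar with the object being optimized, the sketch below computes the anchored k-core for a given anchor set by iterative pruning; it illustrates the definition only and does not implement RCM's anchor-selection strategy. The adjacency-dict representation and toy path graph are assumptions for illustration.

```python
from collections import deque

def anchored_k_core(adj, k, anchors):
    """Nodes of the anchored k-core: iteratively remove non-anchor nodes
    whose degree within the surviving subgraph (anchors included) is < k.
    `adj` is a dict: node -> set of neighbors."""
    alive = set(adj)
    deg = {v: len(adj[v]) for v in alive}
    queue = deque(v for v in alive if v not in anchors and deg[v] < k)
    while queue:
        v = queue.popleft()
        if v not in alive:
            continue
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
                if u not in anchors and deg[u] < k:
                    queue.append(u)
    return alive

# Path graph 0-1-2-3-4: the plain 2-core is empty, but anchoring the two
# endpoints lets the whole path survive as an anchored 2-core.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(anchored_k_core(adj, k=2, anchors=set()))     # empty set
print(anchored_k_core(adj, k=2, anchors={0, 4}))    # {0, 1, 2, 3, 4}
```

RCM's contribution is choosing which b nodes to anchor so that the surviving set above is as large as possible.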
Light carries a great deal of information in the form of amplitude, phase, and polarization, any, or most powerfully all, of which may be exploited for the characterization of materials or development of novel technologies. However, extracting the full set of information carried by light becomes increasingly difficult as sample feature sizes shrink and the footprint and cost of detection schemes must decrease as well. Here, a fiber-based interferometric scheme is deployed to extract this information from optical systems, which may be assessed three-dimensionally down to the nanoscale and/or temporally up to the bandwidth of the available electronic data acquisition. The setup utilizes a homemade fiber stretcher to achieve phase-locking of the reference arm and is compatible with heterodyning. More interestingly, a simplified and less expensive approach is demonstrated which employs the fiber stretcher for arbitrarily frequency up-converted (with respect to the driving voltage frequency) phase modulation in addition to locking. This reduces the detection system's size, weight, power, and cost requirements, eliminating the need for an acousto-optic modulator and reducing the drive power required by orders of magnitude. High performance is maintained, as evidenced by imaging amplitude and phase (and inherently polarization state) in micro- and nano-optical systems such as lensed fibers and focusing waveguide grating couplers previously imaged only for intensity distribution.
Battery electrodes are composed of polydisperse particles and a porous, composite binder domain. These materials are arranged into a complex mesostructure whose morphology impacts both electrochemical performance and mechanical response. We present image-based, particle-resolved, mesoscale finite element model simulations of coupled electrochemical-mechanical performance on a representative NMC electrode domain. Beyond predicting macroscale quantities such as half-cell voltage and evolving electrical conductivity, studying behaviors on a per-particle and per-surface basis enables performance and material design insights previously unachievable. Voltage losses are primarily attributable to a complex interplay between interfacial charge transfer kinetics, lithium diffusion, and, locally, electrical conductivity. Mesoscale heterogeneities arise from particle polydispersity and lead to material underutilization at high current densities. Particle-particle contacts, however, reduce heterogeneities by enabling lithium diffusion between connected particle groups. While the porous composite binder domain (CBD) may have slower ionic transport and less available area for electrochemical reactions, its high electrical conductivity makes it the preferred reaction site late in electrode discharge. Mesoscale results are favorably compared to both experimental data and macrohomogeneous models. This work enables improvements in materials design by providing a tool for optimization of particle sizes, CBD morphology, and manufacturing conditions.
Integration of renewable power sources into grids remains an active research and development area, particularly for less developed renewable energy technologies such as wave energy converters (WECs). WECs are projected to have strong early market penetration for remote communities, which serve as natural microgrids. Hence, accurate wave predictions to manage the interactions of a WEC array with microgrids are especially important. Recently developed, low-cost wave measurement buoys allow for operational assimilation of wave data at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. The availability of real-time wave spectral components from low-cost wave measurement buoys allows for operational data assimilation with the Ensemble Kalman filter technique, whereby measured wave conditions within the numerical wave forecast model domain are assimilated onto the combined set of internal and boundary grid points while taking into account model and observation error covariances. The updated model state and boundary conditions allow for more accurate wave characteristic predictions at the locations of interest. Initial deployment data indicated that measured wave data from one buoy that were assimilated into the wave modeling framework resulted in improved forecast skill for a case where a traditional numerical forecast model (e.g., Simulating WAves Nearshore; SWAN) did not well represent the measured conditions. On average, the wave power forecast error was reduced from 73% to 43% using the data assimilation modeling with real-time wave observations.
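The analysis step referred to above follows the standard stochastic Ensemble Kalman filter update. The sketch below is a generic NumPy illustration of that update, not the operational implementation coupled to SWAN, and the state and observation sizes are invented for the example.

```python
import numpy as np

def enkf_update(Xf, y, H, R, rng):
    """One stochastic EnKF analysis step.
    Xf : (n_state, n_ens) forecast ensemble of model states
    y  : (n_obs,) buoy observations;  H : (n_obs, n_state) observation operator
    R  : (n_obs, n_obs) observation-error covariance."""
    n_ens = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                       # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T  # perturbed obs
    return Xf + K @ (Y - H @ Xf)                     # analysis ensemble

# Tiny example: 3 spectral state variables, 1 buoy observing the first one.
rng = np.random.default_rng(0)
Xf = rng.normal(1.0, 0.3, size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.05**2]])
Xa = enkf_update(Xf, np.array([1.4]), H, R, rng)
print(Xf.mean(axis=1), "->", Xa.mean(axis=1))
```

In the operational setting, the state vector would hold wave spectral components at internal and boundary grid points, and H would map them to the buoy-observed quantities.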
ASME 2020 14th International Conference on Energy Sustainability, ES 2020
Ho, Clifford K.; Gonzalez-Portillo, Luis F.; Albrecht, Kevin J.
Ray-tracing and heat-transfer simulations of discrete particles in a representative elementary volume were performed to determine the effective particle-cloud absorptance and temperature profiles as a function of intrinsic particle absorptance values (0 - 1) for dilute solids volume fractions (1 - 3%) representative of falling particle receivers used in concentrating solar power applications. Results showed that the average particle-cloud absorptance is increased above intrinsic particle absorptance values as a result of reflections and subsequent reabsorption (light trapping). The relative increase in effective particle-cloud absorptance was greater for lower values of intrinsic particle absorptance and could be as high as a factor of two. Higher values of intrinsic particle absorptance led to higher simulated steady-state particle temperatures. Significant temperature gradients within the particle cloud and within the particles themselves were also observed in the simulations. Findings indicate that dilute particle-cloud configurations within falling particle receivers can significantly enhance the apparent effective absorptance of the particle curtain, and materials with higher values of intrinsic particle absorptance will yield greater radiative absorptance and temperatures.
Garcia Fernandez, S.; Anwar, I.; Reda Taha, M.M.; Stormont, J.C.; Matteo, Edward N.
The interface between the steel casing and cemented annulus of a typical wellbore may de-bond and become permeable; this flow path is commonly referred to as a microannulus. Because there are often multiple fluids associated with wellbores, understanding two-phase flow behavior in the microannulus is important when evaluating the risks and hazards associated with leaky wellbores. A microannulus was created in a mock wellbore specimen by thermal debonding, which is one of the possible mechanisms for microannulus creation in the field. The specimen was saturated with silicone oil, and the intrinsic permeability through the microannulus was measured. Nitrogen was then injected at progressively increasing pressures, first to find the breakthrough pressure, and secondly, to obtain the relation between capillary pressure and gas relative permeability. The nitrogen was injected through the bottom of the specimen, to simulate the field condition where the gas migrates upwards along the casing. The measured data was successfully fit to common functional forms, such as the models of Brooks-Corey and Van Genuchten, which relate capillary pressure, saturation, and relative permeability of the two phases. The results can be used in computational models of flow along a wellbore microannulus.
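For reference, the two capillary pressure-saturation models named above are commonly written as follows (standard textbook forms, not the specific fitted parameters from this study), where S_e is effective saturation, P_c capillary pressure, P_e the Brooks-Corey entry pressure, and λ, α, n, m fitting parameters with m = 1 − 1/n:

```latex
% Brooks-Corey:
S_e = \left(\frac{P_e}{P_c}\right)^{\lambda}, \qquad P_c \ge P_e
% van Genuchten:
S_e = \left[1 + (\alpha P_c)^{n}\right]^{-m}, \qquad m = 1 - \tfrac{1}{n}
```

Relative permeability curves for the two phases then follow from these saturation functions through the usual Burdine or Mualem relations.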
Falling particle receivers are an emerging technology for use in concentrating solar power systems. In this work, a staggered angle iron receiver concept is investigated, with the goals of increasing particle curtain stability and opacity in a receiver. The concept consists of angle iron-shaped troughs placed in line with a falling particle curtain in order to collect particles and re-release them, decreasing the downward velocity of the particles and the curtain spread. A particle flow test apparatus has been fabricated. The effect of staggered angle iron trough geometry, orientation, and position on the opacity and uniformity of a falling particle curtain for different particle linear mass flow rates is investigated using the particle flow test apparatus. For the baseline free-falling curtain and for different trough configurations, particle curtain transmissivity is measured, and profile images of the particle curtain are taken. Particle mass flow rate and trough position affect curtain transmissivity more than trough orientation and geometry. Optimal trough position for a given particle mass flow rate can result in improved curtain stability and decreased transmissivity. The case with a 1/4” slot depth and hybrid trough geometry positioned 36” below the slot resulted in the largest improvement over the baseline curtain: 0.40 transmissivity for the baseline versus 0.14 transmissivity with the trough. However, some trough configurations have a detrimental effect on curtain stability and result in increased curtain transmissivity and/or substantial particle bouncing.
Realizing cost-effective, dispatchable, renewable energy production using concentrated solar power (CSP) relies on reaching high process temperatures to increase the thermal-to-electrical efficiency. Using ceramic-based particles as both the energy storage medium and the heat transfer fluid is a promising approach to increasing the operating temperature of next-generation CSP plants. The particle-to-supercritical CO2 (sCO2) heat exchanger is a critical component in the development of this technology for transferring thermal energy from the heated ceramic particles to the sCO2 working fluid of the power cycle. The leading design for the particle-to-sCO2 heat exchanger is a shell-and-plate configuration. Currently, design work is focused on optimizing the performance of the heat exchanger through reducing the plate spacing. However, the particle channel geometry is limited by the uniformity and reliability of particle flow in narrow vertical channels. Results of high-temperature experimental particle flow testing are presented in this paper.
Siratarnsophon, Piyapath; Hernandez, Miguel; Peppanen, Jouni; Deboever, Jeremiah; Rylander, Matthew; Reno, Matthew J.
Distribution system modelling and analysis with growing penetration of distributed energy resources (DERs) requires more detailed and accurate distribution load modelling. Smart meters, DER monitors, and other distribution system sensors provide a new level of visibility to distribution system loads and DERs. However, there is a limited understanding of how to efficiently leverage the new information in distribution system load modelling. This study presents the assessment of 11 methods to leverage the emerging information for improved distribution system active and reactive power load modelling. The accuracy of these load modelling methods is assessed both at the primary and the secondary distribution levels by analysing over 2.7 billion data points of results of feeder node voltages and element phase currents obtained by performing annual quasi-static time series simulations on EPRI's Ckt5 feeder model.
Statechart notations with ‘run to completion’ semantics are popular with engineers for designing controllers that respond to events in the environment with a sequence of state transitions, but they lack formal refinement and rigorous verification methods. Refinement-based formal methods, on the other hand, start from an initial abstraction and are designed to make formal verification by automatic theorem provers feasible. We introduce a notion of refinement into a ‘run to completion’ statechart modelling notation, and leverage existing tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into such refinements and suggest a solution. We outline how safety and liveness properties could be verified.
There are several factors that should be considered for robust terrain classification. We address the issue of high pixel-wise variability within terrain classes from remote sensing modalities, when the spatial resolution is less than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, then makes a superpixel-level terrain classification decision by the majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
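The superpixel-level decision rule described above is a majority vote over per-pixel labels; the sketch below shows that step in isolation, assuming per-pixel class decisions (e.g., from the PFF classifier) and a superpixel segmentation are already available. The toy arrays are illustrative only.

```python
import numpy as np

def superpixel_vote(pixel_labels, superpixels, n_classes):
    """Assign each superpixel the majority class of its pixels.
    pixel_labels : (H, W) int array of per-pixel class decisions
    superpixels  : (H, W) int array of superpixel ids (e.g., from SLIC)"""
    out = np.empty_like(pixel_labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        out[mask] = np.bincount(pixel_labels[mask], minlength=n_classes).argmax()
    return out

# Toy 4x4 image with two superpixels (left half / right half): noisy pixel
# labels are smoothed into one terrain class per superpixel.
labels = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 0, 1, 0],
                   [1, 0, 1, 1]])
sps = np.array([[0, 0, 1, 1]] * 4)
print(superpixel_vote(labels, sps, n_classes=2))
```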
Permeability prediction of porous media systems is very important in many engineering and science domains, including earth materials, bio- and solid materials, and energy applications. In this work, we evaluated how machine learning can be used to predict the permeability of porous media from images together with physical properties. An emerging challenge for machine learning/deep learning in engineering and scientific research is the ability to incorporate physics into the machine learning process. We used convolutional neural networks (CNNs) trained on image data of bead packings, and additional physical properties of the porous media, such as porosity and surface area, are used as training inputs either by feeding them to the fully connected network directly or through a multilayer perceptron network. Our results clearly show that the optimal neural network architecture and the implementation of physics-informed constraints are important for improving the model's prediction of permeability. A comprehensive analysis of hyperparameters with different CNN architectures and of the data implementation scheme for the physical properties needs to be performed to optimize our learning system for various porous media systems.
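One of the fusion options described above, feeding scalar physical properties into the fully connected layers alongside CNN image features, can be sketched as follows; the layer sizes, property count, and the `PermeabilityNet` name are illustrative assumptions rather than the architectures evaluated in this work.

```python
import torch
import torch.nn as nn

class PermeabilityNet(nn.Module):
    """Illustrative CNN for permeability regression: image features from the
    bead-pack image are concatenated with scalar physical properties
    (e.g., porosity, specific surface area) before the fully connected head."""
    def __init__(self, n_props=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(16 + n_props, 32), nn.ReLU(),
            nn.Linear(32, 1),                 # predicted permeability
        )

    def forward(self, image, props):
        return self.head(torch.cat([self.cnn(image), props], dim=1))

# Batch of 4 single-channel 64x64 images plus (porosity, surface area) pairs.
model = PermeabilityNet()
k_pred = model(torch.randn(4, 1, 64, 64), torch.rand(4, 2))
print(k_pred.shape)   # torch.Size([4, 1])
```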
Data fields sampled on irregularly spaced points arise in many science and engineering applications. For regular grids, Convolutional Neural Networks (CNNs) gain benefits from weight sharing and invariances. We generalize CNNs by introducing methods for data on unstructured point clouds using Generalized Moving Least Squares (GMLS). GMLS is a nonparametric meshfree technique for estimating linear bounded functionals from scattered data, and has emerged as an effective technique for solving partial differential equations (PDEs). By parameterizing the GMLS estimator, we obtain learning methods for linear and non-linear operators with unstructured stencils. The requisite calculations are local, embarrassingly parallelizable, and supported by a rigorous approximation theory. We show how the framework may be used for unstructured physical data sets to perform operator regression, develop predictive dynamical models, and obtain feature extractors for engineering quantities of interest. The results show the promise of these architectures as foundations for data-driven model development in scientific machine learning applications.
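As background for readers unfamiliar with GMLS, the sketch below shows the basic estimator in one dimension: a weighted local polynomial fit over scattered neighbors whose coefficients yield a linear functional (here a derivative) of the underlying field. It is a minimal stand-in, not the parameterized learning architecture of the paper; the weight kernel, polynomial degree, and sample function are assumptions.

```python
import numpy as np

def gmls_derivative(x0, xs, fs, degree=2, eps=0.5):
    """Estimate f'(x0) from scattered 1-D samples (xs, fs) via GMLS:
    a weighted least-squares polynomial fit centered at x0, then differentiated."""
    P = np.vander(xs - x0, degree + 1, increasing=True)    # [1, (x-x0), (x-x0)^2, ...]
    sw = np.sqrt(np.exp(-((xs - x0) / eps) ** 2))          # sqrt of radial weights
    coeffs = np.linalg.lstsq(P * sw[:, None], sw * fs, rcond=None)[0]
    return coeffs[1]                                       # linear coefficient = f'(x0)

# Unstructured point cloud: estimate the derivative of sin(3x) at x = 0.2.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, 40)
fs = np.sin(3.0 * xs)
print(gmls_derivative(0.2, xs, fs), "vs exact", 3.0 * np.cos(0.6))
```

In the paper's setting, such local estimators are parameterized and trained, giving convolution-like layers that act on unstructured point clouds.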
This work explores how High Performance Computing is enabling acoustic solutions across a wide range of science and engineering applications that were historically intractable.
Interpretation of field data from shock tests and subsequent assessment of product safety margins via laboratory testing are based on the shock response spectra (SRS). The SRS capture how a single degree of freedom (SDOF) structure responds to the shock at differing frequencies and, therefore, no longer contain the duration or other temporal parameters pertaining to the shock. A single duration can often be included in the technical specification or in the recreation of the acceleration vs. time history from the specified SRS; however, there is little basis for that beyond technical judgment. The loss of such temporal information can result in the recreated SRS being the same while its effect on a system or component can be different. This paper attempts to quantify this deficiency as well as propose a simple method of capturing damping from shock waves that can allow the original waveform to be more accurately reconstructed from the SRS. In this study, the decay rate associated with the various frequencies that comprise the overall shock was varied. This variation in the decay rate leads to a variation in the acceleration vs. time history, which can be correlated to a “Damage Index” that captures the fatigue damage imparted to the object under shock. Several waveforms that have the same SRS but varying rates of decay for either high- or low-frequency components of the shock were investigated. The resulting variation in stress cycles and Damage Index is discussed in the context of the lognormal distribution of fatigue failure data. It is proposed that, along with the SRS, the decay rate also be captured to minimize the discrepancy between field data and representative laboratory tests.
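The sketch below gives a crude numerical illustration of why decay rate matters: for a single decayed-sine shock component, slower decay produces many more large-amplitude response cycles, and hence a much larger fatigue-style damage measure, even though the peak value changes little. The damage index here is an assumed stand-in (local peaks raised to an assumed fatigue exponent), not the Damage Index formulation or the SRS-matched waveforms used in the study.

```python
import numpy as np

def decayed_sine(freq_hz, decay_rate, duration=0.5, fs=20000.0, amp=1.0):
    """One shock component: amp * exp(-decay_rate * t) * sin(2*pi*f*t)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    return amp * np.exp(-decay_rate * t) * np.sin(2.0 * np.pi * freq_hz * t)

def crude_damage_index(a, b=6.0):
    """Sum of |local peak|**b -- a rough fatigue-style measure (a real analysis
    would rainflow-count the stress response of an SDOF oscillator)."""
    mid = a[1:-1]
    peaks = mid[(mid > a[:-2]) & (mid > a[2:])]
    return float(np.sum(np.abs(peaks) ** b))

# Same 1 kHz component and amplitude, three different decay rates:
# slower decay -> many more large cycles -> much larger damage measure.
for decay in (50.0, 200.0, 800.0):
    a = decayed_sine(1000.0, decay)
    print(f"decay {decay:5.0f} 1/s  ->  crude damage index {crude_damage_index(a):10.2f}")
```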
An InGaAs/GaAsSb Type-II superlattice is explored as an absorber material for extended short-wave infrared detection. A 10.5 nm period was grown with an InGaAs/GaAsSb thickness ratio of 2, a target In composition of 46%, and a target Sb composition of 62%. Cutoff wavelengths near 2.8 μm were achieved with responsivity beyond 3 μm. Demonstrated dark current densities were as low as 1.4 mA/cm² at 295 K and 13 μA/cm² at 235 K at -1 V bias. A significant barrier to hole extraction was identified in the detector design that severely limited the external quantum efficiency (EQE) of the detectors. A redesign of the detector that removes that barrier could make InGaAs/GaAsSb very competitive with current commercial HgCdTe and extended-InGaAs technology.
The goal of this paper is to present, for the first time, calculations of the magnetic penetration case of a first-principles multipole-based cable braid electromagnetic penetration model. As a first test case, a one-dimensional array of perfect electrically conducting wires, for which an analytical solution is known, is investigated: we compare both the self-inductance and the transfer inductance results from our first-principles cable braid electromagnetic penetration model to those obtained using the analytical solution. These results are found to be in good agreement up to a radius to half-spacing ratio of about 0.78, demonstrating a robustness needed for many commercial and non-commercial cables. We then analyze a second set of test cases: a square array of wires, whose solution is the same as the one-dimensional array result, and a rhomboidal array, whose solution can be estimated from Kley’s model. As a final test case, we consider two layers of one-dimensional arrays of wires to investigate porpoising effects analytically. We find good agreement with analytical and Kley’s results for these geometries, verifying our proposed multipole model. Note that only our multipole model accounts for the full dependence on the actual cable geometry, which enables us to model more complicated cable geometries.
A toroidal dielectric metasurface with a Q-factor of 728 at a wavelength of 1500 nm is reported. The resonance couples strongly to the environment, as demonstrated with a refractometric sensing experiment.
High-speed aerospace engineering applications rely heavily on computational fluid dynamics (CFD) models for design and analysis due to the expense and difficulty of flight tests and experiments. This reliance on CFD models necessitates performing accurate and reliable uncertainty quantification (UQ) of the CFD models. However, it is very computationally expensive to run CFD for hypersonic flows due to the fine grid resolution required to capture the strong shocks and large gradients that are typically present. Additionally, UQ approaches are “many-query” problems requiring many runs with a wide range of input parameters. One way to enable computationally expensive models to be used in such many-query problems is to employ projection-based reduced-order models (ROMs) in lieu of the (high-fidelity) full-order model. In particular, the least-squares Petrov–Galerkin (LSPG) ROM (equipped with hyper-reduction) has demonstrated the ability to significantly reduce simulation costs while retaining high levels of accuracy on a range of problems including subsonic CFD applications [1, 2]. This allows computationally inexpensive LSPG ROM simulations to replace the full-order model simulations in UQ studies, which makes this many-query task tractable, even for large-scale CFD models. This work presents the first application of LSPG to a hypersonic CFD problem. In particular, we present results for LSPG ROMs of the HIFiRE-1 in a three-dimensional, turbulent Mach 7.1 flow, showcasing the ability of the ROM to significantly reduce computational costs while maintaining high levels of accuracy in computed quantities of interest.
The performance of the Reactor Core Isolation Cooling (RCIC) system under beyond design basis event (BDBE) conditions is not well-characterized. The operating band of the RCIC system is currently specified utilizing conservative assumptions, with restrictive operational guidelines not allowing for an adequate credit of the true capability of the system. For example, it is assumed that battery power is needed for RCIC operation to maintain the reactor pressure vessel (RPV) water level—a loss of battery power is conservatively assumed to result in failure of the RCIC turbopump system in a range of safety and risk assessments. However, the accidents at Fukushima Daiichi Nuclear Power Station (FDNPS) showed that the Unit 2 RCIC did not cease to operate following loss of battery power. In fact, it continued to inject water into the RPV for nearly 3 days following the earthquake. Improved understanding of Terry turbopump operations under BDBE conditions can support enhancement of accident management procedures and guidelines, promoting more robust severe accident prevention. Therefore, the U.S. Department of Energy (DOE), U.S. nuclear industry, and international stakeholders have funded the Terry Turbine Expanded Operating Band (TTEXOB) program. This program aims to better understand RCIC operations during BDBE conditions through combined experimental and modeling efforts. As part of the TTEXOB, airflow testing was performed at Texas A&M University (TAMU) of a small-scale ZS-1 and a full-scale GS-2 Terry turbine. This paper presents the corresponding efforts to model operation of the TAMU ZS-1 and GS-2 Terry turbines with Sandia National Laboratories’ (SNL) MELCOR code. The current MELCOR modeling approach represents the Terry turbine with a system of equations expressing the conservation of angular momentum. The joint analysis and experimental program identified that a) it is possible for the Terry turbine to develop the same power at different speeds, and b) turbine losses appear to be insensitive to the size of the turbine. As part of this program, further study of Terry turbine modeling unknowns and uncertainties is planned to support more extensive application of modeling and simulation to the enhancement of plant-specific operational and accident procedures.
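The modeling approach described above balances angular momentum on the turbine-pump rotor; in generic form (our notation, not MELCOR's specific closure relations), the governing equation is of the type shown below.

```latex
% Angular momentum balance for the turbine-pump rotor (generic form):
I \,\frac{d\omega}{dt} \;=\; \tau_{\mathrm{drive}}(\dot{m},\,u_{\mathrm{jet}},\,\omega)
  \;-\; \tau_{\mathrm{pump}}(\omega) \;-\; \tau_{\mathrm{loss}}(\omega)
```

Here I is the rotor moment of inertia, ω the shaft speed, and the drive torque depends on the steam (or air) mass flow and jet velocity delivered to the wheel; the observation that the same power can be developed at different speeds corresponds to different (τ, ω) pairs with the same product τω.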
The light output, time resolution, pulse shape discrimination (PSD), neutron light output, and interaction position reconstruction of melt-cast small-molecule organic glass bar scintillators were measured. The trans-stilbene organic scintillator detects fast neutrons and gamma rays with high efficiency and exhibits excellent PSD, but the manufacturing process is slow and expensive and its light output in response to neutrons is anisotropic. Small-molecule organic glass bars offer an easy-to-implement and cost-effective solution to these problems. These properties were characterized to evaluate the efficacy of constructing a compact, low-voltage neutron and gamma-ray imaging system using organic glass bars coupled to silicon photomultiplier arrays. A complete facility for melt-casting organic glass scintillators was set up at the University of Michigan. Glass bars measuring 6 × 6 × 50 mm³ were produced and the properties listed above were characterized. The first neutron image using organic glass was produced using simple backprojection.
Fire suppression systems for transuranic (TRU) waste facilities are designed to minimize radioactive material release to the public and to facility employees in the event of a fire. Currently, facilities with Department of Transportation (DOT) 7A drums filled with TRU waste follow guidelines that assume a fraction of the drums experience lid ejection in case of a fire. This lid loss is assumed to result in significant TRU waste material from the drum experiencing an unconfined burn during the fire, and fire suppression systems are thus designed to respond and mitigate potential radioactive material release. However, recent preliminary tests where the standard lid filters of 7A drums were replaced with a UT-9424S filter suggest that the drums could retain their lid if equipped with this filter. The retention of the drum lid could thus result in a very different airborne release fraction (ARF) of a 7A drum's contents when exposed to a pool fire than what is assumed in current safety basis documents. This potentially different ARF is currently unknown because, while studies have been performed in the past to quantify ARF for 7A drums in a fire, no comprehensive measurements have been performed for drums equipped with a UT-9424S filter. If the ARF is lower than what is currently assumed, it could change the way TRU waste facilities operate. Sandia National Laboratories has thus developed a set of tests and techniques to help determine an ARF value for 7A drums filled with TRU waste and equipped with a UT-9424S filter when exposed to the hypothetical accident conditions (HAC) of a 30-minute hydrocarbon pool fire. In this multi-phase test series, SNL has accomplished the following: (1) performed a thermogravimetric analysis (TGA) on various combustible materials typically found in 7A drums in order to identify a conservative load for 7A drums in a pool fire; (2) performed a 30-minute pool fire test to (a) determine if lid ejection is possible under extreme conditions despite the UT-9424S filter, and (b) to measure key parameters in order to replicate the fire environment using a radiant heat setup; and (3) designed a radiant heat setup to demonstrate capability of reproducing the fire environment with a system that would facilitate measurements of ARF. This manuscript thus discusses the techniques, approach, and unique capabilities SNL has developed to help determine an ARF value for DOT 7A drums exposed to a 30-minute fully engulfing pool fire while equipped with a UT-9424S filter on the drum lid.
Many environments currently employ machine learning models for data processing and analytics that were built using a limited number of training data points. Once deployed, the models are exposed to significant amounts of previously-unseen data, not all of which is representative of the original, limited training data. However, updating these deployed models can be difficult due to logistical, bandwidth, time, hardware, and/or data sensitivity constraints. We propose a framework, Self-Updating Models with Error Remediation (SUMER), in which a deployed model updates itself as new data becomes available. SUMER uses techniques from semi-supervised learning and noise remediation to iteratively retrain a deployed model using intelligently-chosen predictions from the model as the labels for new training iterations. A key component of SUMER is the notion of error remediation, as self-labeled data can be susceptible to the propagation of errors. We investigate the use of SUMER across various data sets and iterations. We find that self-updating models (SUMs) generally perform better than models that do not attempt to self-update when presented with additional previously-unseen data. This performance gap is accentuated in cases where there are only limited amounts of initial training data. We also find that the performance of SUMER is generally better than the performance of SUMs, demonstrating a benefit in applying error remediation. Consequently, SUMER can autonomously enhance the operational capabilities of existing data processing systems by intelligently updating models in dynamic environments.
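A minimal sketch of such a self-labeling loop is shown below, assuming a scikit-learn classifier, synthetic data, and a simple confidence threshold as a crude stand-in for the error-remediation step; it is illustrative only and not the authors' SUMER implementation.

    # Self-updating-model sketch (illustrative; not the SUMER code): a deployed
    # classifier pseudo-labels new batches it is confident about, discards
    # low-confidence predictions (a crude error-remediation proxy), and retrains.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_lab, y_lab = X[:200], y[:200]            # limited initial training data
    X_stream = X[200:]                         # previously-unseen deployment data

    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

    for X_batch in np.array_split(X_stream, 10):       # data arrives in batches
        proba = model.predict_proba(X_batch)
        keep = proba.max(axis=1) > 0.9                 # remediation: drop uncertain labels
        X_lab = np.vstack([X_lab, X_batch[keep]])
        y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)   # self-update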
We derive a formulation of the nonhydrostatic equations in spherical geometry with a Lorenz staggered vertical discretization. The combination conserves a discrete energy under exact time integration when coupled with a mimetic horizontal discretization. The formulation is a version of Dubos and Tort (2014, https://doi.org/10.1175/MWR-D-14-00069.1) rewritten in terms of primitive variables. It is valid for terrain-following mass or height coordinates and for both Eulerian and vertically Lagrangian discretizations. The discretization relies on an extension of the Simmons and Burridge (1981, https://doi.org/10.1175/1520-0493(1981)109<0758:AEAAMC>2.0.CO;2) vertical differencing, which we show obeys a discrete derivative product rule. This product rule allows us to simplify the treatment of the vertical transport terms. Energy conservation is obtained via a term-by-term balance in the kinetic, internal, and potential energy budgets, ensuring an energy-consistent discretization up to time truncation error with no spurious sources of energy. We demonstrate convergence with respect to time truncation error in a spectral element code with a horizontally explicit, vertically implicit (HEVI) implicit-explicit time-stepping algorithm.
There is a wealth of psychological theory regarding the drive for individuals to congregate and form social groups, positing that people may organize out of fear, social pressure, or even to manage their self-esteem. We evaluate three such theories for multi-scale validity by studying them not only at the individual scale for which they were originally developed, but also for applicability to group interactions and behavior. We implement this multi-scale analysis using a dataset of communications and group membership derived from a long-running online game, matching the intent behind the theories to quantitative measures that describe players’ behavior. Once we establish that the theories hold for the dataset, we increase the scope to test the theories at the higher scale of group interactions. Despite being formulated to describe individual cognition and motivation, we show that some group dynamics theories hold at the higher level of group cognition and can effectively describe the behavior of joint decision making and higher-level interactions.
A focus in the development of the next generation of concentrating solar power (CSP) plants is the integration of high-temperature particle receivers with improved-efficiency supercritical carbon dioxide (sCO2) power cycles. The feasibility of this type of system depends on the design of a particle-to-sCO2 heat exchanger. This work presents a finite element analysis (FEA) model to analyze the thermal performance of a particle-to-sCO2 heat exchanger for potential use in a CSP plant. The heat exchanger design utilizes a moving packed bed of particles in crossflow with sCO2, which flows in a serpentine pattern through banks of microchannel plates. The model contains a thermal analysis to determine the heat exchanger's performance in transferring thermal energy from the particle bed to the sCO2. Test data from a prototype heat exchanger was used to verify the performance predictions of the model. The verification of the model required a multitude of sensitivity tests to identify where fidelity needed to be added to reach agreement between the experimental and simulated results. For each sensitivity test in the model, the effect on the performance is discussed. With a combination of added sensitivities, the model was shown to be in good agreement with the experimental results on the overall heat transfer coefficient of the heat exchanger for a low-temperature set of conditions. A set of key factors with a major impact on the performance of the heat exchanger is discussed.
We present a nonlocal variational image completion technique which admits simultaneous inpainting of multiple structures and textures in a unified framework. The recovery of geometric structures is achieved by using general convolution operators as a measure of behavior within an image. These are combined with a nonlocal exemplar-based approach to exploit the self-similarity of an image in the selected feature domains and to ensure the inpainting of textures. We also introduce an anisotropic patch distance metric to allow for better control of the feature selection within an image and present a nonlocal energy functional based on this metric. Finally, we derive an optimization algorithm for the proposed variational model and examine its validity experimentally with various test images.
A self-tuning proportional-integral control law prescribing motor torques was tested in experiment on a three degree-of-freedom wave energy converter. The control objective was to maximize electrical power. The control law relied upon an identified model of the device's intrinsic impedance to generate a frequency-domain estimate of the wave-induced excitation force, along with measurements of device velocities. The control law was tested in irregular sea states that evolved over hours (a rapid, but realistic time scale) and that changed instantly (an unrealistic scenario used to evaluate controller response). For both cases, the controller converges to gains that closely approximate the post-calculated optimal gains for all degrees of freedom. Convergence to near-optimal gains occurred reliably over a sufficiently short time for realistic sea states. In addition, electrical power was found to be relatively insensitive to gain tuning over a broad range of gains, implying that an imperfectly tuned controller does not result in a large penalty to electrical power capture. An extension of this control law that allows for adaptation to a changing device impedance model over time is proposed for long-term deployments, as well as an approach to explicitly handle constraints within this architecture.
Proceedings of the Human Factors and Ergonomics Society
Salazar, George; See, Judi E.; Handley, Holly A.H.; Craft, Richard
The Human Readiness Levels (HRL) scale is a simple nine-level scale developed as an adjunct to complement and supplement the existing Technology Readiness Levels (TRL) scale widely used across government agencies and industry. A multi-agency working group consisting of nearly 30 members representing the broader human systems integration (HSI) community throughout the Department of Defense (DOD), Department of Energy (DOE), other federal agencies, industry, and academia was established in August 2019. The working group’s charter was to mature the HRL scale and evaluate its utility, reliability, and validity for implementation in the systems acquisition lifecycle. Toward that end, the working group examined applicability of the HRL scale for a range of scenarios. This panel will discuss outcomes from the working group’s activities regarding HRL scale structure and usage.
This paper discusses how to design an inter-area oscillation damping controller using a frequency-shaped optimal output feedback control approach. This control approach was chosen because inter-area oscillations occur in a particular frequency range, from 0.2 to 1 Hz, which is the interval in which the control action must be prioritized. This paper shows that using only the filter for the system states can sufficiently damp the system modes. In addition, the paper shows that the filter for the input can be adjusted to provide primary frequency regulation to the system with no effect on the desired damping control action. Time-domain simulations of a power system with a set of controllable power injection devices are presented to show the effectiveness of the designed controller.
Structural and chemical characterization at the atomic scale plays a critical role in understanding the structure-property relationship in precise electrical devices such as those produced by atomic-precision advanced manufacturing (APAM). APAM, utilizing hydrogen lithography in a scanning tunneling microscope, offers a potential pathway to ultra-efficient transistors and has been developed to produce phosphorus (P)-based donor devices integrated into bare Si substrates. Structural characterization of the buried, phosphorus-doped Si (Si:P) delta-layer in these devices by scanning transmission electron microscopy (STEM), however, is a challenge due to the similar atomic number and low concentration of the P dopants. In this paper, we describe several efforts utilizing advanced STEM imaging and spectroscopic techniques to quantify the Si:P delta-layers. STEM imaging combining low-angle and high-angle annular dark-field (LAADF, HAADF) detectors as well as atomic-scale elemental mapping using energy-dispersive X-ray spectroscopy (EDS) are used to reveal the P and defect distribution across the delta-layer processed under various thermal conditions.
In the presence of model discrepancy, the calibration of physics-based models for physical parameter inference is a challenging problem. Lack of identifiability between calibration parameters and model discrepancy requires additional identifiability constraints to be placed on the model discrepancy to obtain unique physical parameter estimates. If these assumptions are violated, the inference for the calibration parameters can be systematically biased. In many applications, such as in dynamic material property experiments, many of the calibration inputs refer to measurement uncertainties. In this setting, we develop a metric for identifying overfitting of these measurement uncertainties, propose a prior capable of reducing this overfitting, and show how this leads to a diagnostic tool for validation of physical parameter inference. The approach is demonstrated for a benchmark example and applied for a material property application to perform inference on the equation of state parameters of tantalum.
This paper considers preconditioners for the linear systems that arise from optimal control and inverse problems involving the Helmholtz equation. Specifically, we explore an all-at-once approach. The main contribution centers on the analysis of two block preconditioners. Variations of these preconditioners have been proposed and analyzed in prior works for optimal control problems where the underlying partial differential equation is a Laplace-like operator. In this paper, we extend some of the prior convergence results to Helmholtz-based optimization applications. Our analysis examines situations where control variables and observations are restricted to subregions of the computational domain. We prove that solver convergence rates do not deteriorate as the mesh is refined or as the wavenumber increases. More specifically, for one of the preconditioners we prove accelerated convergence as the wavenumber increases. Additionally, in situations where the control and observation subregions are disjoint, we observe that solver convergence rates have a weak dependence on the regularization parameter. We give a partial analysis of this behavior. We illustrate the performance of the preconditioners on control problems motivated by acoustic testing.
The work presented in this paper applies the MELCOR code developed at Sandia National Laboratories to evaluate the source terms from potential accidents in non-reactor nuclear facilities. The present approach provides an integrated source term treatment that is well-suited for uncertainty analysis and probabilistic risk assessments. MELCOR is used to predict the thermal-hydraulic conditions during fires or explosions that include a release of radionuclides. The radionuclides are tracked throughout the facility from the initiating event to predict the time-dependent source term to the environment for subsequent dose or consequence evaluations. In this paper, we discuss the MELCOR input model development and the evaluation of the potential source terms from the dominant fire and explosion scenarios for a spent fuel nuclear reprocessing plant.
Bayesian optimization (BO) is an efficient and flexible global optimization framework that is applicable to a very wide range of engineering applications. To leverage the capability of classical BO, many extensions, including multi-objective, multi-fidelity, parallelization, and latent-variable modeling, have been proposed to address the limitations of the classical BO framework. In this work, we propose a novel multi-objective (MO) extension, called srMOBO-3GP, to solve MO optimization problems in a sequential setting. Three different Gaussian processes (GPs) are stacked together, where each GP is assigned a different task: the first GP is used to approximate a single objective computed from the MO definition, the second GP is used to learn the unknown constraints, and the third GP is used to learn the uncertain Pareto frontier. At each iteration, an augmented Tchebycheff function converting the MO problem to a single objective is adopted and extended with a regularized ridge term, where the regularization is introduced to smooth the single-objective function. Finally, we couple the third GP with the classical BO framework to explore the richness and diversity of the Pareto frontier through an acquisition function that balances exploitation and exploration. The proposed framework is demonstrated using several numerical benchmark functions, as well as a thermomechanical finite element model for flip-chip package design optimization.
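For reference, a common form of the augmented Tchebycheff scalarization is

    \[ g(x; w) = \max_{i=1,\dots,m} w_i \lvert f_i(x) - z_i^{*} \rvert
       + \rho \sum_{i=1}^{m} w_i \lvert f_i(x) - z_i^{*} \rvert
       + \lambda \lVert x \rVert_2^{2}, \]

where $z^{*}$ is the ideal point, $w$ a weight vector, and $\rho$ a small augmentation constant; the final quadratic term is written here as a generic ridge penalty with coefficient $\lambda$, since the abstract does not give the exact form of the regularizer.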
American Society of Mechanical Engineers, Fluids Engineering Division (Publication) FEDSM
Laros, James H.; Visintainer, Robert; Furlan, John; Pagalthivarthi, Krishnan V.; Garman, Mohamed; Cutright, Aaron; Wang, Yan
Wear prediction is important in designing reliable machinery for the slurry industry. It usually relies on multi-phase computational fluid dynamics, which is accurate but computationally expensive. Each run of the simulations can take hours or days even on a high-performance computing platform. The high computational cost prohibits a large number of simulations in the process of design optimization. In contrast to physics-based simulations, data-driven approaches such as machine learning are capable of providing accurate wear predictions at a small fraction of the computational cost, if the models are trained properly. In this paper, a recently developed WearGP framework [1] is extended to predict the global wear quantities of interest by constructing Gaussian process surrogates. The effects of different operating conditions are investigated. The advantages of the WearGP framework are demonstrated by its high accuracy and low computational cost in predicting wear rates.
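The surrogate idea can be sketched with scikit-learn on synthetic data, as below; the operating-condition inputs and wear-rate trend are invented for illustration, and this is not the WearGP implementation.

    # Gaussian-process surrogate sketch: train on a handful of "simulations",
    # then predict wear (with uncertainty) at new operating conditions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(1)
    X_train = rng.uniform(1.0, 10.0, size=(30, 2))          # e.g. flow rate, solids concentration
    y_train = 0.5 * X_train[:, 0] ** 1.5 * X_train[:, 1]    # hypothetical wear-rate trend

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
                                  normalize_y=True).fit(X_train, y_train)

    X_new = np.array([[5.0, 4.0], [8.0, 2.0]])
    mean, std = gp.predict(X_new, return_std=True)          # wear prediction plus uncertainty
    print(mean, std)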
Tensor decomposition is a fundamental unsupervised machine learning method in data science, with applications including network analysis and sensor data processing. This work develops a generalized canonical polyadic (GCP) low-rank tensor decomposition that allows other loss functions besides squared error. For instance, we can use logistic loss or Kullback–Leibler divergence, enabling tensor decomposition for binary or count data. We present a variety of statistically motivated loss functions for various scenarios. We provide a generalized framework for computing gradients and handling missing data that enables the use of standard optimization methods for fitting the model. We demonstrate the flexibility of the GCP decomposition on several real-world examples including interactions in a social network, neural activity in a mouse, and monthly rainfall measurements in India.
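In schematic form, GCP replaces the usual squared-error objective with a sum of elementwise losses over the observed entries $\Omega$,

    \[ F(\mathcal{M}; \mathcal{X}) = \sum_{i \in \Omega} f(x_i, m_i), \]

with standard examples (stated here for illustration) being $f(x,m) = (x-m)^2$ for Gaussian data, $f(x,m) = \log(1 + e^{m}) - x m$ for binary data with a log-odds link, and $f(x,m) = m - x \log m$ for count data with a Poisson model, where $\mathcal{M}$ is the low-rank CP model and $\mathcal{X}$ the data tensor.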
Estimation of the uncertainty in a critical experiment attributable to uncertainties in the measured experiment temperature is done by calculating the variation of the eigenvalue of a benchmark configuration as a function of temperature. In the low-enriched water-moderated critical experiments performed at Sandia, this is done by 1) estimating the effects of changing the water temperature while holding the UO2 fuel temperature constant, 2) estimating the effects of changing the UO2 temperature while holding the water temperature constant, and 3) combining the two results. This assumes that the two effects are separable. The results of such an analysis are nonintuitive and need experimental verification. Critical experiments are being planned at Sandia National Laboratories (Sandia) to measure the effect of temperature on critical systems and will serve to test the methods used in estimating the temperature effects in critical experiments.
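Written out, the separability assumption amounts to a first-order combination of the two independently estimated sensitivities (schematic form only):

    \[ \Delta k_{\mathrm{eff}} \approx
       \left.\frac{\partial k_{\mathrm{eff}}}{\partial T_{\mathrm{water}}}\right|_{T_{\mathrm{fuel}}} \Delta T_{\mathrm{water}}
       + \left.\frac{\partial k_{\mathrm{eff}}}{\partial T_{\mathrm{fuel}}}\right|_{T_{\mathrm{water}}} \Delta T_{\mathrm{fuel}} . \]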
The Natural Energy Laboratory of Hawaii Authority's (NELHA) campus on the Island of Hawai'i supplies resources for a number of renewable energy and aquaculture research projects. There is a growing interest at NELHA in converting the research campus to a 100% renewable, islanded microgrid to improve the resiliency of the campus for critical ocean water pumping loads and to limit the increase in the long-term cost of operations. Currently, the campus has a solar array that covers some of its electricity needs, but scaling up this system to fully meet the needs of the entire research campus will require significant changes and careful planning to minimize costs. This study investigates the least-cost solar and energy storage system sizes capable of meeting the needs of the campus. The campus is split into two major load centers that are electrically isolated and have different amounts of available land for solar installations. The value of adding an electrical transmission line if NELHA converts to a self-contained microgrid is explored by estimating the cost of resources for each load center individually and combined. Energy storage using lithium-ion and hydrogen-based technologies is investigated. For the hydrogen-based storage system, variable-efficiency and fixed-efficiency representations of the electrolysis and fuel cell systems are used. Results using these two models show the importance of considering the changing performance of hydrogen systems in sizing algorithms.
In the decade since support for task parallelism was incorporated into OpenMP, its use has remained limited in part due to concerns about its performance and scalability. This paper revisits a study from the early days of OpenMP tasking that used the Unbalanced Tree Search (UTS) benchmark as a stress test to gauge implementation efficiency. The present UTS study includes both Clang/LLVM and vendor OpenMP implementations on four different architectures. We measure parallel efficiency to examine each implementation’s performance in response to varying task granularity. We find that most implementations achieve over 90% efficiency using all available cores for tasks of O(100k) instructions, and the best even manage tasks of O(10k) instructions well.
Given the increasing ubiquity of online embedded devices, analyzing their firmware is important to security, privacy, and safety. The tight coupling between hardware and firmware and the diversity found in embedded systems makes it hard to perform dynamic analysis on firmware. However, firmware developers regularly develop code using abstractions, such as Hardware Abstraction Layers (HALs), to simplify their job. We leverage such abstractions as the basis for the re-hosting and analysis of firmware. By providing high-level replacements for HAL functions (a process termed High-Level Emulation - HLE), we decouple the hardware from the firmware. This approach works by first locating the library functions in a firmware sample, through binary analysis, and then providing generic implementations of these functions in a full-system emulator. We present these ideas in a prototype system, HALucinator, able to re-host firmware, and allow the virtual device to be used normally. First, we introduce extensions to existing library matching techniques that are needed to identify library functions in binary firmware, to reduce collisions, and for inferring additional function names. Next, we demonstrate the re-hosting process, through the use of simplified handlers and peripheral models, which make the process fast, flexible, and portable between firmware samples and chip vendors. Finally, we demonstrate the practicality of HLE for security analysis, by supplementing HALucinator with the American Fuzzy Lop fuzzer, to locate multiple previously-unknown vulnerabilities in firmware middleware libraries.
A passive yaw implementation is developed, validated, and explored for the WEC-Sim, an open-source wave energy converter modeling tool that works within MATLAB/Simulink. The Reference Model 5 (RM5) is selected for this investigation, and a WEC-Sim model of the device is modified to allow yaw motion. A boundary element method (BEM) code was used to calculate the excitation force coefficients for a range of wave headings. An algorithm was implemented in WEC-Sim to determine the equivalent wave heading from a body's instantaneous yaw angle and interpolate the appropriate excitation coefficients to ensure the correct time-domain excitation force. This approach is able to determine excitation force for a body undergoing large yaw displacement. For the mathematically simple case of regular wave excitation, the dynamic equation was integrated numerically and found to closely approximate the results from this implementation in WEC-Sim. A case study is presented for the same device in irregular waves. In this case, computation time is increased by 32x when this interpolation is performed at every time step. To reduce this expense, a threshold yaw displacement can be set to reduce the number of interpolations performed. A threshold of 0.01° was found to increase computation time by only 22x without significantly affecting time domain results. Similar amplitude spectra for yaw force and displacements are observed for all threshold values less than 1°, for which computation time is only increased by 2.2x.
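A minimal Python sketch of the threshold-gated heading interpolation follows; WEC-Sim itself is MATLAB/Simulink, and the heading grid and coefficient table here are invented for illustration.

    # Interpolate the excitation coefficient at the equivalent wave heading,
    # re-interpolating only when the yaw-induced heading change exceeds a threshold.
    import numpy as np

    headings = np.linspace(-180.0, 180.0, 37)       # BEM wave headings (deg), assumed grid
    exc_coeff = np.cos(np.radians(headings))        # hypothetical excitation coefficient table
    threshold = 0.01                                # deg; skip re-interpolation for tiny yaw changes

    _state = {"heading": 0.0, "coeff": float(np.interp(0.0, headings, exc_coeff))}

    def excitation_coefficient(yaw_deg, wave_dir_deg=0.0):
        eq_heading = wave_dir_deg - yaw_deg         # equivalent heading seen by the yawed body
        if abs(eq_heading - _state["heading"]) > threshold:
            _state["heading"] = eq_heading
            _state["coeff"] = float(np.interp(eq_heading, headings, exc_coeff))
        return _state["coeff"]

    print(excitation_coefficient(5.0), excitation_coefficient(5.005))   # second call reuses the cached value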
Compact semiconductor device models are essential for efficiently designing and analyzing large circuits. However, traditional compact model development requires a large amount of manual effort and can span many years. Moreover, inclusion of new physics (e.g., radiation effects) into an existing model is not trivial and may require redevelopment from scratch. Machine Learning (ML) techniques have the potential to automate and significantly speed up the development of compact models. In addition, ML provides a range of modeling options that can be used to develop hierarchies of compact models tailored to specific circuit design stages. In this paper, we explore three such options: (1) table-based interpolation, (2) Generalized Moving Least-Squares, and (3) feed-forward Deep Neural Networks, to develop compact models for a p-n junction diode. We evaluate the performance of these “data-driven” compact models by (1) comparing their voltage-current characteristics against laboratory data, and (2) building a bridge rectifier circuit using these devices, predicting the circuit's behavior using SPICE-like circuit simulations, and then comparing these predictions against laboratory measurements of the same circuit.
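As a concrete illustration of option (1), the sketch below builds a table-based compact model from a synthetic Shockley-diode characteristic standing in for laboratory data; the saturation current, ideality factor, and voltage range are assumed values.

    # Table-based compact model for a p-n junction diode (illustrative only).
    import numpy as np

    I_S, N, V_T = 1.0e-12, 1.8, 0.02585        # assumed saturation current (A), ideality, thermal voltage (V)
    v_table = np.linspace(0.0, 0.9, 181)       # "training" voltages
    i_table = I_S * (np.exp(v_table / (N * V_T)) - 1.0)

    def diode_current(v):
        # Interpolate log10(I) vs. V so the table stays accurate over many decades of current.
        log_i = np.interp(v, v_table, np.log10(i_table + 1e-30))
        return 10.0 ** log_i

    print(diode_current(0.7))                   # compare against the exact Shockley value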
Motivated by the gap between theoretical optimal approximation rates of deep neural networks (DNNs) and the accuracy realized in practice, we seek to improve the training of DNNs. The adoption of an adaptive basis viewpoint of DNNs leads to novel initializations and a hybrid least squares/gradient descent optimizer. We provide analysis of these techniques and illustrate via numerical examples dramatic increases in accuracy and convergence rate for benchmarks characterizing scientific applications where DNNs are currently used, including regression problems and physics-informed neural networks for the solution of partial differential equations.
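A minimal numpy sketch of the adaptive-basis idea is given below: the output-layer weights are obtained by an exact least-squares solve over the current hidden-layer basis, while the hidden layer takes gradient-descent steps. This is illustrative only and not the exact optimizer analyzed in the paper.

    # Hybrid least-squares / gradient-descent training of a one-hidden-layer network.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200)[:, None]
    y = np.sin(np.pi * x)                              # regression target

    m, lr = 20, 1e-2
    W, b = rng.normal(size=(1, m)), rng.normal(size=m)

    for _ in range(2000):
        H = np.tanh(x @ W + b)                         # adaptive basis
        c, *_ = np.linalg.lstsq(H, y, rcond=None)      # exact least-squares output layer
        r = H @ c - y                                  # residual
        dH = (r @ c.T) * (1.0 - H ** 2)                # backpropagate through tanh
        W -= lr * x.T @ dH / len(x)                    # gradient-descent step on hidden weights
        b -= lr * dH.mean(axis=0)

    print(float(np.mean((np.tanh(x @ W + b) @ c - y) ** 2)))   # final training error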
Although popular in industry, statechart notations with ‘run to completion’ semantics lack formal refinement and rigorous verification methods. Statechart models are typically used to design complex control systems that respond to environmental triggers with a sequential process. The model is usually constructed at a concrete level and verified and validated using animation techniques relying on human judgement. Event-B, on the other hand, is based on refinement from an initial abstraction and is designed to make formal verification by automatic theorem provers feasible. We introduce a notion of refinement into a ‘run to completion’ statechart modelling notation, and leverage Event-B's tool support for theorem proving. We describe the difficulties in translating ‘run to completion’ semantics into Event-B refinements and suggest a solution. We illustrate our approach and show how critical (e.g. safety) invariant properties can be verified by proof despite the reactive nature of the system. We also show how behavioural aspects of the system can be verified by testing the expected reactions using a temporal logic model checking approach.
Automatic detection of defects in as-built parts is a challenging task due to the large number of potential manufacturing flaws that can occur. X-Ray computed tomography (CT) can produce high-quality images of the parts in a non-destructive manner. The images, however, are grayscale valued, often have artifacts and noise, and require expert interpretation to spot flaws. In order for anomaly detection to be reproducible and cost effective, an automated method is needed to find potential defects. Traditional supervised machine learning techniques fail in the high-reliability parts regime due to large class imbalance: there are often many more examples of well-built parts than there are defective parts. This, coupled with the time expense of obtaining labeled data, motivates research into unsupervised techniques. In particular, we build upon the AnoGAN and f-AnoGAN work by T. Schlegl et al. and create a new architecture called PandaNet. PandaNet learns an encoding function to a latent space of defect-free components and a decoding function to reconstruct the original image. We restrict the training data to defect-free components so that the encode-decode operation cannot learn to reproduce defects well. The difference between the reconstruction and the original image highlights anomalies that can be used for defect detection. In our work with CT images, PandaNet successfully identifies cracks, voids, and high-Z inclusions. Beyond CT, we demonstrate PandaNet working successfully with little to no modifications on a variety of common 2-D defect datasets, both in color and grayscale.
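The reconstruction-error principle can be sketched with a linear stand-in for the learned encoder/decoder; below, a PCA fit only to defect-free images plays that role on synthetic data (this is not the PandaNet architecture).

    # Reconstruction-error anomaly detection: a model trained only on defect-free
    # images reconstructs defects poorly, so the residual image highlights them.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    clean = rng.normal(0.5, 0.05, size=(500, 32 * 32))     # synthetic defect-free images

    pca = PCA(n_components=20).fit(clean)                  # stand-in encoder/decoder

    test = clean[0].copy()
    test[100:110] += 1.0                                   # inject a synthetic void/inclusion
    recon = pca.inverse_transform(pca.transform(test[None, :]))[0]
    anomaly_map = np.abs(test - recon).reshape(32, 32)     # large values flag the defect
    print(anomaly_map.max())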
Structural dynamic finite element models typically use multipoint constraints (MPC) to condense the degrees of freedom (DOF) near bolted joints down to a single node, which can then be joined to neighboring structures with linear springs or nonlinear elements. Scalability becomes an issue when multiple joints are present in a system, because each requires its own model to capture the nonlinear behavior. While this increases the computational cost, the larger problem is that the parameters of the joint models are not known, and so one must solve a nonlinear model updating problem with potentially hundreds of unknown variables to fit the model to measurements. Furthermore, traditional MPC approaches are limited in how the flexibility of the interface is treated (i.e. with rigid bar elements the interface has no flexibility). To resolve this shortcoming, this work presents an alternative approach where the contact interface is reduced to a set of modal DOF which retain the flexibility of the interface and are capable of modeling multiple joints simultaneously. Specifically, system-level characteristic constraint (S-CC) reduction is used to reduce the motion at the contact interface to a small number of shapes. To capture the hysteresis and energy dissipation that is present during microslip of joints, a hysteretic element is applied to a small number of the S-CC Shapes. This method is compared against a traditional MPC method (using rigid bar elements) on a two-dimensional finite element model of a cantilever beam with a single joint near the free end. For all methods, a four-parameter Iwan element is applied to the interface DOF to capture how the amplitude dependent modal frequency and damping change with vibration amplitude.
Ceramic-to-metal brazing is a common bonding process used in many advanced systems such as automotive engines, aircraft engines, and electronics. In this study, we use optimization techniques and finite element analysis utilizing viscoplastic and thermo-elastic material models to find an optimum thermal profile for a Kovar® washer bonded to an alumina button that is typical of a tension pull test. Several active braze filler materials are included in this work. Cooling rates, annealing times, aging, and thermal profile shapes are related to specific material behaviors. Viscoplastic material models are used to represent the creep and plasticity behavior in the Kovar® and braze materials, while a thermo-elastic material model is used for the alumina. The Kovar® is particularly interesting because it has a Curie point at 435°C that creates a nonlinearity in its thermal strain and stiffness profiles. This complex behavior incentivizes the optimizer to maximize the stress above the Curie point with a fast cooling rate and then favors slow cooling rates below the Curie point to anneal the material. It is assumed that if failure occurs in these joints, it will occur in the ceramic material. Consequently, the maximum principal stress in the ceramic is minimized in the objective function. Specific details of the stress state are considered and discussed.
Functional data are fast becoming a preeminent source of information across a wide range of industries. A particularly challenging aspect of functional data is bounding uncertainty. In this unique case study, we present our attempts at creating bounding functions for selected applications at Sandia National Laboratories (SNL). The first attempt involved a simple extension of functional principal component analysis (fPCA) to incorporate covariates. Though this method was straightforward, the extension was plagued by poor coverage accuracy for the bounding curve. This led to a second attempt utilizing elastic methodology which yielded more accurate coverage at the cost of more complexity.
Heterogeneous Integration (HI) may enable optoelectronic transceivers for short-range and long-range radio frequency (RF) photonic interconnect using wavelength-division multiplexing (WDM) to aggregate signals, provide galvanic isolation, and reduce crosstalk and interference. Integration of silicon Complementary Metal-Oxide-Semiconductor (CMOS) electronics with InGaAsP compound semiconductor photonics provides the potential for high-performance microsystems that combine complex electronic functions with optoelectronic capabilities from rich bandgap engineering opportunities, and intimate integration allows short interconnects for lower power and latency. The dominant pure-play foundry model plus the differences in materials and processes between these technologies dictate separate fabrication of the devices followed by integration of individual die, presenting unique challenges in die preparation, metallization, and bumping, especially as interconnect densities increase. In this paper, we describe progress towards realizing an S-band WDM RF photonic link combining 180 nm silicon CMOS electronics with InGaAsP integrated optoelectronics, using HI processes and approaches that scale into microwave and millimeter-wave frequencies.
Rank-revealing matrix decompositions provide an essential tool in spectral analysis of matrices, including the Singular Value Decomposition (SVD) and related low-rank approximation techniques. QR with Column Pivoting (QRCP) is usually suitable for these purposes, but it can be much slower than the unpivoted QR algorithm. For large matrices, the difference in performance is due to increased communication between the processor and slow memory, which QRCP needs in order to choose pivots during decomposition. Our main algorithm, Randomized QR with Column Pivoting (RQRCP), uses randomized projection to make pivot decisions from a much smaller sample matrix, which we can construct to reside in a faster level of memory than the original matrix. This technique may be understood as trading vastly reduced communication for a controlled increase in uncertainty during the decision process. For rank-revealing purposes, the selection mechanism in RQRCP produces results that are the same quality as the standard algorithm, but with performance near that of unpivoted QR (often an order of magnitude faster for large matrices). We also propose two formulas that facilitate further performance improvements. The first efficiently updates sample matrices to avoid computing new randomized projections. The second avoids large trailing updates during the decomposition in truncated low-rank approximations. Our truncated version of RQRCP also provides a key initial step in our truncated SVD approximation, TUXV. These advances open up a new performance domain for large matrix factorizations that will support efficient problem-solving techniques for challenging applications in science, engineering, and data analysis.
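A compact sketch of the pivot-selection idea follows, using numpy/scipy and a synthetic low-rank matrix; it is illustrative only and omits the blocked updating and communication optimizations of the actual implementation.

    # RQRCP sketch: choose column pivots from a small Gaussian sample B = Omega @ A
    # rather than pivoting on A itself.
    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(0)
    m, n, k = 2000, 500, 50
    A = rng.normal(size=(m, k)) @ rng.normal(size=(k, n))   # numerically rank-k matrix

    Omega = rng.normal(size=(k + 10, m))                    # sample with modest oversampling
    B = Omega @ A                                           # small sample matrix

    _, _, piv = qr(B, mode='economic', pivoting=True)       # cheap pivot decisions made on B
    Q, _ = qr(A[:, piv[:k]], mode='economic')               # unpivoted QR on the selected columns

    err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
    print(err)                                              # near machine precision for this rank-k A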
We describe the development and benchtop prototype performance characterization of a mechatronic system for automatically drilling small diameter holes of arbitrary depth, to enable monitoring the integrity of oil and gas wells in situ. The precise drilling of very small diameter, high aspect ratio holes, particularly in dimensionally constrained spaces, presents several challenges including bit buckling, limited torsional stiffness, chip clearing, and limited space for the bit and mechanism. We describe a compact mechanism that overcomes these issues by minimizing the unsupported drill bit length throughout the process, enabling the bit to be progressively fed from a chuck as depth increases. When used with flexible drill bits, holes of arbitrary depth and aspect ratio may be drilled orthogonal to the wellbore. The mechanism and a conventional drilling system are tested in deep hole drilling operation. The experimental results show that the system operates as intended and achieves holes with substantially greater aspect ratios than conventional methods with very long drill bits. The mechanism enabled successful drilling of a 1/16" diameter hole to a depth of 9", a ratio of 144:1. Dysfunctions prevented drilling of the same hole using conventional methods.
Proceedings of SPIE - The International Society for Optical Engineering
Fink, D.R.; Lee, S.; Kodati, S.H.; Rogers, V.; Ronningen, T.J.; Winslow, M.; Grein, C.H.; Jones, A.H.; Campbell, J.C.; Klem, John F.; Krishna, S.
We present a method of determining the background doping type in semiconductors using capacitance-voltage measurements on overetched double mesa p-i-n or n-i-p structures. Unlike Hall measurements, this method is not limited by the conductivity of the substrate. By measuring the capacitance of devices with varying top and bottom mesa sizes, we were able to conclusively determine which mesa contained the p-n junction, revealing the polarity of the intrinsic layer. This method, when demonstrated on GaSb p-i-n and n-i-p structures, determined that the material is residually doped p-type, which is well established by other sources. The method was then applied on a 10 monolayer InAs/10 monolayer AlSb superlattice, for which the doping polarity was unknown, and indicated that this material is also p-type.
Neuromorphic computing captures the quintessential neural behaviors of the brain and is a promising candidate for the beyond-von Neumann computer architectures, featuring low power consumption and high parallelism. The neuronal lateral inhibition feature, closely associated with the biological receptive field, is crucial to neuronal competition in the nervous system as well as its neuromorphic hardware counterpart. The domain wall - magnetic tunnel junction (DW-MTJ) neuron is an emerging spintronic artificial neuron device exhibiting intrinsic lateral inhibition. This work discusses lateral inhibition mechanism of the DW-MTJ neuron and shows by micromagnetic simulation that lateral inhibition is efficiently enhanced by the Dzyaloshinskii-Moriya interaction (DMI).
The multi-institution Single-Volume Scatter Camera (SVSC) collaboration led by Sandia National Laboratories (SNL) is developing a compact, high-efficiency double-scatter neutron imaging system. Kinematic emission imaging of fission-energy neutrons can be used to detect, locate, and spatially characterize special nuclear material. Neutron-scatter cameras, analogous to Compton imagers for gamma ray detection, have a wide field of view, good event-by-event angular resolution, and spectral sensitivity. Existing systems, however, suffer from large size and/or poor efficiency. We are developing high-efficiency scatter cameras with small form factors by detecting both neutron scatters in a compact active volume. This effort requires development and characterization of individual system components, namely fast organic scintillators, photodetectors, electronics, and reconstruction algorithms. In this presentation, we will focus on characterization measurements of several SVSC candidate scintillators. The SVSC collaboration is investigating two system concepts: the monolithic design in which isotropically emitted photons are detected on the sides of the volume, and the optically segmented design in which scintillation light is channeled along scintillator bars to segmented photodetector readout. For each of these approaches, we will describe the construction and performance of prototype systems. We will conclude by summarizing lessons learned, comparing and contrasting the two system designs, and outlining plans for the next iteration of prototype design and construction.
Sinuous antennas are capable of producing ultra-wideband radiation with polarization diversity. This capability makes the sinuous antenna an attractive candidate for UWB polarimetric radar applications. Additionally, the ability of the sinuous antenna to be implemented as a planar structure makes it a good fit for close-in sensing applications such as ground-penetrating radar (GPR). In this work, each arm of a four-port sinuous antenna is operated independently to achieve a quasi-monostatic antenna system capable of polarimetry while separating transmit and receive channels, which is often desirable in GPR systems. The quasi-monostatic configuration of the sinuous antenna reduces system size and prevents extreme bistatic angles, which may significantly reduce sensitivity when attempting to detect near-surface targets. A prototype four-port sinuous antenna is fabricated and integrated into a GPR testbed. The polarimetric data obtained with the antenna is then used to distinguish between buried target symmetries.
We propose a technique for reconstruction from incomplete compressive measurements. Our approach combines compressive sensing and matrix completion using the consensus equilibrium framework. Consensus equilibrium breaks the reconstruction problem into subproblems to solve for the high-dimensional tensor. This framework allows us to apply two constraints on the statistical inversion problem. First, matrix completion enforces a low rank constraint on the compressed data. Second, the compressed tensor should be consistent with the uncompressed tensor when it is projected onto the low-dimensional subspace. We validate our method on the Indian Pines hyperspectral dataset with varying amounts of missing data. This work opens up new possibilities for data reduction, compression, and reconstruction.
Placing microwave absorbing materials into a high-quality factor resonant cavity may in general reduce the large interior electromagnetic fields excited under external illumination. In this paper, we aim to combine two analytical models we previously developed: 1) an unmatched formulation for frequencies below the slot resonance to model shielding effectiveness versus frequency; and 2) a perturbation model approach to estimate the quality factor of cavities in the presence of absorbers. The resulting model realizes a toolkit with which design guidelines of the absorber’s properties and location can be optimized over a frequency band. Analytic predictions of shielding effectiveness for three transverse magnetic modes for various locations of the absorber placed on the inside cavity wall show good agreement with both full-wave simulations and experiments, and validate the proposed model. This analysis opens new avenues for specialized ways to mitigate harmful fields within cavities.
The attached eddy hypothesis of Townsend (The Structure of Turbulent Shear Flow, 1956, Cambridge University Press) states that the logarithmic mean velocity admits self-similar energy-containing eddies which scale with the distance from the wall. Over the past decade, there has been a significant amount of evidence supporting the hypothesis, placing it to be the central platform for the statistical description of the general organisation of coherent structures in wall-bounded turbulent shear flows. Nevertheless, the most fundamental question, namely why the hypothesis has to be true, has remained unanswered over many decades. Under the assumption that the integral length scale is proportional to the distance from the wall y, in the present study we analytically demonstrate that the mean velocity is a logarithmic function of y if and only if the energy balance at the integral length scale is self-similar with respect to y, providing a theoretical basis for the attached eddy hypothesis. The analysis is subsequently verified with the data from a direct numerical simulation of incompressible channel flow at the friction Reynolds number Reτ ≃ 5200 (Lee & Moser, J. Fluid Mech., vol. 774, 2015, pp. 395-415).
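The flavor of the argument can be sketched as follows (schematic only): if the energy balance at the integral scale $l(y) \sim \kappa y$ is self-similar, the dissipation there scales as $\varepsilon \sim u_\tau^3/(\kappa y)$, and balancing it against the production $\approx u_\tau^2\,\mathrm{d}U/\mathrm{d}y$ gives

    \[ \frac{\mathrm{d}U}{\mathrm{d}y} = \frac{u_\tau}{\kappa y}
       \quad\Longrightarrow\quad
       U(y) = \frac{u_\tau}{\kappa}\,\ln y + C , \]

i.e. the logarithmic mean-velocity profile; the paper establishes the equivalence rigorously in both directions.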
Machine learning implements backpropagation via abundant training samples. We demonstrate a multi-stage learning system realized by a promising non-volatile memory device, the domain-wall magnetic tunnel junction (DW-MTJ). The system consists of unsupervised (clustering) as well as supervised sub-systems, and generalizes quickly (with few samples). We demonstrate interactions between physical properties of this device and optimal implementation of neuroscience-inspired plasticity learning rules, and highlight performance on a suite of tasks. Our energy analysis confirms the value of the approach, as the learning budget stays below 20µJ even for large tasks used typically in machine learning.
Following the ASME codes, the design of pipelines and pressure vessels for transportation or storage of high-pressure hydrogen gas requires measurements of fatigue crack growth rates at design pressure. However, performing tests in high-pressure hydrogen gas can be very costly, as only a few laboratories have the unique capabilities. Recently, Code Case 2938 was accepted in ASME Boiler and Pressure Vessel Code (BPVC) VIII-3, allowing design curves to be used in lieu of performing fatigue crack growth rate (da/dN vs. ΔK) and fracture threshold (KIH) testing in hydrogen gas. The design curves were based on data generated at 100 MPa H2 on SA-372 and SA-723 grade steels; however, the data used to generate the design curves are limited to measurements at ΔK values greater than 6 MPa·m^1/2. The design curves can be extrapolated to lower ΔK (<6 MPa·m^1/2), but the threshold stress intensity factor (ΔKth) has not been measured in hydrogen gas. In this work, decreasing-ΔK tests were performed at select hydrogen pressures to explore the threshold (ΔKth) for ferritic-based structural steels (e.g., pipelines and pressure vessels). The results were compared to decreasing-ΔK tests in air, showing that the fatigue crack growth rates in hydrogen gas appear to yield similar or even slightly lower da/dN values compared to the curves in air at low ΔK values when tests were performed at stress ratios of 0.5 and 0.7. Correction for crack closure was implemented, which resulted in better agreement with the design curves and provided an upper bound throughout the entire ΔK range, even as the crack growth rates approach ΔKth. This work gives further evidence of the utility of the design curves described in Code Case 2938 of the ASME BPVC VIII-3 for the construction of high-pressure hydrogen vessels.
3D Particle-In-Cell Direct Simulation Monte Carlo (PIC-DSMC) simulations of cm-sized devices cannot resolve atomic-scale (nm) surface features and thus one must generate micron-scale models for an effective “local” work function, field enhancement factor, and emission area. Here we report on development of a stochastic effective model based on atomic-scale characterization of as-built electrode surfaces. Representative probability density distributions of the work function and geometric field enhancement factor (beta) for a sputter-deposited Pt surface are generated from atomic-scale surface characterization using Scanning Tunneling Microscopy (STM), Atomic Force Microscopy (AFM), and Photoemission Electron Microscopy (PEEM). In the micron-scale model every simulated PIC-DSMC surface element draws work functions and betas for many independent “atomic emitters”. During the simulation the field emitted current from an element is computed by summing each “atomic emitter's” current. This model has reasonable agreement with measured micron-scale emitted currents across a range of electric field values.
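A schematic of the per-element emission calculation is shown below; the Fowler–Nordheim prefactors are the standard textbook constants, and the work-function and beta distributions are invented placeholders rather than the measured Pt statistics.

    # Stochastic "atomic emitter" model sketch: each surface element draws many
    # (work function, beta) pairs and sums elementary Fowler-Nordheim currents.
    import numpy as np

    A_FN, B_FN = 1.54e-6, 6.83e9          # FN constants (A eV V^-2, eV^-3/2 V m^-1)
    E_applied = 5.0e9                     # applied macroscopic field (V/m), assumed
    n_emitters = 10000                    # "atomic emitters" per surface element
    area_per_emitter = 1.0e-19            # assumed emission area per emitter (m^2)

    rng = np.random.default_rng(0)
    phi = rng.normal(5.6, 0.2, n_emitters)                         # work-function samples (eV)
    beta = rng.lognormal(mean=0.5, sigma=0.4, size=n_emitters)     # geometric field enhancement

    E_local = beta * E_applied
    J = (A_FN * E_local ** 2 / phi) * np.exp(-B_FN * phi ** 1.5 / E_local)   # A/m^2 per emitter
    print(np.sum(J * area_per_emitter))   # total field-emitted current from this element (A)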
Bentz, Brian Z.; Lin, Dergan; Patel, Justin A.; Webb, Kevin J.
A super-resolution optical imaging method is presented that relies on the distinct temporal information associated with each fluorescent optical reporter to determine its spatial position to high precision with measurements of heavily scattered light. This multiple-emitter localization approach uses a diffusion equation forward model in a cost function, and has the potential to achieve micron-scale spatial resolution through centimeters of tissue. Utilizing some degree of temporal separation for the reporter emissions, position and emission strength are determined using a computationally efficient temporal-scanning multiresolution algorithm. The approach circumvents the spatial resolution challenges faced by earlier optical imaging approaches by using a diffusion equation forward model, and is promising for in vivo applications. For example, in principle, the method could be used to localize individual neurons firing throughout a rodent brain, enabling the direct imaging of neural network activity.
The Box Assembly with Removable Component (BARC) structure was developed as a challenge problem for those investigating boundary conditions and their effect on structural dynamic tests. To investigate the effects of boundary conditions on the dynamic response of the Removable Component, it was tested in three configurations, each with a different fixture and thus a different boundary condition. A “truth” configuration test with the component attached to its next-level assembly (the Box) was first performed to provide data that multi-axis tests of the component would aim to replicate. The following two tests aimed to reproduce the component responses of the first test through multi-axis testing. The first of these tests is a more “traditional” vibration test with the removable component attached to a “rigid” plate fixture. A second set of these tests replaces the fixture plate with flexible fixtures designed using topology optimization and created using additive manufacturing. These two test approaches are compared back to the truth test to determine how much improvement can be obtained in a laboratory test by using a fixture that is more representative of the compliance of the component’s assembly.
Historically, the qualification process for vehicles carrying vulnerable components has centered around the Shock Response Spectrum (SRS), and qualification consisted of devising a collection of tests whose collective SRS enveloped the qualification SRS. This involves selecting whatever tests are convenient that will envelope the qualification SRS over at least part of its spectrum; this selection is made without any consideration of the details of structural response or the nature of anticipated failure of its components. It is asserted that this approach often leads to over-testing; however, as has been pointed out several times in the literature, it may not even be conservative. Given the advances in computational and experimental technology in the last several decades, it would be appropriate to seek some strategy of test selection that does account for structural response and failure mechanism and that pushes against the vulnerabilities of that specific structure. A strategy for such a zemblanic approach (zemblanity, the opposite of serendipity, is the faculty of making unhappy, unlucky, and expected discoveries by design) is presented.
In the past year, resonant plate tests designed to excite all three axes simultaneously have become increasingly popular at Sandia National Labs. Historically, only one axis was tested at a time, but unintended off-axis responses were generated. To control the off-axis motion so that the off-axis responses satisfy the appropriate test specifications, the test setup has to be iteratively modified until the desired coupling between axes is achieved. These iterative modifications were done with modeling and simulation. To model the resonant plate test, an accurate forcing function must be specified; for resonant plate shock experiments, however, the input force of the projectile impacting the plate is prohibitively difficult to measure in situ. To improve on current simulation results, a method to use contact forces from an explicit simulation as an input load was implemented. This work covers an overview and background of three-axis resonant plate shock tests, their design, their value in experiments, and the difficulties faced in simulating them. The work also summarizes the contact force implementation in an explicit dynamics code and how it is used to evaluate an input force for a three-axis resonant plate simulation. The results from the work show 3D finite element projectile and impact block interactions as well as simulated shock response data compared to experimental shock response data.
In this study, we present Balancing Domain Decomposition by Constraints (BDDC) preconditioners for three-dimensional scalar elliptic and linear elasticity problems in which the direct solution of the coarse problem is replaced by a preconditioner based on a smaller vertex-based coarse space.
This article describes a parallel implementation, within the Trilinos framework (cf. [16]), of a two-level overlapping Schwarz preconditioner with the GDSW (Generalized Dryja–Smith–Widlund) coarse space described in previous work [12, 10, 15]. The software is a significant improvement over a previous implementation [12]; see Sec. 4 for results on the improved performance.
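For context, the two-level additive form of such a preconditioner can be sketched as follows (the standard construction, not the specific software design discussed in this article), with R_i the restrictions to the overlapping subdomains, K_i the corresponding local matrices, and Phi the matrix of energy-minimizing GDSW coarse basis functions:

M_{\mathrm{GDSW}}^{-1} \;=\; \Phi\, K_0^{-1}\, \Phi^{T} \;+\; \sum_{i=1}^{N} R_i^{T} K_i^{-1} R_i, \qquad K_0 = \Phi^{T} K \Phi .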
Miner, T.; Dalton, D.; Romero, D.; Heine, M.; Todd, S.
Detonation velocity as a function of charge diameter is reported for Alliant Bullseye powder. Results are compared to those of mixtures of ammonium nitrate with aluminum and of ammonium nitrate with fuel oil. Additionally, measurements of the free surface velocity of flyers in contact with detonating Bullseye are presented, and the results are compared to hydrocode calculations using a Jones–Wilkins–Lee equation of state generated with a thermochemical code. Comparison to the experimental results shows that both the free surface and terminal velocities were under-predicted.
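For reference, the Jones–Wilkins–Lee equation of state takes the standard form below, where V is the relative volume, E the internal energy per unit initial volume, and A, B, R_1, R_2, and omega are fit parameters; the calibrated Bullseye parameter set is not reproduced here:

p(V,E) \;=\; A\left(1-\frac{\omega}{R_1 V}\right)e^{-R_1 V} \;+\; B\left(1-\frac{\omega}{R_2 V}\right)e^{-R_2 V} \;+\; \frac{\omega E}{V} .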
Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018
Schiavazzi, Daniele E.; Fleeter, Casey M.; Geraci, Gianluca G.; Marsden, Alison L.
Predictions from numerical hemodynamics are increasingly adopted and trusted in the diagnosis and treatment of cardiovascular disease. However, the predictive abilities of deterministic numerical models are limited due to the large number of possible sources of uncertainty including boundary conditions, vessel wall material properties, and patient specific model anatomy. Stochastic approaches have been proposed as a possible improvement, but are penalized by the large computational cost associated with repeated solutions of the underlying deterministic model. We propose a stochastic framework which leverages three cardiovascular model fidelities, i.e., three-, one- and zero-dimensional representations of cardiovascular blood flow. Specifically, we employ multilevel and multifidelity estimators from Sandia's open-source Dakota toolkit to reduce the variance in our estimated quantities of interest, while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for both global and local hemodynamic indicators.
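As a sketch of the underlying idea (generic notation, not the specific estimators configured in Dakota for this study), a multilevel estimator of a quantity of interest Q exploits a telescoping sum over the model levels l = 0, ..., L, so that most samples are drawn on the cheap low-fidelity levels and only a few expensive high-fidelity corrections are required:

\hat{Q}^{\mathrm{ML}} \;=\; \sum_{\ell=0}^{L} \frac{1}{N_\ell}\sum_{i=1}^{N_\ell}\left(Q_\ell^{(i)} - Q_{\ell-1}^{(i)}\right), \qquad Q_{-1}\equiv 0 .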
Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018
SNOWPAC (Stochastic Nonlinear Optimization With Path-Augmented Constraints) is a method for stochastic nonlinear constrained derivative-free optimization. For such problems, it extends the path-augmented constraints framework introduced by the deterministic optimization method NOWPAC and uses a noise-adapted trust region approach and Gaussian processes for noise reduction. SNOWPAC is now available in the DAKOTA framework, which offers a highly flexible interface to couple the optimizer with different sampling strategies or surrogate models. In this paper we discuss details of SNOWPAC and demonstrate the coupling with DAKOTA. We showcase the approach by presenting design optimization results for a shape in a 2D supersonic duct. This simulation is intended to imitate the behavior of the flow in a SCRAMJET simulation, but at a much lower computational cost; additionally, different mesh or model fidelities can be tested. Thus, it serves as a convenient test case before moving to costly SCRAMJET computations. Here, we study deterministic results and results obtained by introducing uncertainty on the inflow parameters. As sampling strategies we compare classical Monte Carlo sampling with multilevel Monte Carlo approaches, for which we developed new error estimators. All approaches show a reasonable optimization of the design over the objective while maintaining or seeking feasibility. Furthermore, we achieve significant reductions in computational cost by using multilevel approaches that combine solutions from different grid resolutions.
This study explores the use of a Krylov iterative method (GMRES) as a smoother for an algebraic multigrid (AMG) preconditioned Newton–Krylov iterative solution approach for a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, and the performance of the smoothers plays an essential role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. This brief study presents three time-dependent resistive MHD test cases to evaluate the method. The results demonstrate that the GMRES smoother can be faster, due to a decrease in the preconditioner setup time and a reduction in outer GMRESR solver iterations, and requires less memory (typically 35% less for a global GMRES smoother) than the DD ILU smoother.
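The toy two-grid cycle below is meant only to illustrate the idea of using a few fixed GMRES iterations as a smoother around a coarse-grid correction; the 1D Poisson operator, aggregation prolongator, and iteration counts are illustrative assumptions and are unrelated to the VMS resistive MHD solver stack evaluated in the study.

```python
# Toy two-grid cycle with a GMRES smoother (illustrative 1D Poisson problem).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, splu

n = 256
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

# Piecewise-constant aggregation prolongator: pairs of fine nodes -> one coarse node.
P = sp.kron(sp.eye(n // 2), np.ones((2, 1)), format="csc")
coarse_solve = splu((P.T @ A @ P).tocsc())

def gmres_smooth(b, x, iters=2):
    # A few GMRES steps on the residual equation act as the smoother.
    dx, _ = gmres(A, b - A @ x, restart=iters, maxiter=1)
    return x + dx

def two_grid_cycle(b, x):
    x = gmres_smooth(b, x)                      # pre-smoothing
    r = b - A @ x
    x = x + P @ coarse_solve.solve(P.T @ r)     # coarse-grid correction
    return gmres_smooth(b, x)                   # post-smoothing

b = np.ones(n)
x = np.zeros(n)
for _ in range(20):
    x = two_grid_cycle(b, x)
```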
Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018
As the mean time between failures on the future high-performance computing platforms is expected to decrease to just a few minutes, the development of “smart”, property-preserving checkpointing schemes becomes imperative to avoid dramatic decreases in application utilization. In this paper we formulate a generic optimization-based approach for fault-tolerant computations, which separates property preservation from the compression and recovery stages of the checkpointing processes. We then specialize the approach to obtain a fault recovery procedure for a model scalar transport equation, which preserves local solution bounds and total mass. Numerical examples showing solution recovery from a corrupted application state for three different failure modes illustrate the potential of the approach.
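A minimal sketch of such a property-preserving recovery step, in generic notation (u-tilde the decompressed, possibly corrupted state, m_i lumped mass weights, M the stored total mass, and u_i^min, u_i^max the stored local bounds), is the constrained least-squares problem

\min_{u}\ \tfrac{1}{2}\,\|u-\tilde{u}\|_2^2 \quad \text{subject to} \quad u_i^{\min}\le u_i\le u_i^{\max}\ \ \forall i, \qquad \sum_i m_i\, u_i = M ,

which returns the state closest to the recovered data that still satisfies the local solution bounds and conserves total mass.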
Proceedings of the 6th European Conference on Computational Mechanics: Solids, Structures and Coupled Problems, ECCM 2018 and 7th European Conference on Computational Fluid Dynamics, ECFD 2018
Wind energy is stochastic in nature; the prediction of aerodynamic quantities and loads relevant to wind energy applications involves modeling the interaction of a range of physics over many scales for many different cases. These predictions require a range of model fidelity, as predictive models that include the interaction of atmospheric and wind turbine wake physics can take weeks to solve on institutional high performance computing systems. In order to quantify the uncertainty in predictions of wind energy quantities with multiple models, researchers at Sandia National Laboratories have applied Multilevel-Multifidelity methods. A demonstration study was completed using simulations of an NREL 5MW rotor in an atmospheric boundary layer with wake interaction. The flow was simulated with two models of disparate fidelity: an actuator-line wind plant large-eddy simulation model, Nalu, using several mesh resolutions, in combination with a lower-fidelity model, OpenFAST. Uncertainties in the flow conditions and actuator forces were propagated through the models using Monte Carlo sampling to estimate the velocity defect in the wake and the forces on the rotor. Coarse-mesh simulations were leveraged along with the lower-fidelity flow model to reduce the variance of the estimator, and the resulting Multilevel-Multifidelity strategy demonstrated a substantial improvement in estimator efficiency compared to the standard Monte Carlo method.
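Viewed generically (notation assumed here, not the exact estimator configuration of the study), the multifidelity component acts as a control-variate correction to the high-fidelity sample mean,

\hat{Q}^{\mathrm{MF}} \;=\; \bar{Q}^{\mathrm{HF}} \;+\; \alpha\left(\hat{\mu}^{\mathrm{LF}} - \bar{Q}^{\mathrm{LF}}\right), \qquad \alpha = \rho\,\frac{\sigma_{\mathrm{HF}}}{\sigma_{\mathrm{LF}}} ,

where the low-fidelity mean Q-bar^LF is computed on the samples shared with the high-fidelity model, mu-hat^LF on a much larger low-fidelity-only sample set, and rho is the correlation between the two models' outputs.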
Stereolithography (SL) is a process that uses photosensitive polymer solutions to create 3D parts in a layer-by-layer approach. Sandia National Labs is interested in using SL for the printing of ceramic-loaded resins, namely alumina, that we are formulating here at the labs. One of the most important aspects of SL printing of ceramics is the properties of the slurry itself. The work presented here focuses on the use of a novel, commercially available low-viscosity resin provided by Colorado Photopolymer Solutions, CPS 2030, and a Hypermer KD1 dispersant from Croda. Two commercially available alumina powders, Almatis A16 SG and Almatis A15 SG, are compared using rheology to determine the effects that the size and size distribution of the powder have on the loading of the solution. The choice of a low-viscosity resin allows for a high particle loading, which is necessary for printing high-density parts on a commercial SL printer. The Krieger-Dougherty equation was used to evaluate the maximum particle loading for the system. This study found that a bimodal distribution of micron-sized powder (A15 SG) reduced the shear-thickening effects caused by hydroclusters and allowed for the highest alumina powder loading. A final sintered density of 90% of the theoretical density of alumina was achieved based on the optimized formulation and printing conditions.
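For reference, the Krieger–Dougherty relation expresses the relative viscosity of a suspension in terms of the particle volume fraction phi, the maximum packing fraction phi_m, and the intrinsic viscosity [eta]:

\frac{\eta}{\eta_s} \;=\; \left(1-\frac{\phi}{\phi_m}\right)^{-[\eta]\,\phi_m} ,

where eta_s is the viscosity of the suspending resin; this is the standard form of the equation, not the specific fit parameters obtained in the study.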
Lecture Notes in Computational Science and Engineering
Maljaars, Jakob M.; Labeur, Robert J.; Trask, Nathaniel A.; Sulsky, Deborah L.
A particle-mesh strategy is presented for scalar transport problems which provides diffusion-free advection, conserves mass locally (i.e., cellwise), and exhibits optimal convergence on arbitrary polyhedral meshes. This is achieved by expressing the convective field, naturally located on the Lagrangian particles, as a mesh quantity through a dedicated particle-mesh projection formulated as a PDE-constrained optimization problem. Optimal convergence and local conservation are demonstrated for a benchmark test, and the application of the scheme to mass-conservative density tracking is illustrated for the Rayleigh–Taylor instability.
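Schematically, and in generic notation rather than the paper's exact formulation, the projection can be sketched as a constrained least-squares problem that finds the mesh field u_h closest to the particle data u_p while enforcing cellwise conservation through a numerical flux F-hat on each cell K:

\min_{u_h}\ \frac{1}{2}\sum_{p}\big(u_h(\mathbf{x}_p,t)-u_p(t)\big)^2 \quad \text{subject to} \quad \frac{\mathrm{d}}{\mathrm{d}t}\int_K u_h\,\mathrm{d}x + \int_{\partial K}\hat{F}\cdot\mathbf{n}\,\mathrm{d}s = 0 \ \ \forall K .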