As CMOS technology approaches the end of its scaling, oxide-based memristors have become one of the leading candidates for post-CMOS memory and logic devices. To facilitate understanding of the physical switching mechanisms and accelerate experimental development of memristors, we have developed a three-dimensional, fully coupled electrical and thermal transport model that captures the important processes driving memristive switching and is applicable to a wide range of memristors. The model is applied to simulate RESET and SET switching in a 3D filamentary TaOx memristor. Extensive simulations show that the switching dynamics of the bipolar device are determined by thermally activated, field-dominant processes: Joule heating raises the temperature and enables the movement of oxygen vacancies, while field drift dominates the overall vacancy motion. Simulated current-voltage hysteresis and device resistance profiles as a function of time and voltage during RESET and SET switching show good agreement with experimental measurements.
Electrical conductivity is key to the performance of thermal battery cathodes. In this work we present the effects of manufacturing and processing conditions on the electrical conductivity of Li/FeS2 thermal battery cathodes. We use finite element simulations to compute the conductivity of three-dimensional microcomputed-tomography cathode microstructures and compare the results to experimental impedance spectroscopy measurements. A regression analysis reveals a predictive relationship between composition, processing conditions, and electrical conductivity, a trend that is largely erased after thermally induced deformation. The trend holds for both the experimental and simulation results, although it is less apparent in the simulations. This research is a step toward a more fundamental understanding of the effects of processing and composition on thermal battery component microstructure, properties, and performance.
Synthetic aperture radar (SAR) images contain a grainy pattern, called speckle, that is a consequence of a coherent imaging system. In fine-resolution SAR images, speckle can obscure subtle features and reduce visual appeal. Many speckle reduction methods sacrifice image resolution, which itself obscures subtle features and degrades visual appeal. An alternative approach that maintains resolution while reducing speckle is to register and combine multiple images. For persistent surveillance applications it is more efficient for an airborne platform to fly circles around the area of interest. In these cases it would be beneficial to combine multiple circle-mode SAR images; however, the registration process is not straightforward because the layover angle changes in each image. This paper develops a SAR image registration process for combining multiple circle-mode SAR images to reduce speckle while preserving resolution. The registration first uses a feature-matching algorithm for a coarse rotation and alignment, and then applies a fine registration and warp. Ku-band SAR data from a circle-mode collection are used to show the effectiveness of the registration and the enhanced visual appeal from multi-looking.
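The coarse-alignment step above can be illustrated with a minimal sketch. The paper's pipeline uses feature matching; as a stand-in, this toy example estimates an integer pixel shift by brute-force cross-correlation over small list-of-lists "images". All function names and the search range are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: coarse registration by exhaustive integer-shift
# cross-correlation, standing in for a feature-matching coarse alignment.

def cross_corr(a, b):
    """Sum of elementwise products of two equal-size 2D grids."""
    return sum(x * y for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def shifted(img, dy, dx):
    """Shift a 2D list-of-lists by (dy, dx), zero-filling exposed cells."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = img[sy][sx]
    return out

def best_shift(ref, mov, max_shift=3):
    """Return the (dy, dx) that best aligns `mov` to `ref` by brute force."""
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = cross_corr(ref, shifted(mov, dy, dx))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

A real registration would follow this with the fine warp; here the brute-force search simply recovers the translation that maximizes overlap.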
We present a detailed set of measurements from a piloted, sooting, turbulent C2H4-fueled jet flame. Hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (CARS) is used to monitor temperature and oxygen, while laser-induced incandescence (LII) is applied for imaging of the soot volume fraction in the challenging jet-flame environment at Reynolds number, Re = 20,000. A new dual-detection channel CARS instrument provides the enhanced dynamic range required in this highly intermittent and turbulent environment. LII measurements are made across a wide field of view requiring us to account for spatial variation in the soot-volume-fraction response of the instrument. Single-laser-shot results are used to illustrate the mean and rms statistics, as well as probability densities of all three measured quantities. LII data from the soot-growth region of the jet are used to benchmark the soot source term for one-dimensional turbulence (ODT) modeling of this turbulent flame. The ODT code is then used to predict temperature, oxygen and soot fluctuations within the soot oxidation region higher in the flame.
In this paper we report on a transmission-line model for calculating the shielding effectiveness of multiple-shield cables with arbitrary terminations. Since the shields are not perfect conductors, and apertures in the shields permit external magnetic and electric fields to penetrate into the interior regions of the cable, we use this model to estimate the effects of the outer-shield current and voltage (driven by the external excitation and the boundary conditions of the external conductor) on the inner-conductor current and voltage. It is commonly believed that increasing the number of shields of a cable will improve the shielding performance. However, this is not always the case, and a cable with multiple shields may perform similarly to, or in some cases worse than, a cable with a single shield. Shedding light on these situations is the main focus of this paper.
ASME 2016 10th International Conference on Energy Sustainability, ES 2016, collocated with the ASME 2016 Power Conference and the ASME 2016 14th International Conference on Fuel Cell Science, Engineering and Technology
Flux distributions from solar field collectors are typically evaluated using a beam characterization system, which consists of a digital camera with neutral density filters, a flux gauge or calorimeter, and a water-cooled Lambertian target panel. The pixels in the camera image of the flux distribution are scaled by the peak flux value measured with the flux gauge or the total power value measured with the calorimeter. An alternative method developed at Sandia National Laboratories, called PHLUX, can serve the same purpose using a digital camera without auxiliary instrumentation. The only additional information required besides the digital images recorded with the camera is the direct normal irradiance, an image of the sun taken with the same camera, and the reflectivity of the receiver or target panel surface. The PHLUX method was evaluated using two digital cameras (Nikon D90 and D3300) at different flux levels on a target panel. The performances of the two cameras were compared to each other and to measurements from a Kendall radiometer. For consistency, the same focal-length lenses and the same number of neutral density filters were used on both cameras. Other camera settings (e.g., shutter speed, f-stop) were set based on the aperture size and performance of each camera. The Nikon D3300 has twice as many pixels as the D90 and provided higher resolution; however, its smaller pixels made the images noisier, while the D90, with larger pixels, responded better to low light levels. The noise in the D3300, if not corrected, could result in gross overestimation of the irradiance calculations. After corrections to the D3300 flux images, the PHLUX results from the two cameras agreed to within 8% for a peak flux level of 1000 suns on the target, and the peak flux differed from the Kendall radiometer measurement by less than 10%.
Channeled spectropolarimeters (CSP) measure the polarization state of light as a function of wavelength. Conventional Fourier reconstruction suffers from noise, assumes the channels are band-limited, and requires uniformly spaced samples. To address these problems, we propose an iterative reconstruction algorithm. We develop a mathematical model of CSP measurements and minimize a cost function based on this model. We simulate a measured spectrum using example Stokes parameters, from which we compare conventional Fourier reconstruction and iterative reconstruction. Importantly, our iterative approach can reconstruct signals that contain more bandwidth, an advancement over Fourier reconstruction. Our results also show that iterative reconstruction mitigates noise effects, processes non-uniformly spaced samples without interpolation, and more faithfully recovers the ground truth Stokes parameters. This work offers a significant improvement to Fourier reconstruction for channeled spectropolarimetry.
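The core idea above, replacing direct Fourier inversion with minimization of a model-based cost, can be sketched on a toy linear measurement model y = A x. The matrix, step size, and iteration count below are illustrative assumptions; the actual CSP forward model and cost function are far richer than this least-squares example.

```python
# Hedged sketch: iterative reconstruction as gradient descent on the
# model-fit cost 0.5*||A x - y||^2, with gradient A^T (A x - y).

def matvec(A, x):
    """Matrix-vector product for a list-of-lists matrix."""
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def matTvec(A, r):
    """Transpose matrix-vector product A^T r."""
    n = len(A[0])
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]

def reconstruct(A, y, steps=500, lr=0.1):
    """Minimize the residual norm by fixed-step gradient descent."""
    x = [0.0] * len(A[0])
    for _ in range(steps):
        r = [yh - yi for yh, yi in zip(matvec(A, x), y)]
        g = matTvec(A, r)
        x = [xj - lr * gj for xj, gj in zip(x, g)]
    return x
```

Unlike a Fourier inversion, nothing here requires uniformly spaced samples: the sampling geometry is absorbed into the rows of A.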
ASME 2016 Heat Transfer Summer Conference, HT 2016, collocated with the ASME 2016 Fluids Engineering Division Summer Meeting and the ASME 2016 14th International Conference on Nanochannels, Microchannels, and Minichannels
Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media are important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat transfer applications, a quasi-steady assumption is valid. The dependence on wavelength is often treated through a weighted-sum-of-gray-gases type approach. The discrete ordinates method is the most common method for approximating the angular dependence: the intensity is solved exactly for a finite number of discrete directions, and integrals over the angular space are computed with a quadrature rule. In this work, a projection-based model reduction approach is applied to the discrete ordinates method. A small number of ordinate directions is used to construct the reduced basis. The reduced model is then queried at the quadrature points of a high-order quadrature to inexpensively approximate the high-order solution. This yields a much more accurate solution than the low-order quadrature alone can achieve. One-, two-, and three-dimensional test problems are presented.
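The quadrature step above can be made concrete in slab geometry, where angular integrals over the direction cosine μ ∈ [−1, 1] are replaced by a weighted sum over discrete ordinates. The 4-point Gauss-Legendre nodes and weights below are standard; the intensity function in any real use would come from the RTE solve, not a closed-form expression.

```python
# Hedged illustration of the discrete-ordinates quadrature idea:
# integrate intensity(mu) over mu in [-1, 1] with a 4-point
# Gauss-Legendre rule (exact for polynomials up to degree 7).

MU = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
W  = [ 0.3478548451,  0.6521451549, 0.6521451549, 0.3478548451]

def scalar_flux(intensity):
    """Approximate the angular integral of intensity(mu) over [-1, 1]."""
    return sum(w * intensity(mu) for mu, w in zip(MU, W))
```

An isotropic intensity integrates to 2 (the measure of [−1, 1]), and even moments such as μ² are reproduced exactly by the rule, which is what makes low-order ordinate sets viable for constructing a reduced basis.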
The evolution of exquisitely sensitive Synthetic Aperture Radar (SAR) systems is positioning this technology for use in time-critical environments, such as search-and-rescue missions and improvised explosive device (IED) detection. SAR systems should be playing a keystone role in the United States' Intelligence, Surveillance, and Reconnaissance activities. Yet many in the SAR community see missed opportunities for incorporating SAR into existing remote sensing data collection and analysis challenges. Drawing on several years of field research with SAR engineering and operational teams, this paper examines the human and organizational factors that work against the adoption and use of SAR for tactical ISR and operational support. We suggest that SAR has a design problem, and that context-sensitive human and organizational design frameworks are required if the community is to realize SAR's tactical potential.
ASME 2016 10th International Conference on Energy Sustainability, ES 2016, collocated with the ASME 2016 Power Conference and the ASME 2016 14th International Conference on Fuel Cell Science, Engineering and Technology
The high-temperature particle - supercritical carbon dioxide (sCO2) Brayton power system is a promising option for concentrating solar power (CSP) plants to achieve SunShot metrics for high-temperature operation, efficiency, and cost. This system includes a falling particle receiver to collect solar thermal radiation, a dry-cooled sCO2 Brayton power block to produce electricity, and a particle-to-sCO2 heat exchanger coupling the two. While both falling particle receivers and sCO2 Brayton cycles have been demonstrated previously, a high-temperature, high-pressure particle/sCO2 heat exchanger has never before been demonstrated. Industry experience with similar heat exchangers is limited to lower pressures, lower temperatures, or alternative fluids such as steam. Sandia is partnering with three experienced heat exchanger manufacturers to develop several designs and down-select to a unit that achieves both high performance and low specific cost, retiring risks associated with a solar thermal particle/sCO2 power system. This paper describes plans for the construction of a particle/sCO2 heat exchanger testbed at Sandia operating above 700 °C and 20 MPa, with the ability to couple directly to a previously developed falling particle receiver for on-sun testing at the National Solar Thermal Test Facility (NSTTF).
We investigate a novel application of Fréchet derivatives for time-lapse mapping of deep, electrically-enhanced fracture systems with a borehole-to-surface DC resistivity array. The simulations are evaluated for a cased horizontal wellbore embedded in a homogeneous halfspace, where measurements are evaluated near, mid-range, and far from the well head. We show that, in all cases, measurements are sensitive to perturbations centered on the borehole axis and that the sensitivity volume decreases as a function of increased measurement offset from the well head. The sensitivity analysis also illustrates that careful consideration must be taken when developing an electrical survey design for these scenarios. Specifically, we show that positive perturbations in earth conductivity near the wellbore can manifest as both positive and negative measurement perturbations, depending on where the measurement is taken. Furthermore, we show that the transition between the regions along the wellbore of positive and negative contribution results in a "pinch point", representing a region along the wellbore where a given surface measurement is blind to any changes or enhancement of electrical conductivity.
We present a general technique for solving partial differential equations, called robust stencils, which makes the solvers tolerant to soft faults, i.e., bit flips arising in memory or in CPU calculations. We show how it can be applied to a two-dimensional Lax-Wendroff solver. The resulting 2D robust stencils are derived by an orthogonal application of their 1D counterparts, from which combinations of 3 to 5 base stencils can be created. We describe how these are implemented in a parallel advection solver. Various robust stencil combinations are explored, representing tradeoffs between performance and robustness. The results indicate that the 3-stencil robust combinations are slightly faster on large parallel workloads than Triple Modular Redundancy (TMR), with one third of the memory footprint, and we expect the improvement to grow if suitable optimizations are performed. Because faults are masked each time new points are computed, the proposed stencils are also comparable in robustness to TMR over a large range of error rates. The technique can be generalized to 3D (or higher dimensions) with similar benefits.
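For orientation, the TMR baseline against which the robust stencils are compared can be sketched on 1D linear advection: each Lax-Wendroff update is computed three times, and a per-point majority (median) vote masks a single corrupted copy. The periodic boundary and CFL number below are illustrative choices, and this is the voting baseline, not the paper's stencil-combination scheme.

```python
# Hedged sketch: Lax-Wendroff update for linear advection (periodic
# domain) plus a TMR-style per-point median vote over three copies.

def lax_wendroff_step(u, c):
    """One Lax-Wendroff update; c is the CFL number a*dt/dx."""
    n = len(u)
    return [
        u[i]
        - 0.5 * c * (u[(i + 1) % n] - u[i - 1])
        + 0.5 * c * c * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
        for i in range(n)
    ]

def tmr_step(u, c):
    """Triple modular redundancy: median-vote each point across copies."""
    a, b, d = (lax_wendroff_step(u, c) for _ in range(3))
    return [sorted(t)[1] for t in zip(a, b, d)]
```

With periodic boundaries the update conserves the sum of u exactly, which makes a convenient sanity check; a robust-stencil variant would instead vote among structurally different stencils rather than identical copies.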
Nanostructured metal thin films have been shown to have unique thermal, mechanical, and electrical properties when their internal structure can be maintained. However, this far-from-equilibrium structure has been shown in many cases to be unstable at elevated temperatures. This work investigates the roles of surface ledges, large nickel inclusions, electron beam exposure, and film thickness in the evolution of high-purity, pulsed-laser-deposited, free-standing nickel films via in situ transmission electron microscopy annealing. Grain growth appeared enhanced in a limited temperature range near surface ledges present in the film, but was not affected by large nickel inclusions. In addition, extended exposure to the electron beam resulted in abnormal grain growth, hypothesized to be a result of enhanced nickel oxide growth on the surfaces. Finally, increasing film thickness was observed to accelerate the onset of abnormal grain growth and to increase the size and number of larger grains. These observations caution that the initial and evolving surface state of thin films should be taken into consideration during any annealing study, as it may significantly impact the final crystalline structure.
Fracture and fragmentation are extremely nonlinear multiscale processes in which microscale damage mechanisms emerge at the macroscale as new fracture surfaces. Numerous numerical methods have been developed for simulating fracture initiation, propagation, and coalescence. Here, we present a computational approach for modeling pervasive fracture in quasi-brittle materials based on random close-packed Voronoi tessellations. Each Voronoi cell is formulated as a polyhedral finite element containing an arbitrary number of vertices and faces. Fracture surfaces are allowed to nucleate only at the intercell faces. Cohesive softening tractions are applied to new fracture surfaces in order to model the energy dissipated during fracture growth. The randomly seeded Voronoi cells provide a regularized discrete random network for representing fracture surfaces. The potential crack paths within the random network are viewed as instances of realizable crack paths within the continuum material. Mesh convergence of fracture simulations is viewed in a weak, or distributional, sense. The explicit facet representation of fractures within this approach is advantageous for modeling contact on new fracture surfaces and fluid flow within the evolving fracture network. Applications of interest include fracture and fragmentation in quasi-brittle materials and geomechanical applications such as hydraulic fracturing, engineered geothermal systems, compressed-air energy storage, and carbon sequestration.
This paper demonstrates that another class of three-dimensional integrated circuits (3D-ICs) exists, distinct from through silicon via centric and monolithic 3D-ICs. Furthermore, it is possible to create devices that are 3D at the device level (i.e. with active channels oriented in each of the three coordinate axes), by performing standard CMOS fabrication operations at an angle with respect to the wafer surface into high aspect ratio silicon substrates using membrane projection lithography (MPL). MPL requires only minimal fixturing changes to standard CMOS equipment, and no change to current state-of-the-art lithography. Eliminating the constraint of 2D planar device architecture enables a wide range of new interconnect topologies which could help reduce interconnect resistance/capacitance, and potentially improve performance.
A 4-color imaging pyrometer was developed to investigate the thermal behavior of laser-based metal processes, specifically laser welding and laser additive manufacturing of stainless steel. The new instrument, coined a 2x pyrometer, consists of four high-sensitivity silicon CMOS cameras configured as two independent 2-color pyrometers combined in a common hardware assembly. This coupling of pyrometers permitted low- and high-temperature regions to be targeted within the silicon response curve, thereby broadening the useable temperature range of the instrument. Also, by utilizing the high dynamic range features of the CMOS cameras, the response gap between the two wavelength bands can be bridged. Together these hardware and software enhancements are predicted to expand the real-time (60 fps) temperature response of the 2x pyrometer from 600 °C to 3500 °C. Initial results from a calibrated tungsten lamp confirm this increased response, thus making it attractive for measuring absolute temperatures of steel forming processes.
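The two-color principle underlying each pyrometer pair can be sketched as follows: under the Wien approximation, the ratio of intensities at two nearby wavelengths fixes the temperature independent of the (assumed equal) emissivity at the two bands. The wavelengths below are illustrative assumptions, not the instrument's actual bands.

```python
import math

# Hedged sketch of two-color (ratio) pyrometry under the Wien
# approximation I(lam, T) ~ lam^-5 * exp(-C2 / (lam * T)).

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam, T):
    """Wien-approximation spectral intensity (constant prefactor dropped)."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(r, lam1, lam2):
    """Invert the two-color ratio r = I(lam1)/I(lam2) for temperature."""
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (
        math.log(r) - 5.0 * math.log(lam2 / lam1)
    )
```

Because any wavelength-independent emissivity cancels in the ratio, the inversion round-trips exactly for a Wien-law source, which is the property that makes the 2-color configuration attractive.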
This paper presents the findings of using convolutional neural networks (CNNs) to classify human activity from micro-Doppler features. Emphasis is placed on activities involving potential security threats, such as holding a gun. An automotive 24 GHz radar-on-chip was used to collect the data, and a CNN (of the kind normally applied to image classification) was trained on the resulting spectrograms. The CNN achieves an error rate of 1.65% on classifying running vs. walking, 17.3% on armed walking vs. unarmed walking, and 22% on classifying six different actions.
This contribution is the second of three papers on adaptive multigrid methods for the eXtended Fluid-Structure Interaction (eXFSI) problem, in which we introduce a monolithic variational formulation and solution techniques. To the best of our knowledge, such a model is new in the literature. The model is used to design an on-line structural health monitoring (SHM) system that determines the coupled acoustic and elastic wave propagation in moving domains and optimum locations for SHM sensors. In a monolithic nonlinear fluid-structure interaction (FSI), the fluid and structure models are formulated in different coordinate systems, which makes a common variational description of the FSI setup difficult and challenging. This article presents the state of the art in the finite element approximation of FSI problems based on a monolithic variational formulation in the well-established arbitrary Lagrangian-Eulerian (ALE) framework. The research focuses on a newly developed mathematical model, referred to as the extended fluid-structure interaction (eXFSI) problem, in the ALE framework. The eXFSI is a strongly coupled problem: a typical FSI problem coupled with a wave propagation problem on the fluid-solid interface (WpFSI). The WpFSI is itself a strongly coupled problem of acoustic and elastic wave equations, where the wave propagation problems automatically adopt the boundary conditions from the FSI problem at each time step. The ALE approach provides a simple but powerful procedure to couple solid deformations with fluid flows via a monolithic solution algorithm; in such a setting, the fluid problem is transformed to a fixed reference configuration by the ALE mapping. The goals of this work are the development of concepts for the efficient numerical solution of the eXFSI problem, the analysis of various fluid-solid mesh motion techniques, and the comparison of different second-order time-stepping schemes.
This work investigates different time-stepping formulations for a nonlinear FSI problem coupling acoustic/elastic wave propagation on the fluid-structure interface. Temporal discretization is based on finite differences and is formulated as a one-step-θ scheme, from which we recover the following particular cases: the implicit Euler, Crank-Nicolson, shifted Crank-Nicolson, and fractional-step-θ schemes. The nonlinear problem is solved with a Newton-like method, and the spatial discretization is done with a Galerkin finite element scheme. The implementation is accomplished via the software library package DOPELIB, based on the deal.II finite element library, for the computation of different eXFSI configurations.
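How one parameter selects the named schemes can be illustrated on the scalar linear test problem u' = λu: θ = 1 gives implicit Euler and θ = 1/2 gives Crank-Nicolson. This is only a sketch of the one-step-θ family; the FSI discretization itself is a nonlinear finite element system, and the shifted variants perturb θ slightly away from 1/2.

```python
# Hedged sketch of the one-step-theta scheme for u' = lam*u:
# u_{n+1} = u_n + dt*(theta*lam*u_{n+1} + (1-theta)*lam*u_n),
# solved in closed form for u_{n+1} since the problem is linear.

def theta_step(u, lam, dt, theta):
    """One step of the one-step-theta scheme for the linear test problem."""
    return u * (1.0 + (1.0 - theta) * dt * lam) / (1.0 - theta * dt * lam)

def integrate(u0, lam, dt, steps, theta):
    """March `steps` steps from u0."""
    u = u0
    for _ in range(steps):
        u = theta_step(u, lam, dt, theta)
    return u
```

On u' = −u over one time unit, Crank-Nicolson (θ = 1/2) is second-order accurate and lands much closer to e⁻¹ than first-order implicit Euler (θ = 1) at the same step size.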
Fiber Bragg gratings (FBGs) are well-suited for embedded sensing of interfacial phenomena in materials systems, due to the sensitivity of their spectral response to locally non-uniform strain fields. Over the last 15 years, FBGs have been successfully employed to sense delamination at interfaces, with a clear emphasis on planar events induced by transverse cracks in fiber-reinforced plastic laminates. We have built upon this work by utilizing FBGs to detect circular delamination events at the interface between epoxy films and alumina substrates. Two different delamination processes are examined, based on stress relief induced by indentation of the epoxy film or by cooling to low temperature. We have characterized the spectral response pre- and post-delamination for both simple and chirped FBGs as a function of delamination size. We show that delamination is readily detected by the evolution of a non-uniform strain distribution along the fiber axis that persists after the stressing condition is removed. These residual strain distributions differ substantially between the delamination processes, with indentation and cooling producing predominantly tensile and compressive strain, respectively, that are well-captured by Gaussian profiles. More importantly, we observe a strong correlation between spectrally-derived measurements, such as spectral widths, and delamination size. Our results further highlight the unique capabilities of FBGs as diagnostic tools for sensing delamination in materials systems.
We present experimental and simulation results for a laboratory-based forward-scattering environment, where 1 μm diameter polystyrene spheres are suspended in water to model the optical scattering properties of fog. Circular polarization maintains its degree of polarization better than linear polarization as the optical thickness of the scattering environment increases. Both simulation and experiment quantify circular polarization's superior persistence, compared to that of linear polarization, and show that it is much less affected by variations in the field of view and collection area of the optical system. Our experimental environment's lateral extent was physically finite, causing a significant difference between measured and simulated degree of polarization values for incident linearly polarized light, but not for circularly polarized light. Through simulation we demonstrate that circular polarization is less susceptible to the finite environmental extent as well as the collection optic's limiting configuration.
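The quantities compared in the study, the overall, linear, and circular degrees of polarization, follow directly from the Stokes vector (S0, S1, S2, S3). The sketch below gives the generic definitions; the numerical values in any use are illustrative, not the paper's measured data.

```python
import math

# Hedged sketch: degrees of polarization from a Stokes vector
# s = (S0, S1, S2, S3), using the standard definitions.

def dop(s):
    """Total degree of polarization."""
    s0, s1, s2, s3 = s
    return math.sqrt(s1 ** 2 + s2 ** 2 + s3 ** 2) / s0

def dolp(s):
    """Degree of linear polarization."""
    s0, s1, s2, _ = s
    return math.sqrt(s1 ** 2 + s2 ** 2) / s0

def docp(s):
    """Degree of circular polarization."""
    s0, _, _, s3 = s
    return abs(s3) / s0
```

The persistence comparison in the paper amounts to tracking how dolp and docp decay with optical thickness for linearly and circularly polarized inputs, respectively.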
Proceedings of SPIE - The International Society for Optical Engineering
Klem, John F.; Lotfi, Hossein; Li, Lu; Lei, Lin; Ye, Hao; Rassel, Sm S.; Jiang, Yuchao; Yang, Rui Q.; Mishima, Tetsuya D.; Santos, Michael B.; Johnson, Matthew B.; Gupta, James A.
We investigate the high-temperature and high-frequency operation of interband cascade infrared photodetectors (ICIPs), two critical properties for practical applications. Short-wavelength ICIPs with a cutoff wavelength of 2.9 μm had a Johnson-noise-limited detectivity of 5.8×10⁹ cm·Hz^1/2/W at 300 K, comparable to commercial Hg₁₋ₓCdₓTe photodetectors of similar wavelengths. A simple but effective method to estimate the minority carrier diffusion length in short-wavelength ICIPs is introduced. Using this approach, the diffusion length was estimated to be significantly shorter than 1 μm at high temperatures, indicating the importance of multiple-stage photodetectors (e.g., ICIPs) at high temperatures. Recent investigations of the high-frequency operation of mid-wavelength ICIPs (λc = 4.3 μm) are discussed. These photodetectors had 3-dB bandwidths up to 1.3 GHz with detectivities exceeding 1×10⁹ cm·Hz^1/2/W at room temperature. These results validate the ability of ICIPs to achieve high bandwidths with large sensitivity and demonstrate their great potential for applications such as heterodyne detection and free-space optical communication.
The IRDFF cross section library provides the highest fidelity cross section characterization and is the recommended data library to be used for dosimetry in support of reactor pressure vessel surveillance programs. In order to support this critical application, quantified validation evidence is required for the cross section library. Results are reported here on the use of various advanced approaches to uncertainty quantification using metrics relevant to spectrum characterization applications. The use of a quantified least squares approach, combining a consistent treatment of uncertainty from the spectral characterizations, the dosimetry cross sections, and measured activation products, is identified as one of the most sensitive metrics by which to report validation evidence. Using this metric the status of the validation of the IRDFF library was investigated. This analysis began with a consideration of the best-characterized ²⁵²Cf spontaneous fission standard neutron benchmark field. Good validation evidence is found for 39 of the 79 IRDFF reactions. The ²³⁵U thermal fission reference neutron field was then investigated, and found to yield good validation evidence for an additional 10 of the IRDFF reactions. Extending the analysis further to include four different reactor-based reference neutron benchmark fields, ranging from fast burst reactors to well-moderated pool-type reactors, yielded good validation evidence for an additional 6 IRDFF reactions. In total, evidence is reported here for 55 of the 79 reactions in the IRDFF library.
This work examines the variability of predicted responses when multiple stress-strain curves (reflecting variability from replicate material tests) are propagated through a transient dynamics finite element model of a ductile steel can being slowly crushed. An elastic-plastic constitutive model is employed in the large-deformation simulations. The present work assigns the same material to all the can parts: lids, walls, and weld. Time histories of 18 response quantities of interest (including displacements, stresses, strains, and calculated measures of material damage) at several locations on the can and various points in time are monitored in the simulations. Each response quantity's behavior varies according to the particular stress-strain curves used for the materials in the model. We estimate response variability due to variability of the input material curves. When only a few stress-strain curves are available from material testing, response variance will usually be significantly underestimated. This is undesirable for many engineering purposes. This paper describes the can-crush model and simulations used to evaluate a simple classical statistical method, Tolerance Intervals (TIs), for effectively compensating for sparse stress-strain curve data in the can-crush problem. Using the simulation results presented here, the accuracy and reliability of the TI method are being evaluated on the highly nonlinear input-to-output response mappings and non-standard response distributions in the can-crush UQ problem.
The influence of compressibility on the shear layer over a rectangular cavity of variable width has been studied at a freestream Mach number range of 0.6 to 2.5 using particle image velocimetry data in the streamwise center plane. As the Mach number increases, the vertical component of the turbulence intensity diminishes modestly in the widest cavity, but the two narrower cavities show a more substantial drop in all three components as well as the turbulent shear stress. This contrasts with canonical free shear layers, which show significant reductions in only the vertical component and the turbulent shear stress due to compressibility. The vorticity thickness of the cavity shear layer grows rapidly as it initially develops, then transitions to a slower growth rate once its instability saturates. When normalized by their estimated incompressible values, the growth rates prior to saturation display the classic compressibility effect of suppression as the convective Mach number rises, in excellent agreement with comparable free shear layer data. The specific trend of the reduction in growth rate due to compressibility is modified by the cavity width.
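For reference, the compressibility parameter used to collapse the growth-rate data is the convective Mach number, Mc = (U1 − U2)/(a1 + a2) for streams with equal specific-heat ratios. The suppression curve below is one commonly quoted empirical fit for free shear layers, included as an illustrative assumption rather than the correlation used in this study.

```python
import math

# Hedged sketch: convective Mach number of a two-stream shear layer and
# an illustrative empirical fit for compressible growth-rate suppression
# (normalized growth rate -> 1 as Mc -> 0, plateauing at large Mc).

def convective_mach(u1, u2, a1, a2):
    """Convective Mach number for equal specific-heat-ratio streams."""
    return (u1 - u2) / (a1 + a2)

def growth_suppression(mc):
    """Illustrative fit: normalized growth rate ~ 0.2 + 0.8*exp(-3*Mc^2)."""
    return 0.2 + 0.8 * math.exp(-3.0 * mc * mc)
```

A Mach-2-class freestream over a quiescent cavity gives Mc near 0.9 and a predicted growth rate well below the incompressible value, consistent with the suppression trend described above.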
Performing experiments in the laboratory that mimic conditions in the field is challenging. In an attempt to understand hydraulic fracture in the field, and to provide laboratory flow results for model verification, an effort has been made to duplicate the typical fracture pattern for long horizontal wells. The typical "disks on a string" fracture formation is caused by orienting the long horizontal well parallel to the minimum principal stress direction, then fracturing the rock. To replicate this feature in the laboratory with a traditional cylindrical specimen, the test must be performed under extensile stress conditions, and the specimen must be cored parallel to bedding, both to avoid failure along a bedding plane and to replicate the bedding orientation found in the field. Testing has shown that it is possible to form failure features of this type in the laboratory. A novel jacketing method is employed to allow fluid to flow out of the fracture and leave the specimen without risking the integrity of the jacket; this allows proppant to be injected into the fracture, simulating loss of fracturing fluids to the formation, and allows a solid proppant pack to develop.
Ward, Daniel R.; Kim, Dohun; Savage, Donald E.; Lagally, Max G.; Foote, Ryan H.; Friesen, Mark; Coppersmith, Susan N.; Eriksson, Mark A.
Universal quantum computation requires high-fidelity single-qubit rotations and controlled two-qubit gates. Along with high-fidelity single-qubit gates, strong efforts have been made in developing robust two-qubit logic gates in electrically gated quantum dot systems to realise a compact and nanofabrication-compatible architecture. Here we perform measurements of state-conditional coherent oscillations of a charge qubit. Using a quadruple quantum dot formed in a Si/SiGe heterostructure, we show the first demonstration of coherent two-axis control of a double quantum dot charge qubit in undoped Si/SiGe, performing Larmor and Ramsey oscillation measurements. We extract the strength of the capacitive coupling between a pair of double quantum dots by measuring the detuning energy shift (≈75 μeV) of one double dot depending on the excess charge configuration of the other double dot. We further demonstrate that the strong capacitive coupling allows fast, state-conditional Landau–Zener–Stückelberg oscillations with a conditional π phase flip time of about 80 ps, showing a promising pathway towards multi-qubit entanglement and control in semiconductor quantum dots.
Remote detection of a surface-bound chemical relies on the recognition of a pattern, or "signature," that is distinct from the background. Such signatures are a function of a chemical's fundamental optical properties, but also depend upon its specific morphology. Importantly, the same chemical can exhibit vastly different signatures depending on the size of particles composing the deposit. We present a parameterized model to account for such morphological effects on surface-deposited chemical signatures. This model leverages computational tools developed within the planetary and atmospheric science communities, beginning with T-matrix and ray-tracing approaches for evaluating the scattering and extinction properties of individual particles based on their size and shape, and the complex refractive index of the material itself. These individual-particle properties then serve as input to the Ambartsumian invariant imbedding solution for the reflectance of a particulate surface composed of these particles. The inputs to the model include parameters associated with a functionalized form of the particle size distribution (PSD) as well as parameters associated with the particle packing density and surface roughness. The model is numerically inverted via Sandia's Dakota package, optimizing agreement between modeled and measured reflectance spectra, which we demonstrate on data acquired on five size-selected silica powders over the 4-16 μm wavelength range. Agreements between modeled and measured reflectance spectra are assessed, while the optimized PSDs resulting from the spectral fitting are then compared to PSD data acquired from independent particle size measurements.
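As a concrete illustration of the kind of functionalized PSD the inversion optimizes, the sketch below evaluates a lognormal number distribution and its effective radius, a common single-number input to single-particle scattering calculations. The median radius and geometric width are hypothetical values, and the paper's actual functional form may differ.

```python
import numpy as np

# Sketch of a functionalized particle size distribution (PSD): a lognormal
# number PSD parameterized by a median radius and geometric width. The
# functional form and all values are illustrative, not taken from the paper.

def lognormal_psd(r, r_med, sigma_g):
    """Lognormal number density dN/dr on a radius grid (unnormalized)."""
    s = np.log(sigma_g)
    return np.exp(-0.5 * (np.log(r / r_med) / s) ** 2) / (r * s * np.sqrt(2.0 * np.pi))

def trapezoid(y, x):
    """Trapezoidal integration on a (possibly nonuniform) grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

r = np.logspace(-1, 2, 2000)                 # particle radius grid, micrometers
n = lognormal_psd(r, r_med=5.0, sigma_g=1.8)

# Effective radius (third moment over second moment), a common summary
# quantity passed to scattering codes.
r_eff = trapezoid(n * r**3, r) / trapezoid(n * r**2, r)
```

In an inversion workflow, parameters such as `r_med` and `sigma_g` would be the quantities an optimizer adjusts to match measured reflectance spectra.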
We have developed high damage threshold filters to modify the spatial profile of a high energy laser beam. The filters are formed by laser ablation of a transmissive window. The ablation sites constitute scattering centers which can be filtered in a subsequent spatial filter. By creating the filters in dielectric materials, we see an increased laser-induced damage threshold from previous filters created using 'metal on glass' lithography.
The Z-backlighter laser facility primarily consists of two high energy, high-power laser systems. Z-Beamlet laser (ZBL) (Rambo et al., Appl. Opt. 44, 2421 (2005)) is a multi-kJ-class, nanosecond laser operating at 1054 nm which is frequency doubled to 527 nm in order to provide x-ray backlighting of high energy density events on the Z-machine. Z-Petawatt (ZPW) (Schwarz et al., J. Phys.: Conf. Ser. 112, 032020 (2008)) is a petawatt-class system operating at 1054 nm delivering up to 500 J in 500 fs for backlighting and various short-pulse laser experiments (see also Figure 10 for a facility overview). With the development of the magnetized liner inertial fusion (MagLIF) concept on the Z-machine, the primary backlighting missions of ZBL and ZPW have been adjusted accordingly. As a result, we have focused our recent efforts on increasing the output energy of ZBL from 2 to 4 kJ at 527 nm by modifying the fiber front end to now include extra bandwidth (for stimulated Brillouin scattering suppression). The MagLIF concept requires a well-defined/behaved beam for interaction with the pressurized fuel. Hence we have made great efforts to implement an adaptive optics system on ZBL and have explored the use of phase plates. We are also exploring concepts to use ZPW as a backlighter for ZBL driven MagLIF experiments. Alternatively, ZPW could be used as an additional fusion fuel pre-heater or as a temporally flexible high energy pre-pulse. All of these concepts require the ability to operate the ZPW in a nanosecond long-pulse mode, in which the beam can co-propagate with ZBL. Some of the proposed modifications are complete and most of them are well on their way.
Low temperature cofired ceramic (LTCC) technology has proven itself in military/space electronics, wireless communication, microsystems, medical and automotive electronics, and sensors. The use of LTCC for high frequency applications is appealing due to its low losses, design flexibility, and packaging and integration capability. The LTCC thick film process is summarized, including some unconventional process steps such as feature machining in the unfired state and thin film definition of outer layer conductors. The LTCC thick film process was characterized to optimize process yields by focusing on five factors: 1) print location, 2) print thickness, 3) drying of tapes and panels, 4) shrinkage upon firing, and 5) via topography. Statistical methods were used to analyze critical process and product characteristics in pursuit of that optimization goal.
Over 25 years, scientists and engineers designed engineered features to complement attributes of the natural barrier of volcanic tuff at Yucca Mountain in southern Nevada such that a proposed repository in the unsaturated zone would safely isolate spent nuclear fuel and high-level radioactive waste over 10⁶ years. Initially in 1983, an engineered barrier design applicable to several geologic media was used. With the Congressional direction to characterize Yucca Mountain, the engineered design gradually adapted to conditions in unsaturated tuff in the 1990s. The repository switched from floor emplacement of waste in small, single-walled stainless steel canisters to in-drift emplacement in large, double-layered containers. By 2000, the outer layer was high-nickel alloy to resist corrosion and the inner layer was stainless steel for strength. To avoid localized corrosion during the ∼1000-yr thermal period, titanium drip shields were also added above the containers. By 2008, a modular design of the repository was used for flexibility. In general, flexibility in accommodating various waste forms has been an intended attribute of geologic disposal system designs, rather than tuning the disposal system to specific characteristics of waste durability. The degradation rate of the radioactive waste matrix was an important parameter of the source term in early modeling analyses. However, by the mid-1990s, analyses used fairly rapid degradation rates within the oxygenated environment of the unsaturated zone. Other components of the multiple-barrier disposal system compensated for high degradation rates.
Sandia engineers use the Temporal Logic of Actions (TLA) early in the design process for digital systems where safety considerations are critical. TLA allows us to easily build models of interactive systems and prove (in the mathematical sense) that those models can never violate safety requirements, all in a single formal language. TLA models can also be refined, that is, extended by adding details in a carefully prescribed way, such that the additional details do not break the original model. Our experience suggests that engineers using refinement can build, maintain, and prove safety for designs that are significantly more complex than they otherwise could. We illustrate the way in which we have used TLA, including refinement, with a case study drawn from a real safety-critical system. This case exposes a need for refinement by composition, which is not currently provided by TLA. We have extended TLA to support this kind of refinement by building a specialized version of it in the Coq theorem prover. Taking advantage of Coq’s features, our version of TLA exhibits other benefits over stock TLA: we can prove certain difficult kinds of safety properties using mathematical induction, and we can certify the correctness of our proofs.
Volumetric measurements of the flow within four open cavities were made using stereoscopic particle image velocimetry at a freestream Mach number of 0.8. The cavities nominally had a length-to-depth ratio, L/D = 7, along with an aspect ratio, b/L = 0.5. The three complex cavity geometries were selected to model features representative of real aircraft bays and to compare them with a finite-span rectangular cavity; these included features such as leading edge and side ramps, a scooped leading edge ramp, and a jagged leading edge. Flow is drawn into the cavity at the edges due to a lack of pressure recovery within the cavity. Due to the influence of the leading edge shape and side edges, three-dimensionalities are formed within the cavities that influence the development of the Rossiter tones. In the rectangular cavity, these three-dimensionalities lead to the formation of a set of counter-rotating streamwise-oriented vortices, which create a nearly-sinusoidal, spanwise waviness within its mixing layer. The addition of leading edge and side ramps disrupts the formation of these vortical structures and displaces the mixing layer vertically, reducing Rossiter modal amplitudes. The leading edge ramp accelerates the oncoming flow, resulting in a shift in the Rossiter frequencies. A scooped leading edge reintroduced streamwise vorticity, increasing cavity turbulence, whereas an overhanging jagged leading edge reduced cavity velocity fluctuations while increasing the strength of the second Rossiter mode.
Time-resolved PIV has been accomplished in three high-speed flows using a pulse-burst laser: a supersonic jet exhausting into a transonic crossflow, a transonic flow over a rectangular cavity, and a shock-induced transient onset to cylinder vortex shedding. Temporal supersampling converts spatial information into temporal information by employing Taylor's frozen turbulence hypothesis along local streamlines, providing frequency content up to about 150 kHz, where the noise floor is reached. The spectra consistently reveal two regions exhibiting power-law dependence describing the turbulent decay. One is the well-known inertial subrange with a slope of -5/3 at high frequencies. The other displays a -1 power-law dependence for as much as a decade of mid-range frequencies lying between the inertial subrange and the integral length scale. The evidence for the -1 power law is most convincing in the jet-in-crossflow experiment, which is dominated by in-plane convection and in which the vector spatial resolution does not impose an additional frequency constraint. Data from the transonic cavity flow that are least likely to be subject to attenuation due to limited spatial resolution or out-of-plane motion exhibit the strongest agreement with the -1 and -5/3 power laws. The cylinder wake data also appear to show the -1 regime and the inertial subrange in the near-wake, but farther downstream the frozen-turbulence assumption may deteriorate as large-scale vortices interact with one another in the von Kármán vortex street.
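The frequency mapping behind temporal supersampling can be stated compactly: under Taylor's hypothesis, a spatial structure of wavelength λ convecting at velocity U_c appears at frequency f = U_c/λ. The sketch below encodes that relation; the velocity and vector-spacing numbers are illustrative, not values from these experiments.

```python
# Sketch of the frequency mapping used in temporal supersampling via Taylor's
# frozen-turbulence hypothesis. A spatial record u(x) along a local streamline
# is reinterpreted as a time series u(t) with t = x / U_c, so a structure of
# wavelength lam maps to frequency f = U_c / lam. All numbers below are
# illustrative, not taken from the experiments.

def supersampled_frequency(wavelength_m, u_conv_m_s):
    """Frequency recovered from a spatial wavelength via Taylor's hypothesis."""
    return u_conv_m_s / wavelength_m

def effective_rate(dx_m, u_conv_m_s):
    """Effective temporal sampling rate implied by the PIV vector spacing dx."""
    return u_conv_m_s / dx_m

# Example: a 0.2 mm vector spacing convected at 300 m/s implies an effective
# rate of 1.5 MHz, far beyond typical PIV frame rates.
rate = effective_rate(0.2e-3, 300.0)
```

Regardless of this nominal rate, the usable band in the experiments is still capped near 150 kHz by the noise floor.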
Best-estimate fuel performance codes such as BISON, currently under development at Idaho National Laboratory, utilize empirical and mechanistic, lower-length-scale-informed correlations to predict fuel behavior under normal operating and accident reactor conditions. Traditionally, best-estimate results are presented using the correlations with no quantification of the uncertainty in the output metrics of interest. However, there are associated uncertainties in the input parameters and correlations used to determine the behavior of the fuel and cladding under irradiation. Therefore, it is important to perform uncertainty quantification and include confidence bounds on the output metrics that take into account the uncertainties in the inputs. In addition, sensitivity analyses can be performed to determine which input parameters have the greatest influence on the outputs. In this paper we couple the BISON fuel performance code to the DAKOTA uncertainty analysis software to analyze a representative fuel performance problem. The case studied in this paper is based upon rod 1 from the IFA-432 integral experiment performed at the Halden Reactor in Norway. The rodlet is representative of a BWR fuel rod. The input parameter uncertainties are grouped into three categories: boundary condition uncertainties (e.g., power, coolant flow rate), manufacturing uncertainties (e.g., pellet diameter, cladding thickness), and model uncertainties (e.g., fuel thermal conductivity, fuel swelling). Utilizing DAKOTA, a variety of statistical analysis techniques are applied to quantify the uncertainty and sensitivity of the output metrics of interest. Specifically, we demonstrate the use of sampling methods, polynomial chaos expansions, surrogate models, and variance-based decomposition. The output metrics investigated in this study are the fuel centerline temperature, cladding surface temperature, fission gas released, and fuel rod diameter.
The results highlight the importance of quantifying the uncertainty and sensitivity in fuel performance modeling predictions and the need for additional research into improving the material models that are currently available.
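A minimal sampling-based UQ loop of the kind DAKOTA drives can be sketched as follows. The "fuel model" here is a hypothetical stand-in for BISON, and all distributions are illustrative; only the workflow (sample uncertain inputs, propagate them, compute bounds and sensitivities) mirrors the analysis described above.

```python
import numpy as np

# Minimal sampling-based UQ sketch in the spirit of the BISON/DAKOTA coupling.
# The "fuel model" is a hypothetical stand-in, not the real BISON physics,
# and all distributions are illustrative.

rng = np.random.default_rng(0)
n = 5000

# Input uncertainties (boundary-condition and model categories, illustrative):
power = rng.normal(20.0, 1.0, n)     # linear power, kW/m
k_fuel = rng.normal(3.0, 0.3, n)     # fuel thermal conductivity, W/(m K)

def centerline_temp(q, k, t_cool=560.0):
    """Toy surrogate: coolant temperature plus a conduction-driven rise."""
    return t_cool + 1000.0 * q / (20.0 * k)

t_cl = centerline_temp(power, k_fuel)
mean, std = t_cl.mean(), t_cl.std()
lo, hi = np.percentile(t_cl, [2.5, 97.5])   # sampling-based 95% bounds

# Crude sensitivity ranking: correlation of each input with the output.
sens_power = np.corrcoef(power, t_cl)[0, 1]
sens_k = np.corrcoef(k_fuel, t_cl)[0, 1]
```

In the real coupling, each sample would be a full BISON run, which is why surrogate models and polynomial chaos expansions become attractive alternatives to brute-force sampling.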
Here, we present the draft genome sequence of Burkholderia pseudomallei PHLS 6, a virulent clinical strain isolated from a melioidosis patient in Bangladesh in 1960. The draft genome consists of 39 contigs and is 7,322,181 bp long.
In previous studies, complex cavity geometries showed higher amplitude and more three-dimensional pressure fields than simple rectangular cavities. However, those studies relied on twenty point measurements within the cavity. To further understand the development of the pressure field within complex bays, high-frequency pressure-sensitive paint (PSP) was applied to the floor of an L/D = 7 complex cavity at Mach 0.9; unsteady pressure measurements were obtained at 10 kHz. Power spectra of the PSP measurements have a spatial distribution at each cavity resonance frequency with an oscillatory pattern; additional maxima and minima appear as the mode number is increased. This behavior was tied to the superposition of a downstream-propagating shear-layer disturbance and an upstream-propagating acoustic wave of different amplitudes, consistent with the classical Rossiter model. Complex geometries added spanwise asymmetries to the spatial pattern and amplified specific modes. These spatially dependent features of the pressure field might be missed by point measurements.
ASME 2016 10th International Conference on Energy Sustainability, ES 2016, collocated with the ASME 2016 Power Conference and the ASME 2016 14th International Conference on Fuel Cell Science, Engineering and Technology
Thermochemical energy storage (TCES) offers the potential for greatly increased storage density relative to sensible-only energy storage. Moreover, via TCES heat may be stored indefinitely in the form of chemical bonds, accessed upon demand, and converted back to heat at temperatures significantly higher than those of current solar thermal electricity production technology; TCES is therefore well suited to more efficient high-temperature power cycles. The PROMOTES effort seeks to advance both materials and systems for TCES through the development and demonstration of an innovative storage approach for solarized Air-Brayton power cycles, based on newly developed redox-active metal oxides that are mixed ionic-electronic conductors (MIECs). In this paper we summarize the system concept and review our work to date toward developing materials and individual components.
Solution verification is the process of verifying the solution of a finite element analysis by performing a series of analyses on meshes of increasing density to determine whether the solution is converging. Solution verification has historically been too expensive, relying upon refinement templates that impose an 8X multiplier on the number of elements. For even simple convergence studies, the 8X and 64X meshes must be solved, quickly exhausting computational resources. In this paper, we introduce Mesh Scaling, a new global mesh refinement technique for building series of all-hexahedral meshes for solution verification without the 8X multiplier. Mesh Scaling reverse engineers the block decomposition of an existing all-hexahedral mesh and then remeshes that block decomposition using the original mesh as the sizing function, multiplied by any positive scaling factor (e.g., 0.5X, 2X, 4X, 6X), enabling larger series of meshes to be constructed with fewer elements and making solution verification tractable.
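The convergence check that such a mesh series enables is typically a Richardson-style estimate of the observed order of accuracy. A minimal sketch, using a synthetic second-order sequence rather than real finite element results:

```python
import math

# Solution-verification sketch: estimate the observed order of convergence
# from three solutions on meshes refined by a constant ratio r, the kind of
# series Mesh Scaling makes cheap to build. The sequence below is synthetic,
# not real finite element output.

def observed_order(f_coarse, f_mid, f_fine, r):
    """Observed convergence order p from three solutions, refinement ratio r."""
    return math.log(abs(f_coarse - f_mid) / abs(f_mid - f_fine)) / math.log(r)

# Synthetic second-order behavior: f(h) = f_exact + C * h**2, with r = 2.
f_exact, C = 1.0, 0.5
h = [0.4, 0.2, 0.1]
f = [f_exact + C * hh**2 for hh in h]
p = observed_order(f[0], f[1], f[2], r=2.0)   # recovers p close to 2
```

Because Mesh Scaling permits non-integer ratios, the same formula applies with, say, r = 1.5, without the 8X element-count growth of template refinement.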
Process induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion as well as the shrinkage upon cure exhibited by most thermoset polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses can result in interlaminar delamination and fracture within the composite as well as plastic deformation in the structure's metallic materials. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be prohibitive, both in terms of financial and temporal costs. As an alternative to physical measurement, it is possible for computational tools to be used to quantify potential residual stresses in composite prototype parts. Therefore, the objective of this study is the development of a simplified method for simulating the residual stresses formed in polymer matrix composite structures. Specifically, a simplified approach accounting for both coefficient of thermal expansion mismatch and polymer shrinkage is implemented within SIERRA, the solid mechanics code developed at Sandia National Laboratories. This approach is then used to model the manufacturing of two simple, bi-material structures composed of a carbon fiber/epoxy composite and aluminum: a flat rectangular plate and cylinders. Concurrent with the computational efforts, structures similar to those modeled are fabricated and the residual stresses are quantified through the measurement of deformation. The simulations' results are compared to the experimentally observed behaviors, as well as to a more complex modeling approach, for model validation.
The results of the comparisons indicate that the proposed finite element modeling approach is capable of accurately simulating the formation of residual stresses in composite structures.
Dispersion engineering enables phase matching for nonlinear down-conversion from 775 nm to the telecom C-band in lithium niobate microdisk resonators without periodic poling. High rates of spontaneous creation of entangled photon pairs are observed.
We demonstrate doubly resonant second harmonic generation from 1550 to 775 nm in microdisks fabricated from lithium niobate on insulator wafers. We use a novel phase matching technique to achieve a conversion efficiency of 0.167%/mW.
This study examined whether coating system contamination, indicated by poor base pressure, has a negative impact on the laser-induced damage threshold of HfO2/SiO2 high reflection coatings for 527 nm.
The electroreduction of Er3+ in propylene carbonate, N,N-dimethylformamide, or a variety of quaternary ammonium ionic liquids (ILs) was investigated using [Er(OTf)3] and [Er(NTf2)3]. Systematic variation of the ILs' cation and anion, the Er3+ salt, and the electrode material revealed a disparity in electrochemical interactions not previously seen. For most ILs at a platinum electrode, cyclic voltammetry exhibits irreversible interactions between Er3+ salts and the electrode at potentials significantly less than the theoretical reduction potential for Er3+. Throughout all solvent-salt systems tested, a deposit could be formed on the electrode, though obtaining a high-purity, crystalline Er0 deposit is challenging due to the extreme reactivity of the deposit; the ensuing chemical interactions often formed a complex, amorphous solid-electrolyte interface that slowed deposition rates. Comparison of platinum, gold, nickel, and glassy carbon (GC) working electrodes revealed oxidation processes unique to the platinum surface. While no appreciable reduction current was observed on GC at the potentials investigated, deposits were seen on platinum, gold, and nickel electrodes.
We present a resilient domain-decomposition preconditioner for partial differential equations (PDEs). The algorithm reformulates the PDE as a sampling problem, followed by a solution update through data manipulation that is resilient to both soft and hard faults. We discuss an implementation based on a server-client model where all state information is held by the servers, while clients are designed solely as computational units. Servers are assumed to be "sandboxed", while no assumption is made on the reliability of the clients. We explore the scalability of the algorithm up to ∼12k cores, build an SST/macro skeleton to extrapolate to ∼50k cores, and show the resilience under simulated hard and soft faults for a 2D linear Poisson equation.
The Wave Energy Converter Simulator (WEC-Sim) is an open-source code jointly developed by Sandia National Laboratories and the National Renewable Energy Laboratory. It is used to model wave energy converters subjected to operational and extreme waves. In order for the WEC-Sim code to be beneficial to the wave energy community, code verification and physical model validation are necessary. This paper describes numerical modeling of the wave tank testing for the 1:33-scale experimental testing of the floating oscillating surge wave energy converter. The comparison between WEC-Sim and the Phase 1 experimental data set serves as code validation. This paper is a follow-up to the WEC-Sim paper on experimental testing, and describes the WEC-Sim numerical simulations for the floating oscillating surge wave energy converter.
As the population ages, prediction of falls risk is becoming an increasingly important research area. Due to built-in inertial sensors and ubiquity, smartphones provide an attractive data collection and computing platform for falls risk prediction and continuous gait monitoring. One challenge in continuous gait monitoring is that significant signal variability exists between individuals with a high falls risk and those with low risk. This variability increases the difficulty of building a universal system which segments and labels changes in signal state. This paper presents a method which uses unsupervised learning techniques to automatically segment a gait signal by computing the dissimilarity between two consecutive windows of data, applying an adaptive threshold algorithm to detect changes in signal state, and using a rule-based gait recognition algorithm to label the data. Using inertial data, the segmentation algorithm is compared against manually segmented data and is capable of achieving recognition rates greater than 71.8%.
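A minimal sketch of the segmentation idea, assuming a simple mean/std window feature and a mean-plus-k-sigma adaptive threshold (both are illustrative choices; the paper's exact dissimilarity measure, window length, and threshold rule may differ):

```python
import numpy as np

# Sketch of window-dissimilarity segmentation: compare consecutive windows of
# an inertial signal, then flag changes with an adaptive threshold. The
# feature, window length, and threshold factor are illustrative choices.

def dissimilarity(a, b):
    """Distance between two windows via a mean/std feature vector."""
    fa = np.array([a.mean(), a.std()])
    fb = np.array([b.mean(), b.std()])
    return float(np.linalg.norm(fa - fb))

def segment(signal, win=50, k=2.0):
    """Indices of windows whose dissimilarity to the previous window exceeds
    an adaptive threshold (mean + k * std of all dissimilarities)."""
    wins = [signal[i:i + win] for i in range(0, len(signal) - win + 1, win)]
    d = np.array([dissimilarity(wins[i], wins[i + 1]) for i in range(len(wins) - 1)])
    thresh = d.mean() + k * d.std()
    return [i + 1 for i, di in enumerate(d) if di > thresh]

# Synthetic record: low-variance standing followed by higher-amplitude walking.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.1, 500), rng.normal(0.0, 1.0, 500)])
changes = segment(x)   # a change should be flagged near window index 10
```

In the full method, a rule-based gait recognizer would then label the segments that this step produces.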
Sutherland, John W.; Richter, Justin S.; Hutchins, Margot J.; Dornfeld, David; Dzombak, Rachel; Mangold, Jennifer; Robinson, Stefanie; Hauschild, Michael Z.; Bonou, Alexandra; Schonsleben, Paul; Friemann, Felix
Manufacturing affects all three dimensions of sustainability: economy, environment, and society. This paper addresses the last of these dimensions. It explores social impacts identified by national level social indicators, frameworks, and principles. The effects of manufacturing on social performance are framed for different stakeholder groups with associated social needs. Methodology development as well as various challenges for social life cycle assessment (S-LCA) are further examined. Efforts to integrate social and another dimension of sustainability are considered, with attention to globalization challenges, including offshoring and reshoring. The paper concludes with a summary of key takeaways and promising directions for future work.
An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. An adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.
Understanding the connectivity of fracture networks in a reservoir and obtaining an accurate chemical characterization of the geothermal fluid are vital for the successful operation of a geothermal power plant. Tracer experiments can be used to elucidate fracture connectivity and in most cases are conducted by injecting the tracer at the injection well, manually collecting liquid samples at the wellhead of the production well, and sending the samples off for laboratory analysis. This method does not identify which specific fractures are the ones producing the tracer; it is only a depth-averaged value over the entire wellbore. Sandia is developing a high-temperature wireline tool capable of measuring ionic tracer concentrations and pH downhole using electrochemical sensors. The goal of this effort is to collect real-time pH and ionic tracer concentration data at temperatures up to 225 °C and pressures up to 3000 psi.
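One reason downhole temperature matters for electrochemical pH sensing is that the ideal Nernst slope relating electrode potential to pH scales with absolute temperature. The sketch below uses textbook constants only; it is not a description of the tool's actual calibration.

```python
import math

# The ideal Nernst slope (mV per pH unit) grows linearly with absolute
# temperature: E = E0 - (2.303 * R * T / F) * pH for an H+-sensitive
# electrode. Constants are textbook values; the sensor specifics are
# assumptions, not details from the paper.

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def nernst_slope_mV(temp_C):
    """Ideal Nernst slope in mV per pH unit at the given temperature."""
    return 2.303 * R * (temp_C + 273.15) / F * 1000.0

slope_25 = nernst_slope_mV(25.0)     # roughly 59 mV/pH at room temperature
slope_225 = nernst_slope_mV(225.0)   # roughly 99 mV/pH at 225 deg C downhole
```

The ~67% steeper slope at 225 °C illustrates why a downhole sensor must be calibrated at temperature rather than with room-temperature constants.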
Microstructural evolution during the devitrification of amorphous tantalum thin films synthesized via pulsed laser deposition was investigated using in situ transmission electron microscopy (TEM) combined with ex situ isothermal annealing, bright-field imaging, and electron-diffraction analysis. The phases formed during crystallization and their stability were characterized as a function of the chamber pressure during deposition, devitrification temperature, and annealing time. A range of metastable nanocrystalline tantalum oxides were identified following devitrification including multiple orthorhombic oxide phases, which often were present with, or evolved to, the tetragonal TaO2 phase. While the appearance of these phases indicated the films were evolving to the stable form of tantalum oxide—monoclinic tantalum pentoxide—it was likely not achieved for the conditions considered due to an insufficient amount of oxygen present in the films following deposition. Nevertheless, the collective in situ and ex situ TEM analysis applied to thin film samples enabled the isolation of a number of metastable tantalum oxides. New insights were gained into the transformation sequence and stability of these nanocrystalline phases, which presents opportunities for the development of advanced tantalum oxide-based dielectric materials for novel memristor designs.
This paper evaluates novel particle release patterns for high-temperature falling particle receivers. Spatial release patterns resembling triangular and square waves are investigated and compared to the conventional straight-line particle release. A design of experiments was used to construct a simulation matrix that investigated three two-level factors: amplitude, wavelength, and wave type. Results show that the wave-like patterns increased both the particle temperature rise and the thermal efficiency of the receiver relative to the straight-line particle release. Larger amplitudes and smaller wavelengths increased the performance by creating a volumetric heating effect that increased light absorption and reduced heat loss. Experiments are also being designed to investigate the hydraulic and thermal performance of these new particle release configurations.
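The release patterns compared above can be parameterized directly by wave type, amplitude, and wavelength. A minimal sketch, with arbitrary illustrative dimensions:

```python
import numpy as np

# Sketch of the spatial release patterns: straight-line, triangular-wave, and
# square-wave particle curtains parameterized by amplitude and wavelength.
# Dimensions are arbitrary illustration values, not the receiver's geometry.

def release_offset(x, wave_type, amplitude, wavelength):
    """Transverse offset of the particle curtain at spanwise position x."""
    phase = (x / wavelength) % 1.0
    if wave_type == "straight":
        return np.zeros_like(x)
    if wave_type == "square":
        return amplitude * np.where(phase < 0.5, 1.0, -1.0)
    if wave_type == "triangle":
        return amplitude * (4.0 * np.abs(phase - 0.5) - 1.0)
    raise ValueError(wave_type)

x = np.linspace(0.0, 2.0, 401)   # spanwise coordinate, m
tri = release_offset(x, "triangle", 0.1, 0.5)
sq = release_offset(x, "square", 0.1, 0.5)
```

Varying `amplitude` and `wavelength` as two-level factors over each `wave_type` reproduces the structure of the simulation matrix described above.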
This paper evaluates the on-sun performance of a 1 MW falling particle receiver. Two particle receiver designs were investigated: an obstructed flow particle receiver and a free-falling particle receiver. The intent of the tests was to investigate the impact of particle mass flow rate, irradiance, and particle temperature on the particle temperature rise and thermal efficiency of the receiver for each design. Results indicate that the obstructed flow design increased the residence time of the particles in the concentrated flux, thereby increasing the particle temperature and thermal efficiency for a given mass flow rate. The obstructions, a staggered array of chevron-shaped mesh structures, also provided more stability to the falling particles, which were prone to instabilities caused by convective currents in the free-fall design. Challenges encountered during the tests included non-uniform mass flow rates, wind impacts, and oxidation/deterioration of the mesh structures. Alternative materials, designs, and methods are presented to overcome these challenges.
Ebeida, Mohamed; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong M.; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
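The starting point for the tuning framework is any maximal random packing. A minimal brute-force dart-throwing sketch of MPS (a toy illustration only, not the authors' grid-accelerated implementation) shows the two properties being maintained, conflict-freedom (minimum separation) and approximate maximality (stopping only after many consecutive rejections):

```python
import random

random.seed(42)  # reproducible toy run

def maximal_poisson_disk_sample(r, width=1.0, height=1.0, max_misses=10000):
    """Brute-force dart throwing: accept a candidate only if it lies at
    least r from every accepted point. Stopping after max_misses
    consecutive rejections only approximates true maximality."""
    points, misses = [], 0
    while misses < max_misses:
        p = (random.uniform(0, width), random.uniform(0, height))
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in points):
            points.append(p)
            misses = 0
        else:
            misses += 1
    return points

pts = maximal_poisson_disk_sample(0.1)
# Conflict-free: every pair of accepted disks respects the separation r.
assert all((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 >= 0.1 ** 2
           for i, a in enumerate(pts) for b in pts[:i])
```

Density tuning then perturbs such a packing one disk at a time while re-checking exactly this separation invariant.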
During the initial phase of this Department of Energy (DOE) Geothermal Technologies Office (GTO) SubTER project, we conducted a series of high-energy stimulations in shallow wells, the effects of which were evaluated with high resolution seismic imaging campaigns designed to characterize induced fractures. The high-energy stimulations use a novel explosive source that limits damage to the borehole, which was paramount for change detection seismic imaging and re-fracturing experiments. This work provided evidence that the high-energy stimulations were generating self-propping fractures and that these fracture locations could be imaged at inch scales using high-frequency seismic tomography. While the seismic testing certainly provided valuable feedback on fracture generation for the suite of explosives, it left many fracture properties (e.g., permeability) unresolved. We present here the methodology for the second phase of the project, where we are developing and demonstrating emerging seismic and electrical geophysical imaging technologies that have been designed to characterize 1) the 3D extent and distribution of fractures stimulated from the explosive source, 2) 3D fluid transport within the stimulated fracture network through use of a contrasting tracer, and 3) fracture attributes through advanced data analysis. Focus is being placed upon advancing these technologies toward near real-time acquisition and processing in order to help provide the feedback mechanism necessary to understand and control fracture stimulation and fluid flow.
Multiple receiver designs have been evaluated for improved optics and efficiency gains including flat panel, vertical-finned flat panel, horizontal-finned flat panel, and radially finned. Ray tracing using SolTrace was performed to understand the light-trapping effects of the finned receivers. Re-reflections from fins to other fins on the receiver were captured to give an overall effective solar absorptance. The ray tracing, finite element analysis, and previous computational fluid dynamics showed that the horizontal-finned flat panel produced the most efficient receiver with increased light-trapping and lower overall heat loss. The effective solar absorptance was shown to increase from an intrinsic absorptance of 0.86 to 0.96 with ray trace models. The predicted thermal efficiency was shown in CFD models to be over 95%. The horizontal panels produce a re-circulating hot zone between the panel fins, reducing convective loss and resulting in a more efficient receiver. The analysis and design of these panels are described with additional engineering details on testing a flat panel receiver and the horizontal-finned receiver at the National Solar Thermal Test Facility. Design considerations include the structure for receiver testing, tube sizing, surrounding heat shielding, and machinery for cooling the receiver tubes.
Direct solar power receivers consist of tubular arrays, or panels, which are typically tubes arranged side by side and connected to an inlet and outlet manifold. The tubes absorb the heat incident on the surface and transfer it to the fluid contained inside them. To increase the solar absorptance, high temperature black paint or a solar selective coating is applied to the surface of the tubes. However, current solar selective coatings degrade over the lifetime of the receiver and must be reapplied, which reduces the receiver thermal efficiency and increases the maintenance costs. This work presents an evaluation of several novel receiver shapes, termed fractal-like geometries (FLGs). The FLGs are geometries that create a light-trapping effect, thus increasing the effective solar absorptance and potentially increasing the thermal efficiency of the receiver. Five FLG prototypes were fabricated out of Inconel 718 and tested in Sandia's solar furnace at two irradiance levels of ∼15 and 30 W/cm2 and two fluid flow rates. Photographic methods were used to capture the irradiance distribution on the receiver surfaces, and the results were compared to ray-tracing models. These methods provided the irradiance distribution and the thermal input on the FLGs. Air at nearly atmospheric pressure was used as the heat transfer fluid. The air inlet and outlet temperatures were recorded, using a data acquisition system, until steady state was achieved. Computational fluid dynamics (CFD) models, using the Discrete Ordinates (DO) radiation model and the k-ω Shear Stress Transport (SST) equations, were developed and calibrated, using the test data, to predict the performance of the five FLGs at different air flow rates and irradiance levels. The results showed that relative to a flat plate (base case), the new FLGs exhibited an increase in the effective solar absorptance from 0.86 to 0.92 for an intrinsic material absorptance of 0.86.
Peak surface temperatures of ∼1000°C and maximum air temperature increases of ∼200°C were observed. Compared to the base case, the new FLGs showed a clear air outlet temperature increase. Thermal efficiency increases of ∼15%, with respect to the base case, were observed. Several tests, on different days, were performed to assess the repeatability of the results. The results obtained so far are very encouraging and display strong potential for incorporation into future solar power receivers.
Temperature monitoring is essential in automation, mechatronics, robotics and other dynamic systems. Wireless methods which can sense multiple temperatures at the same time without the use of cables or slip-rings can enable many new applications. A novel method utilizing small permanent magnets is presented for wirelessly measuring the temperature of multiple points moving in repeatable motions. The technique utilizes linear least squares inversion to separate the magnetic field contributions of each magnet as it changes temperature. The experimental setup and calibration methods are discussed. Initial experiments show that temperatures from 5 to 50 °C can be accurately tracked for three neodymium iron boron magnets in a stationary configuration and while traversing in arbitrary, repeatable trajectories. This work presents a new sensing capability that can be extended to tracking multiple temperatures inside opaque vessels, on rotating bearings, within batteries, or at the tip of complex end-effectors.
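The inversion step can be illustrated with a toy linear model: each magnet contributes a known field pattern (from calibration over the repeatable trajectory) scaled by a temperature-dependent magnetization, and least squares recovers the per-magnet scales. The geometry matrix, noise level, and magnetization values below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_magnets = 50, 3

# Hypothetical calibration: column j is the unit-magnetization field
# pattern of magnet j at a set of sensor readings along the trajectory.
A = rng.normal(size=(n_samples, n_magnets))

# True per-magnet magnetization scales; for NdFeB these decrease as the
# magnet heats, so recovering them recovers temperature via a
# calibration curve.
m_true = np.array([0.98, 0.91, 0.85])
b = A @ m_true + 0.01 * rng.normal(size=n_samples)  # noisy superposed field

# Linear least squares separates each magnet's contribution.
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(m_est, m_true, atol=0.05)
```

The separation works because the superposed measurement is linear in the individual magnetizations, which is the property the paper's inversion exploits.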
The polymer-composite binder used in lithium-ion battery electrodes must both hold the electrodes together and augment their electrical conductivity while subjected to mechanical stresses caused by active material volume changes due to lithiation and delithiation. We have discovered that cyclic mechanical stresses cause significant degradation in the binder electrical conductivity. After just 160 mechanical cycles, the conductivity of polyvinylidene fluoride (PVDF):carbon black binder dropped by 45-75%. This degradation in binder conductivity has been shown to be quite general, occurring over a range of carbon black concentrations, with and without absorbed electrolyte solvent, and for different polymer manufacturers. Mechanical cycling of lithium cobalt oxide (LiCoO2) cathodes caused a similar degradation, reducing the effective electrical conductivity by 30-40%. Mesoscale simulations on a reconstructed experimental cathode geometry predicted the binder conductivity degradation will have a proportional impact on cathode electrical conductivity, in qualitative agreement with the experimental measurements. Finally, ohmic resistance measurements were made on complete batteries. Direct comparisons between electrochemical cycling and mechanical cycling show consistent trends in the conductivity decline. This evidence supports a new mechanism for performance decline of rechargeable lithium-ion batteries during operation: electrochemically induced mechanical stresses that degrade binder conductivity, increasing the internal resistance of the battery with cycling.
Preparation of sodium zirconium silicate phosphate (NaSICon), Na1+xZr2SixP3−xO12 (0.25 ≤ x ≤ 1.0), thin films has been investigated via a chemical solution approach on platinized silicon substrates. Increasing the silicon content resulted in a reduction in the crystallite size and a reduction in the measured ionic conductivity. Processing temperature was also found to affect microstructure and ionic conductivity with higher processing temperatures resulting in larger crystallite sizes and higher ionic conductivities. The highest room temperature sodium ion conductivity was measured for an x = 0.25 composition at 2.3 × 10−5 S/cm. The decreasing ionic conductivity trends with increasing silicon content and decreasing processing temperature are consistent with grain boundary and defect scattering of conducting ions.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Terboven, Christian; Hahnfeld, Jonas; Teruel, Xavier; Mateo, Sergi; Duran, Alejandro; Klemm, Michael; Olivier, Stephen L.; De Supinski, Bronis R.
OpenMP tasking supports parallelization of irregular algorithms. Recent OpenMP specifications extended tasking to increase functionality and to support optimizations, for instance with the taskloop construct. However, task scheduling remains opaque, which leads to inconsistent performance on NUMA architectures. We assess design issues for task affinity and explore several approaches to enable it. We evaluate these proposals with implementations in the Nanos++ and LLVM OpenMP runtimes that improve performance up to 40% and significantly reduce execution time variation.
Recent budget reductions have posed tremendous challenges to the U.S. Army in managing its portfolio of ground combat systems (tanks and other fighting vehicles), thus placing many important programs at risk. To address these challenges, the Army and a supporting team developed and applied the Capability Portfolio Analysis Tool (CPAT) to optimally invest in ground combat modernization over the next 25-35 years. CPAT provides the Army with the analytical rigor needed to help senior Army decision makers allocate scarce modernization dollars to protect soldiers and maintain capability overmatch. CPAT delivers unparalleled insight into multiple-decade modernization planning using a novel multiphase mixed-integer linear programming technique and illustrates a cultural shift toward analytics in the Army's acquisition thinking and processes. CPAT analysis helped shape decisions to continue modernization of the $10 billion Stryker family of vehicles (originally slated for cancellation) and to strategically reallocate over $20 billion to existing modernization programs by not pursuing the Ground Combat Vehicle program as originally envisioned. More than 40 studies have been completed using CPAT, applying operations research methods to optimally prioritize billions of taxpayer dollars and allowing Army acquisition executives to base investment decisions on analytically rigorous evaluations of portfolio trade-offs.
Fatigue crack growth rate (da/dN) versus stress intensity factor range (ΔK) relationships were measured for various grades of pipeline steel along with their respective welds in high pressure hydrogen. Tests were conducted in both 21 MPa hydrogen gas and a reference environment (e.g. air) at room temperature. Girth welds fabricated by arc welding and friction stir welding processes were examined in X65 and X52 pipeline grades, respectively. Results showed accelerated fatigue crack growth rates for all tests in hydrogen as compared to tests in air. Modestly higher hydrogen-assisted crack growth rates were observed in the welds as compared to their respective base metals. The arc weld and friction stir weld exhibited similar fatigue crack growth behavior suggesting similar sensitivity to hydrogen. A detailed study of microstructure and fractography was performed to identify relationships between microstructure constituents and hydrogen-accelerated fatigue crack growth.
ASME International Mechanical Engineering Congress and Exposition, Proceedings (IMECE)
Sahm, Aaron; Burnham, Laurie; Boehm, Robert; Betemedhin, Adam; Wood, Gary
Total lifetime costs of photovoltaic (PV) systems are important determinants of profitability. But such costs are not always accurately measured and compared against fluctuating electricity costs, which can be an important contributor to long-term profitability. In this paper, we consider the economics of concentrated photovoltaics (CPV), which offer significantly higher efficiency and greater energy production over traditional fixed flat-plate PV installations in high-irradiance regions, but are perceived to be risky investments. Working with two models (a simple annual model that uses only direct normal solar insolation, and a more complex hourly model that uses direct normal solar insolation, ambient temperature, and wind speed to predict energy yield), we calculated the energy production and corresponding revenue generation for a 28 kW CPV unit and a comparable single-axis tracker field in Nevada. Our resulting cost matrix shows how much revenue a CPV system can reasonably be expected to generate under different pricing schemes and time periods. While the values vary depending on the assumptions made, the matrix provides an index of profitability, enabling prospective buyers to compare the costs of purchasing, installing and maintaining a system against likely revenue. As a result of our calculations, we anticipate that CPV systems will still be viable in high flux areas because they offer the promise of profitability now and continued or increased profitability as cell costs decrease and/or overall efficiency increases. Nonetheless, other factors, such as long-term reliability and O&M costs, must be addressed if CPV is to compete with other simpler technologies, such as single-axis PV trackers, which have lower upfront costs and are therefore becoming more attractive to potential customers.
Chattopadhyay, Ashesh; Kotteda, V.M.K.; Kumar, Vinod; Spotz, William S.
A framework is developed to integrate the existing MFiX (Multiphase Flow with Interphase eXchanges) flow solver with state-of-the-art linear equation solver packages in Trilinos. The integrated solver is tested on various flow problems. The performance of the solver is evaluated on fluidized bed problems, and the integrated flow solver is observed to perform better than the native solver.
We present a resilient domain-decomposition preconditioner for partial differential equations (PDEs). The algorithm reformulates the PDE as a sampling problem, followed by a solution update through data manipulation that is resilient to both soft and hard faults. We discuss an implementation based on a server-client model where all state information is held by the servers, while clients are designed solely as computational units. Servers are assumed to be “sandboxed”, while no assumption is made on the reliability of the clients. We explore the scalability of the algorithm up to ∼12k cores, build an SST/macro skeleton to extrapolate to ∼50k cores, and show the resilience under simulated hard and soft faults for a 2D linear Poisson equation.
Inferring the cognitive state of an individual in real time during task performance allows for implementation of corrective measures prior to the occurrence of an error. Current technology allows for real time cognitive state assessment based on objective physiological data through techniques such as neuroimaging and eye tracking. Although early results indicate effective construction of classifiers that distinguish between cognitive states in real time is a possibility in some settings, implementation of these classifiers into real world settings poses a number of challenges. Cognitive states of interest must be sufficiently distinct to allow for continuous discrimination in the operational environment using technology that is currently available as well as practical to implement.
A critical challenge in data science is conveying the meaning of data to human decision makers. While working with visualizations, decision makers are engaged in a visual search for information to support their reasoning process. As sensors proliferate and high performance computing becomes increasingly accessible, the volume of data decision makers must contend with is growing continuously and driving the need for more efficient and effective data visualizations. Consequently, researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles to assess the effectiveness of data visualizations. In this paper, we compare the performance of three different saliency models across a common set of data visualizations. This comparison establishes a performance baseline for assessment of new data visualization saliency models.
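One common way to score a saliency model against human fixation data is normalized scanpath saliency (NSS): the mean z-scored saliency value at the locations viewers actually fixated. The sketch below shows the metric itself on an invented toy map; it is not any specific model or dataset from the paper:

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized scanpath saliency: z-score the map, then average its
    values at the (row, col) human fixation locations. Higher scores
    mean the model better predicts where viewers look."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([s[r, c] for r, c in fixations]))

# Toy saliency map concentrated at the center of a 5x5 visualization.
sal = np.zeros((5, 5))
sal[2, 2] = 1.0

# A model is rewarded when fixations land on its predicted hot spot.
assert nss(sal, [(2, 2)]) > nss(sal, [(0, 0)])
```

Comparing several candidate saliency models over a shared set of visualizations then reduces to comparing such scores, which is the kind of baseline the paper establishes.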
Research, the manufacture of knowledge, is currently practiced largely as an “art,” not a “science.” Just as science (understanding) and technology (tools) have revolutionized the manufacture of other goods and services, it is natural, perhaps inevitable, that they will ultimately also be applied to the manufacture of knowledge. In this article, we present an emerging perspective on opportunities for such application, at three different levels of the research enterprise. At the cognitive science level of the individual researcher, opportunities include: overcoming idea fixation and sloppy thinking, and balancing divergent and convergent thinking. At the social network level of the research team, opportunities include: overcoming strong links and groupthink, and optimally distributing divergent and convergent thinking between individuals and teams. At the research ecosystem level of the research institution and the larger national and international community of researchers, opportunities include: overcoming performance fixation, overcoming narrow measures of research impact, and overcoming (or harnessing) existential/social stress.
Cognitive science is an interdisciplinary science which studies the human dimension, drawing from academic disciplines such as psychology, linguistics, philosophy, and computer modeling. Business management is controlling, leading, monitoring, organizing, and planning critical information to bring useful resources and capabilities to a viable market. Finally, the government sector has many roles, but one primary goal is to bring innovative solutions to maintain and enhance national security. There currently is a gap in the government sector between applied research and solutions applicable to the national security field. This is a deep problem since a critical element of many national security issues is the human dimension, which requires cognitive science approaches. One major cause of this gap is the separation between business management and cognitive science: scientific research is either not being tailored to the mission need or deployed at a time when it can best be absorbed by national security concerns. This paper addresses three major themes: (1) how cognitive science and business management benefit the government sector, (2) the current gaps that exist between cognitive science and business management, and (3) how cognitive science and business management may work to address government sector, national security needs.
Mycobacterium tuberculosis associated granuloma formation can be viewed as a structural immune response that can contain and halt the spread of the pathogen. In several mammalian hosts, including non-human primates, Mtb granulomas are often hypoxic, although this has not been observed in wild type murine infection models. While a presumed consequence, the structural contribution of the granuloma to oxygen limitation and the concomitant impact on Mtb metabolic viability and persistence remains to be fully explored. We develop a multiscale computational model to test to what extent in vivo Mtb granulomas become hypoxic, and investigate the effects of hypoxia on host immune response efficacy and mycobacterial persistence. Our study integrates a physiological model of oxygen dynamics in the extracellular space of alveolar tissue, an agent-based model of cellular immune response, and a systems biology-based model of Mtb metabolic dynamics. Our theoretical studies suggest that the dynamics of granuloma organization mediates oxygen availability and illustrates the immunological contribution of this structural host response to infection outcome. Furthermore, our integrated model demonstrates the link between structural immune response and mechanistic drivers influencing Mtb's adaptation to its changing microenvironment and the qualitative infection outcome scenarios of clearance, containment, dissemination, and a newly observed theoretical outcome of transient containment. We observed hypoxic regions in the containment granuloma similar in size to granulomas found in mammalian in vivo models of Mtb infection. In the case of the containment outcome, our model uniquely demonstrates that immune response mediated hypoxic conditions help foster the shift down of bacteria through two stages of adaptation similar to the in vitro non-replicating persistence (NRP) observed in the Wayne model of Mtb dormancy.
The adaptation in part contributes to the ability of Mtb to remain dormant for years after initial infection.
Jagannathan, Kaushik; Benson, David M.; Robinson, David; Stickney, John L.
Nanofilms of Pd were grown using an electrochemical form of atomic layer deposition (E-ALD) on 100 nm evaporated Au films on glass. Multiple cycles of surface-limited redox replacement (SLRR) were used to grow deposits. Each SLRR involved the underpotential deposition (UPD) of a Cu atomic layer, followed by open circuit replacement via redox exchange with tetrachloropalladate, forming a Pd atomic layer: one E-ALD deposition cycle. That cycle was repeated in order to grow deposits of a desired thickness. 5 cycles of Pd deposition were performed on the Au on glass substrates, resulting in the formation of 2.5 monolayers of Pd. Those Pd films were then modified with varying coverages of Pt, also formed using SLRR. The amount of Pt was controlled by changing the potential for Cu UPD, and by increasing the number of Pt deposition cycles. Hydrogen absorption was studied using coulometry and cyclic voltammetry in 0.1 M H2SO4 as a function of Pt coverage. The presence of even a small fraction of a Pt monolayer dramatically increased the rate of hydrogen desorption. However, this did not reduce the films' hydrogen storage capacity. The increase in desorption rate in the presence of Pt was over an order of magnitude.
A new model of electrodeposition and electrodissolution is developed and applied to the evolution of Mg deposits during anode cycling. The model captures Butler-Volmer kinetics, facet evolution, the spatially varying potential in the electrolyte, and the time-dependent electrolyte concentration. The model utilizes a diffuse interface approach, employing the phase field and smoothed boundary methods. Scanning electron microscope (SEM) images of magnesium deposited on a gold substrate show the formation of faceted deposits, often in the form of hexagonal prisms. Orientation-dependent reaction rate coefficients were parameterized using the experimental SEM images. Three-dimensional simulations of the growth of magnesium deposits yield deposit morphologies consistent with the experimental results. The simulations predict that the deposits become narrower and taller as the current density increases due to the depletion of the electrolyte concentration near the sides of the deposits. Increasing the distance between the deposits leads to increased depletion of the electrolyte surrounding the deposit. Two models relating the orientation-dependence of the deposition and dissolution reactions are presented. The morphology of the Mg deposit after one deposition-dissolution cycle is significantly different between the two orientation-dependence models, providing testable predictions that suggest the underlying physical mechanisms governing morphology evolution during deposition and dissolution.
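The Butler-Volmer kinetics captured by the model above take a standard form relating net current density to overpotential. The sketch below uses generic, illustrative parameter values, not the orientation-dependent coefficients fitted in this work:

```python
import math

F, R, T = 96485.0, 8.314, 298.15  # Faraday const (C/mol), gas const (J/mol/K), temp (K)

def butler_volmer(eta, i0=1.0, alpha_a=0.5, alpha_c=0.5):
    """Net current density (anodic positive) for overpotential eta (V):
    i = i0 * [exp(alpha_a*F*eta/(R*T)) - exp(-alpha_c*F*eta/(R*T))],
    where i0 is the exchange current density."""
    return i0 * (math.exp(alpha_a * F * eta / (R * T))
                 - math.exp(-alpha_c * F * eta / (R * T)))

assert butler_volmer(0.0) == 0.0   # no net current at equilibrium
assert butler_volmer(0.05) > 0     # anodic overpotential drives dissolution
assert butler_volmer(-0.05) < 0    # cathodic overpotential drives deposition
```

In the deposition model, making the rate coefficients orientation-dependent (as parameterized from the SEM images) is what produces faceted, hexagonal-prism deposit morphologies.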
The peridynamic theory of solid mechanics provides a natural framework for modeling constitutive response and simulating dynamic crack propagation, pervasive damage, and fragmentation. In the case of a fragmenting body, the principal quantities of interest include the number of fragments, and the masses and velocities of the fragments. We present a method for identifying individual fragments in a peridynamic simulation. We restrict ourselves to the meshfree approach of Silling and Askari, in which nodal volumes are used to discretize the computational domain. Nodal volumes, which are connected by peridynamic bonds, may separate as a result of material damage and form groups that represent fragments. Nodes within each fragment have similar velocities and their collective motion resembles that of a rigid body. The identification of fragments is achieved through inspection of the peridynamic bonds, established at the onset of the simulation, and the evolving damage value associated with each bond. An iterative approach allows for the identification of isolated groups of nodal volumes by traversing the network of bonds present in a body. The process of identifying fragments may be carried out at specified times during the simulation, revealing the progression of damage and the creation of fragments. Incorporating the fragment identification algorithm directly within the simulation code avoids the need to write bond data to disk, which is often prohibitively expensive. Results are recorded using fragment identification numbers. The identification number for each fragment is stored at each node within the fragment and written to disk, allowing for any number of post-processing operations, for example the construction of cumulative distribution functions for quantities of interest. Care is taken with regard to very small clusters of isolated nodes, including individual nodes for which all bonds have failed. 
Small clusters of nodes may be treated as tiny fragments, or may be omitted from the fragment identification process. The fragment identification algorithm is demonstrated using the Sierra/SolidMechanics analysis code. It is applied to a simulation of pervasive damage resulting from a spherical projectile impacting a brittle disk, and to a simulation of fragmentation of an expanding ductile ring.
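The bond-traversal idea described above amounts to a connected-components search over the network of intact bonds. The sketch below is a toy stand-in for the Sierra/SolidMechanics implementation, with a hypothetical damage threshold for deciding when a bond no longer connects two nodal volumes:

```python
def identify_fragments(n_nodes, bonds, damage, threshold=0.99):
    """Assign a fragment ID to each node by traversing the bond network,
    skipping bonds whose damage meets the threshold (broken bonds)."""
    # Build adjacency over intact bonds only.
    adj = [[] for _ in range(n_nodes)]
    for (i, j), d in zip(bonds, damage):
        if d < threshold:
            adj[i].append(j)
            adj[j].append(i)
    frag = [-1] * n_nodes
    next_id = 0
    for start in range(n_nodes):
        if frag[start] != -1:
            continue
        # Iterative flood fill: collect every node reachable from start.
        stack = [start]
        frag[start] = next_id
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if frag[v] == -1:
                    frag[v] = next_id
                    stack.append(v)
        next_id += 1
    return frag

# Four nodes in a chain; the middle bond is fully damaged: two fragments.
bonds = [(0, 1), (1, 2), (2, 3)]
damage = [0.0, 1.0, 0.1]
assert identify_fragments(4, bonds, damage) == [0, 0, 1, 1]
```

Storing the resulting per-node fragment IDs, rather than the bond data itself, is what lets post-processing (e.g., cumulative distributions of fragment mass) proceed without writing the full bond network to disk.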
Additive manufacturing (AM) has reached a critical point which enables production of complex, high resolution, custom parts from robust materials. However, traditional fasteners are still used to join these complex parts together. Integrating fasteners into additively manufactured parts is beneficial for part production, but there is uncertainty in their design. To understand how the fasteners fit and function, mechanical property data was collected on the prototypes. This data, along with insights gained while building and testing the prototypes, increased the knowledge base of design for additive manufacturing and build-to-build variability in selective laser sintering (SLS).
Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.
Computational Science and Engineering (CSE) software can benefit substantially from an explicit focus on quality improvement. This is especially true as we face increased demands in both modeling and software complexities. At the same time, just desiring improved quality is not sufficient. We must work with the entities that provide CSE research teams with publication venues, funding, and professional recognition in order to increase incentives for improved software quality. In fact, software quality is precisely calibrated to the expectations, explicit and implicit, set by these entities. We will see broad improvements in sustainability and productivity only when publishers, funding agencies and employers raise their expectations for software quality. CSE software community leaders, those who are in a position to inform and influence these entities, have a unique opportunity to broadly and positively impact software quality by working to establish incentives that will spur creative and novel approaches to improve developer productivity and software sustainability.
Extensive all-atom molecular dynamics calculations on the water-squalane interface for nine different loadings with sorbitan monooleate (SPAN80), at T = 300 K, are analyzed for the surface tension equation of state, desorption free energy profiles as they depend on loading, and to evaluate escape times for absorbed SPAN80 into the bulk phases. These results suggest that loading only weakly affects accommodation of a SPAN80 molecule by this squalane-water interface. Specifically, the surface tension equation of state is simple through the range of high tension to high loading studied, and the desorption free energy profiles are weakly dependent on loading here. The perpendicular motion of the centroid of the SPAN80 head-group ring is well-described by a diffusional model near the minimum of the desorption free energy profile. Lateral diffusional motion is weakly dependent on loading. Escape times evaluated on the basis of a diffusional model and the desorption free energies are 7 × 10⁻² s (into the squalane) and 3 × 10² h (into the water). The latter value is consistent with irreversible absorption observed by related experimental work.
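The escape-time estimate above combines a diffusional model with the desorption free-energy profile. A minimal sketch of that arithmetic follows; all numerical values (barrier height, diffusion coefficient, well width) are illustrative assumptions for demonstration, not values taken from the study:

```python
import math

# Mean first-passage (Kramers-like) escape-time estimate from a diffusional
# model: tau ~ (l**2 / D) * exp(dG / (kB * T)).
# All numbers below are illustrative assumptions, not the study's data.

KB = 1.380649e-23  # Boltzmann constant, J/K

def escape_time(barrier_kj_per_mol, diffusion_m2_s, length_m, temp_k=300.0):
    """Estimate the escape time over a free-energy barrier.

    barrier_kj_per_mol : desorption free-energy barrier (kJ/mol)
    diffusion_m2_s     : diffusion coefficient perpendicular to the interface
    length_m           : characteristic width of the interfacial well
    """
    NA = 6.02214076e23                               # Avogadro's number
    dg_joules = barrier_kj_per_mol * 1e3 / NA        # per-molecule barrier
    attempt = length_m**2 / diffusion_m2_s           # diffusional time scale
    return attempt * math.exp(dg_joules / (KB * temp_k))

# Example with assumed numbers: a ~40 kJ/mol barrier, D ~ 1e-10 m^2/s,
# well width ~ 0.5 nm, T = 300 K.
tau = escape_time(40.0, 1e-10, 0.5e-9)
print(f"estimated escape time: {tau:.2e} s")
```

The exponential dependence on the barrier height is what separates the fast escape into squalane from the effectively irreversible absorption into water.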
Enteric and diarrheal diseases are a major cause of childhood illness and death in countries with developing economies. Each year, more than half a million children under the age of five die from these diseases. We have developed a portable, microfluidic platform capable of simultaneous, multiplexed detection of several of the bacterial pathogens that cause these diseases. This platform can perform fast, sensitive immunoassays directly from relevant, complex clinical matrices such as stool without extensive sample cleanup or preparation. Using only 1 μL of sample per assay, we demonstrate simultaneous multiplexed detection of four bacterial pathogens implicated in diarrheal and enteric diseases in less than 20 min.
Solid freeform fabrication has the potential to address both financial and environmental concerns for manufacturing enterprises. However, when planning for installation of a new machine tool, accurate energy-usage estimation relies heavily on the data and model selections of the estimator. This project used a variety of data sources and model decision options to examine the spread of energy-consumption and global-warming-potential estimates for a fused deposition modeling machine. In addition to primary and secondary data sources, the use of similar machines was explored as a proxy for the target machine. A Monte Carlo simulation was constructed to vary the model selections, machine utilization, and data sources. The results indicated that data sources and model decisions had large effects on the output and that most model estimates were low.
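The Monte Carlo structure described above can be sketched in a few lines: each draw picks a data source and a utilization level, then computes an annual energy estimate. The specific-energy values, build rate, and utilization range below are illustrative assumptions, not the project's data:

```python
import random
import statistics

# Minimal Monte Carlo sketch: vary the data source and machine utilization,
# then compute annual energy use for a hypothetical fused deposition modeling
# machine. All numbers are illustrative assumptions, not the study's data.

random.seed(42)

# Candidate specific-energy estimates (kWh per kg of deposited material),
# standing in for primary, secondary, and proxy-machine data sources.
DATA_SOURCES = {"primary": 23.0, "secondary": 31.0, "proxy_machine": 47.0}

def draw_annual_energy():
    specific_energy = random.choice(list(DATA_SOURCES.values()))
    utilization = random.uniform(0.2, 0.8)   # fraction of the year in use
    throughput_kg_per_hr = 0.05              # assumed build rate
    hours = 8760 * utilization
    return specific_energy * throughput_kg_per_hr * hours  # kWh/year

samples = [draw_annual_energy() for _ in range(10_000)]
print(f"mean  : {statistics.mean(samples):8.1f} kWh/yr")
print(f"stdev : {statistics.stdev(samples):8.1f} kWh/yr")
```

The wide spread of the resulting distribution illustrates how strongly the data-source and utilization choices drive the final estimate.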
Previous work has shown that conventional diesel ignition improvers, 2-ethylhexyl nitrate (EHN) and di-tert-butyl peroxide (DTBP), can be used to enhance the autoignition of a regular-grade E10 gasoline in a well premixed low-temperature gasoline combustion (LTGC) engine, hereafter termed an HCCI engine, at naturally aspirated and moderately boosted conditions (up to 180 kPa absolute) with a constant engine speed of 1200 rpm and a 14:1 compression ratio. In the current work the effect of EHN on boosted HCCI combustion is further investigated with a higher compression ratio (16:1) piston and over a range of engine speeds (up to 2400 rpm). The results show that the higher compression ratio and engine speeds can make the combustion of a regular-grade E10 gasoline somewhat less stable. The addition of EHN improves the combustion stability by allowing combustion phasing to be more advanced for the same ringing intensity. The high-load limits of both the straight (unadditized) and additized fuels are determined, and the additized fuel is found to achieve a higher maximum load at all engine speeds and intake pressures tested, if it is not limited by lack of oxygen. The results reveal that the higher loads with EHN are the result of either reduced intake temperature requirements at naturally aspirated conditions or a reduction in heat release rate at higher intake pressures. Such effects are also found to increase the thermal efficiency, and a maximum indicated thermal efficiency of 50.1% is found for 0.15% EHN additized fuel at 1800 rpm and 180 kPa intake pressure. Similar to previous studies, the nitrogen in EHN increases NOx emissions, but they remain well below US-2010 standards. Higher engine speeds are found to have slightly lower NOx emissions for additized fuel at intake boosted conditions.
Influence spread is an important phenomenon that occurs in many social networks. Influence maximization is the corresponding problem of finding the most influential nodes in these networks. In this paper, we present a new influence diffusion model, based on pairwise factor graphs, that captures dependencies and directions of influence among neighboring nodes. We use an augmented belief propagation algorithm to efficiently compute influence spread on this model so that the direction of influence is preserved. Due to its simplicity, the model can be used on large graphs with high-degree nodes, making the influence maximization problem practical on large, real-world graphs. Using large Flixster and Epinions datasets, we provide experimental results showing that our model predictions match well with ground-truth influence spreads, far better than other techniques. Furthermore, we show that the influential nodes identified by our model achieve significantly higher influence spread compared to other popular models. The model parameters can easily be learned from basic, readily available training data. In the absence of training, our approach can still be used to identify influential seed nodes.
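For context, the standard yardstick that influence-spread models are compared against is a Monte Carlo estimate under the independent cascade model, one of the "other popular models" mentioned above. This sketch is that baseline, not the paper's factor-graph model; the toy graph and edge probability are assumptions for demonstration:

```python
import random

# Monte Carlo estimate of expected influence spread under the independent
# cascade model. The graph and activation probability are illustrative
# assumptions; this is a common baseline, not the paper's factor-graph model.

def cascade(graph, seeds, p, rng):
    """One simulated cascade: each newly active node activates each
    inactive out-neighbor independently with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, p=0.1, trials=5000, seed=0):
    """Average cascade size over many independent simulations."""
    rng = random.Random(seed)
    return sum(cascade(graph, seeds, p, rng) for _ in range(trials)) / trials

# Toy directed graph as adjacency lists.
g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
print(expected_spread(g, seeds=[0]))
```

The cost of this baseline, thousands of simulations per seed set, is what motivates closed-form or message-passing estimators such as the belief propagation approach described above.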
The Hippogriff camera developed at Sandia National Laboratories as part of the Ultra-Fast X-ray Imager (UXI) program is a high-speed, multi-frame, time-gated imager for use on a wide variety of High Energy Density (HED) physics experiments on both Sandia's Z-Machine and the National Ignition Facility. The camera is a 1024 × 448 pixel array with 25 μm spatial resolution, containing 2 frames per pixel natively, and has achieved 2 ns minimum integration time. It is sensitive to both optical photons as well as soft X-rays up to ∼6 keV. The Hippogriff camera is the second generation UXI camera that contains circuitry to trade spatial resolution for additional frames of temporal coverage. The user can reduce the row-wise spatial resolution from the native 25 μm to increase the number of frames in a data set to 4 frames at 50 μm or 8 frames at 100 μm spatial resolution. This feature, along with both optical and X-ray sensitivity, facilitates additional experimental flexibility. Minimum signal is 1500 e- rms and full well is 1.5 million e-.