It is often prohibitively expensive to integrate the response of a high order nonlinear system, such as a finite element model of a nonlinear structure, so a set of linear eigenvectors is often used as a basis in order to create a reduced order model (ROM). By augmenting the linear basis with a small set of discontinuous basis functions, ROMs of systems with local nonlinearities have been shown to compare well with the corresponding full order models. When evaluating the quality of a ROM, it is common to compare the time response of the model to that of the full order system, but the time response is a complicated function that depends on a predetermined set of initial conditions or external force. This is difficult to use as a metric to measure convergence of a ROM, particularly for systems with strong, non-smooth nonlinearities, for two reasons: (1) the accuracy of the response depends directly on the amplitude of the load/initial conditions, and (2) small differences between two signals can become large over time. Here, a validation metric is proposed that is based solely on the ROM’s equations of motion. The nonlinear normal modes (NNMs) of the ROMs are computed and tracked as modes are added to the basis set. The NNMs are expected to converge to the true NNMs of the full order system with a sufficient set of basis vectors. This comparison captures the effect of the nonlinearity through a range of amplitudes of the system, and is akin to comparing natural frequencies and mode shapes for a linear structure. In this research, the convergence metric is evaluated on a simply supported beam with a contacting nonlinearity modeled as a unilateral piecewise-linear function. Various time responses are compared to show that the NNMs provide a good measure of the accuracy of the ROM. The results suggest the feasibility of using NNMs as a convergence metric for reduced order modeling of systems with various types of nonlinearities.
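The contacting nonlinearity described above, a unilateral piecewise-linear function, can be sketched as a simple gap-spring law. The gap g and contact stiffness k_c below are hypothetical illustration values, not parameters from the study:

```python
# Illustrative unilateral piecewise-linear contact force: zero until the
# displacement x closes the gap g, then linear with contact stiffness k_c.
# (g and k_c are hypothetical values chosen only for illustration.)
def contact_force(x, g=1e-3, k_c=1e6):
    """Restoring force of a unilateral (one-sided) gap spring."""
    return k_c * (x - g) if x > g else 0.0

print(contact_force(0.5e-3))  # gap open: no contact force
print(contact_force(2e-3))    # gap closed: linear contact force engaged
```

The non-smooth kink at x = g is what makes the time response so sensitive to load amplitude, motivating the amplitude-dependent NNM comparison proposed in the abstract.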
The reflection of an optical wave from metal, arising from strong interactions between the optical electric field and the free carriers of the metal, is accompanied by a phase reversal of the reflected electric field. A far less common route to achieving high reflectivity exploits strong interactions between the material and the optical magnetic field to produce a “magnetic mirror” that does not reverse the phase of the reflected electric field. At optical frequencies, the magnetic properties required for strong interaction can be achieved only by using artificially tailored materials. Here, we experimentally demonstrate, for the first time to the best of our knowledge, the magnetic mirror behavior of a low-loss all-dielectric metasurface at infrared optical frequencies through direct measurements of the phase and amplitude of the reflected optical wave. The enhanced absorption and emission of transverse-electric dipoles placed close to magnetic mirrors can lead to exciting new advances in sensors, photodetectors, and light sources.
The voltage on a single-turn loop inside an enclosure characterizes the enclosure shielding effectiveness against a lightning insult. In this paper, the maximum induced voltage on a single-turn loop inside an enclosure from lightning coupling to a metal enclosure wall is expressed in terms of two multiplicative factors: (A) the normalized enclosure wall peak penetration ratio (i.e., ratio of the peak interior electric field multiplied by the sheet conductance to the exterior magnetic field) and (B) the DC voltage on an ideal optimum coupling loop assuming the ideal penetration ratio of one. As a result of the decomposition, the variation of the peak penetration ratio (A) for different coupling mechanisms is found to be small; the difference in the maximum voltage hence arises from the DC voltage on the optimum coupling loop (B). Maximum voltages on an optimum coupling loop inside a finite cylinder enclosure for direct attachment and a lightning line source at different distances from the enclosure are given in Table 3.
This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh-resolution and numerical-parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution’s sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.
Several tensor eigenpair definitions have been put forth in the past decade, but these can all be unified under the generalized tensor eigenpair framework introduced by Chang, Pearson, and Zhang [J. Math. Anal. Appl., 350 (2009), pp. 416-422]. Given mth-order, n-dimensional real-valued symmetric tensors A and B, the goal is to find λ ∈ ℝ and x ∈ ℝⁿ, x ≠ 0, such that Ax^(m-1) = λBx^(m-1). Different choices for B yield different versions of the tensor eigenvalue problem. We present our generalized eigenproblem adaptive power (GEAP) method for solving the problem, which is an extension of the shifted symmetric higher-order power method (SS-HOPM) for finding Z-eigenpairs. A major drawback of SS-HOPM is that its performance depends on choosing an appropriate shift; our GEAP method instead includes an adaptive scheme for choosing the shift automatically.
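A minimal sketch of the shifted symmetric higher-order power iteration (the SS-HOPM-style fixed-shift version, not the adaptive GEAP shift described above) for Z-eigenpairs of a symmetric 4th-order tensor, i.e. solutions of A x^(m-1) = λx with ‖x‖ = 1. The fixed shift and the rank-one test tensor are illustrative choices:

```python
import numpy as np

def shifted_power_method(A, x0, alpha=1.0, iters=200):
    """Fixed-shift symmetric higher-order power iteration for Z-eigenpairs
    of a symmetric 4th-order tensor A (alpha is an illustrative shift)."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        Ax3 = np.einsum('ijkl,j,k,l->i', A, x, x, x)  # A x^(m-1), m = 4
        y = Ax3 + alpha * x                           # shifted update
        x = y / np.linalg.norm(y)                     # renormalize
    lam = x @ np.einsum('ijkl,j,k,l->i', A, x, x, x)  # eigenvalue estimate
    return lam, x

# Rank-one symmetric test tensor A = a ⊗ a ⊗ a ⊗ a with unit vector a,
# whose dominant Z-eigenpair is (1, a).
a = np.array([3.0, 4.0]) / 5.0
A = np.einsum('i,j,k,l->ijkl', a, a, a, a)
lam, x = shifted_power_method(A, np.array([1.0, 0.0]))
print(round(lam, 6))  # -> 1.0
```

Choosing B = the identity-like tensor recovers this Z-eigenpair case; other choices of B in the generalized problem change only the normalization step, which is the flexibility the GEAP framework exploits.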
When a system is subjected to a field environment, it is desirable to know the forces or inputs acting on it. With the inputs, one can directly use a finite element or experimental model to predict responses not measured in a field test. One can attempt to measure point forces using force gauges; however, these gauges are often insufficient due to the inability to place a gauge at a forcing interface or to measure a force applied over an area. The Sum of Weighted Accelerations Technique (SWAT) is a method that uses mode shapes as a modal filter on measured accelerations to solve the inverse problem and calculate the forces and moments on the system. This paper will examine an application where the use of a force gauge is impossible because the external forces are applied over an area. The paper will calculate the sum of the forces and moments imparted on the system and will use a finite element model to check the plausibility of the calculated forces.
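A minimal sketch of the SWAT idea, assuming a least-squares choice of sensor weights that annihilates the elastic mode shapes while recovering mass times rigid-body acceleration (by Newton's second law, the net external force). The mode shapes, mass, and acceleration signals below are synthetic illustrations, not data from the paper:

```python
import numpy as np

def swat_weights(phi_rigid, phi_elastic, mass):
    """Solve [phi_rigid, phi_elastic]^T w = [mass, 0]^T in a least-squares
    sense, so the weighted sum of accelerations filters out elastic modes
    and returns mass * rigid-body acceleration = net external force."""
    Phi = np.hstack([phi_rigid, phi_elastic])      # n_sensors x n_modes
    target = np.zeros(Phi.shape[1])
    target[:phi_rigid.shape[1]] = mass
    w, *_ = np.linalg.lstsq(Phi.T, target, rcond=None)
    return w

# Four sensors, one rigid translation mode, one synthetic elastic mode.
phi_r = np.ones((4, 1))
phi_e = np.array([[1.0], [-1.0], [1.0], [-1.0]])
w = swat_weights(phi_r, phi_e, mass=2.0)

# Accelerations = rigid part (1 m/s^2 everywhere) + large elastic part;
# the weighted sum filters the elastic contribution and recovers the
# net force = mass * rigid acceleration.
acc = 1.0 * phi_r[:, 0] + 5.0 * phi_e[:, 0]
print(np.dot(w, acc))
```

Because the weights are orthogonal to the elastic shapes, the elastic response cancels exactly in the weighted sum regardless of its amplitude, which is what makes the technique usable when the force itself cannot be gauged.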
Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image square centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, imaging time is proportional to the number of measurements taken of each sample; in a traditional SEM, large collections can lead to weeks of around-the-clock imaging time. We previously reported a single-beam sparse sampling approach that we have demonstrated on an operational SEM for collecting "smooth" images. In this paper, we analyze how measurements from a hypothetical multi-beam system would compare to the single-beam approach in a compressed sensing framework. To that end, multi-beam measurements are synthesized on a single-beam SEM, and the fidelity of reconstructed images is compared to the previously demonstrated approach. Since taking fewer measurements comes at the cost of reduced SNR, image fidelity as a function of undersampling ratio is reported.
We have demonstrated single-mode lasing in a single gallium nitride nanowire using distributed feedback by external coupling to a dielectric grating. By adjusting the nanowire-grating alignment we achieved a mode suppression ratio of 17 dB.
Transient responses of high-Q nano-optomechanical modes are characterized with Interleaved-ASOPS, where pump-induced transients are interrogated with multiple probe pulses. Temporal resolution increases linearly with the number of probe pulses beyond conventional ASOPS, achieving sub-ps resolution over μs durations.
Biaxially oriented polyethylene terephthalate (BO-PET/Mylar®) polymer film is commonly used as the dielectric in high-voltage, pulse-discharge capacitors because of its high dielectric strength and insulation resistance over a wide temperature range [1]. This study focuses on the use of a systematic physics of failure (PoF) approach to assess possible design and fabrication problems in BO-PET capacitors. A destructive physical analysis (DPA) procedure, which is an essential technique in understanding the failure modes and mechanisms in capacitors, has been developed through this research. Short-term breakdown (STB) testing was performed on capacitors from two independent development builds and the results are compared. It was identified that the two primary failure mechanisms occurring in these capacitors under high voltage conditions were edge margin arc-over and dielectric punch-through. Evaluation of the electrical parameters after accelerated voltage testing revealed that the combination of lower than expected voltage breakdown values (near the voltage rating of 3.6 kV) and in-spec capacitance and dissipation factor (C/DF) values indicated an arc-over failure, while high voltage breakdown values (greater than 2.5 times the voltage rating) and out-of-spec C/DF values indicated a dielectric punch-through failure. Thick buried edges, creasing, high curvature, insufficient inactive wraps, arc spray, and inadequate edge margin were some of the modes that led to arc-over and punch-through failures. Many of these failure modes were traced back to unsuitably designed capacitors or issues with the process control during manufacturing.
Sanders, Charlotte E.; Zhang, Chendong; Kellogg, Gary L.; Shih, Chih K.
Epitaxially grown silver (Ag) film on silicon (Si) is an optimal plasmonic device platform, but its technological utility has been limited by its tendency to dewet rapidly under ambient conditions (standard temperature and pressure). The mechanisms driving this dewetting have not heretofore been determined. In this study, scanning probe microscopy and low-energy electron microscopy are used to compare the morphological evolution of epitaxial Ag(111)/Si(111) under ambient conditions with that of similarly prepared films heated under ultra-high vacuum (UHV) conditions. Dewetting in both cases is seen to be initiated with the formation of pinholes, which might function to relieve strain in the film. We find that in the UHV environment, dewetting is determined by thermal processes, while under ambient conditions, thermal processes are not required. We conclude that dewetting in ambient conditions is triggered by some chemical process, most likely oxidation.
The use of computational models to simulate the behavior of complex mechanical systems is ubiquitous in many high consequence applications such as aerospace systems. Results from these simulations are being used, among other things, to inform decisions regarding system reliability and margin assessment. In order to properly support these decisions, uncertainty needs to be accounted for. To this end, it is necessary to identify, quantify and propagate different sources of uncertainty as they relate to these modeling efforts. Some sources of uncertainty arise from the following: (1) modeling assumptions and approximations, (2) solution convergence, (3) differences between model predictions and experiments, (4) physical variability, (5) the coupling of various components and (6) unknown unknowns. An additional aspect of the problem is the limited information available at the full system level in the application space. This is offset, in some instances, by information on individual components at testable conditions. In this paper, we focus on the quantification of uncertainty due to differences in model prediction and experiments, and present a technique to aggregate and propagate uncertainty from the component level to the full system in the application space. A numerical example based on a structural dynamics application is used to demonstrate the technique.
Recent experimental investigations show that most models are not able to capture the ductile behavior of metal alloys over the entire triaxiality range, especially at low triaxiality. Modelers are moving beyond stress triaxiality as the dominant indicator of material failure and developing constitutive models that incorporate shear into the evolution of the failure model. Available data covering the low-triaxiality range are rare, and a series of critical experiments is needed. Here, experiments on smooth thin-walled as well as notched tubular specimens of Al6061-T651 under combined tension-torsion loading were conducted. This provides a very basic set of data for phenomenological models. A full-field deformation technique, digital image correlation (DIC), was applied to these tests to allow measurement of the field deformation, including in the notched area. The microstructural features of the tested specimens were characterized to better understand the different failure mechanisms that led to ductility variation in the aluminum alloy.
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission's Advisory Committee on Reactor Safeguards, a 'high' source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of "zero" results).
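The LHS-versus-SRS comparison above can be illustrated with a toy stand-in for the consequence model; the quadratic response, sample size, and random seed below are arbitrary choices, not anything from the SOARCA analysis:

```python
import numpy as np

def lhs_uniform(n, rng):
    """One-dimensional Latin Hypercube Sample on [0, 1): one draw in each
    of n equal-probability strata, returned in shuffled order."""
    strata = (np.arange(n) + rng.random(n)) / n
    return rng.permutation(strata)

rng = np.random.default_rng(42)
f = lambda u: u**2                 # toy stand-in for the consequence model
n = 1000

est_lhs = f(lhs_uniform(n, rng)).mean()   # stratified (LHS) estimate
est_srs = f(rng.random(n)).mean()         # simple random sampling estimate

# True mean of u^2 on [0, 1) is 1/3; LHS stratification typically gives
# the smaller error at the same sample size.
print(abs(est_lhs - 1/3), abs(est_srs - 1/3))
```

Stratification bounds the LHS error of this smooth response by roughly 1/n, while the SRS error shrinks only like 1/sqrt(n), which is why LHS is commonly preferred for uncertainty analyses at fixed sample budgets.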
Wafer-level step-stress experiments on high voltage Npn InGaP/GaAs HBTs are presented. A methodology utilizing brief, monotonically increasing stresses and periodic, interrupted parametric characterization is presented. The method and various examples of step-stressed HBTs illustrate the value of the technique for screening the reliability of HBT wafers. Degradation modes observed in these InGaP/GaAs HBTs closely correspond to a subset of those in other, longer types of reliability experiments and can be relevant in a reliability screen. A statistical sampling of HBT wafers reveals a consistently realized critical destructive limit over a very narrow power range, which indicates that thermal stress is the main cause of degradation. When stepped just shy of the destructive limit, electrical characteristics are capable of revealing gradual degradation. The end state of stressing typically involves shorting of both the base-emitter and base-collector junctions. Interrupted characterization revealed cases where base-emitter shorts preceded base-collector shorts and other cases where base-collector shorts occurred first. Examples of degradation include reductions in reverse breakdown voltage, increases in the offset voltage, and drops in current gain. These wafer-level step-stress techniques show promise for reducing the large time lag between wafer fabrication and useful reliability screening in HBTs.
Within large organizations, the defense of cyber assets generally involves the use of various mechanisms, such as intrusion detection systems, to alert cyber security personnel to suspicious network activity. Resulting alerts are reviewed by the organization's cyber security personnel to investigate and assess the threat and initiate appropriate actions to defend the organization's network assets. While automated software routines are essential to cope with the massive volumes of data transmitted across data networks, the ultimate success of an organization's efforts to resist adversarial attacks upon their cyber assets relies on the effectiveness of individuals and teams. This paper reports research to understand the factors that impact the effectiveness of Cyber Security Incident Response Teams (CSIRTs). Specifically, a simulation is described that captures the workflow within a CSIRT. The simulation is then demonstrated in a study comparing the differential response time to threats that vary with respect to key characteristics (attack trajectory, targeted asset and perpetrator). It is shown that the results of the simulation correlate with data from the actual incident response times of a professional CSIRT.
As compact and lightweight power sources with reliable, long lives, Radioisotope Power Systems (RPSs) have made space missions to explore the solar system possible. Due to the hazardous material that can be released during a launch accident, the potential health risk of an accident must be quantified, so that appropriate launch approval decisions can be made. One part of the risk estimation involves modeling the response of the RPS to potential accident environments. Due to the complexity of modeling the full RPS response deterministically across the many dynamic variables involved, the evaluation is performed stochastically with a Monte Carlo simulation. The potential consequences can be determined by modeling the transport of the hazardous material in the environment and in human biological pathways. The consequence analysis results are summed and weighted by appropriate likelihood values to give a collection of probabilistic results for the estimation of the potential health risk. This information is used to guide RPS designs, spacecraft designs, mission architecture, or launch procedures to potentially reduce the risk, as well as to inform decision makers of the potential health risks resulting from the use of RPSs for space missions.
This paper presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex and hex-dominant meshes of 3D assembly models. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D are similar to tunnels with no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the results section. The paper also describes how this algorithm can be extended to produce all-hex meshes in general geometries.
Supercritical Carbon Dioxide (S-CO2) is an efficient and flexible working fluid for power production. Research to interface S-CO2 systems with nuclear, thermal solar, and fossil energy sources is currently underway. To proceed, we must address concerns regarding high temperature compatibility of materials and compatibility between significantly different heat transfer fluids. Dry, pure S-CO2 is thought to be relatively inert [1], while ppm levels of water and oxygen result in formation of a protective chromia layer and iron oxide [2]. Thin oxides are favorable as diffusion barriers, and for their minimal impact on heat transfer. Chromia, however, is soluble in molten salt systems (nitrate-, chloride-, and fluoride-based salts) [3-8]. Fluoride-anion-based systems required the development of the alloy INOR-8 (Hastelloy N, nickel base, 17% Mo) [9] to ensure that chromium diffusion is minimized, thereby maximizing the life of containment vessels. This paper reviews the thermodynamic and kinetic considerations for promising, industrially available materials for both salt and S-CO2 systems.
A primary concern with dry storage of spent nuclear fuel is chloride-induced stress corrosion cracking (CISCC), caused by deliquescence of salts deposited on the stainless steel canisters. However, limited access through the ventilated overpacks and high surface radiation fields impede direct examination of cask surfaces for CISCC, or sampling of surface deposits. Predictive models for CISCC must be able to predict the occurrence of a corrosive chemical environment (a chloride-rich brine formed by dust deliquescence) at specific locations (e.g., weld zones) on the canister surface. The presence of a deliquescent brine is controlled by the relative humidity (RH), which is a function of absolute humidity and cask surface temperature. This requires a thermal model that includes the canister and overpack design, canister-specific waste heat load, and passive cooling by ventilation. Brine compositions vary with initially-deposited salt assemblage, reactions with atmospheric gases, temperature, and the relative rates of salt deposition and reaction; predicting brine composition requires site-specific compositional data for atmospheric aerosols and acid gases. Aerosol particle transport through the overpack and deposition onto the canister must also be assessed. Initial field data show complex variability in the amount and composition of deposited salts as a function of canister surface location.
Lithium-ion battery electrodes rely on a percolated network of solid particles and binder that must maintain a high electronic conductivity in order to function. Coupled mechanical and electrochemical simulations may be able to elucidate the mechanisms for capacity fade. We present a framework for coupled simulations of electrode mechanics that includes swelling, deformation, and stress generation driven by lithium intercalation. These simulations are performed at the mesoscale, which requires 3D reconstruction of the electrode microstructure from experimental imaging or particle size distributions. We present a novel approach for utilizing these complex reconstructions within a finite element code. A mechanical model that involves anisotropic swelling in response to lithium intercalation drives the deformation. Stresses arise from small-scale particle features and lithium concentration gradients. However, we demonstrate, for the first time, that the largest stresses arise from particle-to-particle contacts, making it important to accurately represent the electrode microstructure on the multi-particle scale. Including anisotropy in the swelling mechanics adds considerably more complexity to the stresses and can significantly enhance peak particle stresses. Shear forces arise at contacts due to the misorientation of the lattice structure. These simulations will be used to study mechanical degradation of the electrode structure through charge/discharge cycles.
The US Human Reliability Analysis (HRA) Empirical Study (referred to as the US Study in the article) was conducted to confirm and expand on the insights developed from the International HRA Empirical Study (referred to as the International Study). Similar to the International Study, the US Study evaluated the performance of different HRA methods by comparing method predictions to actual crew performance in simulated accident scenarios conducted in a US nuclear power plant (NPP) simulator. In addition to identification of some new HRA and method related issues, the study design of the US Study allowed insights to be obtained on some issues that were not addressed in the International Study. In particular, because multiple HRA teams applied each method in the US Study, comparing their analyses and predictions allowed separation of analyst effects from method effects and allowed conclusions to be drawn on aspects of methods that are susceptible to different application or usage by different analysts that may lead to differences in results. The findings serve as a strong basis for improving the consistency and robustness of HRA, which in turn facilitates identification of mechanisms for improving operating crew performance in NPPs.
ASME 2014 8th International Conference on Energy Sustainability, ES 2014 Collocated with the ASME 2014 12th International Conference on Fuel Cell Science, Engineering and Technology
Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of flux profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to determine reflected solar rays on the receiver surfaces and construct flux profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries. We provide an example of using SolTrace for modeling non-conventional receiver geometries. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs.
Bottom hole assembly (BHA) designs were assessed in field trials for their ability to achieve critical low inclination requirements, while simultaneously enabling high drill rates. Because angle has historically been controlled by reducing weight on bit (WOB), these are often competing priorities. The use of real time surveillance of mechanical specific energy (MSE) provided unique insights into the bit dysfunction that occurs with many practices used to control angle. These quantitative insights supported the development of BHA and operating practices that maintained low angle while also achieving major gains in drilling performance. The McGinness Hills field in Lander County, Nevada is a geothermal operation with wells drilled in hard metamorphic and crystalline formations. Wellbore inclinations must be maintained below 2.0 degrees in the critical 20-inch interval in order to allow use of lineshaft pumps, which is challenging in the required hole sizes and rock hardness. Formation strengths are similar to petroleum operations in the Rockies and West Texas. Pendulum and packed-hole assemblies were tested, and straight motors and slick assemblies were used for corrections. Well build rates were assumed to be controlled by the three-point curvature in the lower assembly, and stabilizer placement was modified to control this curvature. The effectiveness of the curvature control as WOB was increased was evaluated from inclination measurements. Real time MSE analysis was used to manage bit operating performance and to determine the root causes of bit dysfunction. The results demonstrated that packed-hole assemblies could be designed that controlled inclination while enabling 2-3 times higher WOB, and that the use of pendulum assemblies should be eliminated. Packed assemblies drilled 87% faster. The increased WOB resulted in higher drill rates, major reduction in whirl and extended bit life, which are equally important performance objectives in hard rock drilling.
The use of MSE surveillance allowed the physical processes to be understood deterministically, so that the philosophical design principles can be applied in other petroleum and geothermal operations.
We consider the question of predicting solar adoption using demographic, economic, peer-effect, and predicted system-characteristic features. We use data from San Diego County to evaluate both discrete and continuous models. Additionally, we consider three types of sensitivity analysis to identify which features seem to have the greatest effect on prediction accuracy.
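One common form of the sensitivity analysis mentioned above is permutation importance, which measures how much prediction accuracy degrades when one feature's values are shuffled. The sketch below applies it to a toy linear adoption model with synthetic features; nothing here is taken from the San Diego data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for demographic/economic/peer-effect features.
X = rng.standard_normal((n, 3))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + 0.1 * rng.standard_normal(n)

# Fit a continuous (linear) model and record its baseline error.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((X @ beta - y) ** 2)

# Permutation importance: shuffle one feature at a time and measure the
# increase in mean squared error relative to the baseline.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's association
    importance.append(np.mean((Xp @ beta - y) ** 2) - base_mse)

print(importance)  # the dominant feature should show the largest increase
```

Permutation importance is model-agnostic, so the same loop works unchanged for the discrete (classification) models the abstract mentions by swapping MSE for a classification loss.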
The use of an air curtain blowing across the aperture of a falling-particle receiver has been proposed to mitigate convective heat losses and to protect the flow of particles from external winds. This paper presents experimental and numerical studies that evaluate the impact of an air curtain on the performance of a falling particle receiver. Unheated experimental studies were performed to evaluate the impact of various factors (particle size, particle mass flow rate, particle release location, air-curtain flow rate, and external wind) on particle flow, stability, and loss through the aperture. Numerical simulations were performed to evaluate the impact of an air curtain on the thermal efficiency of a falling particle receiver at different operating temperatures. Results showed that the air curtain reduced particle loss when particles were released near the aperture in the presence of external wind, but the presence of the air curtain did not generally improve the flow characteristics and loss of the particles for other scenarios. Numerical results showed that the presence of an air curtain could reduce the convective heat losses, but only at higher temperatures (>600°C) when buoyant hot air leaving the aperture was significant.
High-temperature receiver designs for solar powered supercritical CO2 Brayton cycles that can produce ∼1 MW of electricity are being investigated. Advantages of a supercritical CO2 closed-loop Brayton cycle with recuperation include high efficiency (∼50%) and a small footprint relative to equivalent systems employing steam Rankine power cycles. Heating for the supercritical CO2 system occurs in a high-temperature solar receiver that can produce temperatures of at least 700 °C. Depending on whether the CO2 is heated directly or indirectly, the receiver may need to withstand pressures up to 20 MPa (200 bar). This paper reviews several high-temperature receiver designs that have been investigated as part of the SERIIUS program. Designs for direct heating of CO2 include volumetric receivers and tubular receivers, while designs for indirect heating include volumetric air receivers, molten-salt and liquid-metal tubular receivers, and falling particle receivers. Indirect receiver designs also allow storage of thermal energy for dispatchable electricity generation. Advantages and disadvantages of alternative designs are presented. Current results show that the most viable options include tubular receiver designs for direct and indirect heating of CO2 and falling particle receiver designs for indirect heating and storage.
ASME 2014 8th International Conference on Energy Sustainability, ES 2014 Collocated with the ASME 2014 12th International Conference on Fuel Cell Science, Engineering and Technology
Cavity receivers have been an integral part of Concentrated Solar Power (CSP) plants for many years. However, falling solid particle receivers (SPR) which employ a cavity design are only in the beginning stages of on-sun testing and evaluation. A prototype SPR has been developed which will be fully integrated into a complete system to demonstrate the effectiveness of this technology in the CSP sector. The receiver is a rectangular cavity with an aperture on the north side, an open bottom (for particle collection), and a slot in the top (for particle curtain injection). The solid particles fall from the top of the cavity through the solar flux and are collected after leaving the receiver. There are inherent design challenges with this type of receiver, including particle curtain opacity, high wall fluxes, high wall temperatures, and high heat losses. CFD calculations using ANSYS FLUENT were performed to evaluate the effectiveness of the current receiver design. The particle curtain mass flow rate needed to be carefully regulated: high enough that the curtain opacity intercepts as much solar radiation as possible, but low enough that the average particle temperature still increases by 200°C. Wall temperatures were shown to be less than 1200°C when the particle curtain mass flow rate is 2.7 kg/s/m, which is critical for the receiver insulation. Increasing the size of the cavity was shown to decrease the incident flux on the cavity walls and also to reduce the wall temperatures. A thermal efficiency of 92% was achieved, but this was obtained with a higher particle mass flow rate, resulting in a lower average particle temperature rise. A final prototype receiver design has been completed utilizing the computational evaluation and past CSP project experiences.
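The mass-flow trade-off described above follows from a simple energy balance on the curtain; a minimal sketch, with an assumed particle specific heat and an illustrative absorbed flux (neither value is taken from the study):

```python
# Hedged energy-balance sketch of the curtain mass-flow trade-off: more
# mass flow gives a more opaque curtain but a smaller temperature rise.
def particle_temp_rise(q_absorbed_per_m, m_dot_per_m, cp=1200.0):
    """Bulk particle temperature rise (K) for a falling curtain.

    q_absorbed_per_m : absorbed solar power per meter of curtain width (W/m)
    m_dot_per_m      : particle mass flow rate per meter of width (kg/s/m)
    cp               : particle specific heat (J/kg/K), an assumed value
    """
    return q_absorbed_per_m / (m_dot_per_m * cp)

# At the 2.7 kg/s/m flow rate quoted above, an assumed absorbed power of
# ~0.65 MW per meter of curtain width gives roughly the targeted 200°C rise.
dT = particle_temp_rise(0.65e6, 2.7)
```

Raising the mass flow rate at fixed absorbed power lowers the rise proportionally, which is the trade-off the design had to balance.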
Fiber-reinforced composite materials offer light-weight solutions to many structural challenges. In the development of high-performance composite structures, a thorough understanding is required of the composite materials themselves as well as methods for the analysis and failure prediction of the relevant composite structures. However, the mechanical properties required for the complete constitutive definition of a composite material can be difficult to determine through experimentation. Therefore, efficient methods are necessary that can be used to determine which properties are relevant to the analysis of a specific structure and to establish a structure's response to a material parameter that can only be defined through estimation. The objectives of this study deal with the computational examination of the four-point flexural characterization of a carbon fiber composite material. Utilizing a novel, orthotropic material model that is capable of predicting progressive composite damage and failure, a sensitivity analysis is completed to establish which material parameters are truly relevant to a simulation's outcome. Then, a parameter study is completed to determine the effect of the relevant material properties' expected variations on the simulated four-point flexural behavior. Lastly, the results of the parameter study are combined with the orthotropic material model to estimate any relevant material properties that could not be determined through experimentation (e.g., in-plane compressive strength). Results indicate that a sensitivity analysis and parameter study can be used to optimize the material definition process. Furthermore, the discussed techniques are validated with experimental data provided for the flexural characterization of the described carbon fiber composite material.
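The one-at-a-time style of sensitivity analysis described above can be sketched generically; the beam-deflection stand-in below and its parameter values are illustrative assumptions, not the study's orthotropic damage model:

```python
def oat_sensitivity(model, params, rel_step=0.05):
    """One-at-a-time sensitivity: relative change in model output per
    relative perturbation of each parameter, holding the others fixed."""
    base = model(**params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1 + rel_step)})
        sens[name] = (model(**perturbed) - base) / (base * rel_step)
    return sens

# Illustrative stand-in for the flexural simulation: midspan deflection of a
# beam in four-point bending (two equal loads P at distance a from the
# supports): delta = P*a*(3L^2 - 4a^2) / (24*E*I).
def deflection(E=130e9, I=2.2e-9, P=500.0, L=0.3, a=0.1):
    return P * a * (3 * L**2 - 4 * a**2) / (24 * E * I)

s = oat_sensitivity(deflection, dict(E=130e9, I=2.2e-9, P=500.0, L=0.3, a=0.1))
# Sensitivity to P is ~ +1 (linear); to E and I it is ~ -1 (inverse).
```

Parameters with near-zero sensitivity can then be dropped from the characterization effort, which is the optimization of the material definition process the abstract refers to.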
This paper describes an experiment to characterize ions generated by a pulsed vacuum arc by using a microwave resonant cavity (MRC) as a transient diagnostic. Specific information is desired on the various species that can drift into the beam during repetitive operation of the arc plasma source. The arc source reference voltage is elevated above ground (∼200 V), which results in a separation of ion species in the beam due to the acceleration experienced by the ions. The cylindrical MRC used in this study has a resonant frequency of ∼2.8 GHz when excited by a continuous RF source in the TM01 mode of operation. When the neutralized ion beam propagates through the MRC located downstream from the arc source, the resonant frequency of the MRC is shifted by the local disturbance in the electric field inside the cavity due to the presence of the electron space charge in the beam. Coupled with the time-of-flight separation of the various ion masses, the MRC resonance shift provides a temporally resolved measurement of beam species and density downstream from the vacuum ion source without the use of a potentially invasive diagnostic such as charge collector plates within the beam cross-section. This diagnostic technique should prove useful in a variety of pulsed ion beam studies and applications in research and industrial environments.
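The time-of-flight separation underpinning the diagnostic can be estimated from the ∼200 V acceleration; a hedged sketch, using hypothetical aluminum ion species and an assumed 0.5 m drift length (the cathode material and geometry are not specified above):

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
AMU = 1.66053906660e-27     # atomic mass constant, kg

def arrival_time(mass_amu, charge_state, drift_m, accel_volts=200.0):
    """Time of flight (s) for an ion accelerated through accel_volts and
    drifting drift_m to the cavity. Neglects the initial plasma-expansion
    velocity of vacuum-arc ions; only the electrostatic contribution
    described above is modeled."""
    v = math.sqrt(2 * charge_state * E_CHARGE * accel_volts / (mass_amu * AMU))
    return drift_m / v

# Hypothetical species: singly and doubly charged aluminum, 0.5 m drift.
t1 = arrival_time(27, 1, 0.5)
t2 = arrival_time(27, 2, 0.5)
# The doubly charged ion arrives earlier by a factor of sqrt(2), so the
# resonance-shift trace separates charge states as well as masses.
```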
Estimation of the x-ray attenuation properties of an object with respect to the energy emitted from the source is a challenging task for traditional Bremsstrahlung sources. This exploratory work attempts to estimate the x-ray attenuation profile of an object across the energy range of a given Bremsstrahlung spectrum. Previous work has shown that calculating a single effective attenuation value for a polychromatic source is not accurate due to the non-linearities associated with the image formation process. Instead, we completely characterize the imaging system virtually and utilize an iterative search method/constrained optimization technique to approximate the attenuation profile of the object of interest. This work presents preliminary results from the various approaches that were investigated. The early results illustrate the challenges associated with these techniques and the potential for obtaining an accurate estimate of the attenuation profile for objects composed of homogeneous materials.
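A minimal sketch of the forward model behind this estimation problem, polychromatic Beer-Lambert transmission, illustrates why a single effective attenuation value fails; the three-bin spectrum and attenuation values below are assumptions for illustration only:

```python
import math

def transmission(spectrum, mu, thickness_cm):
    """Fraction of source intensity transmitted, summed over energy bins.

    spectrum : relative source weights per energy bin (assumed, sum to 1)
    mu       : linear attenuation coefficients (1/cm) per bin (assumed)
    """
    return sum(w * math.exp(-m * thickness_cm) for w, m in zip(spectrum, mu))

def effective_mu(spectrum, mu, thickness_cm):
    """Single 'effective' attenuation value implied by one measurement.
    Because the exponential is nonlinear, this value drifts with thickness
    (beam hardening) -- the non-linearity noted above."""
    return -math.log(transmission(spectrum, mu, thickness_cm)) / thickness_cm

spectrum = [0.5, 0.3, 0.2]   # assumed 3-bin Bremsstrahlung weights
mu = [0.8, 0.4, 0.2]         # assumed mu(E); softer bins attenuate more
thin, thick = effective_mu(spectrum, mu, 0.5), effective_mu(spectrum, mu, 5.0)
# thick < thin: the transmitted beam hardens, so no single value fits both.
```

An iterative search of the kind described above would instead adjust the per-bin `mu` values until the modeled transmissions match measurements at several thicknesses.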
Solar thermal receivers absorb concentrated sunlight and can operate at high temperatures exceeding 600°C for production of heat and electricity. New fractal-like designs employing light-trapping structures and geometries at multiple length scales are proposed to increase the effective solar absorptance and efficiency of these receivers. Radial and linear structures at the micro (surface coatings and depositions), meso (tube shape and geometry), and macro (total receiver geometry and configuration) scales redirect reflected solar radiation toward the interior of the receiver for increased absorptance. Hotter regions within the interior of the receiver also reduce thermal emittance due to reduced local view factors in the interior regions, and higher concentration ratios can be employed with similar surface irradiances to reduce the effective optical aperture and thermal losses. Coupled optical/fluid/thermal models have been developed to evaluate the performance of these designs relative to conventional designs. Results show that fractal-like structures and geometries can reduce total radiative losses by up to 50% and increase the thermal efficiency by up to 10%. The impact was more pronounced for materials with lower inherent solar absorptances (< 0.9). Meso-scale tests were conducted and confirmed model results that showed increased light-trapping from corrugated surfaces relative to flat surfaces.
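The light-trapping benefit can be illustrated with the classic diffuse-cavity estimate of effective absorptance; a sketch under the assumption of an isothermal, diffusely reflecting cavity (a deliberate simplification of the coupled optical/fluid/thermal models above):

```python
# Classic diffuse-cavity estimate: a cavity whose aperture is a small
# fraction f of its total internal area traps reflected light, raising the
# apparent absorptance above the intrinsic surface value.
def effective_absorptance(alpha_surface, aperture_to_cavity_area):
    """alpha_eff = alpha / (alpha + (1 - alpha) * f), isothermal diffuse cavity."""
    f = aperture_to_cavity_area
    return alpha_surface / (alpha_surface + (1.0 - alpha_surface) * f)

# A surface with intrinsic absorptance 0.85 inside a cavity whose aperture
# is 20% of its internal area behaves like a ~0.97 absorber -- consistent
# with the observation above that gains are largest for absorptance < 0.9.
a_eff = effective_absorptance(0.85, 0.20)
```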
Recent experimental investigations show that most models are not able to capture the ductile behavior of metal alloys over the entire triaxiality range, especially at low triaxiality. Modelers are moving beyond stress triaxiality as the dominant indicator of material failure and developing constitutive models that incorporate shear into the evolution of the failure model. Available data covering the low-triaxiality range are rare, and a series of critical experiments is needed. Here, experiments on smooth thin-walled as well as notched tubular specimens of Al6061-T651 under combined tension-torsion loading were conducted. This provides a very basic set of data for phenomenological models. A full-field deformation technique, digital image correlation (DIC), was applied to these tests to allow measurement of the field deformation, including in the notched area. The microstructural features of the tested specimens were characterized to better understand the different failure mechanisms that led to ductility variation in the aluminum alloy.
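For a thin-walled tube under combined tension-torsion, the triaxiality sweep these experiments target can be worked out directly from the stress state; a small sketch under a plane-stress assumption:

```python
import math

def triaxiality(sigma_axial, tau_shear):
    """Stress triaxiality eta = sigma_mean / sigma_vonMises for a thin-walled
    tube carrying axial stress sigma_axial and torsional shear tau_shear."""
    sigma_m = sigma_axial / 3.0
    sigma_vm = math.sqrt(sigma_axial**2 + 3.0 * tau_shear**2)
    return sigma_m / sigma_vm

# Pure tension gives eta = 1/3; pure torsion gives eta = 0. Mixing the two
# loads sweeps the low-triaxiality range that is poorly covered by data.
eta_tension = triaxiality(100.0, 0.0)
eta_torsion = triaxiality(0.0, 100.0)
eta_mixed = triaxiality(100.0, 100.0)
```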
This paper investigates energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, in both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction is based on performance and problem size. There are many ways to describe performance and energy efficiency, so this work investigates multiple metrics, including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and improvements in the other metrics were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
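The three metrics named above reduce to simple arithmetic on runtime and average power; a sketch with illustrative numbers, not measurements from the study:

```python
# Derived energy-efficiency metrics for one reconstruction run. The
# work_units, runtimes, and power draws below are illustrative assumptions.
def energy_metrics(work_units, runtime_s, avg_power_w):
    energy_j = avg_power_w * runtime_s
    return {
        "energy_J": energy_j,
        "perf_per_watt": (work_units / runtime_s) / avg_power_w,
        "energy_delay_product": energy_j * runtime_s,
    }

cpu = energy_metrics(work_units=1e9, runtime_s=600.0, avg_power_w=180.0)
gpu = energy_metrics(work_units=1e9, runtime_s=60.0, avg_power_w=300.0)
# A GPU run can draw more instantaneous power yet still use less total
# energy and win on both derived metrics, because it finishes much sooner.
```

Note how the energy-delay product penalizes slow runs quadratically in time, which is why it favors the faster implementation even more strongly than raw energy does.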
Solar optical modeling tools are valuable for modeling and predicting the performance of solar technology systems. Four optical modeling tools were evaluated using the National Solar Thermal Test Facility heliostat field combined with a flat plate receiver geometry as a benchmark. The four optical modeling tools evaluated were DELSOL, HELIOS, SolTrace, and Tonatiuh. All are available for free from their respective developers. DELSOL and HELIOS both use a convolution of the sunshape and optical errors for rapid calculation of the incident irradiance profiles on the receiver surfaces. SolTrace and Tonatiuh use ray-tracing methods to intersect the reflected solar rays with the receiver surfaces and construct irradiance profiles. We found the ray-tracing tools, although slower in computation speed, to be more flexible for modeling complex receiver geometries, whereas DELSOL and HELIOS were limited to standard receiver geometries such as flat plate, cylinder, and cavity receivers. We also list the strengths and deficiencies of the tools to show tool preference depending on the modeling and design needs. We provide an example of using SolTrace for modeling nonconventional receiver geometries. The goal is to transfer the irradiance profiles on the receiver surfaces calculated in an optical code to a computational fluid dynamics (CFD) code such as ANSYS Fluent. This approach eliminates the need for using discrete ordinates or discrete transfer radiation models, which are computationally intensive, within the CFD code. The irradiance profiles on the receiver surfaces then allow for thermal and fluid analysis of the receiver.
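The ray-tracing approach used by SolTrace and Tonatiuh can be caricatured in a few lines: sample reflected rays with a Gaussian optical error and bin their hits on a flat receiver to build an irradiance profile. The geometry and error values below are illustrative assumptions, not NSTTF parameters:

```python
import random

def trace_flux_profile(n_rays=20000, slant_range_m=100.0,
                       error_mrad=2.0, ray_power_w=1.0, n_bins=21,
                       receiver_half_width_m=1.0, seed=1):
    """Monte Carlo sketch: each ray's angular error (Gaussian, in radians)
    smears its hit point across a 1-D flat receiver; hits are binned to
    form a power profile in W per bin."""
    rng = random.Random(seed)
    bins = [0.0] * n_bins
    bin_w = 2 * receiver_half_width_m / n_bins
    for _ in range(n_rays):
        x = slant_range_m * rng.gauss(0.0, error_mrad * 1e-3)
        i = int((x + receiver_half_width_m) / bin_w)
        if 0 <= i < n_bins:
            bins[i] += ray_power_w
    return bins

profile = trace_flux_profile()
# The binned profile peaks at the central bin, mirroring the bell-shaped
# spot a convolution code would produce for the same optical error.
```

A convolution code reaches the same Gaussian spot analytically and far faster, but the ray-sampling loop generalizes to arbitrary receiver geometry, which is the flexibility trade-off noted above.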
Microbial free fatty acids (FFAs) have been proposed as a potential feedstock for renewable energy. The ability to directly convert carbon dioxide into FFAs makes cyanobacteria ideal hosts for renewable FFA production. Previous metabolic engineering efforts using the cyanobacterial hosts Synechocystis sp. PCC 6803 and Synechococcus elongatus PCC 7942 have demonstrated this direct conversion of carbon dioxide into FFAs; however, FFA yields in these hosts are limited by the negative impact of FFA production on the host cell physiology. This work investigates the use of Synechococcus sp. PCC 7002 as a cyanobacterial host for FFA production. In comparison to S. elongatus PCC 7942, Synechococcus sp. PCC 7002 strains produced and excreted FFAs at similar concentrations but without the detrimental effects on host physiology. The enhanced tolerance to FFA production with Synechococcus sp. PCC 7002 was found to be temperature-dependent, with physiological effects such as reduced photosynthetic yield and decreased photosynthetic pigments observed at higher temperatures. Additional genetic manipulations were targeted for increased FFA production, including thioesterases and ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO). Overexpression of non-native RuBisCO subunits (rbcLS) from a psbAI promoter resulted in more than a threefold increase in FFA production, with excreted FFA concentrations reaching >130 mg/L. This work illustrates the importance of host strain selection for cyanobacterial biofuel production and demonstrates that the FFA tolerance of Synechococcus sp. PCC 7002 can allow for high yields of excreted FFA.
Because of relatively recent decisions by the current administration and its renewed assessment of the nuclear life-cycle, the various deep geologic disposal medium options are once again open for consideration. This paper focuses on addressing the favorable creep properties and behavior of rock salt, from the computational modeling perspective, as it relates to its potential use as a disposal medium for a deep geologic repository. The various components that make up a computational modeling capability to address the thermo-mechanical behavior of rock salt over a wide range of time and space scales are presented here. Several example rock salt calculations are also presented to demonstrate the applicability and validity of the modeling capability described herein to address repository-scale problems. The evidence shown points to a mature computational capability that can generate results relevant to the design and assessment of a potential rock salt high-level waste (HLW) repository. The computational capability described here can be used to help enable fuel cycle sustainability by appropriately vetting geologic rock salt for use as a deep geologic disposal medium.
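The favorable creep behavior referenced above is commonly represented by a steady-state power-law (Norton) creep model with Arrhenius temperature dependence; a sketch with placeholder constants, not calibrated salt parameters:

```python
import math

R_GAS = 8.314  # universal gas constant, J/mol/K

def creep_rate(stress_mpa, temp_k, A=1.0e-6, n=5.0, Q=50e3):
    """Steady-state creep strain rate (1/s): A * sigma^n * exp(-Q / (R*T)).
    A, n, and the activation energy Q here are illustrative placeholders."""
    return A * stress_mpa**n * math.exp(-Q / (R_GAS * temp_k))

# Creep accelerates as a power of stress and exponentially with temperature,
# which is why thermo-mechanical coupling matters for a heat-generating
# HLW repository: decay heat itself speeds the closure of salt openings.
r_cool = creep_rate(10.0, 300.0)
r_warm = creep_rate(10.0, 350.0)
```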
Blast waves from an explosion in air can cause significant structural damage. As an example, cylindrically-shaped charges have been used for over a century as dynamite sticks for mining, excavation, and demolition. Near the charge, the effects of geometry, standoff from the ground, the proximity to other objects, confinement (tamping), and the location of the detonator can significantly affect blast wave characteristics. Furthermore, nonuniformity in the surface characteristics and the density of the charge can affect fireball and shockwave structure. Currently, the best method for predicting the shock structure near a charge and the dynamic loading on nearby structures is to use a multidimensional, multimaterial shock physics code. However, no single numerical technique currently exists for predicting secondary combustion, especially when particulates from the charge are propelled through the fireball and ahead of the leading shock front. Furthermore, the air within the thin shocked layer can dissociate and ionize. Hence, an appropriate equation of state for air is needed in these extreme environments. As a step towards predicting this complex phenomenon, a technique was developed to provide the equilibrium species composition at every computational cell in an air blast simulation as an initial condition for hand-off to other analysis codes for combustion fluid dynamics or radiation transport. Here, a bare cylindrical charge of TNT detonated in air is simulated using CTH, an Eulerian, finite volume, shock propagation code developed and maintained at Sandia National Laboratories. The shock front propagation is computed at early times, including the detonation wave structure in the explosive and the subsequent air shock up to 100 microseconds, when ambient air entrainment is not yet significant.
At each computational cell, which could contain TNT detonation products, air, or both, the equilibrium species concentration at the local density-energy state is computed using the JCZS2i database in the thermochemical code TIGER. This extensive database of 1267 gas species (including 189 ionized species) and 490 condensed species can predict thermodynamic states up to 20,000 K. The results of these calculations provide the detailed three-dimensional structure of a thin shock front, as well as spatial species concentrations including free radicals and ions. Furthermore, air shock predictions are compared with experimental pressure gage data from a right circular cylinder of pressed TNT detonated at one end. These complementary predictions show excellent agreement with the data for the primary wave structure.