The objective of this work is to develop models to predict radiation effects in non-volatile memory: flash memory and ferroelectric RAM (FRAM). In flash memory, experiments have found that the internal high-voltage generators (charge pumps) are the components most sensitive to radiation damage. Models are presented for radiation effects in charge pumps that reproduce the experimental results. Floating-gate models are developed for the memory cells in two types of flash memory devices, from Intel and Samsung. These models use Fowler-Nordheim tunneling and hot-electron injection to charge and erase the floating gate. Erase times are calculated from the models and compared with experimental results for different radiation doses. FRAM is less sensitive to radiation than flash memory, but measurements show that above 100 krad FRAM suffers a large increase in leakage current. A model for this effect is developed that agrees closely with the measurements.
Window taper functions applied to finite apertures are well known to control undesirable sidelobes, albeit with performance trades. A plethora of taper functions has been developed over the years to achieve various optimizations. We herein catalog a number of window functions and compare their principal characteristics.
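For example, one of the most common tapers of this kind, the Hann (cosine-squared) window over an aperture of length $L$, can be written as
\[ w(x) \;=\; \cos^2\!\left(\frac{\pi x}{L}\right) \;=\; \tfrac{1}{2}\left[1 + \cos\!\left(\frac{2\pi x}{L}\right)\right], \qquad |x| \le \frac{L}{2}, \]
which lowers the first sidelobe from roughly -13 dB (uniform weighting) to roughly -32 dB at the cost of a broadened mainlobe, illustrating the kind of performance trade noted above.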
Safeguards are technical measures implemented by the International Atomic Energy Agency (IAEA) to independently verify that nuclear material is not diverted from peaceful purposes to weapons (IAEA, 2017a). Safeguards implemented at uranium enrichment facilities (facilities hereafter) include enrichment monitors (IAEA, 2011). Figure 1 shows a diagram of how a facility could be monitored. The use of a system for monitoring within centrifuge cascades is proposed.
The Federal Radiological Monitoring and Assessment Center (FRMAC) relies on accurate and defensible analytical laboratory data to support its mission. Therefore, FRMAC must ensure that the environmental analytical laboratories providing analytical services maintain an ongoing capability to provide accurate analytical results to DOE. The more Quality Assurance (QA) and Quality Control (QC) measures required of a laboratory, however, the fewer resources are available for analysis of response samples. Because QA and QC measures constitute a major portion of a laboratory’s operations, requirements should be imposed only if they are deemed “value-added” for the FRMAC mission. This report provides observations of areas for improvement and potential interoperability opportunities in batch quality control requirements, written communications, data review processes, and data reporting processes. It also captures lessons learned for the early phase of a response that will be critical to developing a more efficient, integrated response for future interactions between FRMAC and EPA assets.
This report uses the CMIP5 series of climate model simulations to produce country-level uncertainty distributions for use in socioeconomic risk assessments of climate change impacts. It provides appropriate probability distributions, by month, for 169 countries and autonomous areas for temperature, precipitation, maximum temperature, maximum wind speed, humidity, runoff, soil moisture, and evaporation for the historical period (1976-2005) and for decadal time periods to 2100. It also provides historical and future distributions for the Arctic region on ice concentration, ice thickness, age of ice, and ice ridging in 15-degree longitude arc segments from the Arctic Circle to 80 degrees latitude, plus two polar semicircular regions from 80 to 90 degrees latitude. The uncertainty is meant to describe the lack of knowledge rather than imprecision in the physical simulation, because the emphasis is on unfalsified risk and its use to determine potential socioeconomic impacts. The full report is contained in 27 volumes.
The theory of signal propagation in lossy coaxial transmission lines is revisited and new approximate analytic formulas for the line impedance and attenuation are derived. The accuracy of these formulas from DC to 100 GHz is demonstrated by comparison to numerical solutions of the exact field equations. Based on this analysis, a new circuit model is described which accurately reproduces the line response over the entire frequency range. Circuit model calculations are in excellent agreement with the numerical and analytic results, and with finite-difference time-domain simulations which resolve the skin depths of the conducting walls.
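For context (these are the textbook low-loss, well-developed-skin-effect approximations, not the new formulas derived here): for inner radius $a$, outer radius $b$, dielectric parameters $\mu,\epsilon$, and conductor surface resistance $R_s=\sqrt{\pi f \mu_c/\sigma}$,
\[ Z_0 \;\approx\; \frac{1}{2\pi}\sqrt{\frac{\mu}{\epsilon}}\,\ln\frac{b}{a}, \qquad \alpha_c \;\approx\; \frac{R_s}{4\pi Z_0}\left(\frac{1}{a}+\frac{1}{b}\right). \]
These standard expressions lose accuracy at low frequency, where the skin depth exceeds the wall thickness, which is part of the range the new formulas and circuit model are intended to cover.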
The FY17Q2 milestone of the ECP/VTK-m project, which is the first milestone, includes the completion of design documents for the introduction of virtual methods into the VTK-m framework; specifically, the ability, from within code executing on a device (e.g., a GPU or Xeon Phi), to jump to a virtual method specified at run time. This change will enable us to drastically reduce the compile time and the executable code size for the VTK-m library. Our first design introduced the idea of adding virtual functions to classes that are used during algorithm execution. (Virtual methods were previously banned from the so-called execution environment.) The design was straightforward. VTK-m already has the generic concepts of an “array handle” that provides a uniform interface to memory of different structures and an “array portal” that provides generic access to said memory. These array handles and portals use C++ templating to adapt them to different memory structures. This composition provides a powerful ability to adapt to data sources, but requires knowing static types. The proposed design creates a template specialization of an array portal that decorates another array handle while hiding its type. In this way we can wrap any type of static array handle and then feed it to a single compiled instance of a function. The second design focused on the mechanics of implementing virtual methods on parallel devices, with a focus on CUDA. Our initial experiments on CUDA showed a very large overhead for the standard approach of using C++ classes with virtual methods. Instead, we are using an alternate, C-style method based on function pointers. With the completion of this milestone, we are able to move to the implementation of objects with virtual-like methods. The upshot will be much faster compile times and much smaller library/executable sizes.
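As a rough illustration of the function-pointer approach (a minimal sketch, not VTK-m source; all names here are hypothetical), a type-erased portal can hold an opaque pointer to a concrete portal plus a plain function pointer for element access, so a single compiled function works on any wrapped array type:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Type-erased portal: an opaque pointer to the concrete portal plus a
// function pointer that knows how to read an element from it. This stands
// in for a C++ virtual call without requiring a vtable on the device.
struct ErasedPortal
{
  const void* Concrete;
  double (*GetFn)(const void*, std::size_t);
  double Get(std::size_t index) const { return this->GetFn(this->Concrete, index); }
};

// A concrete "array portal" over a std::vector<double>.
struct VectorPortal
{
  const std::vector<double>* Data;
  static double Get(const void* self, std::size_t index)
  {
    return (*static_cast<const VectorPortal*>(self)->Data)[index];
  }
};

// A single compiled function that operates on any wrapped portal.
double Sum(const ErasedPortal& portal, std::size_t size)
{
  double total = 0.0;
  for (std::size_t i = 0; i < size; ++i)
  {
    total += portal.Get(i);
  }
  return total;
}

int main()
{
  std::vector<double> values = { 1.0, 2.0, 3.0 };
  VectorPortal concrete{ &values };
  ErasedPortal erased{ &concrete, &VectorPortal::Get };
  std::cout << Sum(erased, values.size()) << "\n"; // prints 6
  return 0;
}
```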
This report gives a brief discussion and examples on the topic of state estimation for wave energy converters (WECs). These methods are intended to enable real-time, closed-loop control of WECs.
All major program milestones have been met and the program is executing within budget. The ALT 370 program achieved Phase 6.4 authorization in February of this year. Five component Final Design Reviews (FDRs) have been completed, indicating progress in finalizing the design and development phase of the program. A series of ground-based qualification activities have demonstrated that designs are meeting functional requirements. The first fully functional flight test, FCET-53, demonstrated end-to-end performance in normal flight environments in February. Similarly, ground-based nuclear safety and hostile environments testing indicates that the design meets requirements in these stringent environments. The first in a series of hostile blast tests was successfully conducted in April.
The growth of a cylindrical spark discharge channel in water and Lexan is studied using a series of one-dimensional simulations with the finite-element radiation-magnetohydrodynamics code ALEGRA. Computed solutions are analyzed in order to characterize the rate of growth and dynamics of the spark channels during the rising-current phase of the drive pulse. The current ramp rate is varied between 0.2 and 3.0 kA/ns, and values of the mechanical coupling coefficient Kp are extracted for each case. The simulations predict spark channel expansion velocities primarily in the range of 2000 to 3500 m/s, channel pressures primarily in the range of 10-40 GPa, and Kp values primarily between 1.1 and 1.4. When Lexan is preheated, slightly larger expansion velocities and smaller Kp values are predicted, but the overall behavior is unchanged.
The B61-12 LEP is currently executing Phase 6.4 Production Engineering with a focus on qualification and preproduction activities. All major milestones have been successfully completed to date. Component Final Design Reviews (FDRs) continue in FY17, with 19 of 38 complete as of April 28. A series of normal and abnormal environments tests occurred in the first half of FY17, and the first qualification flight test on an F-16 was executed in March. Two F-15 qualification flight tests are planned in August. To support Pantex readiness, the first all-up-round (AUR) trainer builds were completed in December 2016. Progress is ongoing toward closure of Air Force Nuclear Weapons Center (tailkit) and Los Alamos National Laboratory interface gaps, and resolution of producibility challenges with the Kansas City National Security Campus (KCNSC).
The technology performance level (TPL) assessment can be applied at all technology development stages and associated technology readiness levels (TRLs). Even, and particularly, at low TRLs the TPL assessment is very effective because it holistically considers a wide range of WEC attributes that determine the techno-economic performance potential of the WEC farm when fully developed for commercial operation. The TPL assessment also highlights potential showstoppers at the earliest possible stage of the WEC technology development. Hence, the TPL assessment identifies the technology-independent “performance requirements.” To achieve a successful solution, the entirety of the performance requirements within the TPL must be considered because, in the end, all the stakeholder needs must be met. The basis for performing a TPL assessment comes from the information provided in a dedicated format, the Technical Submission Form (TSF). The TSF requests the information from the WEC developer that is required to answer the questions posed in the TPL assessment document.
The Wave-SPARC project developed the Technology Performance Level (TPL) assessment procedure based on a rigorous Systems Engineering exercise. The TPL assessment allows a whole-system evaluation of wave energy conversion technology by measuring it against the requirements determined through the Systems Engineering exercise. The TPL assessment is intended to be useful in technology evaluation; in technology innovation; in allocation of public or private investment; and in making equipment purchasing decisions. This Technical Submission Form (TSF) serves the purpose of collecting relevant and complete information, in a technology-agnostic way, to allow TPL assessments to be made by third-party assessors. The intended usage of this document is that the organization or people performing the role of developer or promoter of a particular technology will use this form to provide the information necessary for the organization or people performing the assessor role to carry out the TPL assessment.
A motivation for undertaking this stakeholder requirements analysis and Systems Engineering exercise is to document the requirements for successful wave energy farms to facilitate better design and better design assessments. A difficulty in wave energy technology development is the absence to date of a verifiable minimum viable product against which the merits of new products might be measured. A consequence of this absence is that technology development progress, technology value, and technology funding have largely been measured, associated with, and driven by technology readiness, measured in technology readiness levels (TRLs). Originating primarily from the space and defense industries, TRLs focus on procedural implementation of technology developments of large and complex engineering projects, where cost is neither mission critical nor a key design driver. The key deficiency with the TRL approach in the context of wave energy conversion is that WEC technology development has been too focused on commercial readiness and not enough on the stakeholder requirements and particularly economic viability required for market entry.
We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra/Solid Mechanics via the Universal Polymer Model and in Sierra/Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency-domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of that data is different from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40% and 20%, respectively, are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.
A challenge in computer architecture is that processors often cannot be fed data from DRAM as fast as CPUs can consume it. Therefore, many applications are memory-bandwidth bound. With this motivation and the realization that traditional architectures (with all DRAM reachable only via bus) are insufficient to feed groups of modern processing units, vendors have introduced a variety of non-DDR 3D memory technologies (Hybrid Memory Cube (HMC), Wide I/O 2, High Bandwidth Memory (HBM)). These offer higher bandwidth and lower power by stacking DRAM chips on the processor or nearby on a silicon interposer. We will call these solutions “near-memory,” and if user-addressable, “scratchpad.” High-performance systems on the market now offer two levels of main memory: near-memory on package and traditional DRAM further away. In the near term we expect the latencies of near-memory and DRAM to be similar. Thus, it is natural to think of near-memory as another module on the DRAM level of the memory hierarchy. Vendors are expected to offer modes in which the near-memory is used as cache, but we believe that this will be inefficient. In this paper, we explore the design space for a user-controlled multi-level main memory. Our work identifies situations in which rewriting application kernels can provide significant performance gains when using near-memory. We present algorithms designed for two-level main memory, using divide-and-conquer to partition computations and streaming to exploit data locality. We consider algorithms for the fundamental application of sorting and for the data analysis kernel k-means. Our algorithms asymptotically reduce memory-block transfers under certain architectural parameter settings. We use and extend Sandia National Laboratories’ SST simulation capability to demonstrate the relationship between increased bandwidth and improved algorithmic performance. Memory access counts from simulations corroborate predicted performance improvements for our sorting algorithm. In contrast, the k-means algorithm is generally CPU bound and does not improve when using near-memory except under extreme conditions. These conditions require large instances that rule out SST simulation, but we demonstrate improvements by running on a customized machine with high- and low-bandwidth memory. These case studies in co-design serve as positive and cautionary templates, respectively, for the major task of optimizing the computational kernels of many fundamental applications for two-level main memory systems.
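As a rough sketch of the divide-and-conquer plus streaming idea (illustrative only, not the paper's algorithm; the near-memory capacity constant is hypothetical), one can sort chunks sized to fit in near-memory and then stream a k-way merge over the resulting sorted runs so that far memory is accessed sequentially:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical near-memory capacity, in elements.
constexpr std::size_t kNearMemoryElems = 1 << 20;

void TwoLevelSort(std::vector<double>& data)
{
  // Phase 1: sort each near-memory-sized chunk independently.
  std::vector<std::pair<std::size_t, std::size_t>> runs; // [begin, end) pairs
  for (std::size_t begin = 0; begin < data.size(); begin += kNearMemoryElems)
  {
    const std::size_t end = std::min(begin + kNearMemoryElems, data.size());
    std::sort(data.begin() + begin, data.begin() + end);
    runs.push_back({ begin, end });
  }

  // Phase 2: k-way merge of the sorted runs, reading each run sequentially
  // (the far-memory-friendly streaming access pattern).
  using Head = std::pair<double, std::size_t>; // (value, run index)
  std::priority_queue<Head, std::vector<Head>, std::greater<Head>> heads;
  std::vector<std::size_t> cursor(runs.size());
  for (std::size_t r = 0; r < runs.size(); ++r)
  {
    cursor[r] = runs[r].first;
    if (cursor[r] < runs[r].second) { heads.push({ data[cursor[r]], r }); }
  }
  std::vector<double> merged;
  merged.reserve(data.size());
  while (!heads.empty())
  {
    const auto [value, r] = heads.top();
    heads.pop();
    merged.push_back(value);
    if (++cursor[r] < runs[r].second) { heads.push({ data[cursor[r]], r }); }
  }
  data.swap(merged);
}

int main()
{
  std::vector<double> data(4 * kNearMemoryElems);
  for (double& x : data) { x = std::rand() / static_cast<double>(RAND_MAX); }
  TwoLevelSort(data);
  return std::is_sorted(data.begin(), data.end()) ? 0 : 1;
}
```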
This paper describes how parallel elastic elements can be used to reduce energy consumption in the electric-motor-driven, fully actuated, Sandia Transmission-Efficient Prototype Promoting Research (STEPPR) bipedal walking robot without compromising or significantly limiting locomotive behaviors. A physically motivated approach is used to illustrate how selectively engaging springs for hip adduction and ankle flexion is predicted to benefit three different flat-ground walking gaits: human walking, human-like robot walking, and crouched robot walking. Based on locomotion data, springs are designed and substantial reductions in power consumption are demonstrated using a bench dynamometer. These lessons are then applied to STEPPR, a fully actuated bipedal robot designed to explore the impact of tailored joint mechanisms on walking efficiency. Featuring high-torque brushless DC motors, efficient low-ratio transmissions, and high-fidelity torque control, STEPPR provides the ability to incorporate novel joint-level mechanisms without dramatically altering high-level control. Unique parallel elastic designs are incorporated into STEPPR, and walking data show that hip adduction and ankle flexion springs significantly reduce the required actuator energy at those joints for several gaits. These results suggest that parallel joint springs offer a promising means of supporting quasi-static joint torques due to body mass during walking, relieving motors of the need to support these torques and substantially improving locomotive energy efficiency.
Ultrafast optical microscopy of metal z-pinch rods pulsed with megaampere current is contributing new data and critical insight into what provides the fundamental seed for the magneto-Rayleigh-Taylor (MRT) instability. A two-frame near-infrared/visible intensified-charge-coupled-device gated imager with 2-ns temporal resolution and 3-μm spatial resolution captured emissions from the nonuniformly Joule-heated surfaces of ultrasmooth aluminum (Al) rods. Nonuniform surface emissions are consistently first observed from discrete, 10-μm-scale, subelectronvolt spots. Aluminum 6061 alloy, with micrometer-scale nonmetallic resistive inclusions, forms several times more spots than 99.999% pure Al 5N; 5-10 ns later, azimuthally stretched elliptical spots and distinct strata (40-100 μm wide by 10 μm tall) are observed on Al 6061, but not on Al 5N. Such overheat strata, which are aligned parallel to the magnetic field, are highly effective seeds for MRT instability growth. These data give credence to the hypothesis that early nonuniform Joule heating, such as the electrothermal instability, may provide the dominant seed for MRT.
This paper describes the design and performance of a synthetic rope on sheave drive system. This system uses synthetic ropes instead of steel cables to achieve low weight and a compact form factor. We demonstrate how this system is capable of 28-Hz torque control bandwidth, 95% efficiency, and quiet operation, making it ideal for use on legged robots and other dynamic physically interactive systems. Component geometry and tailored maintenance procedures are used to achieve high endurance. Endurance tests based on walking data predict that the ropes will survive roughly 247,000 cycles when used on large (90 kg), fully actuated bipedal robot systems. The drive systems have been incorporated into two novel bipedal robots capable of three-dimensional unsupported walking. Robot data illustrate effective torque tracking and nearly silent operation. Finally, comparisons with alternative transmission designs illustrate the size, weight, and endurance advantages of using this type of synthetic rope drive system.
We provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach in which uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel-consistent, locally refined mesh that requires minimal communication and whose minimum mesh quality, measured by the scaled Jacobian, is greater than 0.3 prior to smoothing.
Missing samples within synthetic aperture radar data result in image distortions. For coherent data products, such as coherent change detection and interferometric processing, the image distortion can be devastating to these second-order products, resulting in missed detections and inaccurate height maps. Previous approaches to repairing the coherent data products focus on reconstructing the missing data samples. This paper demonstrates that reconstruction is not necessary to restore the quality of the coherent data products.
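For context, the coherence that such second-order products depend on is conventionally estimated from the two complex images $f$ and $g$ over a local window as
\[ \hat{\gamma} \;=\; \frac{\left|\sum_k f_k\, g_k^{*}\right|}{\sqrt{\sum_k |f_k|^2 \;\sum_k |g_k|^2}}, \]
so distortions introduced by missing samples can depress $\hat{\gamma}$ and corrupt the interferometric phase even when the detected imagery still looks serviceable.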
Scanning calorimetry of a confined, reversible hydrogen sorbent material has been previously proposed as a method to determine compositions of unknown mixtures of diatomic hydrogen isotopologues and helium. Application of this concept could result in greater process knowledge during the handling of these gases. Previously published studies have focused on mixtures that do not include tritium. This paper focuses on modeling to predict the effect of tritium in mixtures of the isotopologues on a calorimetry scan. The model predicts that tritium can be measured with a sensitivity comparable to that observed for hydrogen-deuterium mixtures, and that under some conditions, it may be possible to determine the atomic fractions of all three isotopes in a gas mixture.
We present an all-quad meshing algorithm for general domains. We start with a strongly balanced quadtree. In contrast to snapping the quadtree corners onto the geometric domain boundaries, we move them away from the geometry. Then we intersect the moved grid with the geometry. The resulting polygons are converted into quads with midpoint subdivision. Moving away avoids creating any flat angles, either at a quadtree corner or at a geometry–quadtree intersection. We are able to handle two-sided domains, and more complex topologies than prior methods. The algorithm is provably correct and robust in practice. It is cleanup-free, meaning we have angle and edge length bounds without the use of any pillowing, swapping, or smoothing. Thus, our simple algorithm is fast and predictable. Compared with our prior version, this paper provides better quality bounds and demonstrates the algorithm over more complex domains.
Matteo, Edward N.; Douba, A.; Genedy, M.; Stormont, J.; Reda Taha, M.M.
Polymer concrete (PC) is a commonly used material in construction due to its improved durability and good bond strength to steel substrates. PC has been suggested as a repair and seal material to restore the bond between the cement annulus and the steel casing in wells that penetrate formations under consideration for CO2 sequestration. Nanoparticles including Multi-Walled Carbon Nanotubes (MWCNTs), Aluminum Nanoparticles (ANPs), and Silica Nanoparticles (SNPs) were added to an epoxy-based PC to examine how the nanoparticles affect the bond strength of PC to a steel substrate. Slant shear tests were used to determine the bond strength to steel of PC incorporating nanomaterials; results reveal that PC incorporating nanomaterials has an improved bond strength to steel substrate compared with neat PC. In particular, ANPs improve the bond strength by 51% over neat PC. Local shear stresses, extracted from Finite Element (FE) analysis of the slant shear test, were found to be as much as twice the apparent/average shear/bond strength. These results suggest that the impact of nanomaterials is higher than that shown by the apparent strength. Fourier Transform Infrared (FTIR) measurements of epoxy with and without nanomaterials showed ANPs to influence the curing of epoxy, which might explain the improved bond strength of PC incorporating ANPs.
A new collimated filtered thermoluminescent dosimeter (TLD) array has been developed at the Z facility to characterize warm x-rays (hν > 10 keV) produced by Z pinch radiation sources. This array includes a Kapton debris shield assembly to protect the TLDs from the source debris, a collimator array to limit the field of view of the TLDs to the source region, a filter wheel containing filters of aluminum, copper and tungsten up to 3 mm thick to independently filter each TLD, and a hermetically sealed cassette containing the TLDs as well as tungsten shielding on the sides and back of the array to minimize scattered radiation reaching the TLDs. Experimental results from a krypton gas puff and silver wire array shot are analyzed using two different functional forms of the energy spectrum to demonstrate the ability of this diagnostic to consistently extend the upper end of the x-ray spectrum characterization from ∼50 keV to >1 MeV.
To meet regulatory needs for the future development of sodium fast reactors, including licensing requirements, Sandia National Laboratories is modernizing MELCOR, a severe accident analysis computer code developed for the U.S. Nuclear Regulatory Commission (NRC). Specifically, Sandia is modernizing MELCOR to include the capability to model sodium reactors. However, Sandia's modernization effort primarily focuses on the containment response aspects of sodium reactor accidents. Sandia began modernizing MELCOR in 2013 by allowing sodium as the coolant in place of the water used in conventional light water reactors. In the past three years, Sandia has been implementing the sodium chemistry containment models of CONTAIN-LMR, a legacy NRC code, into MELCOR. These chemistry models include spray fire, pool fire, and atmosphere chemistry models; only the first two have been implemented so far, though the intent is to implement all of these models in MELCOR. A new package called “NAC” has been created to manage the sodium chemistry models more efficiently. In 2017 Sandia began validating the implemented models in MELCOR by simulating available experiments. The CONTAIN-LMR sodium models further include sodium atmosphere chemistry and sodium-concrete interaction models. This paper presents the sodium property models, the implemented models, implementation issues, and a path toward validation against existing experimental data.
This monthly report is intended to communicate the status of North Slope ARM facilities managed by Sandia National Labs. The report includes: budget, safety, instrument status, and North Slope facilities.
The physical mechanisms of energy dissipation at foam-to-metal interfaces must be understood in order to develop predictive models of systems with foam packaging common to many aerospace and aeronautical applications. Experimental data were obtained from hardware termed Ministack, which has large, unbonded interfaces held under compressive preload. This setup has a solid aluminum mass placed into two foam cups which are then inserted into an aluminum can and fastened with a known preload. Ministack was tested on a shaker using upward sine sweep base acceleration excitations to estimate the linearized natural frequency and energy dissipation of the first axial mode. The experimental system was disassembled and reassembled before each series of tests in order to observe the effects of assembly-to-assembly variability on the dynamics. Additionally, Ministack was subjected to upward and downward sweeps to gain some understanding of the nonlinearities. Finally, Ministack was tested using a transient input, and the ring-down was analyzed to find the effective stiffness and damping. There are several important findings in the measured data: there is significant assembly-to-assembly variability, the order in which the sine sweeps are performed influences the dynamic response, and the system exhibits nontrivial damping and stiffness nonlinearities that must be accounted for in modeling efforts.
There are many statistical challenges in the design and analysis of margin testing for product qualification. To further complicate issues, there are multiple types of margin that can be considered, and there are often competing experimental designs to evaluate the various types of margin. There are two major variants of margin that must be addressed for engineered components: performance margin and design margin. They can be differentiated by the specific regions of the requirements space that they address. Performance margin is evaluated within the region where all inputs and environments are within requirements, and it expresses the difference between the actual performance and the required performance of the system or component. Design margin expresses the difference between the maximum (or minimum) inputs and environments at which the component continues to operate as intended (i.e., all performance requirements are still met) and the required inputs and conditions. The model Performance = f(Inputs, Environments) + ϵ (1) can be used to help frame the overall set of margin questions. The interdependence of inputs, environments, and outputs should be considered during the course of development in order to identify a complete test program that addresses both performance margin and design margin questions. Statistical methods can be utilized to produce a holistic and efficient program, both for qualitative activities that are designed to reveal margin limiters and for activities where margin quantification is desired. This paper discusses a holistic framework and taxonomy for margin testing and identifies key statistical challenges that may arise in developing such a program.
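As an illustration of the distinction (notation ours, not the paper's), the two margins can be written as
\[ M_{\mathrm{perf}} \;=\; P_{\mathrm{actual}} - P_{\mathrm{required}} \quad \text{(inputs and environments held within requirements)}, \]
\[ M_{\mathrm{design}} \;=\; E_{\mathrm{limit}} - E_{\mathrm{required}}, \]
where $E_{\mathrm{limit}}$ is the maximum (or minimum) input or environment at which all performance requirements are still met.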
The Bernoulli CUSUM (BC) provides a moving window of process performance and is the quickest control chart at detecting small increases in fraction defective. The Bernoulli CUSUM designs presented here require 2, 3, or 4 failures in a moving window to produce a signal. The run length distribution provides insight into the properties of the BC beyond the average or median run length. A retrospective analysis of electronic component pass/fail data using the BC suggested that a problem may have been present during previous production. Subsequent production used the BC for real-time process performance feedback.
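A minimal sketch of the standard Bernoulli CUSUM recursion underlying such a chart (the reference value k comes from the sequential probability ratio test; the in-control and out-of-control fractions defective, decision limit, and pass/fail data below are hypothetical, not the designs or data from this work):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
  // Hypothetical in-control and out-of-control fractions defective.
  const double p0 = 0.01;
  const double p1 = 0.05;
  // SPRT-based reference value for the Bernoulli CUSUM.
  const double k = std::log((1.0 - p0) / (1.0 - p1)) /
                   std::log((p1 * (1.0 - p0)) / (p0 * (1.0 - p1)));
  const double h = 2.0; // hypothetical decision limit

  // Example pass/fail stream: 1 = defective, 0 = conforming.
  const std::vector<int> defective = { 0, 0, 1, 0, 0, 1, 0, 1, 0, 0 };

  double B = 0.0; // CUSUM statistic
  for (std::size_t t = 0; t < defective.size(); ++t)
  {
    B = std::max(0.0, B + defective[t] - k);
    if (B >= h)
    {
      std::cout << "Signal at observation " << (t + 1) << "\n";
      B = 0.0; // restart the chart after investigating the signal
    }
  }
  return 0;
}
```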
This document outlines a data-driven probabilistic approach to setting product acceptance testing limits. Product Specification (PS) limits are testing requirements for assuring that the product meets the product requirements. After identifying key manufacturing and performance parameters for acceptance testing, PS limits should be specified for these parameters, with the limits selected to assure that the unit will have a very high likelihood of meeting product requirements (barring any quality defects that would not be detected in acceptance testing). Because the range of settings over which the product requirements must be met is typically broader than the production acceptance testing space, PS limits should account for the difference between the acceptance testing setting and the worst-case setting. We propose an approach to setting PS limits that is based on demonstrating margin to the product requirement in the worst-case setting in which the requirement must be met. PS limits are then determined by considering the overall margin and uncertainty associated with a component requirement and then balancing this margin and uncertainty between the designer and producer. Specifically, after identifying parameters critical to component performance, we propose setting PS limits using a three-step procedure: 1. Specify the acceptance testing and worst-case use settings, the performance characteristic distributions in these two settings, and the mapping between these distributions. 2. Determine the PS limit in the worst-case use setting by considering margin to the requirement and additional (epistemic) uncertainties; this step controls designer risk, namely the risk of producing product that violates requirements. 3. Define the PS limit for product acceptance testing by transforming the PS limit from the worst-case setting to the acceptance testing setting using the mapping between these distributions; following this step, the producer risk is quantified by estimating the product scrap rate based on the projected acceptance testing distribution. The approach proposed here provides a framework for documenting the procedure and assumptions used to determine PS limits. This transparency in procedure will help inform what actions should occur when a unit violates a PS limit and how limits should change over time.
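The following sketch walks through the three steps under simplifying assumptions that are ours, not the report's (normally distributed performance in both settings and a linear mean/standard-deviation mapping between them); every numeric value is hypothetical:

```cpp
#include <iostream>

int main()
{
  // Step 1: characterize the two settings (hypothetical values).
  const double muAccept = 10.0, sigmaAccept = 0.5; // acceptance-testing setting
  const double muWorst = 8.0, sigmaWorst = 0.8;    // worst-case use setting
  const double requirement = 5.0;                  // minimum allowed performance

  // Step 2: PS limit in the worst-case setting, requiring k-sigma margin to
  // the requirement to cover epistemic uncertainty (controls designer risk).
  const double kSigma = 2.0;
  const double psLimitWorst = requirement + kSigma * sigmaWorst;

  // Step 3: map the worst-case limit back to the acceptance-testing setting
  // through the assumed linear mapping between the two distributions. The
  // producer risk (scrap rate) is then the acceptance-setting probability of
  // measuring below this limit.
  const double z = (psLimitWorst - muWorst) / sigmaWorst;
  const double psLimitAccept = muAccept + z * sigmaAccept;

  std::cout << "PS limit, worst-case setting: " << psLimitWorst << "\n"
            << "PS limit, acceptance testing: " << psLimitAccept << "\n";
  return 0;
}
```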
In-cylinder reforming of injected fuel during an auxiliary negative valve overlap (NVO) period can be used to optimize main-cycle auto-ignition phasing for low-load Low-Temperature Gasoline Combustion (LTGC), where highly dilute mixtures can lead to poor combustion stability. When mixed with fresh intake charge and fuel, these reformate streams can alter overall charge reactivity characteristics. The central issue remains large parasitic heat losses from the retention and compression of hot exhaust gases along with modest pumping losses that result from mixing hot NVO-period gases with the cooler intake charge. Accurate determination of total cycle energy utilization is complicated by the fact that NVO-period retained fuel energy is consumed during the subsequent main combustion period. For the present study, a full-cycle energy analysis was performed for a single-cylinder research engine undergoing LTGC with varying NVO auxiliary fueling rates and injection timing. A custom alternate-fire sequence with 9 pre-conditioning cycles was used to generate a common exhaust temperature and composition boundary condition for a cycle-of-interest, with performance metrics recorded for each custom cycle. The NVO-period reformate stream and main-period exhaust stream of the cycles-of-interest were separately collected, with sample analysis by gas chromatography used to identify the retained and exhausted fuel energy in the respective periods. To facilitate gas sample analysis, experiments were performed using a 5-component gasoline surrogate (iso-octane, n-heptane, ethanol, 1-hexene, and toluene) that matched the molecular composition, 50% boiling point, and ignition characteristics of a research gasoline. The highest total cycle thermodynamic efficiencies occurred when auxiliary injection timings were early enough to allow sufficient residence time for slow reforming reactions to take place, but late enough to prevent significant fuel spray crevice quench. Increasing the fraction of total fuel energy injected into the NVO-period was also found to increase total cycle thermal efficiencies, in part due to a modest reduction in NVO-period heat loss from a combination of fuel-spray charge cooling and endothermic fuel decomposition by pyrolysis. The effect was most pronounced at the lowest loads where larger charge mass reformate fractions increased overall specific heat ratios and main-period combustion phasing advanced closer to top dead center. These effects improved both expansion efficiency and combustion stability.
Experiments conducted with a set of reference diesel fuels in an optically accessible, compression-ignition engine have revealed a strong correlation between hydrocarbon (HC) emissions and the flame lift-off length at the end of the premixed burn (EOPMB), with increasing HC emissions associated with longer lift-off lengths. The correlation is largely independent of fuel properties and charge-gas O2 mole fraction, but varies with fuel-injection pressure. A transient, one-dimensional jet model was used to investigate three separate mechanisms that could explain the observed impact of lift-off length on HC emissions. Each mechanism relies on the formation of mixtures that are too lean to support combustion, or “overlean.” First, overlean regions can be formed after the start of fuel injection but before the end of the premixed burn. Second, during the mixing-controlled burn phase, longer lift-off lengths could increase the mass of fuel in overlean regions near the radial edge of the spray cone. Third, after the end of injection, a region of increased entrainment and mixing upstream of the lift-off length could cause late-injected fuel to become overlean. The model revealed a correlation between the lift-off length at EOPMB and overlean regions from the mixing-controlled burn that closely matched experimentally observed trends. HC emissions associated with overlean regions produced either before the end of the premixed burn or after the end of injection did not correspond as well to the experimental observations.
This study proposes a DC optimal power flow (DCOPF) with losses formulation (the ℓ-DCOPF+S problem) and uses it to investigate the role of real power losses in OPF-based grid-scale storage integration. We derive the ℓ-DCOPF+S problem by augmenting a standard DCOPF with storage (DCOPF+S) problem to include quadratic real power loss approximations. This procedure leads to a multi-period nonconvex quadratically constrained quadratic program, which we prove can be solved to optimality using either a semidefinite or second-order cone relaxation. Our approach has some important benefits over existing models. It is more computationally tractable than ACOPF with storage (ACOPF+S) formulations, and the provably exact convex relaxations guarantee that an optimal solution can be attained for a feasible problem. Adding loss approximations to a DCOPF+S model leads to a more accurate representation of locational marginal prices, which have been shown to be critical to determining optimal storage dispatch and siting in prior ACOPF+S based studies. Case studies demonstrate the improved accuracy of the ℓ-DCOPF+S model over a DCOPF+S model and the computational advantages over an ACOPF+S formulation.
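As a brief sketch of the kind of loss term being added (the standard quadratic approximation; notation ours), the real power loss on a line $l=(i,j)$ with conductance $g_l$ and bus voltage angles $\theta_i,\theta_j$ is approximated as
\[ P_l^{\mathrm{loss}} \;\approx\; g_l\,(\theta_i - \theta_j)^2, \]
with the loss typically split between the two terminal buses in the nodal balance constraints; it is these quadratic terms in the otherwise linear DC power balance that make the augmented problem a nonconvex quadratically constrained quadratic program prior to relaxation.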
This article reports an analysis of the first detailed-chemistry direct numerical simulation (DNS) of a high Karlovitz number laboratory premixed flame. The DNS results are first compared with those from laser-based diagnostics, with good agreement. The subsequent analysis focuses on a detailed investigation of the flame area, its local thickness, and their rates of change in isosurface-following reference frames, quantities that are intimately connected. The net flame stretch is demonstrated to be a small residual of large competing terms: the positive tangential strain term and the negative curvature stretch term. The latter is found to be driven by flame speed-curvature correlations and dominated in net by low-probability, highly curved regions. Flame thickening is demonstrated to be substantial on average, while local regions of flame thinning are also observed. The rate of change of the flame thickness (as measured by the scalar gradient magnitude) is demonstrated, analogously to flame stretch, to be a competition between straining tending to increase gradients and flame speed variations in the normal direction tending to decrease them. The flame stretch and flame thickness analyses are connected by the observation that high positive tangential strain rate regions generally correspond with low curvature regions; these regions tend to be positively stretched in net and are relatively thinner compared with other regions. High curvature magnitude regions (both positive and negative) generally correspond with lower tangential strain; these regions are in net negatively stretched and thickened substantially.
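For reference, the competition described here is consistent with the standard stretch-rate decomposition for an isosurface-following area element $A$,
\[ \kappa \;\equiv\; \frac{1}{A}\frac{\mathrm{d}A}{\mathrm{d}t} \;=\; a_t \;+\; S_d\,\nabla\!\cdot\mathbf{n}, \]
where $a_t$ is the tangential strain rate, $S_d$ the displacement speed, and $\mathbf{n}$ the flame normal; the first term is the (typically positive) tangential strain contribution and the second is the curvature-stretch contribution driven by flame speed-curvature correlations.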
Dedicated DIII-D experiments coupled with modeling reveal that the net erosion rate of high-Z materials, i.e. Mo and W, is strongly affected by the carbon concentration in the plasma and by the magnetic pre-sheath properties. Different methods such as electrical biasing and local gas injection have been investigated to control high-Z material erosion. The net erosion rate of high-Z materials is significantly reduced due to the high local re-deposition ratio. The ERO modeling shows that the local re-deposition ratio is mainly controlled by the electric field and plasma density within the magnetic pre-sheath. The net erosion can be significantly suppressed by reducing the sheath potential drop. A high carbon impurity concentration in the background plasma is also found to reduce the net erosion rate of high-Z materials. Both DIII-D experiments and modeling show that local 13CH4 injection can create a carbon coating on the metal surface. The profile of 13C deposition provides quantitative information on radial transport due to the E × B drift and cross-field diffusion. Deuterium gas injection upstream of the W sample can reduce the W net erosion rate by perturbing the local plasma. In H-mode plasmas, the measured inter-ELM W erosion rates at different radial locations are well reproduced by ERO modeling taking into account the charge-state-resolved carbon ion flux in the background plasma calculated using the OEDGE code.
Mechanical serial sectioning is a highly repetitive technique employed in metallography for the rendering of 3D reconstructions of microstructure. While alternative techniques such as ultrasonic detection, micro-computed tomography, and focused ion beam milling have progressed considerably in recent years, few alternatives provide equivalent opportunities for comparatively high resolution over sizable cross-sectional areas and volumes. To that end, the introduction of automated serial sectioning systems has greatly heightened repeatability and increased data collection rates while diminishing the opportunity for mishandling and other user-introduced errors. Unfortunately, even among current, state-of-the-art automated serial sectioning systems, challenges in data collection have not been fully eradicated. Therefore, this paper highlights two specific advances to assist in this area: a non-contact laser triangulation method for assessment of material removal rates, and a newly developed graphical user interface providing real-time monitoring of experimental progress. Furthermore, both are shown to be helpful in the rapid identification of anomalies and interruptions, while also providing comparable and less error-prone measures of removal rate over the course of these long-term, challenging, and innately destructive characterization experiments.