Pyrolyzed carbon is a promising mechanical material for applications in harsh environments. In this work, we characterized the material and developed novel processes for fabricating carbon composite micro-electromechanical systems (CMEMS) structures. A method of increasing the Young's modulus and conductivity of pyrolyzed AZ 4330 was demonstrated by loading the films with graphene oxide prior to pyrolysis: incorporating 2 wt.% graphene stiffeners into the film yielded a 65% increase in Young's modulus and an 11% increase in conductivity. By reactive-ion etching pyrolyzed blanket AZ 50XT thick-film photoresist, a high-aspect-ratio process was demonstrated with films >7.5 µm thick. Two novel multi-level, volume-scalable CMEMS processes were developed on 6" diameter wafers. A Young's modulus of 23 GPa was extracted from nanoindentation measurements of pyrolyzed AZ 50XT films. The temperature-dependent resistance was characterized from room temperature to 500 °C and found to be nearly linear over this range. By fitting the results of self-heated bridges in an inert ambient, we calculated that the bridges survived to 1000 °C without failure. Transmission electron microscopy (TEM) results showed the film to be largely amorphous, containing some sub-micrometer-sized graphite crystallites. This was consistent with our Raman analysis, which also showed the film to be largely sp2 bonded. The calculated average density of pyrolyzed AZ 4330 films was 1.32 g/cm3. The level of disorder and the conductivity of thin-film resistors were found to be unchanged by 2 Mrad gamma irradiation from a Co-60 source. Thin-film pyrolyzed carbon resistors were hermetically sealed in a nitrogen ambient in 24-pin dual in-line packages (DIPs). The resistance was measured periodically and remained constant over 6 months.
Thermodynamic quantities, such as pressure and internal energy, and their derivatives are used in many applications. Depending on the application, a natural set of quantities related to one of the four thermodynamic potentials is typically used. For example, hydro-codes use internal-energy-derived quantities, and equation-of-state work often uses Helmholtz free energy quantities. When work spans several fields, transformations between one set of quantities and another are often needed. A short but comprehensive review of such transformations is given in this report.
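To make the flavor of these transformations concrete, here is a standard example (textbook thermodynamic identities, included for orientation rather than drawn from the report itself): the quantities natural to an internal-energy description follow from derivatives of the Helmholtz free energy F(T, V) via

\[
P = -\left(\frac{\partial F}{\partial V}\right)_T, \qquad
S = -\left(\frac{\partial F}{\partial T}\right)_V, \qquad
E = F + TS = F - T\left(\frac{\partial F}{\partial T}\right)_V ,
\]

with second-derivative quantities following similarly, e.g. the constant-volume specific heat \(C_V = \left(\partial E/\partial T\right)_V = -T\left(\partial^2 F/\partial T^2\right)_V\).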
Here, we describe a new method to measure the activation energy required to remove a strongly bound membrane-associated protein from a lipid membrane (the anchoring energy). It is based on measuring the rate of release of a liposome-bound protein during centrifugation on a sucrose gradient as a function of time and temperature. The method was used to determine the anchoring energy for the soluble dengue virus envelope protein (sE) strongly bound to 80:20 POPC:POPG liposomes at pH 5.5. We also measured the binding energy of sE at the same pH for the initial, predominantly reversible, phase of binding to a 70:30 PC:PG lipid bilayer. The anchoring energy (37 ± 1.7 kcal/mol, 20% PG) was found to be much larger than the binding energy (7.8 ± 0.3 kcal/mol for 30% PG, or an estimated 7.0 kcal/mol for 20% PG). This is consistent with data showing that free sE is a monomer at pH 5.5 but assembles into trimers after associating with membranes. However, trimerization alone is insufficient to account for the observed difference in energies, and we conclude that some energy dissipation occurs during the release process. This new method to determine anchoring energy should be useful for understanding the complex interactions of integral monotopic proteins and strongly bound peripheral membrane proteins with lipid membranes.
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Performance portability on heterogeneous high-performance computing (HPC) systems is a major challenge faced today by code developers: parallel code needs to execute correctly, and with high performance, on machines with different architectures, operating systems, and software libraries. The Finite Element Method (FEM) is a popular and flexible method for discretizing partial differential equations arising in a wide variety of scientific, engineering, and industrial applications that require HPC. This paper presents preliminary results from our development of a performance-portable implementation of the FEM-based Albany code. Performance portability is achieved using the Kokkos library of Trilinos. We present performance results for two physics simulation modules in Albany: the Aeras global atmosphere dynamical core and the FELIX land-ice solver. Numerical experiments show that our single code implementation gives reasonable performance across two multi-core/many-core architectures: NVIDIA GPUs and multi-core CPUs.
We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. We also explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
Geologic carbon storage in deep saline aquifers is a promising technology for reducing anthropogenic CO2 emissions into the atmosphere. Dissolution of injected CO2 into resident brines is one of the primary trapping mechanisms generally considered necessary to provide long-term storage security. Because diffusion of CO2 in brine is extremely slow, convective dissolution, driven by a small increase in brine density with CO2 saturation, is considered to be the primary mechanism of dissolution trapping. Previous studies of convective dissolution have typically considered only the convective process in the single-phase region below the capillary transition zone and have either ignored the overlying two-phase region where dissolution actually takes place or replaced it with a virtual region of reduced or enhanced constant permeability. Our objective is to improve estimates of the long-term dissolution flux of CO2 into brine by including the capillary transition zone in two-phase model simulations. In the fully two-phase model, there is a capillary transition zone above the brine-saturated region over which the brine saturation decreases with increasing elevation. Our two-phase simulations show that the dissolution flux obtained by assuming a brine-saturated, single-phase porous region with a closed upper boundary is recovered in the limit of vanishing entry pressure and capillary transition zone. For typical finite entry pressures and capillary transition zones, however, convection currents penetrate into the two-phase region. This removes the mass-transfer limitation of the diffusive boundary layer and enhances the convective dissolution flux of CO2 to more than 3 times the rate predicted assuming single-phase conditions.
We present a verification and validation analysis of a coordinate-transformation-based numerical solution method for the two-dimensional axisymmetric magnetic diffusion equation, implemented in the finite-element simulation code ALEGRA. The transformation, suggested by Melissen and Simkin, yields an equation set perfectly suited for linear finite elements and for problems with large jumps in material conductivity near the axis. The verification analysis examines transient magnetic diffusion in a rod or wire in a very low conductivity background by first deriving an approximate analytic solution using perturbation theory. This approach for generating a reference solution is shown to be not fully satisfactory. A specialized approach for manufacturing an exact solution is then used to demonstrate second-order convergence under spatial refinement and temporal refinement. For this new implementation, a significant improvement relative to previously available formulations is observed. Benefits in accuracy for computed current density and Joule heating are also demonstrated. The validation analysis examines the circuit-driven explosion of a copper wire using resistive magnetohydrodynamics modeling, in comparison to experimental tests. The new implementation matches the accuracy of the existing formulation, with both formulations capturing the experimental burst time and action to within approximately 2%.
Herein we develop a quantitative dye dequenching technique for the measurement of polymersome fusion, using it to characterize the salt-mediated, mechanically induced fusion of polymersomes with polymer, lipid, and so-called stealth lipid vesicles. While dye dequenching has been used to quantitatively explore liposome fusion in the past, this is the first use of dye dequenching to measure polymersome fusion of which we are aware. In addition to providing quantitative results, dye dequenching is ideal for detecting fusion in instances where dynamic light scattering (DLS) results would be ambiguous, such as low fusion yields and size ranges outside the capabilities of DLS. The dye chosen for this study was a cyanine derivative, 1,1′-dioctadecyl-3,3,3′,3′-tetramethylindotricarbocyanine iodide (DiR), which provided excellent data on the extent of polymersome fusion. Using this technique, we have shown the limited fusion capabilities of polymersome/liposome heterofusion, notably DOPC vesicles fusing with polymersomes at half the efficiency of polymersome homofusion and DPPC vesicles showing virtually no fusion. In addition to these key heterofusion experiments, we determined the broad applicability of dye dequenching in measuring kinetic rates of polymersome fusion, and showed that even at elevated temperatures or over multiple weeks' time, no polymersome fusion occurred without agitation. Stealth liposomes formed from DPPC and PEO-functionalized lipid, however, fused with polymersomes and stealth liposomes with relatively high efficiency, lending support to our hypothesis that the response of the PEO corona to salt is a key factor in the fusion process. This last finding suggests that although the conjugation of PEO to lipids increases vesicle biocompatibility and enables longer circulation times, it also renders the vesicles subject to destabilization under high salt and shear (e.g. in the circulatory system) that may lead to, in this case, fusion.
With vibrant colours and simple, room-temperature processing methods, electrochromic polymers have attracted attention as active materials for flexible, low-power-consuming devices. However, slow switching speeds in devices realized to date, as well as the complexity of having to combine several distinct polymers to achieve a full-colour gamut, have limited electrochromic materials to niche applications. Here we achieve fast, high-contrast electrochromic switching by significantly enhancing the interaction of light, propagating as deep-subwavelength-confined surface plasmon polaritons through arrays of metallic nanoslits, with an electrochromic polymer present as an ultra-thin coating on the slit sidewalls. The switchable configuration retains the short temporal charge-diffusion characteristics of thin electrochromic films, while maintaining the high optical contrast associated with thicker electrochromic coatings. We further demonstrate that by controlling the pitch of the nanoslit arrays, it is possible to achieve a full-colour response with high contrast and fast switching speeds, while relying on just one electrochromic polymer.
We have investigated the atomic structure of graphene/Ir(111)-supported platinum clusters containing on average fewer than 40 atoms by means of surface x-ray diffraction (SXRD), grazing-incidence small-angle x-ray scattering (GISAXS), and normal-incidence x-ray standing wave (NIXSW) measurements, in comparison with density functional theory (DFT) calculations. GISAXS revealed that the clusters, 1.3 nm in diameter, form a regular array with domain sizes of 90 nm. SXRD shows that the 1-2-monolayer-high, (111)-oriented Pt nanoparticles grow epitaxially on the graphene support. From the combined analysis of the SXRD and NIXSW data, a three-dimensional (3D) structural model of the clusters and the graphene support can be deduced which is in line with the DFT results. For the clusters grown in ultrahigh vacuum, the lattice parameter is reduced by (4.6±0.1)% compared to bulk platinum. The graphene layer undergoes a strong Pt-adsorption-induced buckling, caused by a rehybridization of the carbon atoms below the cluster. In situ observation of the Pt clusters in CO and O2 environments revealed a reversible change of the clusters' strain state while successively dosing CO at room temperature and O2 at 575 K, pointing to CO oxidation activity of the Pt clusters.
The High Optical Access (HOA) trap was designed in collaboration with the Modular Universal Scalable Ion-trap Quantum Computer (MUSIQC) team, funded, along with Sandia National Laboratories, through IARPA's Multi Qubit Coherent Operations (MQCO) program. The design of version 1 of the HOA trap was completed in September 2012, and initial devices were completed and packaged in February 2013. The second version, the HOA-2 trap, was completed in September 2014 and is available for IARPA's use.
Many ideas for liquid-surface plasma-facing components (PFCs) are aimed at divertors. First walls are likely to be more challenging technologically because long flow paths are necessary for fast-flowing systems and the first wall must be an integral structure with the blanket. Maximum tolerable heat loads are a critical concern. This paper describes several processes at work in walls with fast-flowing or slow-flowing liquid plasma-facing surfaces, along with the constraints imposed by heat transfer, the power balance for the PFC, and the structure needed for an integrated first wall and blanket. Thermal modeling of a generic PFC structure is used to illustrate the issues and support the conclusions.
Collection of mosquitoes and testing for vector-borne viruses is a key surveillance activity that directly influences the vector control efforts of public health agencies, including determining when and where to apply insecticides. Vector control districts in California routinely monitor for three human-pathogenic viruses: West Nile virus (WNV), western equine encephalitis virus (WEEV), and St. Louis encephalitis virus (SLEV). Reverse transcription quantitative polymerase chain reaction (RT-qPCR) offers highly sensitive and specific detection of these three viruses in a single multiplex reaction, but this technique requires costly, specialized equipment that is generally available only in centralized public health laboratories. We report the use of reverse transcription loop-mediated isothermal amplification (RT-LAMP) to detect WNV, WEEV, and SLEV RNA extracted from pooled mosquito samples collected in California, including novel primer sets for specific detection of WEEV and SLEV that target the nonstructural protein 4 (nsP4) gene of WEEV and the 3' untranslated region (3'-UTR) of SLEV. Our WEEV and SLEV RT-LAMP primers allowed detection of <0.1 PFU/reaction of their respective targets in <30 minutes and exhibited high specificity without cross-reactivity when tested against a panel of alphaviruses and flaviviruses. Furthermore, the SLEV primers do not cross-react with WNV, despite both viruses being closely related members of the Japanese encephalitis virus complex. The SLEV and WEEV primers can also be combined in a single RT-LAMP reaction, with discrimination between amplicons by melt curve analysis. Although RT-qPCR is approximately one order of magnitude more sensitive than RT-LAMP for all three targets, RT-LAMP is less instrumentally intensive and provides a more cost-effective method of vector-borne virus surveillance.
The diffusion of water and ions in the interlayer region of smectite clay minerals provides a direct probe of the type and strength of clay–fluid interactions. Interlayer diffusion also represents an important link between molecular simulation and macroscopic experiments. Here we use molecular dynamics simulation to investigate trends in cation and water diffusion in montmorillonite interlayers, looking specifically at the effects of layer charge, interlayer cation identity and charge (sodium or calcium), water content, and temperature. For Na-montmorillonite, the largest increase in ion and water diffusion coefficients occurs between the one-layer and two-layer hydrates, corresponding to the transition from inner-sphere to outer-sphere surface complexes. Calculated activation energies for ion and water diffusion in Na-montmorillonite are similar to each other and to the water hydrogen bond energy, suggesting the breaking of water–water and water–clay hydrogen bonds as a likely mechanism for interlayer diffusion. A comparison of interlayer diffusion with that of bulk electrolyte solutions reveals a clear trend of decreasing diffusion coefficient with increasing electrolyte concentration, and in most cases the interlayer diffusion results are nearly coincident with those of the corresponding bulk solutions. Trends in electrical conductivities computed from the ion diffusion coefficients are also compared.
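As a sketch of how such diffusion coefficients are commonly extracted from molecular dynamics trajectories, the Einstein relation links the long-time slope of the mean-squared displacement (MSD) to D. The snippet below is illustrative only; the array layout and the two-dimensional treatment of in-plane interlayer diffusion are our assumptions, not details taken from the study.

```python
import numpy as np

def diffusion_coefficient(positions, dt, dim=2):
    """Einstein-relation estimate of D from unwrapped MD trajectories.

    positions : (n_frames, n_atoms, 3) unwrapped coordinates [angstrom]
    dt        : time between frames [ps]
    dim       : 2 for in-plane (interlayer) diffusion, 3 for bulk
    Returns D in angstrom^2/ps (1 A^2/ps = 1e-4 cm^2/s).
    """
    disp = positions - positions[0]           # displacement from frame 0
    if dim == 2:
        disp = disp[:, :, :2]                 # keep in-plane components only
    msd = (disp**2).sum(axis=2).mean(axis=1)  # average over atoms
    t = np.arange(len(msd)) * dt
    # Fit only the linear (long-time) regime; skip the early ballistic part.
    half = len(t) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]
    return slope / (2 * dim)                  # MSD = 2 * dim * D * t
```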
Ternary polymer brushes consisting of polystyrene, poly(methyl methacrylate), and poly(4-vinylpyridine) have been synthesized. These brushes laterally phase separate into several distinct phases and can be tailored by altering the relative polymer composition. Self-consistent field theory has been used to predict the phase diagram and model both the horizontal and vertical phase behavior of the polymer brushes. All phase behaviors observed experimentally correlate well with the theoretical model.
In this study, a mechanical model is introduced for predicting the initiation and evolution of complex fracture patterns without the need for a damage variable or damage law. The model, a continuum variant of Newton's second law, uses integral rather than partial differential operators, with the region of integration taken over a finite domain. The force interaction is derived from a novel nonconvex strain energy density function, resulting in a nonmonotonic material model. The resulting equation of motion is proved to be mathematically well-posed. The model has the capacity to simulate nucleation and growth of multiple, mutually interacting dynamic fractures. In the limit of a vanishing region of integration, the model reproduces the classic Griffith model of brittle fracture. The simplicity of the formulation avoids the need for supplemental kinetic relations that dictate crack growth or for an explicit damage evolution law.
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as part of current standard practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration, as well as new distribution- and parameter-fitting techniques. These modifications better represent the measured data and, therefore, should contribute to the development of more realistic environmental contours of extreme sea states for determining design loads for marine structures.
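A minimal sketch of the PCA step described above, assuming paired (Hs, Te) records already loaded into NumPy arrays (the file names are hypothetical); it illustrates rotating the data into uncorrelated components before marginal distribution fitting, not the authors' full I-FORM pipeline.

```python
import numpy as np

# hs, te: arrays of significant wave height [m] and energy period [s]
hs = np.loadtxt("hs.txt")   # hypothetical input files
te = np.loadtxt("te.txt")

X = np.column_stack([hs, te])
Xc = X - X.mean(axis=0)                   # center the data

# Principal components: eigenvectors of the sample covariance matrix
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
components = eigvecs[:, np.argsort(eigvals)[::-1]]

# Rotate into uncorrelated PCA coordinates; marginal distributions can
# now be fitted independently before applying the I-FORM.
scores = Xc @ components
print(np.corrcoef(scores, rowvar=False))  # off-diagonals ~ 0
```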
High-temperature Arrhenius ignition delay time correlations are useful for revealing the underlying parameter dependencies of combustion models, for simplifying and optimizing combustion mechanisms for use in engine simulations, for scaling experimental data to new conditions for comparison purposes, and for guiding experimental design. We have developed a scaling relationship for Fatty Acid Methyl Ester (FAME) ignition delay time data taken at high temperatures in 4% O2/Ar mixtures behind reflected shocks using an aerosol shock tube:

\[
\tau_{\mathrm{ign}}\,[\mathrm{ms}] = 2.24\times10^{-6}\,[\mathrm{ms}]\;
\left(P\,[\mathrm{atm}]\right)^{-0.41}\,\varphi^{0.30}\,C_n^{-0.61}\,
\exp\!\left(\frac{37.1\,[\mathrm{kcal/mol}]}{R_u\,[\mathrm{kcal/(mol\,K)}]\;T\,[\mathrm{K}]}\right)
\]

Additionally, we have combined our ignition delay time data for methyl decanoate, methyl palmitate, methyl oleate, and methyl linoleate with other experimental results in the literature in order to derive fuel-specific oxygen-mole-fraction scaling parameters for these surrogates. In this article, we discuss the significance of the parameter values, compare our correlation to others found in the literature for different classes of fuels, and contrast the above expression's performance with correlations obtained using leading FAME kinetic models in 4% O2/Ar mixtures.
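For convenience, the correlation above is straightforward to evaluate directly; the helper below implements it as written, with our reading that C_n is the fuel carbon number and R_u the universal gas constant in kcal/(mol K). The example inputs are illustrative, not values from the article.

```python
import math

def tau_ign_ms(P_atm, phi, carbon_number, T_K):
    """Ignition delay [ms] from the FAME correlation quoted above
    (high-temperature, 4% O2/Ar, reflected-shock conditions)."""
    Ru = 1.987e-3  # universal gas constant, kcal/(mol K)
    return (2.24e-6 * P_atm**-0.41 * phi**0.30 * carbon_number**-0.61
            * math.exp(37.1 / (Ru * T_K)))

# Illustrative evaluation: 1.5 atm, phi = 1, C_n = 11, T = 1300 K
print(f"{tau_ign_ms(1.5, 1.0, 11, 1300.0):.3f} ms")
```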
Nanosecond pulsed laser irradiation was used to fabricate colored, mechanically robust oxide "tags" on 304L stainless steel. Immersion in simulated seawater solution, salt fog exposure, and anodic polarization in a 3.5% NaCl solution were employed to evaluate the environmental resistance of these oxide tags. Single-layer oxides outside a narrow thickness range (~100-150 nm) are susceptible to dissolution in chloride-containing environments. The 304L substrates immediately beneath the oxides corrode severely, which is attributed to Cr depletion in the melt zone during laser processing. For the first time, multilayered oxides were fabricated with pulsed laser irradiation in an effort to expand the protective thickness range while also increasing the variety of film colors attainable in this range. Layered films grown using a laser scan rate of 475 mm/s are more resistant to both localized and general corrosion than oxides fabricated at 550 mm/s. In the absence of pre-processing to mitigate Cr depletion, layered films can enhance the environmental stability of the system.
Lotfi, Hossein; Li, Lu; Lei, Lin; Jiang, Yuchao; Yang, Rui Q.; Klem, John F.; Johnson, Matthew B.
High-temperature operation (250-340 K) of short-wavelength interband cascade infrared photodetectors (ICIPs) with InAs/GaSb/Al0.2In0.8Sb/GaSb superlattice absorbers has been demonstrated, with a 50% cutoff wavelength of 2.9 μm at 300 K. Two ICIP structures, one with two and the other with three stages, were designed and grown to explore this multiple-stage architecture. At λ = 2.1 μm, the two- and three-stage ICIPs had Johnson-noise-limited detectivities of 5.1 × 10^9 and 5.8 × 10^9 cm Hz^(1/2)/W, respectively, at 300 K. The better performance of the three-stage ICIP over the two-stage ICIP confirmed the advantage of more stages for this cascade architecture. An Arrhenius activation energy of 450 meV is extracted for the bulk resistance-area product, which indicates the dominance of the diffusion current at these high temperatures.
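The diffusion-limited interpretation follows from the Arrhenius behavior of the resistance-area product; as a consistency check (our arithmetic, not stated in the abstract), the extracted activation energy is close to the absorber band gap implied by the cutoff wavelength:

\[
R_0 A \propto \exp\!\left(\frac{E_a}{k_B T}\right), \qquad
E_g \approx \frac{hc}{\lambda_c} = \frac{1.24\ \mathrm{eV\,\mu m}}{2.9\ \mathrm{\mu m}} \approx 0.43\ \mathrm{eV} \approx E_a = 450\ \mathrm{meV},
\]

consistent with dark current dominated by diffusion (\(E_a \approx E_g\)) rather than generation-recombination (\(E_a \approx E_g/2\)).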
Laser-based failure analysis techniques demonstrate the ability to quickly and non-intrusively screen deep ultraviolet light-emitting diodes (LEDs) for electrically active defects. In particular, two laser-based techniques, light-induced voltage alteration and thermally-induced voltage alteration, generate applied voltage maps (AVMs) that provide information on electrically active defect behavior, including turn-on bias, density, and spatial location. Here, multiple commercial LEDs were examined and found to have dark defect signals in the AVM indicating a site of reduced resistance or leakage through the diode. The existence of the dark defect signals in the AVM correlates strongly with an increased forward-bias leakage current. This increased leakage is not present in devices without AVM signals. Transmission electron microscopy analysis of a dark defect signal site revealed a dislocation cluster through the p-n junction. The cluster included an open-core dislocation. Although LEDs with few dark AVM defect signals did not show a strong correlation with power loss, a direct association between increased open-core dislocation densities and reduced LED device performance has been presented elsewhere [M. W. Moseley et al., J. Appl. Phys. 117, 095301 (2015)].
During the course of their careers, welding engineers and welding metallurgists are often confronted with questions regarding welding processes and properties that on the surface appear simple and direct but are in fact quite challenging. These questions generally mask an underlying complexity whose underpinnings in scientific and applied research predate even the founding of the American Welding Society, and previous Comfort A. Adams lectures provide ample and fascinating evidence of the breadth and depth of this complexity. Using these studies or their own experiences and investigations as a basis, most welding and materials engineers have developed engineering tools to provide working approaches to these day-to-day questions and problems. In this article, several examples of research into developing working approaches to welding problems are presented.
Zhao, Xin Y.; Bhagatwala, Ankit; Chen, Jacqueline H.; Haworth, Daniel C.; Pope, Stephen B.
The modeling of mixing by molecular diffusion is a central aspect of transported probability density function (tPDF) methods. In this paper, the newly proposed shadow position mixing model (SPMM) is examined using a DNS database for a temporally evolving dimethyl ether slot jet flame. Two methods that invoke different levels of approximation are proposed to extract the shadow displacement (equivalent to shadow position) from the DNS database, and an approach for a priori analysis of mixing-model performance is developed. The shadow displacement is highly correlated with both mixture fraction and velocity, and the peak correlation coefficient between the shadow displacement and mixture fraction is higher than that between the shadow displacement and velocity. This suggests that composition-space localness is reasonably well enforced by the model, with appropriate choices of model constants. The conditional diffusion of mixture fraction and major species from DNS and from SPMM are then compared, using mixing rates derived by matching the mixture fraction scalar dissipation rates. Good qualitative agreement is found for the predicted locations of zero and maximum/minimum conditional diffusion for mixture fraction and individual species. Similar comparisons are performed for DNS and the IECM (interaction by exchange with the conditional mean) model. The agreement between SPMM and DNS is better than that between IECM and DNS, in terms of conditional diffusion iso-contour similarities and global normalized residual levels. It is found that a suitable value for the model constant c that controls the mixing frequency can be derived using the local normalized scalar variance, and that the model constant a controls the localness of the model. A higher-Reynolds-number test case is anticipated to be more appropriate for evaluating the mixing models, and stand-alone transported PDF simulations are required to more fully enforce localness and to assess model performance.
Much effort has been made to model hydrogen releases from leaks during potential failures of hydrogen storage systems. A reduced-order jet model can be used to quickly characterize these flows, with low computational cost. Notional nozzle models are often used to avoid modeling the complex shock structures produced by the underexpanded jets by determining an "effective" source to produce the observed downstream trends. In this work, the mean hydrogen concentration fields were measured in a series of subsonic and underexpanded jets using a planar laser Rayleigh scattering system. The experimental data was compared to a reduced order jet model for subsonic flows and a notional nozzle model coupled to the jet model for underexpanded jets. The values of some key model parameters were determined by comparisons with the experimental data. The coupled model was also validated against hydrogen concentrations measurements for 100 and 200 bar hydrogen jets with the predictions agreeing well with data in the literature.
An associated particle neutron generator is described that employs a negative ion source to produce high neutron flux from a small source size. Negative ions produced in an rf-driven plasma source are extracted through a small aperture to form a beam which bombards a positively biased, high voltage target electrode. Electrons co-extracted with the negative ions are removed by a permanent magnet electron filter. The use of negative ions enables high neutron output (100% atomic ion beam), high quality imaging (small neutron source size), and reliable operation (no high voltage breakdowns). The neutron generator can operate in either pulsed or continuous-wave (cw) mode and has been demonstrated to produce 10^6 D-D n/s (equivalent to ~10^8 D-T n/s) from a 1 mm-diameter neutron source size to facilitate high fidelity associated particle imaging.
In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) has predominantly utilized the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Such coherence-based CCD approaches tend to indicate temporal change where there is none in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, for SAR coherent temporal change detection. The new CRCD estimator is a surprisingly simple expression that is easy to implement and optimal in the ML sense. This new estimate produces improved results on the coherent image-pair collects we have tested.
We develop and demonstrate a method to efficiently use density functional calculations to drive classical dynamics of complex atomic and molecular systems. The method has the potential to scale to systems and time scales unreachable with current ab initio molecular dynamics schemes. It relies on an adapting dataset of independently computed Hellmann–Feynman forces for atomic configurations endowed with a distance metric. The metric on configurations enables fast database lookup and robust interpolation of the stored forces. Here, we discuss mechanisms for the database to adapt to the needs of the evolving dynamics, while maintaining accuracy, and other extensions of the basic algorithm.
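A minimal sketch of the database idea described above, assuming a simple Euclidean metric on flattened coordinates and inverse-distance interpolation of stored forces; the class, method names, and tolerance are illustrative, not taken from the paper.

```python
import numpy as np

class ForceDatabase:
    """Adaptive lookup table of precomputed Hellmann-Feynman forces."""

    def __init__(self, tol=0.1):
        self.configs, self.forces = [], []   # stored (R, F) pairs
        self.tol = tol                       # trust radius of the metric

    def distance(self, a, b):
        # Simple Euclidean metric on flattened coordinates; a production
        # code would use an invariant configuration descriptor instead.
        return np.linalg.norm(a.ravel() - b.ravel())

    def query(self, config, dft_force_fn):
        d = (np.array([self.distance(config, c) for c in self.configs])
             if self.configs else np.array([]))
        near = np.where(d < self.tol)[0] if d.size else []
        if len(near) == 0:
            # Database miss: adapt by computing and storing a new entry.
            F = dft_force_fn(config)
            self.configs.append(config.copy())
            self.forces.append(F)
            return F
        # Inverse-distance-weighted interpolation of the stored forces.
        w = 1.0 / (d[near] + 1e-12)
        return sum(wi * self.forces[i] for wi, i in zip(w, near)) / w.sum()
```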
The 6th US/German Workshop on Salt Repository Research, Design, and Operation was held in Dresden, Germany, on September 7-9, 2015. Over seventy participants helped advance the technical basis for salt disposal of radioactive waste. The number of collaborative efforts continues to grow and to produce useful documentation, as well as to define the state of the art for research areas. These Proceedings are divided into Chapters, and a list of authors is included in the Acknowledgement Section. Also in this document are the Technical Agenda, List of Participants, Biographical Information, Abstracts, and Presentations. Proceedings of all workshops and other pertinent information are posted on websites hosted by Sandia National Laboratories and the Nuclear Energy Agency Salt Club. The US/German workshops provide continuity for long-term research, summarize and publish the status of mature areas, and develop appropriate research by consensus in a workshop environment. As before, major areas and findings are highlighted, which constitute topical Chapters in these Proceedings. The scientific breadth is substantial, and while not all subject matter is elaborated into chapter format, all presentations and abstracts are published in this document. In the following Proceedings, six selected topics are developed in detail.
We report on a new technique for obtaining off-Hugoniot pressure vs. density data for solid metals compressed to extreme pressure by a magnetically driven liner implosion on the Z-machine (Z) at Sandia National Laboratories. In our experiments, the liner comprises inner and outer metal tubes. The inner tube is composed of a sample material (e.g., Ta and Cu) whose compressed state is to be inferred. The outer tube is composed of Al and serves as the current carrying cathode. Another aluminum liner at much larger radius serves as the anode. A shaped current pulse quasi-isentropically compresses the sample as it implodes. The iterative method used to infer pressure vs. density requires two velocity measurements. Photonic Doppler velocimetry probes measure the implosion velocity of the free (inner) surface of the sample material and the explosion velocity of the anode free (outer) surface. These two velocities are used in conjunction with magnetohydrodynamic simulation and mathematical optimization to obtain the current driving the liner implosion, and to infer pressure and density in the sample through maximum compression. This new equation of state calibration technique is illustrated using a simulated experiment with a Cu sample. Monte Carlo uncertainty quantification of synthetic data establishes convergence criteria for experiments. Results are presented from experiments with Al/Ta, Al/Cu, and Al liners. Symmetric liner implosion with quasi-isentropic compression to peak pressure ∼1000 GPa is achieved in all cases. These experiments exhibit unexpectedly softer behavior above 200 GPa, which we conjecture is related to differences in the actual and modeled properties of aluminum.
The Annular Core Research Reactor (ACRR) pulse is pneumatically driven by nitrogen in a system of pipes, valves, and hoses up to the connection between the pneumatic system and the mechanical linkages of the transient rod (TR). The main components of the TR pneumatic system are the regulator, accumulator, solenoid valve, and piston-cylinder assembly. The purpose of this work is to analyze the flow of nitrogen through the TR pneumatic system in order to develop a motion profile of the piston during the pulse and to predict the pressure distributions inside both the cylinder and the accumulators. The predicted pressure distributions will be validated against pressure transducer data, while the motion profile will be compared to proximity switch data. By predicting the motion of the piston, pulse timing can be determined and provided to the engineers/operators for verification. The motion profile will also provide an acceleration distribution to be used in Razorback to more accurately predict reactivity insertion into the system.
The article provides information about an upcoming conference from the program chair. The Microscopy Society of America (MSA), the Microanalysis Society (MAS), and the International Metallographic Society (IMS) invite participation in Microscopy & Microanalysis 2016 in Columbus, Ohio, July 24 through July 28, 2016.
Controlling the quantum entanglement between parts of a many-body system is key to unlocking the power of quantum technologies such as quantum computation, high-precision sensing, and the simulation of many-body physics. The spin degrees of freedom of ultracold neutral atoms in their ground electronic state provide a natural platform for such applications thanks to their long coherence times and the ability to control them with magneto-optical fields. However, the creation of strong coherent coupling between spins has been challenging. Here we demonstrate a strong and tunable Rydberg-dressed interaction between spins of individually trapped caesium atoms with energy shifts of order 1 MHz in units of Planck's constant. This interaction leads to a ground-state spin-flip blockade, whereby simultaneous hyperfine spin flips of two atoms are inhibited owing to their mutual interaction. We employ this spin-flip blockade to rapidly produce single-step Bell-state entanglement between two atoms with a fidelity of 81(2)%.
Localized stress variation in aluminum nitride (AlN) sputtered on patterned metallization has been monitored through the use of UV micro-Raman spectroscopy. This technique, utilizing 325 nm laser excitation, allows detection of the AlN E2(high) phonon mode in the presence of metal electrodes beneath the AlN layer with a high spatial resolution of less than 400 nm. The AlN film stress shifted by 400 MPa between regions where AlN was deposited over a bottom metal electrode versus over silicon dioxide. Across-wafer stress variations were also investigated, showing that wafer-level stress metrology, for example using wafer curvature measurements, introduces large uncertainties when predicting the impact of AlN residual stress on device performance.
2015 IEEE/ACM International Conference on Computer-Aided Design, ICCAD 2015
Debenedictis, Erik; Lu, Yung H.; Kadin, Alan M.; Berg, Alexander C.; Conte, Thomas M.; Garg, Rachit; Gingade, Ganesh; Hoang, Bichlien; Huang, Yongzhen; Li, Boxun; Liu, Jingyu; Liu, Wei; Mao, Huizi; Peng, Junran; Tang, Tianqi; Track, Elie K.; Wang, Jingqiu; Wang, Tao; Wang, Yu; Yao, Jun
Rebooting Computing (RC) is an effort in the IEEE to rethink future computers. RC was started in 2012 by its co-chairs, Elie Track (IEEE Council on Superconductivity) and Tom Conte (Computer Society). RC takes a holistic approach, considering revolutionary as well as evolutionary solutions needed to advance computer technologies. Three summits were held in 2013 and 2014, discussing different technologies, from emerging devices to user interfaces, from security to energy efficiency, from neuromorphic to reversible computing. The first part of this paper introduces RC to the design automation community and solicits revolutionary ideas from the community for the directions of future computer research. Energy efficiency is identified as one of the most important challenges in future computer technologies. The importance of energy efficiency spans from miniature embedded sensors to wearable computers, from individual desktops to data centers. To gauge the state of the art, the RC Committee organized the first Low Power Image Recognition Challenge (LPIRC). Each image contains one or more objects, among 200 categories. A contestant has to provide a working system that can recognize the objects and report their bounding boxes. The second part of this paper explains LPIRC and the solutions from the top two winners.
We demonstrate that metal-organic frameworks (MOFs) can catalyze hydrogenolysis of aryl ether bonds under mild conditions. Mg-IRMOF-74(I) and Mg-IRMOF-74(II) are stable under reducing conditions and can cleave phenyl ethers containing β-O-4, α-O-4, and 4-O-5 linkages to the corresponding hydrocarbons and phenols. Reaction occurs at 10 bar H2 and 120 °C without added base. DFT-optimized structures and charge transfer analysis suggest that the MOF orients the substrate near Mg2+ ions on the pore walls. Ti and Ni doping further increase conversions to as high as 82% with 96% selectivity for hydrogenolysis versus ring hydrogenation. Repeated cycling induces no loss of activity, making this a promising route for mild aryl-ether bond scission.
Particle image velocimetry measurements have been conducted for a Mach 0.8 flow over a wall-mounted hemisphere with a strongly separated wake. The shock foot was found to typically sit just forward of the apex of the hemisphere and move within a range of about ±10 deg. Conditional averages based upon the shock foot location show that the separation shock is positioned upstream along the hemisphere surface when reverse velocities in the recirculation region are strong and is located downstream when they are weaker. The recirculation region appears smaller when the shock is located farther downstream. No correlation was detected between the incoming boundary layer and either the shock position or the wake recirculation velocities. These observations are consistent with recent studies concluding that, for large, strong separation regions, the dominant mechanism is the instability of the separated flow rather than a direct influence of the incoming boundary layer.
A thermal rectifier that utilizes thermal expansion to directionally control interfacial conductance between two contacting surfaces is presented. The device consists of two thermal reservoirs contacting a beam with one rough end and one smooth end. When the temperature of the reservoir in contact with the smooth surface is raised, a similar temperature rise will occur in the beam, causing it to expand, thus increasing the contact pressure at the rough interface and reducing the interfacial contact resistance. However, if the temperature of the reservoir in contact with the rough interface is raised, the large contact resistance will prevent a similar temperature rise in the beam. As a result, the contact pressure will be only marginally affected and the contact resistance will not change appreciably. Owing to the decreased contact resistance in the first scenario compared to the second, thermal rectification occurs. A parametric analysis is used to determine optimal device parameters including surface roughness, contact pressure, and device length. Modeling predicts that rectification factors greater than 2 are possible at thermal biases as small as 3 K. Additionally, thin surface coatings are discussed as a method to control the temperature bias at which maximum rectification occurs.
Polymer foam encapsulants provide mechanical, electrical, and thermal isolation in engineered systems. It can be advantageous to surround objects of interest, such as electronics, with foams in a hermetically sealed container in order to protect them from hostile environments or from accidents such as fire. In fire environments, gas pressure from thermal decomposition of foams can cause mechanical failure of sealed systems. In this work, a detailed uncertainty quantification study of polymeric methylene diisocyanate (PMDI)-polyether-polyol based polyurethane foam is presented and compared to experimental results to assess the validity of a 3-D finite element model of the heat transfer and degradation processes. In this series of experiments, 320 kg/m3 PMDI foam in a 0.2 L sealed steel container is heated to 1,073 K at a rate of 150 K/min. The experiment ends when the can breaches due to the buildup of pressure. The temperature at key locations is monitored, as is the internal pressure of the can. Both experimental uncertainty and computational uncertainty are examined and compared. The mean value (MV) method and the Latin hypercube sampling (LHS) approach are used to propagate the uncertainty through the model. The results of both the MV method and the LHS approach show that while the model can generally predict the temperature at given locations in the system, it is less successful at predicting the pressure response; the two approaches for propagating uncertainty agree with each other. The importance of each input parameter on the simulation results is also investigated, showing that for the temperature response the conductivity of the steel container and the effective conductivity of the foam are the most important parameters, while for the pressure response the activation energy, effective conductivity, and specific heat are most important. The comparison to experiments and the identification of the drivers of uncertainty allow for targeted development of the computational model and for definition of the experiments necessary to improve accuracy.
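A minimal sketch of Latin hypercube propagation of input uncertainty through a model, using SciPy's qmc module; the model placeholder, parameter names, and ranges are illustrative stand-ins, not the study's calibrated values.

```python
import numpy as np
from scipy.stats import qmc

def model(k_steel, k_foam, activation_energy):
    """Placeholder for the 3-D finite element simulation (illustrative)."""
    return 900.0 + 0.1 * k_steel + 50.0 * k_foam - 1e-4 * activation_energy

# Lower/upper bounds for three uncertain inputs (illustrative values only):
# steel conductivity [W/m-K], foam effective conductivity [W/m-K],
# decomposition activation energy [J/mol]
lower = np.array([40.0, 0.05, 1.0e5])
upper = np.array([60.0, 0.15, 2.0e5])

sampler = qmc.LatinHypercube(d=3, seed=0)
samples = qmc.scale(sampler.random(n=100), lower, upper)

# Push each sample through the model and summarize the output spread
responses = np.array([model(*s) for s in samples])
print(f"mean = {responses.mean():.1f} K, std = {responses.std():.2f} K")
```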
Neely, Jason C.; Cavagnaro, Robert J.; Fay, François X.; Mendia, Joseba L.; Rea, Judith A.
Implications of conducting hardware-in-the-loop testing of a specific hydrokinetic turbine on controllable motor-generator sets or electromechanical emulation machines (EEMs) are explored. The emulator control dynamic equations are presented, methods for scaling turbine parameters are developed and evaluated, and experimental results are presented from three EEMs programmed to emulate the same vertical-axis fixed-pitch turbine. Although hardware platforms and control implementations varied, results show that each EEM is successful in emulating the turbine model at different power levels, thus demonstrating the general feasibility of the approach. However, performance of motor control under torque command, current command, or speed command differed. In a demonstration of the intended use of an EEM for evaluating a hydrokinetic turbine implementation, a power takeoff controller tracks the maximum power point of the turbine in response to turbulence. Utilizing realistic inflow conditions and control laws, the emulator dynamic speed response is shown to agree well at low frequencies with numerical simulation but to deviate at high frequencies.
Verification of tightly coupled multiphysics computational codes is generally significantly more difficult than verification of single-physics codes. The case of coupled heat conduction and thermal radiation in an enclosure is considered, and a manufactured-solution verification test for enclosure radiation is extended to a fully two-dimensional coupled problem with conduction and thermal radiation. Convergence results are shown using a production thermal analysis code. Convergence rates are optimal with a pairwise view-factor calculation algorithm.
We report on the thermodynamic properties of binary compound mixtures of model groups II-VI semiconductors. We use the recently introduced Stillinger-Weber Hamiltonian to model binary mixtures of CdTe and CdSe. We use molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard sphere models of mixed fcc and zincblende structures. The potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of temperature over the range studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that the CdTe and CdSe mixtures exhibit phase separation. The upper consolute temperature is found to be 335 K. Finally, we provide the surface energy as a function of composition. It roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases the stability, even for nanoparticles.
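Schematically, the regular-solution construction described above combines a non-ideal mixing enthalpy with the ideal entropy of mixing (shown here in textbook form; note the paper uses a cubic, rather than the symmetric, excess-enthalpy function):

\[
\Delta G_{\mathrm{mix}}(x) = \Delta H_{\mathrm{mix}}(x) - T\,\Delta S_{\mathrm{mix}}^{\mathrm{ideal}}(x),
\qquad
\Delta S_{\mathrm{mix}}^{\mathrm{ideal}}(x) = -R\left[x \ln x + (1-x)\ln(1-x)\right],
\]

where x is the mole fraction; a positive \(\Delta H_{\mathrm{mix}}\) produces the miscibility gap below the upper consolute temperature.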
The breakup of liquids due to aerodynamic forces has been widely studied. However, the literature contains limited quantified data on secondary droplet sizes, particularly as a function of time. Here, a column of liquid water is subjected to a step change in relative gas velocity using a shock tube. A unique digital in-line holography (DIH) configuration is proposed which quantifies the secondary droplet sizes, three-dimensional positions, and three-component velocities at 100 kHz. Results quantify the detailed evolution of the characteristic mean diameters and droplet size-velocity correlations as a function of distance downstream from the initial location of the water column. Accuracy of the measurements is confirmed through mass balance. These data give unprecedented detail on the breakup process which will be useful for improved model development and validation.
In this discussion paper, we explore different ways to assess the value of verification and validation (V&V) of engineering models. We first present a literature review on the value of V&V and then use value chains and decision trees to show how value can be assessed from a decision maker’s perspective. In this context, the value is what the decision maker is willing to pay for V&V analysis with the understanding that the V&V results are uncertain. The 2014 Sandia V&V Challenge Workshop is used to illustrate these ideas.
High-speed, time-resolved particle image velocimetry with a pulse-burst laser was used to measure the gas-phase velocity upstream and downstream of a shock wave-particle curtain interaction at three shock Mach numbers (1.19, 1.40, and 1.45), at a sampling rate of 37.5 kHz. The particle curtain, formed from free-falling soda-lime particles with diameters ranging from 300 to 355 μm, had a streamwise thickness of 3.5 mm and a volume fraction of 9% at mid-height. Following impingement by the shock wave, a pressure difference was created between the upstream and downstream sides of the curtain, which accelerated flow through the curtain. Jetting of flow through the curtain was observed downstream once deformation of the curtain began, demonstrating a long-term unsteady effect. Using a control volume approach, the unsteady drag on the curtain was determined from velocity and pressure data. Initially, the pressure difference between the upstream and downstream sides of the curtain was the largest contributor to the total drag. The data suggest, however, that as time increases, the change in momentum flux could become the dominant component as the pressure difference decreases.
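Schematically, a control-volume drag estimate of this kind balances surface pressure and momentum-flux terms (a generic statement of the budget, not the paper's exact expression):

\[
F_D(t) = \underbrace{\left[p_u(t) - p_d(t)\right] A}_{\text{pressure difference}}
\;+\; \underbrace{A\left[\rho_u u_u^2(t) - \rho_d u_d^2(t)\right]}_{\text{net momentum flux}}
\;-\; \underbrace{\frac{d}{dt}\int_{CV} \rho u \, dV}_{\text{unsteady storage}},
\]

with subscripts u and d denoting the upstream and downstream faces of a control volume enclosing the curtain, and A the face area.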
Pulse-burst particle image velocimetry (PIV) has been used to acquire time-resolved data at 37.5 kHz of the flow over a finite-width rectangular cavity at Mach 0.6, 0.8, and 0.94. Power spectra of the PIV data reveal four resonance modes that match the frequencies detected simultaneously using high-frequency wall pressure sensors. Velocity resonances exhibit spatial dependence in which the lowest-frequency acoustic mode is active within the recirculation region whereas the three higher modes are concentrated within the shear layer. Spatio-temporal cross-correlations were calculated from velocity data first bandpass filtered for specific resonance frequencies. The low-frequency acoustic mode shows properties of a standing wave without spatial correlation. Higher resonance modes are associated with alternating coherent structures whose size and spacing decrease for higher resonance modes and increase as structures convect downstream. The convection velocity appears identical for the high-frequency resonance modes, but it too increases with downstream distance. This is in contrast to the well-known Rossiter equation, which assumes a convection velocity constant in space.
Time-resolved particle image velocimetry (PIV) data have been acquired with a pulse-burst laser for a supersonic jet issuing into a Mach 0.8 crossflow. Simultaneously, the final pulse pair in each burst was imaged using conventional PIV cameras to produce an independent two-component measurement and two stereoscopic measurements. Each measurement depicts generally similar flowfield features, with vorticity contours marking turbulent eddies at corresponding locations. Probability density functions of the velocity fluctuations are essentially indistinguishable, but the precision uncertainty estimated using correlation statistics shows that the pulse-burst PIV data have notably greater uncertainty than the three conventional measurements. This occurs due to greater noise in the cameras and a smaller size for the final iteration of the interrogation window. A small degree of peak locking is observed in the aggregate of the pulse-burst PIV data set. However, some of the individual vector fields show peak locking to non-integer pixel values as a result of real physical effects in the flow. Even if peak locking results entirely from measurement bias, the effect occurs at too low a level to anticipate a significant effect on data analysis.
Stereoscopic particle image velocimetry was used to experimentally measure the recirculating flow within finite-span cavities of varying complex geometry at a freestream Mach number of 0.8. Volumetric measurements were made to investigate the side-wall influences by scanning a laser sheet across the cavity. Each of the geometries could be classified as an open cavity based on its length-to-depth ratio (L/D). The addition of ramps altered the recirculation zone within the cavity, causing it to move along the streamwise direction. Within the simple rectangular cavity, a system of counter-rotating streamwise vortices formed due to spillage along the side wall, which caused the mixing layer to develop a steady spanwise waviness. The ramped complex geometry, due to the presence of leading-edge and side ramps, appeared to suppress the formation of streamwise vorticity associated with side-wall spillage, resulting in a much more two-dimensional mixing layer.
A previous study in the UK demonstrated that the vibration response of a scaled-down model of a missile structure in a wind tunnel could be replicated in a laboratory setting with multiple shakers using an approach dubbed impedance matching. Here we demonstrate on a full-scale industrial structure that the random vibration induced by a laboratory acoustic environment can be nearly replicated at 37 internal accelerometers using six shakers. The voltage input to the shaker amplifiers is calculated using a regularized inverse of the square of the amplitude of the frequency response function matrix and the power spectral density responses of the 37 internal accelerometers. No cross power spectral density responses are utilized. The structure has hundreds of modes, and the simulation is performed out to 4000 Hz.
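A minimal sketch of the voltage-spectrum computation described above, assuming measured FRF magnitudes (37 responses by 6 drives) and target response PSDs are already in hand; the function name, regularization scheme, and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def shaker_input_psd(H, S_target, alpha=1e-3):
    """Least-squares shaker voltage PSDs from target response PSDs.

    H        : (n_freq, 37, 6) complex FRF matrix (accel per volt)
    S_target : (n_freq, 37) target response PSDs (g^2/Hz)
    alpha    : Tikhonov regularization factor
    Returns (n_freq, 6) input voltage PSDs. Only auto-spectra are used,
    mirroring the abstract's approach (no cross-spectra).
    """
    n_freq, _, n_in = H.shape
    S_in = np.empty((n_freq, n_in))
    for k in range(n_freq):
        A = np.abs(H[k])**2                 # |H|^2 maps input PSDs to output PSDs
        # Regularized normal equations: (A^T A + lam*I) x = A^T b
        AtA = A.T @ A
        lam = alpha * np.trace(AtA) / n_in  # scale-aware damping
        S_in[k] = np.linalg.solve(AtA + lam * np.eye(n_in), A.T @ S_target[k])
    return np.clip(S_in, 0.0, None)         # PSDs must be nonnegative
```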
We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering, and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.
Solving Laplacian linear systems is an important task in a variety of practical and theoretical applications. This problem is known to admit solvers that, in theory, perform linear times polylogarithmic work, but these algorithms are difficult to implement in practice. We examine existing solution techniques to determine the best methods currently available and the types of problems for which they are useful. We perform timing experiments using a variety of solvers on a variety of problems and present our results. We discover differing solver behavior between web graphs and a class of synthetic graphs designed to model them.
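As context for the timing experiments, a baseline iterative solve of a graph Laplacian system might look like the following sketch; the graph, preconditioner, and solver choice are illustrative and not the methods benchmarked in the paper:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Build the Laplacian L = D - A of a random sparse graph (illustrative).
n = 1000
A = sp.random(n, n, density=0.01, random_state=42)
A = ((A + A.T) > 0).astype(float)                  # symmetric 0/1 adjacency
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

# Laplacians are singular (constant null space), so keep b orthogonal to it.
b = np.random.default_rng(0).standard_normal(n)
b -= b.mean()

# Conjugate gradients with a simple Jacobi (diagonal) preconditioner, a
# typical baseline against which specialized Laplacian solvers are timed.
M = sp.diags(1.0 / (L.diagonal() + 1e-12))
x, info = spla.cg(L, b, M=M)
print("converged" if info == 0 else f"cg returned {info}")
```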
We have recently reported that two classes of time-dependent triaxial magnetic fields can induce vorticity in magnetic particle suspensions. The first class, symmetry-breaking fields, is comprised of two ac components and one dc component. The second class, rational triad fields, is comprised of three ac components. In both cases deterministic vorticity occurs when the ratios of the field frequencies form rational numbers. A counterintuitive aspect of these fields is that they generally produce fluid vorticity without having a circulating field vector, such as would occur in a rotating field. It has been shown, however, that the symmetry of the field trajectory, considered jointly with that of the converse field, allows vorticity to occur around one particular field axis. This axis might be any of the field components and is determined by the relative frequencies of the field components. However, the symmetry theories give no insight into why vorticity should occur. In this paper we propose a particle-based model of vorticity in these driven fluids. This model proposes that particles form volatile chains that follow, but lag behind, the dynamic field vector. This model is consistent with the predictions of symmetry theory and gives reasonable agreement with previously reported torque density measurements for a variety of triaxial fields.
It is well known that the derivative-based classical approach to strain is problematic when the displacement field is irregular, noisy, or discontinuous. Difficulties arise wherever the displacements are not differentiable. We present an alternative, nonlocal approach to calculating strain from digital image correlation (DIC) data that is well-defined and robust, even for the pathological cases that undermine the classical strain measure. This integral formulation for strain has no spatial derivatives and when the displacement field is smooth, the nonlocal strain and the classical strain are identical. We submit that this approach to computing strains from displacements will greatly improve the fidelity and efficacy of DIC for new application spaces previously untenable in the classical framework.
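A minimal one-dimensional illustration of the idea, assuming the nonlocal strain is an average of displacement difference quotients over a finite neighborhood; this is a stand-in for the paper's exact integral formulation:

```python
import numpy as np

def nonlocal_strain_1d(x, u, horizon):
    """Nonlocal strain: average of difference quotients over a neighborhood.
    No spatial derivative is taken, so the estimate stays finite even where
    the displacement field u is noisy or discontinuous."""
    eps = np.zeros_like(u)
    for i in range(len(x)):
        mask = (np.abs(x - x[i]) <= horizon) & (x != x[i])
        eps[i] = np.mean((u[mask] - u[i]) / (x[mask] - x[i]))
    return eps

# For a smooth field the nonlocal strain matches the classical derivative:
x = np.linspace(0.0, 1.0, 201)
u = 0.01 * x                                   # uniform 1% strain
print(nonlocal_strain_1d(x, u, horizon=0.05))  # ~0.01 everywhere
```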
Evermore sophisticated ductile plasticity and failure models demand experimental material characterization of shear behavior; yet, the mechanics community lacks a widely accepted, standard test method for shear-dominated deformation and failure of ductile metals. We investigated the use of the V-notched rail test, borrowed from the ASTM D7078 standard for shear testing of composites, for shear testing of Ti-6Al-4V titanium alloy sheet material, considering sheet rolling direction and quasi-static and transient load rates. In this paper, we discuss practical aspects of testing, modifications to the specimen geometry, and the experimental shear behavior of Ti-6Al-4V. Specimen installation, machine compliance, specimen-grip slip during testing, and specimen V-notched geometry all influenced the measured specimen behavior such that repeatable shear-dominated behavior was initially difficult to obtain. We will discuss the careful experimental procedure and set of measurements necessary to extract meaningful shear information for Ti-6Al-4V. We also evaluate the merits and deficiencies, including practicality of testing for engineering applications and quality of results, of the V-notched rail test for characterization of ductile shear behavior.
Waterborne pathogens pose a significant threat to the global population, and early detection plays an important role both in making drinking water safe and in the diagnosis and treatment of waterborne diseases. We present an innovative centrifugal sedimentation immunoassay platform for detection of bacterial pathogens in water. Our approach is based on binding of pathogens to antibody-functionalized capture particles followed by sedimentation of the particles through a density medium in a microfluidic disk. Beads at the distal end of the disk are imaged to quantify the fluorescence and determine the bacterial concentration. Our platform is fast (20 min), can detect as few as ~10 bacteria with minimal sample preparation, and can detect multiple pathogens simultaneously. The platform was used to detect a panel of enteric bacteria (Escherichia coli, Salmonella typhimurium, Shigella, Listeria, and Campylobacter) spiked into tap and ground water samples.
There has been considerable interest in the matching error for two-dimensional digital image correlation (2D-DIC), including the matching bias and variance; however, there are a number of other sources of error that must also be considered. These include temperature drift of the camera, out-of-plane sample motion, lack of perpendicularity, under-matched subset shape functions, and filtering of the results during the strain calculation. This talk will use experimental evidence to demonstrate some of these ignored error sources and compile a complete “notional” error budget for a typical 2D measurement.
We are developing the capability to track material changes through numerous possible steps of the manufacturing process, such as forging, machining, and welding. In this work, experimental and modeling results are presented for a multiple-step process in which an ingot of stainless steel 304L is forged at high temperature, then machined into a thin slice, and finally subjected to an autogenous GTA weld. The predictions of temperature, yield stress, and recrystallized volume fraction are compared to experimental results.
Simulation is a widely adopted method to analyze and predict the performance of large-scale parallel applications. Validating the hardware model is highly important for complex simulations with a large number of parameters. Common practice involves calculating the percent error between the projected and the real execution time of a benchmark program. However, in a high-dimensional parameter space, this coarse-grained approach often suffers from parameter insensitivity, which may not be known a priori. Moreover, the traditional approach cannot be applied to the validation of software models, such as application skeletons used in online simulations. In this work, we present a methodology and a toolset for validating both hardware and software models by quantitatively comparing fine-grained statistical characteristics obtained from execution traces. Although statistical information has been used in tasks like performance optimization, this is the first attempt to apply it to simulation validation. Our experimental results show that the proposed evaluation approach offers significant improvement in fidelity when compared to evaluation using total execution time, and that the proposed metrics serve as reliable criteria for progressing toward an automated simulation tuning process.
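A sketch of the kind of fine-grained statistical comparison described, assuming per-event duration samples have been extracted from the real and simulated traces; the event names and the Kolmogorov-Smirnov distance are illustrative stand-ins for the paper's specific metrics:

```python
import numpy as np
from scipy import stats

def trace_fidelity(real_events, sim_events):
    """Compare per-event duration distributions instead of total runtime.

    real_events / sim_events: dicts mapping an event name (e.g. "MPI_Send")
    to arrays of observed durations. Returns per-event KS distances; small
    values mean the simulated distribution matches the measured one.
    """
    shared = real_events.keys() & sim_events.keys()
    return {name: stats.ks_2samp(real_events[name], sim_events[name]).statistic
            for name in shared}

rng = np.random.default_rng(1)
real = {"MPI_Send": rng.gamma(2.0, 1.0, 5000)}
sim = {"MPI_Send": rng.gamma(2.1, 1.0, 5000)}
print(trace_fidelity(real, sim))
```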
Experiments were performed to characterize the mechanical response of several different rigid polyurethane foams to large deformation. In these experiments, the effects of load path, loading rate, and temperature were investigated. Results from these experiments indicated that rigid polyurethane foams exhibit significant damage as well as volumetric and deviatoric plasticity when they are compressed. Rigid polyurethane foams were also found to be strongly strain-rate- and temperature-dependent. These foams are also rather brittle and crack when loaded to small strains in tension or to larger strains in compression. Thus, a phenomenological Unified Creep Plasticity Damage (UCPD) model was developed to describe the mechanical response of these foams to large deformation at a variety of temperatures and strain rates. This paper includes a description of recent experiments and experimental findings. Next, development of a UCPD model for rigid polyurethane foams is described. Finite element simulations with the new UCPD model are compared with experimental results to show the behavior that can be captured with this model.
Glass forming materials like polymers exhibit a variety of complex, nonlinear, time-dependent relaxations in volume, enthalpy and stress, all of which affect material performance and aging. Durable product designs rely on the capability to predict accurately how these materials will respond to mechanical loading and temperature regimes over prolonged exposures to operating environments. This cannot be achieved by developing a constitutive framework to fit only one or two types of experiments. Rather, it requires a constitutive formalism that is quantitatively predictive to engineering accuracy for the broad range of observed relaxation behaviors. Moreover, all engineering analyses must be performed from a single set of material model parameters. The rigorous nonlinear viscoelastic Potential Energy Clock (PEC) model and its engineering phenomenological equivalent, the Simplified Potential Energy Clock (SPEC) model, were developed to fulfill such roles and have been applied successfully to thermoplastics and filled and unfilled thermosets. Recent work has provided an opportunity to assess the performance of the SPEC model in predicting the viscoelastic behavior of an inorganic sealing glass. This presentation will overview the history of PEC and SPEC and describe the material characterization, model calibration and validation associated with the high Tg (~460 °C) sealing glass.
To analyze the stresses and strains generated during the solidification of glass-forming materials, stress and volume relaxation must be predicted accurately. Although the modeling attributes required to depict physical aging in organic glassy thermosets strongly resemble the structural relaxation in inorganic glasses, the historical modeling approaches have been distinctly different. To determine whether a common constitutive framework can be applied to both classes of materials, the nonlinear viscoelastic simplified potential energy clock (SPEC) model, developed originally for glassy thermosets, was calibrated for the Schott 8061 inorganic glass and used to analyze a number of tests. A practical methodology for material characterization and model calibration is discussed, and the structural relaxation mechanism is interpreted in the context of SPEC model constitutive equations. SPEC predictions compared to inorganic glass data collected from thermal strain measurements and creep tests demonstrate the ability to achieve engineering accuracy and make the SPEC model feasible for engineering applications involving a much broader class of glassy materials.
Iridium alloys have been utilized as structural materials for certain high-temperature applications due to their superior strength and ductility at elevated temperatures. In some applications where the iridium alloys are subjected to high-temperature and high-speed impact simultaneously, the high-temperature high-strain-rate mechanical properties of the iridium alloys must be fully characterized to understand the mechanical response of the components in these severe applications. In this study, the room-temperature Kolsky tension bar was modified to characterize a DOP-26 iridium alloy in tension at elevated strain rates and temperatures. The modifications include (1) a unique cooling system to cool down the bars while the specimen was heated to high temperatures with an induction heater; (2) a small-force pre-tension system to compensate for the effect of thermal expansion in the high-temperature tensile specimen; (3) a laser system to directly measure the displacements at both ends of the tensile specimen independently; and (4) a pair of high-sensitivity semiconductor strain gages to measure the weak transmitted force. The dynamic high-temperature tensile stress-strain curves of the iridium alloy were experimentally obtained with the modified high-temperature Kolsky tension bar techniques at two different strain rates (~1000 and 3000 s^-1) and temperatures (~750 and 1030 °C).
Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks, and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the experiments and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Finally, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
Huang, Maoyi; Ray, Jaideep; Hou, Zhangshuan; Ren, Huiying; Swiler, Laura
The Community Land Model (CLM) has been widely used in climate and Earth system modeling. Accurate estimation of model parameters is needed for reliable model simulations and predictions under current and future conditions, respectively. In our previous work, a subset of hydrological parameters has been identified to have significant impact on surface energy fluxes at selected flux tower sites based on parameter screening and sensitivity analysis, which indicate that the parameters could potentially be estimated from surface flux observations at the towers. To date, such estimates do not exist. In this paper, we assess the feasibility of applying a Bayesian model calibration technique to estimate CLM parameters at selected flux tower sites under various site conditions. The parameters are estimated as a joint probability density function (PDF) that provides estimates of uncertainty of the parameters being inverted, conditional on climatologically average latent heat fluxes derived from observations. We find that the simulated mean latent heat fluxes from CLM using the calibrated parameters are generally improved at all sites when compared to those obtained with CLM simulations using default parameter sets. Further, our calibration method also results in credibility bounds around the simulated mean fluxes which bracket the measured data. The modes (or maximum a posteriori values) and 95% credibility intervals of the site-specific posterior PDFs are tabulated as suggested parameter values for each site. Analysis of relationships between the posterior PDFs and site conditions suggests that the parameter values are likely correlated with the plant functional type, which needs to be confirmed in future studies by extending the approach to more sites.
At the completion of the National Ignition Campaign (NIC), the National Ignition Facility (NIF) had about 36 different types of diagnostics. These were based on several decades of development on Nova and OMEGA and involved the whole U.S. inertial confinement fusion community. In 1994, the Joint Central Diagnostic Team documented a plan for a limited set of NIF diagnostics in the NIF Conceptual Design Report. Two decades later, these diagnostics, and many others, had been installed and serve as workhorse tools for all users of NIF. We give a short description of each of the 36 different types of NIC diagnostics, grouped by function, namely, target drive, target response and target assembly, stagnation, and burn. A comparison of NIF diagnostics with the Nova diagnostics shows that the NIF diagnostic capability is broadly equivalent to that of Nova in 1999. Although NIF diagnostics have a much greater degree of automation and rigor than Nova's, some new diagnostics, such as a higher-speed X-ray imager, remain limited. Recommendations for future diagnostics on the NIF are discussed.
Rift Valley fever virus (RVFV) is an arbovirus within the Bunyaviridae family capable of causing serious morbidity and mortality in humans and livestock. To identify host factors involved in bunyavirus replication, we employed genome-wide RNA interference (RNAi) screening and identified 381 genes whose knockdown reduced infection. The Wnt pathway was the most represented pathway when gene hits were functionally clustered. With further investigation, we found that RVFV infection activated Wnt signaling, was enhanced when Wnt signaling was preactivated, was reduced with knockdown of β-catenin, and was blocked using Wnt signaling inhibitors. Similar results were found using distantly related bunyaviruses La Crosse virus and California encephalitis virus, suggesting a conserved role for Wnt signaling in bunyaviral infection. We propose a model where bunyaviruses activate Wnt-responsive genes to regulate optimal cell cycle conditions needed to promote efficient viral replication. The findings in this study should aid in the design of efficacious host-directed antiviral therapeutics.
Molecular motor-driven self-assembly has been an active area of soft matter research for the past decade. Because molecular motors transform chemical energy into mechanical work, systems which employ molecular motors to drive self-assembly processes are able to overcome kinetic and thermodynamic limits on assembly time, size, complexity, and structure. Here, we review the progress in elucidating and demonstrating the rules and capabilities of motor-driven active self-assembly. We focus on the types of structures created and the degree of control realized over these structures, and discuss the next steps necessary to achieve the full potential of this assembly mode which complements robotic manipulation and passive self-assembly.
We report experimental results and simulations showing efficient laser energy coupling into plasmas at conditions relevant to the magnetized liner inertial fusion (MagLIF) concept. In MagLIF, to limit convergence and increase the hydrodynamic stability of the implosion, the fuel must be efficiently preheated. To determine the efficiency and physics of preheating by a laser, an Ar plasma with n_e/n_crit ~ 0.04 is irradiated by a multi-ns, multi-kJ, 0.35-μm, phase-plate-smoothed laser at spot-averaged intensities ranging from 1.0×10^14 to 2.5×10^14 W/cm^2 and pulse widths from 2 to 10 ns. Time-resolved x-ray images of the laser-heated plasma are compared to two-dimensional radiation-hydrodynamic simulations that show agreement with the propagating emission front, a comparison that constrains laser energy deposition to the plasma. The experiments show that long-pulse, modest-intensity (I = 1.5×10^14 W/cm^2) beams can efficiently couple energy (~82% of the incident energy) to MagLIF-relevant long-length (9.5 mm) underdense plasmas. The demonstrated heating efficiency is significantly higher than is thought to have been achieved in early integrated MagLIF experiments [A. B. Sefkow, Phys. Plasmas 21, 072711 (2014), doi:10.1063/1.4890298].
The 2014 Sandia Verification & Validation Challenge Workshop was held at the 3rd ASME Verification & Validation Symposium in Las Vegas on May 5-8, 2014. The workshop was built around a challenge problem, formulated as an engineering investigation that required integration of experimental data, modeling and simulation, and verification and validation. The challenge problem served as a common basis for participants to both demonstrate methodology and explore a critical aspect of the field: the role of verification and validation in establishing credibility and supporting decision making. Ten groups presented responses to the challenge problem at the workshop, and the follow-on efforts are documented in this special edition of the ASME Journal of Verification, Validation, and Uncertainty Quantification.
The objective of this work was twofold: (1) measure reliable fatigue crack growth relationships for X65 steel and its girth weld in high-pressure hydrogen gas to enable structural integrity assessments of hydrogen pipelines, and (2) evaluate the hydrogen accelerated fatigue crack growth susceptibility of the weld fusion zone and heat-affected zone relative to the base metal. Fatigue crack growth relationships (da/dN versus ΔK) were measured for girth welded X65 pipeline steel in high pressure hydrogen gas (21 MPa) and in air. Hydrogen assisted fatigue crack growth was observed for the base metal (BM), fusion zone (FZ), and heat-affected zone (HAZ), and was manifested through crack growth rates reaching nearly an order of magnitude acceleration over rates in air. At higher ΔK values, crack growth rates of BM, FZ, and HAZ were coincident; however, at lower ΔK, the fatigue crack growth relationships exhibited some divergence with the fusion zone having the highest crack growth rates. These relative fatigue crack growth rates in the BM, FZ, and HAZ were provisional, however, since both crack closure and residual stress contributed to the crack-tip driving force in specimens extracted from the HAZ. Despite the relatively high applied R-ratio (R = 0.5), crack closure was detected in the heat affected zone tests, in contrast to the absence of crack closure in the base metal tests. Crack closure corrections were performed using the adjusted compliance ratio method and the effect of residual stress on Kmax was determined by the crack-compliance method. Crack-tip driving forces that account for closure and residual stress effects were quantified as a weighted function of ΔK and Kmax (i.e., Knorm), and the resulting da/dN versus Knorm relationships showed that the HAZ exhibited higher hydrogen accelerated fatigue crack growth rates than the BM at lower Knorm values.
As discussed in the previous chapter, the purpose of peridynamics is to unify the mechanics of continuous media, continuous media with evolving discontinuities, and discrete particles. To accomplish this, peridynamics avoids the use of partial derivatives of the deformation with respect to spatial coordinates. Instead, it uses integral equations that remain valid on discontinuities. Discrete particles, as will be discussed later in this chapter, are treated using Dirac delta functions.
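For reference, the bond-based peridynamic equation of motion has the standard form

$$\rho(\mathbf{x})\,\ddot{\mathbf{u}}(\mathbf{x},t) = \int_{\mathcal{H}_{\mathbf{x}}} \mathbf{f}\big(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\; \mathbf{x}'-\mathbf{x}\big)\, dV_{\mathbf{x}'} + \mathbf{b}(\mathbf{x},t),$$

where H_x is the neighborhood (horizon) of x, f is the pairwise bond force density, and b is the body force density. Because no spatial derivatives of u appear, the equation remains valid across cracks and other discontinuities (notation here may differ slightly from the chapter's).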
Small or moderate-weight space launches could significantly benefit from an electrically powered launch complex based on an electromagnetic coil launcher. This paper presents results of studies to estimate the required launcher parameters and the cost of such a launch facility. The study is based on electromagnetic launch (electromagnetic gun) technology constrained to a coaxial geometry to take advantage of the efficiency of closely-coupled coils. This geometry, along with reasonable constraints on the length and power requirements for the launcher, matches most naturally to relatively small satellites in low-earth orbits. The launcher energy and power requirements fall in the range of 40–260 GJ and 20–400 GW electric. Parametric evaluations have been conducted with a launcher length of 1–2 km, exit velocity of 1–6 km/s, and payloads of 100–1000 kg. The launch requires high acceleration, so the satellite package must be hardened. The EM launch complex could greatly reduce the amount of fuel handling, reduce the turn-around time between launches, allow more concurrence in launch preparation, reduce the manpower requirements for launch vehicle preparation, and increase the reliability of launch by using more standardized vehicle preparations.
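As a rough consistency check of these figures (our arithmetic, not stated in the paper), the muzzle kinetic energy of the heaviest, fastest payload quoted is

$$E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2}\,(1000\ \mathrm{kg})(6000\ \mathrm{m/s})^2 = 1.8\times10^{10}\ \mathrm{J} = 18\ \mathrm{GJ},$$

so the quoted 40–260 GJ of stored launcher energy implies substantial additional mass in the launch package (armature, sabot, structure) plus electrical-to-kinetic conversion losses.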
A systematic approach to developing compact reduced reaction models is proposed for liquid hydrocarbon fuels using n-dodecane and n-butane as the model fuels. The approach has three elements. Fast fuel cracking reactions are treated by the quasi-steady state approximation (QSSA) and lumped into semi-global reactions to yield key cracking products that are C1-C4 in size. Directed relation graph (DRG) and sensitivity analysis reduce the foundational fuel chemistry model to a skeletal model describing the oxidation of the C1-C4 compounds. Timescale-based reduction using, e.g., QSSA, is then employed to produce the final reduced model. For n-dodecane, a 24-species reduced model is derived from JetSurF and tested against the detailed model for auto-ignition, perfectly stirred reactors (PSR), premixed flame propagation, and extinction of premixed and non-premixed counterflow flames. It is shown that the QSSA of fuel cracking reactions is valid and robust under high-temperature conditions from laminar flames, where mixing is controlled by molecular diffusion, to perfectly stirred reactors, which correspond to the limit of fast turbulent mixing. Bifurcation analysis identifies the controlling processes of ignition and extinction and shows that these phenomena are insensitive to the details of fuel cracking. To verify the applicability of the above finding to turbulent flames, 2-D direct numerical simulation (DNS) of a lean turbulent premixed flame of n-butane/air with Karlovitz number of 250 was carried out using a reduced model developed from USC-Mech II. The results show that QSSA for fuel cracking remains valid even under intense turbulence conditions. Statistical analysis of the DNS data shows that fuel cracking is complete before the flame zone, and for the conditions tested, turbulent transport does not bring any significant fuel molecules into the flame zones, thus further substantiating the validity of the approach proposed.
Three-dimensional deformation of rupture discs subjected to gas-dynamic shock loading was measured using a stereomicroscope digital image correlation (DIC) system. One-dimensional blast waves generated with a small-diameter, explosively driven shock tube were used to study the fluid-structure interactions that exist when the waves are incident onto relatively low-strength rupture discs. Prior experiments have shown that subjecting the 0.64-cm-diameter, stainless steel rupture discs to shock waves of varying strength results in a range of responses, from no rupture to shear at the outer weld diameter. In this work, the outer surfaces of the rupture discs were prepared for DIC using 100–150 μm-sized speckles and illuminated with a xenon flashlamp. Two synchronized Shimadzu HPV-2 cameras coupled to an Olympus microscope captured stereo-image sequences of rupture disc behavior at frame rates of 1 MHz. Image correlation performed on the stereo-images resulted in spatially resolved surface deformation. The experimental facility, specifics of the DIC diagnostic technique, and the temporal deformation and velocity of the surface of a rupturing disc are presented.
This paper describes the challenge problem associated with the 2014 Sandia Verification and Validation (V&V) Challenge Workshop. The problem was developed to highlight core issues in V&V of engineering models. It is intended as an analog to projects currently underway at the Sandia National Laboratories—in other words, a realistic case study in applying V&V methods and integrating information from experimental data and simulations to support decisions. The problem statement includes the data, model, and directions for participants in the challenge. In addition, the workings of the provided code and the “truth model” used to create the data are revealed. The code, data, and truth model are available in this paper.
Banded ferrite-pearlite X65 pipeline steel was tested in high pressure hydrogen gas to evaluate the effects of oriented pearlite on hydrogen assisted fatigue crack growth. Test specimens were oriented in the steel pipe such that cracks propagated either parallel or perpendicular to the banded pearlite. The ferrite-pearlite microstructure exhibited orientation dependent behavior in which fatigue crack growth rates were significantly lower for cracks oriented perpendicular to the banded pearlite compared to cracks oriented parallel to the bands. The reduction of hydrogen assisted fatigue crack growth across the banded pearlite is attributed to a combination of crack-tip branching and impeded hydrogen diffusion across the banded pearlite.
The contribution of each component of a power generation plant to the levelized cost of energy (LCOE) can be estimated and used to increase the power output while reducing system operation and maintenance costs. Here, the LCOE is used to quantify the influence of solar receiver coatings on the LCOE of solar power towers. Two new parameters are introduced: the absolute levelized cost of coating (LCOC) and the LCOC efficiency. Depending on the material properties, aging, costs, and temperature, the absolute LCOC enables quantifying the cost-effectiveness of absorber coatings, as well as finding optimal operating conditions. The absolute LCOC is investigated for different hypothetical coatings and is demonstrated on Pyromark 2500 paint. Results show that absorber coatings yield lower LCOE values in most cases, even at significant costs. Optimal reapplication intervals range from one to five years. At receiver temperatures greater than 700 °C, non-selective coatings are not always worthwhile, while durable selective coatings consistently reduce the LCOE, by up to 12% of the value obtained for an uncoated receiver. The absolute LCOC is a powerful tool to characterize and compare different coatings, considering not only their initial efficiencies but also their durability.
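A toy version of the LCOC idea, defined here purely for illustration as the change in plant LCOE caused by adding a coating; all names and numbers are hypothetical, not the paper's definitions:

```python
def lcoe(annual_cost, annual_energy_mwh):
    """Levelized cost of energy in $/MWh."""
    return annual_cost / annual_energy_mwh

def absolute_lcoc(base_cost, base_energy, coating_cost, energy_gain):
    """Absolute levelized cost of coating: LCOE(coated) - LCOE(uncoated)."""
    return (lcoe(base_cost + coating_cost, base_energy + energy_gain)
            - lcoe(base_cost, base_energy))

# Hypothetical plant: $20M/yr total cost, 200 GWh/yr output; the coating
# costs $50k/yr amortized over its reapplication interval and raises
# receiver absorptance enough to add 6 GWh/yr of generation.
print(absolute_lcoc(20e6, 200e3, 50e3, 6e3))  # negative => coating pays off
```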
Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than those of the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this paper that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. Finally, we benchmark performance and demonstrate our improved numerical accuracy.
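For concreteness, the canonical fast algorithm is Strassen's, which forms seven recursive block products instead of eight; a minimal recursive sketch for power-of-two sizes with an illustrative cutoff (the paper's diagonal-scaling refinements are not shown):

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen multiply for n x n matrices, n a power of two."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                    # classical multiply at small sizes
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.max(np.abs(strassen(A, B) - A @ B)))  # small, but above classical error
```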
The model for penetration of a wire braid is rigorously formulated. Integral formulas are developed from energy principles for both self and transfer immittances in terms of potentials for the fields. The detailed boundary value problem for the wire braid is also set up in a very efficient manner; the braid wires act as sources for the potentials in the form of a sequence of line multipoles with unknown coefficients that are determined by means of conditions arising from the wire surface boundary conditions. Approximations are introduced to relate the local properties of the braid wires to a simplified infinite periodic planar geometry. This is used to treat nonuniform coaxial geometries including eccentric interior coaxial arrangements and an exterior ground plane.
Riley, Zachary B.; Deshmukh, Rohit; Miller, Brent A.; Mcnamara, Jack J.; Casper, Katya M.
The inherent relationship between boundary-layer stability, aerodynamic heating, and surface conditions makes the potential for interaction between the structural response and boundary-layer transition an important and challenging area of study in high-speed flows. This paper phenomenologically explores this interaction using a fundamental two-dimensional aerothermoelastic model under the assumption of an aluminum panel with simple supports. Specifically, an existing model is extended to examine the impact of transition onset location, transition length, and transitional overshoot in heat flux and fluctuating pressure on the structural response of surface panels. Transitional flow conditions are found to yield significantly increased thermal gradients, and they can result in higher maximum panel temperatures compared to turbulent flow. Results indicate that overshoot in heat flux and fluctuating pressure reduces the flutter onset time and increases the strain energy accumulated in the panel. Furthermore, overshoot occurring near the midchord can yield average temperatures and peak displacements exceeding those experienced by the panel subject to turbulent flow. These results suggest that fully turbulent flow does not always conservatively predict the thermo-structural response of surface panels.
It has recently been reported that two types of triaxial electric or magnetic fields can drive vorticity in dielectric or magnetic particle suspensions, respectively. The first type - symmetry-breaking rational fields - consists of three mutually orthogonal fields, two alternating and one dc, and the second type - rational triads - consists of three mutually orthogonal alternating fields. In each case it can be shown through experiment and theory that the fluid vorticity vector is parallel to one of the three field components. For any given set of field frequencies this axis is invariant, but the sign and magnitude of the vorticity (at constant field strength) can be controlled by the phase angles of the alternating components and, at least for some symmetry-breaking rational fields, the direction of the dc field. In short, the locus of possible vorticity vectors is a 1-d set that is symmetric about zero and is along a field direction. In this paper we show that continuous, 3-d control of the vorticity vector is possible by progressively transitioning the field symmetry by applying a dc bias along one of the principal axes. Such biased rational triads are a combination of symmetry-breaking rational fields and rational triads. A surprising aspect of these transitions is that the locus of possible vorticity vectors for any given field bias is extremely complex, encompassing all three spatial dimensions. As a result, the evolution of a vorticity vector as the dc bias is increased is complex, with large components occurring along unexpected directions. More remarkable are the elaborate vorticity vector orbits that occur when one or more of the field frequencies are detuned. These orbits provide the basis for highly effective mixing strategies wherein the vorticity axis periodically explores a range of orientations and magnitudes.
Extension springs are used to apply a constant force at a set displacement in a wide variety of components. When subjected to an abnormal thermal event, such as in a fire, the load carrying capacity of these springs can degrade. In this study, relaxation tests were conducted on extension springs where the heating rate and dwell temperature were varied to investigate the reduction in force provided by the springs. Two commonly used spring material types were tested, 304 stainless steel and Elgiloy, a cobalt-chrome-nickel alloy. Challenges associated with obtaining accurate spring response to an abnormal thermal event are discussed. The resulting data can be used to help develop and test models for thermally activated creep in springs and to provide designers with recommendations to help ensure the reliability of the springs for the duration of the thermal event.
The Mount Simon Sandstone and Eau Claire Formation represent a potential reservoir-caprock system for wastewater disposal, geologic CO2 storage, and compressed air energy storage (CAES) in the Midwestern United States. A primary concern to site performance is heterogeneity in rock properties that could lead to nonideal injectivity and distribution of injected fluids (e.g., poor sweep efficiency). Using core samples from the Dallas Center domal structure, Iowa, we investigate pore characteristics that govern flow properties of major lithofacies of these formations. Methods include gas porosimetry and permeametry, mercury intrusion porosimetry, thin section petrography, and X-ray diffraction. The lithofacies exhibit highly variable intraformational and interformational distributions of pore throat and body sizes. Based on pore-throat size, there are four distinct sample groups. Micropore-throat-dominated samples are from the Eau Claire Formation, whereas the macropore-dominated, mesopore-dominated, and uniform-dominated samples are from the Mount Simon Sandstone. Complex paragenesis governs the high degree of pore and pore-throat size heterogeneity, due to an interplay of precipitation, nonuniform compaction, and later dissolution of cements. The cement dissolution event probably accounts for much of the current porosity in the unit. Mercury intrusion porosimetry data demonstrate that the heterogeneous nature of the pore networks in the Mount Simon Sandstone results in a greater than normal opportunity for reservoir capillary trapping of nonwetting fluids, as quantified by CO2 and air column heights that vary over three orders of magnitude, which should be taken into account when assessing the potential of the reservoir-caprock system for waste disposal (CO2 or produced water) and resource storage (natural gas and compressed air). Our study quantitatively demonstrates the significant impact of millimeter-scale to micron-scale porosity heterogeneity on flow and transport in reservoir sandstones.
Aerosol deposition (AD) is a solid-state deposition technology that has been developed to fabricate ceramic coatings nominally at room temperature. Sub-micron ceramic particles accelerated by pressurized gas impact, deform, and consolidate on substrates under vacuum. Ceramic particle consolidation in AD coatings is highly dependent on particle deformation and bonding; these behaviors are not well understood. In this work, atomistic simulations and in situ micro-compressions in the scanning electron microscope, and the transmission electron microscope (TEM) were utilized to investigate fundamental mechanisms responsible for plastic deformation/fracture of particles under applied compression. Results showed that highly defective micron-sized alumina particles, initially containing numerous dislocations or a grain boundary, exhibited no observable shape change before fracture/fragmentation. Simulations and experimental results indicated that particles containing a grain boundary only accommodate low strain energy per unit volume before crack nucleation and propagation. In contrast, nearly defect-free, sub-micron, single crystal alumina particles exhibited plastic deformation and fracture without fragmentation. Dislocation nucleation/motion, significant plastic deformation, and shape change were observed. Simulation and TEM in situ micro-compression results indicated that nearly defect-free particles accommodate high strain energy per unit volume associated with dislocation plasticity before fracture. The identified deformation mechanisms provide insight into feedstock design for AD.
Vertical GaN power diodes with a bilayer edge termination (ET) are demonstrated. The GaN p-n junction is formed on a low threading dislocation defect density (10^4 - 10^5 cm^-2) GaN substrate, and has a 15-μm-thick n-type drift layer with a free carrier concentration of 5 × 10^15 cm^-3. The ET structure is formed by N implantation into the p+-GaN epilayer just outside the p-type contact to create compensating defects. The implant defect profile may be approximated by a bilayer structure consisting of a fully compensated layer near the surface, followed by a 90% compensated (p) layer near the n-type drift region. These devices exhibit avalanche breakdown as high as 2.6 kV at room temperature. Simulations show that the ET created by implantation is an effective way to laterally distribute the electric field over a large area. This increases the voltage at which impact ionization occurs and leads to the observed higher breakdown voltages.
During the Frio-I Brine Pilot CO2 injection experiment in 2004, distinct geochemical changes in response to the injection of 1600 tons of CO2 were recorded in brine samples collected from the monitoring well. Previous geochemical modeling studies have considered dissolution of calcite and iron oxyhydroxides, or release of adsorbed iron, as the most likely sources of the increased ion concentrations. In this modeling study we explore possible alternative sources of the increasing calcium and iron, based on the data from the detailed petrographic characterization of the Upper Frio Formation "C". Particularly, we evaluate whether dissolution of pyrite and oligoclase (anorthite component) can account for the observed geochemical changes. Due to kinetic limitations, dissolution of pyrite and anorthite cannot account for the increased iron and calcium concentrations on the time scale of the field test (10 days). However, dissolution of these minerals is contributing to carbonate and clay mineral precipitation on the longer time scales (1000 years). We estimated that during the field test dissolution of calcite and iron oxide resulted in ~0.02 wt.% loss of the reservoir rock mass. The reactive transport models were constructed for temperatures of 25 and 59°C, using Pitzer and B-dot activity correction methods. These models predict carbonate minerals, dolomite and ankerite, as well as clay minerals kaolinite, nontronite and montmorillonite, will precipitate in the Frio Formation "C" sandstone as the system progresses toward chemical equilibrium during a 1000-year period. Cumulative uncertainties associated with using different thermodynamic databases, activity correction models (Pitzer vs. B-dot), and extrapolating to reservoir temperature, are manifested in the difference in the predicted mineral phases. However, these models are consistent with regards to the total volume of mineral precipitation and porosity values, which are predicted to within 0.002%.
Balzuweit, Evan; Bunde, David P.; Leung, Vitus J.; Finley, Austin; Lee, Alan C.S.
We present a local search strategy to improve the coordinate-based mapping of a parallel job's tasks to the MPI ranks of its parallel allocation in order to reduce network congestion and the job's communication time. The goal is to reduce the number of network hops between communicating pairs of ranks. Our target is applications with a nearest-neighbor stencil communication pattern running on mesh systems with non-contiguous processor allocation, such as Cray XE and XK Systems. Using the miniGhost mini-app, which models the shock physics application CTH, we demonstrate that our strategy reduces application running time while also reducing the runtime variability. We further show that mapping quality can vary based on the selected allocation algorithm, even between allocation algorithms of similar apparent quality.
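A generic sketch of such a local search, assuming a rank-to-node mapping, the communication pattern given as rank pairs, and mesh coordinates per node; the acceptance rule and full cost recomputation are simplifications of the paper's strategy:

```python
import random

def hops(a, b):
    """Manhattan hop distance between two mesh coordinates."""
    return sum(abs(x - y) for x, y in zip(a, b))

def total_hops(mapping, comm_pairs, coords):
    """Total network hops over all communicating rank pairs."""
    return sum(hops(coords[mapping[i]], coords[mapping[j]])
               for i, j in comm_pairs)

def local_search(mapping, comm_pairs, coords, iters=10000, seed=0):
    """Swap two ranks' node assignments; keep the swap if it reduces hops."""
    rng = random.Random(seed)
    best = total_hops(mapping, comm_pairs, coords)
    for _ in range(iters):
        i, j = rng.sample(range(len(mapping)), 2)
        mapping[i], mapping[j] = mapping[j], mapping[i]
        cost = total_hops(mapping, comm_pairs, coords)
        if cost < best:
            best = cost
        else:
            mapping[i], mapping[j] = mapping[j], mapping[i]  # undo the swap
    return mapping, best
```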
This work describes the energy dissipation arising from microslip for an elastic shell incorporating shear and longitudinal deformation resting on a rough-rigid foundation. This phenomenon is investigated using finite element (FE) analysis and nonlinear geometrically exact shell theory. Both approaches illustrate the effect of shear within the shell and observe a reduction in the energy dissipated from microslip as compared to a similar system neglecting shear deformation. In particular, it is found that the shear deformation allows for load to be transmitted beyond the region of slip so that the entire interface contributes to the load carrying capability of the shell. The energy dissipation resulting from the shell model is shown to agree well with that arising from the FE model, and this representation can be used as a basis for reduced order models that capture the microslip phenomenon.
Nishawala, Vinesh V.; Ostoja-Starzewski, Martin; Leamy, Michael J.; Demmie, Paul N.
Peridynamics is a non-local continuum mechanics formulation that can handle spatial discontinuities, as the governing equations are integro-differential equations which do not involve gradients such as strains and deformation rates. This paper employs bond-based peridynamics. The cellular automata approach is a local computational method which, in its rectangular variant on interior domains, is mathematically equivalent to the central-difference finite difference method. However, it does not require derivation of the governing partial differential equations and provides for common boundary conditions based on physical reasoning. Both methodologies are used to solve a half-space subjected to a normal load, known as Lamb's problem. The results are compared with the theoretical solution from classical elasticity and with experimental results. This paper is used to validate our implementation of these methods.
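To make the gradient-free character of the method concrete, a minimal explicit bond-based peridynamic bar is sketched below; the micromodulus, load, and discretization are illustrative and unrelated to the paper's Lamb's-problem setup:

```python
import numpy as np

# 1-D bond-based peridynamic bar with explicit time integration.
n, dx, horizon = 200, 1e-3, 3e-3
E, rho = 200e9, 7800.0                  # steel-like modulus and density
c = 2.0 * E / horizon**2                # illustrative 1-D micromodulus
x = np.arange(n) * dx
u, v = np.zeros(n), np.zeros(n)
dt = 0.2 * dx / np.sqrt(E / rho)        # fraction of a CFL-like limit

nbrs = [np.where((np.abs(x - x[i]) <= horizon) & (np.arange(n) != i))[0]
        for i in range(n)]

for step in range(1000):
    f = np.zeros(n)
    for i in range(n):
        xi = x[nbrs[i]] - x[i]          # reference bond vectors
        s = (u[nbrs[i]] - u[i]) / xi    # bond stretches (no gradients used)
        f[i] = np.sum(c * s * np.sign(xi)) * dx   # force density on node i
    f[-1] += 1e9                        # body-force density on the right end
    v += dt * f / rho
    u += dt * v
```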
Throughout the development cycle of structural components or assemblies that require new and unproven manufacturing techniques, the issue of unit-to-unit variability inevitably arises. The challenge of defining dynamic similarity between units is a problem that is often overlooked or forgotten, but can be very important depending on the functional criteria of the final product. This work aims to provide some guidance on the approach to such a problem, utilizing different methodologies from the modal and vibration testing community. Expanding on previous efforts, a non-intrusive dynamic characterization test is defined to assess similarity on an assembly that is currently being developed. As the assembly is qualified through various test units, the same data sets are taken to build a database of “similarity” data. The work presented here will describe the challenges observed with defining similarity metrics on a multi-body structure with a limited quantity of test units. Also, two statistical characterizations of dynamic FRFs are presented, from which one may choose criteria, based on engineering judgment, to establish whether units are in or out of family. The methods may be used when the “intended purpose” or “functional criteria” are unknown.
Experimental dynamic substructuring is a means whereby a mathematical model for a substructure can be obtained experimentally and then coupled to a model for the rest of the assembly to predict the response. Recently, various methods have been proposed that use a transmission simulator to overcome sensitivity to measurement errors and to exercise the interface between the substructures, including the Craig-Bampton, Dual Craig-Bampton, and Craig-Mayes methods. This work compares the advantages and disadvantages of these reduced order modeling strategies for two dynamic substructuring problems. The methods are first used on an analytical beam model to validate the methodologies. Then they are used to obtain an experimental model for a structure consisting of a cylinder with several components inside, connected to the outside case by foam with uncertain properties. This represents an exceedingly difficult structure to model, and so experimental substructuring could be an attractive way to obtain a model of the system.
Magnetohydrodynamic (MHD) representations are used to model a wide range of plasma physics applications and are characterized by a nonlinear system of partial differential equations that strongly couples a charged fluid with the evolution of electromagnetic fields. The resulting linear systems that arise from discretization and linearization of the nonlinear problem are generally difficult to solve. In this paper, we investigate multigrid preconditioners for this system. We consider two well-known multigrid relaxation methods for incompressible fluid dynamics: Braess-Sarazin relaxation and Vanka relaxation. We first extend these to the context of steady-state one-fluid viscoresistive MHD. Then we compare the two relaxation procedures within a multigrid-preconditioned GMRES method employed within Newton's method. To isolate the effects of the different relaxation methods, we use structured grids, inf-sup stable finite elements, and geometric interpolation. We present convergence and timing results for a two-dimensional, steady-state test problem.
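For reference, Braess-Sarazin relaxation approximates the (1,1) block of the saddle-point system by a scaled diagonal tD (t > 1, D = diag(A)) and solves the resulting approximate system exactly for the correction, which reduces to a Schur-complement solve for the pressure update:

$$\begin{pmatrix} tD & B^{T} \\ B & 0 \end{pmatrix} \begin{pmatrix} \delta u \\ \delta p \end{pmatrix} = \begin{pmatrix} r_{u} \\ r_{p} \end{pmatrix}, \qquad B\,(tD)^{-1} B^{T}\, \delta p = B\,(tD)^{-1} r_{u} - r_{p}.$$

Vanka relaxation instead solves small overlapping local saddle-point problems, one per element or patch. (These are the standard Stokes-like forms; the viscoresistive MHD blocks treated in the paper are more involved.)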
Broadband impact excitation in structural dynamics is a common technique used to detect and characterize nonlinearities in mechanical systems since it excites many frequencies of a structure at once. Non-stationary time signals from transient ring-down measurements require time-frequency analysis tools to observe variations in frequency and energy dissipation as the response evolves. This work uses the short-time Fourier transform to estimate the instantaneous parameters from measured or simulated data. By combining the discrete Fourier transform with an expanding or contracting window function that moves along the time axis, the resulting spectra are used to estimate the instantaneous frequencies, damping ratios and complex Fourier coefficients. This method is demonstrated on a multi-degree-of-freedom beam with a cubic spring attachment. The amplitude-frequency dependence in the damped response is compared to the undamped nonlinear normal modes. A second example shows the results from experimental ring-down measurements taken on a beam with a lap joint, revealing how the mechanical interface introduces nonlinear frequency and damping parameters.
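A minimal version of the STFT-based estimation on a synthetic single-mode ring-down; the window lengths, peak tracking, and log-envelope fit are illustrative simplifications of the approach described:

```python
import numpy as np
from scipy.signal import stft

# Synthetic ring-down: a single 50 Hz mode with 1% damping (illustrative).
fs = 4096.0
t = np.arange(0, 4.0, 1 / fs)
y = np.exp(-2 * np.pi * 50 * 0.01 * t) * np.sin(2 * np.pi * 50 * t)

f, tau, Z = stft(y, fs=fs, nperseg=1024, noverlap=896)
amp = np.abs(Z)

# Instantaneous frequency: track the spectrogram peak in each time slice.
f_inst = f[np.argmax(amp, axis=0)]

# Instantaneous damping: the slope of log peak amplitude versus time gives
# -zeta*omega_n for a lightly damped mode (boundary slices trimmed).
peak = amp.max(axis=0)
slope = np.polyfit(tau[3:-3], np.log(peak[3:-3]), 1)[0]
zeta = -slope / (2 * np.pi * np.median(f_inst))
print(np.median(f_inst), zeta)   # ~50 Hz, ~0.01
```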
Motivated by the current demands in high-performance structural analysis, and by a desire to better model systems with localized nonlinearities, analysts have developed a number of different approaches for modelling and simulating the dynamics of a bolted-joint structure. However, the types of conditions that make one approach more effective than the others remain poorly understood, because these approaches are developed from fundamentally and phenomenologically different concepts. To better grasp their similarities and differences, this research presents a numerical round robin that assesses how well three different approaches predict and simulate a mechanical joint. These approaches are applied to analyze a system comprised of two linear beam structures with a bolted joint interface, and their strengths and shortcomings are assessed in order to determine the optimal conditions for their use.
Simulation of the response of a system to an acoustic environment is desirable in the assessment of aerospace structures in flight-like environments. In simulating a laboratory acoustic test a large challenge is modeling the as-tested acoustic field. Acoustic source inversion capabilities in Sandia’s Sierra/SD structural dynamics code have allowed for the determination of an acoustic field based on measured microphone responses—given measured pressures, source inversion optimization algorithms determine the input parameters of a set of acoustic sources defined in an acoustic finite element model. Inherently, the resulting acoustic field is dependent on the target microphone data. If there are insufficient target points, then the as-tested field may not be recreated properly. Here, the question of number of microphones is studied using synthetic data, that is, target data taken from a previous simulation which allows for comparison of the full pressure field—an important benefit not available with test data. By exploring a range of target points distributed throughout the domain, a rate of convergence to the true field can be observed. Results will be compared with the goal of developing guidelines for the number of sensors required to aid in the design of future laboratory acoustic tests to be used for model assessment.
The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
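An idealized numerical model of the two crossbar kernels, ignoring the device noise and nonlinearity that limit real arrays:

```python
import numpy as np

N = 256
G = np.random.rand(N, N) * 1e-6          # conductance matrix, in siemens

def crossbar_read(G, v_rows):
    """Parallel read: column currents i = G^T v, a vector-matrix multiply
    performed in one analog step across the whole array."""
    return G.T @ v_rows

def crossbar_write(G, v_rows, v_cols, eta=1e-9):
    """Parallel write: simultaneous row/column pulses realize the rank-1
    update dG = eta * outer(v_rows, v_cols)."""
    G += eta * np.outer(v_rows, v_cols)
    return G

i_cols = crossbar_read(G, np.random.rand(N))
G = crossbar_write(G, np.random.rand(N), np.random.rand(N))
```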
Digital systems in an out-of-nominal environment (e.g., one causing hardware bit flips) may not be expected to function correctly in all respects but may be required to fail safely. We present an approach for understanding and verifying a system’s out-of-nominal behavior as an abstraction of nominal behavior that preserves designated critical safety requirements. Because abstraction and refinement are already widely used for improved tractability in formal design and proof techniques, this additional way of viewing an abstraction can potentially verify a system’s out-of-nominal safety with little additional work. We illustrate the approach with a simple model of a turnstile controller with possible logic faults (formalized in the temporal logic of actions and NuSMV), noting how design choices can be guided by the desired out-of-nominal abstraction. Principles of robustness in complex systems (specifically, Boolean networks) are found to be compatible with the formal abstraction approach. This work indicates a direction for broader use of formal methods in safety-critical systems.
Bonney, Matthew S.; Kammer, Daniel C.; Brake, M.R.W.
The quantification of model form uncertainty is very important for engineers to understand when using a reduced order model. This quantification requires multiple numerical simulations which can be computationally expensive. Different sampling techniques, including Monte Carlo and Latin Hypercube, are explored while using the maximum entropy method to quantify the uncertainty. The maximum entropy method implements random matrices that maintain essential properties. This is explored on a planar frame using different types of substructure representations, such as Craig-Bampton. Along with the model form uncertainty of the substructure representation, the effect of component mode synthesis for each type of substructure representation on the model form uncertainty is studied.
Coherent change detection (CCD) images, which are products of combining two synthetic aperture radar (SAR) images of the same scene taken at different times, can reveal subtle surface changes such as those made by tire tracks. These images, however, have low texture and are noisy, making it difficult to automate track finding. Existing techniques either require user cues and can only trace a single track, or make use of templates that are difficult to generalize to different types of tracks, such as those made by motorcycles or by vehicles of different sizes. This paper presents an approach to automatically identify vehicle tracks in CCD images. We identify high-quality track segments and leverage the constrained Delaunay triangulation (CDT) to find completion track segments. We then impose global continuity and track smoothness using a binary random field on the resulting CDT graph to determine which edges belong to real tracks. Experimental results show that our algorithm outperforms existing state-of-the-art techniques in both accuracy and speed.
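A small sketch of the triangulation step (note: scipy provides an unconstrained Delaunay triangulation, whereas the paper uses a constrained variant seeded with high-quality track segments; the points below are made-up endpoints):

```python
import numpy as np
from scipy.spatial import Delaunay

# Triangulate detected segment endpoints and read off candidate
# "completion" edges between them.
endpoints = np.array([[0, 0], [1, 0.1], [2, 0.2], [3, 0.1], [1.5, 2.0]])
tri = Delaunay(endpoints)

edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))
print(sorted(edges))  # candidate edges to label via the binary random field
```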
This paper presents a probabilistic origin-destination table for waterborne containerized imports. The analysis makes use of 2012 Port Import/Export Reporting Service data, 2012 Surface Transportation Board waybill data, a gravity model, and information on the landside transportation mode split associated with specific ports. This analysis suggests that about 70% of the origin-destination table entries have a coefficient of variation of less than 20%. This 70% of entries is associated with about 78% of the total volume. This analysis also makes evident the importance of rail interchange points in Chicago, Illinois; Memphis, Tennessee; Dallas, Texas; and Kansas City, Missouri, in supporting the transportation of containerized goods from Asia through West Coast ports to the eastern United States.
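For context, a standard doubly-constrained gravity model takes the form below, where $O_i$ and $D_j$ are origin and destination totals and $f$ is a cost-decay function of the travel cost $c_{ij}$; the paper's exact specification may differ.

$$T_{ij} = A_i O_i B_j D_j f(c_{ij}), \qquad A_i = \Big[\sum_j B_j D_j f(c_{ij})\Big]^{-1}, \qquad B_j = \Big[\sum_i A_i O_i f(c_{ij})\Big]^{-1}$$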
This paper describes the design of Teko, an object-oriented C++ library for implementing advanced block preconditioners. Mathematical design criteria that elucidate the needs of block preconditioning libraries and techniques are explained and shown to motivate the structure of Teko. For instance, a principal design choice was for Teko to strongly reflect the mathematical statement of the preconditioners to reduce development burden and permit focus on the numerics. Additional mechanisms are explained that provide a pathway to developing an optimized production capable block preconditioning capability with Teko. Finally, Teko is demonstrated on fluid flow and magnetohydrodynamics applications. In addition to highlighting the features of the Teko library, these new results illustrate the effectiveness of recent preconditioning developments applied to advanced discretization approaches.
For film cooling of combustor linings and turbine blades, it is critical to be able to accurately model jets-in-crossflow. Current Reynolds Averaged Navier Stokes (RANS) models often give unsatisfactory predictions in these flows, due in large part to model form error, which cannot be resolved through calibration or tuning of model coefficients. The Boussinesq hypothesis, upon which most two-equation RANS models rely, posits the existence of a non-negative scalar eddy viscosity, which gives a linear relation between the Reynolds stresses and the mean strain rate. This model is rigorously analyzed in the context of a jet-in-crossflow using the high fidelity Large Eddy Simulation data of Ruiz et al. (2015), as well as RANS k-ε results for the same flow. It is shown that the RANS models fail to accurately represent the Reynolds stress anisotropy in the injection hole, along the wall, and on the lee side of the jet. Machine learning methods are developed to provide improved predictions of the Reynolds stress anisotropy in this flow.
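For reference, the Boussinesq (linear eddy-viscosity) relation analyzed here can be written, for incompressible flow, as

$$-\overline{u_i' u_j'} = \nu_t\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right) - \frac{2}{3}k\,\delta_{ij}, \qquad \nu_t \ge 0,$$

so the Reynolds-stress anisotropy is forced to align with the mean strain rate; the failures noted above are regions where this alignment assumption breaks down.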
Relative motion at bolted connections can occur for large shock loads as the internal shear force in the bolted connection overcomes the frictional resistive force. This macroslip in a structure dissipates energy and reduces the response of the components above the bolted connection. There is a need to be able to capture macroslip behavior in a structural dynamics model. A linear model and many nonlinear models are not able to predict macroslip effectively. The proposed method to capture macroslip is to use the multi-body dynamics code ADAMS to model joints with 3-D contact at the bolted interfaces. This model includes both static and dynamic friction. The joints are preloaded, and the pinning effect when a bolt shank impacts a through-hole inside diameter is captured. Substructure representations of the components are included to account for component flexibility and dynamics. This method was applied to a simplified model of an aerospace structure, and validation experiments were performed to test the adequacy of the method.
The Structural Dynamics department at Sandia National Laboratories has acquired a 3D Scanning Laser Doppler Vibrometer system for making vibration and modal test measurements. This paper presents the results of testing performed to examine the capabilities and limitations of that system. The test article under consideration was a conical part with two different surface materials which allowed the examination of the effect of angle of incidence and surface reflectivity on the measurement. The system was operated in both 1D and 3D modes, and the results from the 1D scan were compared to a scan performed with a previous generation system to evaluate the improvements between the generations. Data from the laser systems were exported to standard curve fitting software, and modes were fit to the data.
We report the application of ultrafast rotational coherent anti-Stokes Raman scattering (CARS) for temperature and relative oxygen concentration measurements in the plume emanating from a burning aluminized ammonium perchlorate propellant strand. Combustion of these metal-based propellants is a particularly hostile environment for laser-based diagnostics, with intense background luminosity, scattering, and beam obstruction from hot metal particles that can be as large as several hundred microns in diameter. CARS spectra that were previously obtained using nanosecond pulsed lasers in an aluminum-particle-seeded flame are examined and are determined to be severely impacted by nonresonant background, presumably as a result of the plasma formed by particulate-enhanced laser-induced breakdown. Introduction of fs/ps laser pulses enables CARS detection at reduced pulse energies, decreasing the likelihood of breakdown, while simultaneously providing time-gated elimination of any nonresonant background interference. Temperature probability densities and temperature/oxygen correlations were constructed from ensembles of several thousand single-laser-shot measurements from the fs/ps rotational CARS measurement volume positioned within 3 mm of the burning propellant surface. Preliminary results in canonical flames are presented using a hybrid fs/ps vibrational CARS system to demonstrate our progress towards acquiring vibrational CARS measurements for more accurate temperatures in the very high temperature propellant burns.
Reynolds-Averaged Navier-Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and from model-form errors in the Reynolds-Averaged Navier-Stokes model. In this work, the hypothesis is pursued that Reynolds-Averaged Navier-Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow. A Bayesian inverse problem is formulated to estimate three Reynolds-Averaged Navier-Stokes parameters (Cμ, Cε2, Cε1), and a Markov chain Monte Carlo method is used to develop a probability density function for them. The cost of the Markov chain Monte Carlo is addressed by developing statistical surrogates for the Reynolds-Averaged Navier-Stokes model. It is found that only a subset R of the (Cμ, Cε2, Cε1) space supports realistic flow simulations. R is used as a prior belief when formulating the inverse problem and is enforced with a classifier in the current Markov chain Monte Carlo solution. It is found that the calibrated parameters substantially improve predictions of the entire flowfield when compared to the nominal/literature values of (Cμ, Cε2, Cε1); furthermore, this improvement is seen to hold for interactions at other Mach numbers and jet strengths for which experimental data are available to provide a comparison. The residual error, which is an approximation of the model-form error, is also quantified; it is most easily measured in terms of turbulent stresses.
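A minimal Metropolis-Hastings sketch of the calibration loop described above; `surrogate` and `in_R` are placeholders standing in for the statistical surrogate and the prior-region classifier, not the paper's actual implementations.

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate(theta):
    # Hypothetical surrogate misfit between model output and data.
    return np.sum((theta - np.array([0.09, 1.92, 1.44])) ** 2)

def in_R(theta):
    # Hypothetical classifier for the realistic-flow region R.
    return np.all((theta > 0.0) & (theta < 3.0))

def log_post(theta, sigma=0.05):
    return -0.5 * surrogate(theta) / sigma**2 if in_R(theta) else -np.inf

theta = np.array([0.09, 1.9, 1.4])   # start near nominal k-eps values
chain = []
for _ in range(5000):
    prop = theta + 0.01 * rng.standard_normal(3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
print(np.mean(chain, axis=0))        # posterior mean of (Cmu, Ceps2, Ceps1)
```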
This work presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD "ingredients" that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting "augmented" POD subspace, we propose a hybrid direct/iterative three-stage method that leverages (1) the optimal ordering of POD basis vectors, and (2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
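A sketch of the POD-style truncation idea only (not the full three-stage solver): compress previously computed vectors into a low-dimensional recycled subspace via the SVD, keeping the dominant left singular vectors.

```python
import numpy as np

# Columns of W are stored solution/Krylov vectors from prior solves.
rng = np.random.default_rng(3)
W = rng.standard_normal((1000, 30))
U, s, _ = np.linalg.svd(W, full_matrices=False)

# Keep the smallest r capturing 99% of the (squared) singular-value energy.
r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1
Y = U[:, :r]          # orthonormal recycled basis for the next solve
print(r, Y.shape)
```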
We present all-dielectric 2D and 3D metamaterials that are monolithically fabricated from III-V semiconductor nanostructures. The active/gain and high optical nonlinearity properties of the metamaterials can lead to new classes of active devices.
The redox-active bis(imino)acenaphthene (BIAN) ligand was used to synthesize homoleptic aluminum, chromium, and gallium complexes of the general formula (BIAN)3M. The resulting compounds were characterized using X-ray crystallography, NMR, EPR, magnetic susceptibility, and cyclic voltammetry measurements and modeled using both DFT and ab initio wavefunction calculations to compare the orbital contributions of main group elements and transition metals in ligand-based redox events. Complexes of this type have the potential to improve the energy density and electrolyte stability of grid-scale energy storage technologies, such as redox flow batteries, through thermodynamically-clustered redox events.
Conference Proceedings of the Society for Experimental Mechanics Series
Stender, M.; Papangelo, A.; Allen, M.; Brake, M.R.W.; Schwingshackl, C.; Tiedemann, M.
Many engineered structures are assembled using different kinds of joints such as bolted, riveted and clamped joints. Even though joints are often a small part of the overall structure, they can have a massive impact on its dynamics due to the introduction of nonlinearities. Thus, joints are considered a design liability. Significant effort has been spent in joint characterization and modelling, but a predictive joint model is still non-existent. To overcome these uncertainties and ensure certain safety standards, joints are usually overdesigned according to static considerations and their stiffness. In particular, damping and nonlinearity are not considered during the design process. This can lead to lower performance and lower payload, and, as a result of the joints, structural dynamic models often do a poor job of predicting the dynamic response. However, it is well-known that, particularly for metal structures, joints represent the main source of energy dissipation. In this work a minimal model is used to show how structural performance can be improved using joints as a design variable. Common optimization tools are applied to a nonlinear joint model in order to damp undesired structural vibrations. Results illustrate how the intentional choice of joint parameters and locations can effectively reduce vibration level for a given operating point of a jointed structure.
The sequence of crystallization in a recrystallizable lithium silicate sealing glass-ceramic Li2O–SiO2–Al2O3–K2O–B2O3–P2O5–ZnO was analyzed by in situ high-temperature X-ray diffraction (HTXRD). Glass-ceramic specimens were subjected to a two-stage heat-treatment schedule, including rapid cooling from the sealing temperature to a first hold temperature of 650°C, followed by heating to a second hold temperature of 810°C. Notable growth and saturation of quartz was observed at 650°C (first hold). Cristobalite crystallized at the second hold temperature of 810°C, growing from the residual glass rather than converting from the quartz. The coexistence of quartz and cristobalite resulted in a glass-ceramic having a near-linear thermal strain, as opposed to the highly nonlinear glass-ceramic in which cristobalite is the dominant silica crystalline phase. HTXRD was also performed to analyze the inversion and phase stability of the two types of fully crystallized glass-ceramics. While the inversion in cristobalite resembles the character of a first-order displacive phase transformation, i.e., step changes in lattice parameters and thermal hysteresis in the transition temperature, the inversion in quartz appears more diffuse and occurs over a much broader temperature range. Localized tensile stresses on quartz and possible solid-solution effects have been attributed to the transition behavior of quartz crystals embedded in the glass-ceramics.
Electric motors are a popular choice for mobile robots because they can provide high peak efficiencies, high speeds, and quiet operation. However, the continuous torque performance of these actuators is thermally limited due to joule heating, which can ultimately cause insulation breakdown. In this work we illustrate how motor housing design and active cooling can be used to significantly improve the ability of the motor to transfer heat to the environment. This can increase continuous torque density and reduce energy consumption. We present a novel housing design for brushless DC motors that provides improved heat transfer. This design achieves a 50% increase in heat transfer over a nominal design. Additionally, forced air or water cooling can be easily added to this configuration. Forced convection increases heat transfer over the nominal design by 79% with forced air and 107% with pumped water. Finally, we show how increased heat transfer reduces power consumption and we demonstrate that strategically spending energy on cooling can provide net energy savings of 4%-6%.
We present a synthetic study investigating the resolution limits of Full Wavefield Inversion (FWI) when applied to data generated from a visco-TTI-elastic (VTE) model. We compare VTE inversion having fixed Q and TTI, with acoustic inversion of acoustically generated data and elastic inversion of elastically generated data.
The need to better represent the material properties within the earth's interior has driven the development of higher-fidelity physics, e.g., visco-tilted-transversely-isotropic (visco-TTI) elastic media and material interfaces, such as the ocean bottom and salt boundaries. This is especially true for full waveform inversion (FWI), where one would like to reproduce the real-world effects and invert on unprocessed raw data. Here we present a numerical formulation using a Discontinuous Galerkin (DG) finite-element (FE) method, which incorporates the desired high-fidelity physics and material interfaces. To offset the additional costs of this material representation, we include a variety of techniques (e.g., non-conformal meshing, and local polynomial refinement), which reduce the overall costs with little effect on the solution accuracy.
The kinetics of thermoset resin cure are multifaceted, with flow and wet-out being dependent on viscosity, devolatilization being a function of partial pressures, and crosslinking being dependent on temperature. A unique cure recipe must be developed to address and control each factor simultaneously. In the case of thick-section composites, an uncontrolled exotherm could cause the panel to cure from the inside out, causing severe process-induced residual stresses. To identify and control the peak heat generation from the exothermic crosslinking reaction, differential scanning calorimetry (DSC) was conducted for different candidate cure schedules. Resin rheology data and dynamic mechanical analysis (DMA) results were used to confirm a viable resin viscosity profile for each cure schedule. These experiments showed which isothermal holds and ramp rates best served to decrease the exothermic peak, as well as when to apply pressure and vent the applied vacuum. From these data, a cure cycle was developed and applied to the material system. During cure, embedded thermocouples were used to monitor heat generation and drive cure temperature ramps and dwells. Ultrasonic testing and visual inspection by microscopy revealed good compaction and < 1% porosity for two different composite panels with the same resin system. DSC of post-cured samples of each panel indicated a high degree of cure throughout the thickness of the panels, further validating the developed cure process.
A mathematical model was developed to investigate the performance-limiting factors of a Mg-ion battery with a Chevrel phase (MgxMo6S8) cathode and a Mg metal anode. The model was validated using experimental data from the literature [Cheng et al., Chem. Mater., 26, 4904 (2014)]. Two electrochemical reactions of the Chevrel phase with significantly different kinetics and solid diffusion were included in the porous electrode model, which captured the physics sufficiently well to generate charge curves of five rates (0.1C-2C) for two different particle sizes. Limitation analysis indicated that the solid diffusion and kinetics in the higher-voltage plateau limit the capacity and increase the overpotential in Cheng et al.'s thin (20 μm) electrodes. The model reveals that the performance of cells of more practical thickness would also be subject to electrolyte-phase limitations. The simulation also suggested that the polarization losses on discharge will be lower than those on charge, because of the differences in the kinetics and solid diffusion between the two reactions of the Chevrel phase.
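Porous-electrode models of this kind typically couple solid-state diffusion with interfacial kinetics described by a Butler-Volmer expression; a generic form (not necessarily the exact parameterization used in this work) is

$$i = i_0\left[\exp\!\left(\frac{\alpha_a F \eta}{RT}\right) - \exp\!\left(-\frac{\alpha_c F \eta}{RT}\right)\right],$$

where $i_0$ is the exchange current density and $\eta$ the surface overpotential. The two Chevrel-phase reactions would each carry their own $i_0$ and solid diffusivity, which is what produces the charge/discharge asymmetry noted above.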
We introduce Recursive Spoke Darts (RSD): a recursive hyperplane sampling algorithm that exploits the full duality between Voronoi and Delaunay entities of various dimensions. Our algorithm abandons the dependence on the empty sphere principle in the generation of Delaunay simplices providing the foundation needed for scalable consistent meshing. The algorithm relies on two simple operations: line-hyperplane trimming and spherical range search. Consequently, this approach improves scalability as multiple processors can operate on different seeds at the same time. Moreover, generating consistent meshes across processors eliminates the communication needed between them, improving scalability even more. We introduce a simple tweak to the algorithm which makes it possible not to visit all vertices of a Voronoi cell, generating almost-exact Delaunay graphs while avoiding the natural curse of dimensionality in high dimensions.
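One of the two primitive operations named above, spherical range search, can be sketched with a k-d tree; this is a generic illustration, not the RSD implementation, and the line-hyperplane trimming step is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

# Return all seed points within radius r of a query point in d dimensions.
rng = np.random.default_rng(4)
seeds = rng.uniform(0.0, 1.0, (10000, 6))   # point seeds in 6 dimensions
tree = cKDTree(seeds)
hits = tree.query_ball_point(seeds[0], r=0.3)
print(len(hits))                            # neighbors found in the sphere
```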
In this work we have presented a particle resuspension model implemented in the SNL code SIERRA/Fuego, which can be used to model particle dispersal and resuspension from surfaces. The method demonstrated is applicable to a class of particles, but would require additional parametric fits or physics models for extension to other applications, such as wetted particles or walls. We have demonstrated the importance of turbulent variations in the wall shear stress when considering resuspension, and implemented both shear stress variation models and stochastic resuspension models (not shown in this work). These models can be used in simulations of physically realistic scenarios to augment lab-scale DOE Handbook data for airborne release fractions and respirable fractions, in order to provide confidence for safety analysts and facility designers to apply in their analyses at DOE sites. Future work on this topic will involve validation of the presented model against experimental data and extension of the empirical models to be applicable to different classes of particles and surfaces.
This paper discusses the factor analysis that provides the basis for development and use of Bayesian Network (BN) models to support qualification planning in order to predict the suitability of Six Degrees of Freedom (6DOF) vibration testing for qualification. Qualification includes environmental testing such as temperature, vibration and shock to support a stochastic argument about the suitability of a design. Qualification is becoming more complex because it involves significant human expert judgment and relies on new technologies that have often never been fully utilized to support design assessment. Technology has advanced to the state where 6DOF vibration tests are possible, but these tests are far more complex than traditional single degree of freedom tests. This challenges systems engineers as they strive to plan qualification in an environment where technical and environmental constraints are coupled with the traditional costs, risk and schedule constraints. BN models may provide a framework to aid Systems Engineers in planning qualification efforts with complex constraints. Previous work identified a method for building a BN model for the predictive framework. This paper discusses validation efforts of models derived from the factor analysis and summarizes some recommendations on the factor analyses from industry subject matter experts.
Influence spread is an important phenomenon that occurs in many social networks. Influence maximization is the corresponding problem of finding the most influential nodes in these networks. In this paper, we present a new influence diffusion model, based on pairwise factor graphs, that captures dependencies and directions of influence among neighboring nodes. We use an augmented belief propagation algorithm to efficiently compute influence spread on this model so that the direction of influence is preserved. Due to its simplicity, the model can be used on large graphs with high-degree nodes, making the influence maximization problem practical on large, real-world graphs. Using large Flixster and Epinions datasets, we provide experimental results showing that our model predictions match well with ground-truth influence spreads, far better than other techniques. Furthermore, we show that the influential nodes identified by our model achieve significantly higher influence spread compared to other popular models. The model parameters can easily be learned from basic, readily available training data. In the absence of training, our approach can still be used to identify influential seed nodes.
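A minimal sum-product belief propagation sketch on a pairwise binary factor graph, as a toy stand-in for the influence model described above; the pairwise potentials below are made-up values, not the paper's learned parameters.

```python
import numpy as np

# Tiny star graph: node 0 is a seeded influencer connected through node 1.
edges = [(0, 1), (1, 2), (1, 3)]
nbrs = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}
psi = {e: np.array([[0.9, 0.1], [0.4, 0.6]]) for e in edges}   # hypothetical
psi.update({(v, u): psi[(u, v)].T for (u, v) in edges})
unary = {0: np.array([0.1, 0.9]), 1: np.array([0.5, 0.5]),
         2: np.array([0.5, 0.5]), 3: np.array([0.5, 0.5])}     # node 0 seeded

msg = {(u, v): np.ones(2) for u in nbrs for v in nbrs[u]}
for _ in range(10):                       # iterate messages to convergence
    for (u, v) in list(msg):
        prod = unary[u].copy()
        for w in nbrs[u]:
            if w != v:
                prod *= msg[(w, u)]
        m = psi[(u, v)].T @ prod
        msg[(u, v)] = m / m.sum()

for v in nbrs:                            # belief = P(node v is influenced)
    b = unary[v].copy()
    for w in nbrs[v]:
        b *= msg[(w, v)]
    print(v, (b / b.sum())[1])
```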
Chambers, Mariah; Mallory, Stewart A.; Malone, Heather; Gao, Yuan; Anthony, Stephen M.; Yi, Yi; Cacciuto, Angelo; Yu, Yan
Amphiphilic Janus particles self-assemble into complex metastructures, but little is known about how their assembly might be modified by weak interactions with a nearby biological membrane surface. Here, we report an integrated experimental and molecular dynamics simulation study to investigate the self-assembly of amphiphilic Janus particles on a lipid membrane. We created an experimental system in which Janus particles are allowed to self-assemble in the same medium where zwitterionic lipids form giant unilamellar vesicles (GUVs). Janus particles spontaneously concentrated on the inner leaflet of the GUVs. They exhibited biased orientation and heterogeneous rotational dynamics as revealed by single particle rotational tracking. The combined experimental and simulation results show that Janus particles concentrate on the lipid membranes due to weak particle-lipid attraction, whereas the biased orientation of particles is driven predominantly by inter-particle interactions. This study demonstrates the potential of using lipid membranes to influence the self-assembly of Janus particles.
The structure-dependent vibrational properties of different Mg(BH4)2 polymorphs (α, β, γ, and δ phases) were investigated with a combination of neutron vibrational spectroscopy (NVS) measurements and density functional theory (DFT) calculations, with emphasis placed on the effects of the local structure and orientation of the BH4- anions. DFT simulations closely match the neutron vibrational spectra. The main bands in the low-energy region (20-80 meV) are associated with the BH4- librational modes. The features in the intermediate energy region (80-120 meV) are attributed to overtones and combination bands arising from the lower-energy modes. The features in the high-energy region (120-200 meV) correspond to the BH4- symmetric and asymmetric bending vibrations, of which four peaks located at 140, 142, 160, and 172 meV are especially intense. There are noticeable intensity distribution variations in the vibrational bands for different polymorphs. This is explained by the differences in the spatial distribution of BH4- anions within various structures. An example of the possible identification of products after the hydrogenation of MgB2, using NVS measurements, is presented. These results provide fundamental insights of benefit to researchers currently studying these promising hydrogen-storage materials.
Enhanced radiation tolerance of nanostructured metals is attributed to the high density of interfaces that can absorb radiation-induced defects. Here, cavity evolution mechanisms during cascade damage, helium implantation, and annealing of nanocrystalline nickel are characterized via in situ transmission electron microscopy (TEM). Films subjected to self-ion irradiation followed by helium implantation developed evenly distributed cavity structures, whereas films exposed in the reversed order developed cavities preferentially distributed along grain boundaries. Post-irradiation annealing and orientation mapping demonstrated uniform cavity growth in the nanocrystalline structure, and cavities spanning multiple grains. These mechanisms suggest limited ability to reduce swelling, despite the stability of the nanostructure.
The rapid release of energy from reactive multilayer foils can create extreme local temperature gradients near substrate materials. In order to fully exploit the potential of these materials, a better understanding of the interaction between the substrate or filler material and the foil is needed. Specifically, this work investigates how variations in local properties within the substrate (i.e. differences between properties in constituent phases) can affect heat transport into the substrate. This can affect the microstructural evolution observed within the substrate, which may affect the final joint properties. The effect of the initial substrate microstructure on microstructural evolution within the heat-affected zone is evaluated experimentally in two Sn-Zn alloys and numerical techniques are utilized to inform the analysis.
We know the rainbow color map is terrible, and it is emphatically reviled by the visualization community, yet its use continues to persist. Why do we continue to use this perceptual encoding with so many known flaws? Instead of focusing on why we should not use rainbow colors, this position statement explores the rationale for why we do pick these colors despite their flaws. Often the decision is influenced by a lack of knowledge, but even experts who know better sometimes choose poorly. A larger issue is the expedience that the rainbow color map has inadvertently come to represent. Knowing why the rainbow color map is used will help us move away from it. Education is good, but clearly not sufficient. We gain traction by making sensible color alternatives more convenient. It is not feasible to force a color map on users. Our goal is to supplant the rainbow color map as a common standard, and we will find that even those wedded to it will migrate away.
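As a concrete illustration of "making sensible alternatives convenient": in matplotlib, a perceptually uniform map can be made the ambient default in one line (viridis has in fact been matplotlib's shipped default since version 2.0, which is exactly this kind of convenience at work).

```python
import numpy as np
import matplotlib.pyplot as plt

# Make the sensible choice the zero-effort choice for every later plot.
plt.rcParams["image.cmap"] = "viridis"

data = np.random.default_rng(5).standard_normal((32, 32))
plt.imshow(data)          # picks up viridis without any per-call argument
plt.colorbar()
plt.savefig("field.png")
```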
Performing experiments in the laboratory that mimic conditions in the field is challenging. In an attempt to understand hydraulic fracture in the field, and provide laboratory flow results for model verification, an effort to duplicate the typical fracture pattern for long horizontal wells has been made. The typical "disks on a string" fracture formation is caused by properly orienting the long horizontal well such that it is parallel to the minimum principal stress direction, then fracturing the rock. In order to replicate this feature in the laboratory with a traditional cylindrical specimen the test must be performed under extensile stress conditions and the specimen must have been cored parallel to bedding in order to avoid failure along a bedding plane, and replicate bedding orientation in the field. Testing has shown that it is possible to form failure features of this type in the laboratory. A novel method for jacketing is employed to allow fluid to flow out of the fracture and leave the specimen without risking the integrity of the jacket; this allows proppant to be injected into the fracture, simulating loss of fracturing fluids to the formation, and allowing a solid proppant pack to be developed.
Small silicon photonics micro-resonator modulators and filters hold the promise for multi-terabit per-second interconnects at energy consumptions well below 1 pJ/bit. To date, no products exist and little known commercial development is occurring using this technology. Why? In this talk, we review the many challenges that remain to be overcome in bringing this technology from the research labs to the field where they can overcome important commercial, industrial, and national security limitations of existing photonic technologies.
TlBr crystals have superior radiation detection properties; however, their properties degrade in the range of hours to weeks when an operating electrical field is applied. To account for this rapid degradation using the widely-accepted vacancy migration mechanism, the vacancy concentration must be orders of magnitude higher than any conventional estimates. The present work has incorporated a new analytical variable charge model in molecular dynamics (MD) simulations to examine the structural changes of materials under electrical fields. Our simulations indicate that dislocations in TlBr move under electrical fields. This discovery can lead to new understanding of TlBr aging mechanisms under external fields.
System-of-systems modeling has traditionally focused on physical systems rather than humans, but recent events have proved the necessity of considering the human in the loop. As technology becomes more complex and layered security continues to increase in importance, capturing humans and their interactions with technologies within the system-of-systems will be increasingly necessary. After an extensive job-task analysis, a novel type of system-of-systems simulation model has been created to capture the human-technology interactions on an extra-small forward operating base to better understand performance, key security drivers, and the robustness of the base. In addition to the model, an innovative framework for using detection theory to calculate d’ for individual elements of the layered security system, and for the entire security system as a whole, is under development.
We report on the development of single-frequency VCSELs (vertical-cavity surface-emitting lasers) for sensing the position of a moving MEMS (micro-electro-mechanical system) object with resolution much less than 1 nm. Position measurement is the basis of many different types of MEMS sensors, including accelerometers, gyroscopes, and pressure sensors. Typically, by switching from a traditional capacitive electronic readout to an interferometric optical readout, the resolution can be improved by an order of magnitude with a corresponding improvement in MEMS sensor performance. Because the VCSEL wavelength determines the scale of the position measurement, laser wavelength (frequency) stability is desirable. This paper discusses the impact of VCSEL amplitude and frequency noise on the position measurement.
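As a rough illustration of why wavelength stability matters (stated for a generic double-pass two-beam interferometer, not necessarily the exact readout used here): the measured phase is

$$\Delta\phi = \frac{4\pi\,\Delta x}{\lambda},$$

so a fractional wavelength drift $\delta\lambda/\lambda$ appears directly as a fractional error in the inferred displacement $\Delta x$, and laser frequency noise sets a floor on the achievable position resolution.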
When a severe nuclear power plant accident occurs, plant operators rely on Severe Accident Management Guidelines (SAMGs). However, current SAMGs are limited in scope and depth, and plant operators must work to mitigate the accident with limited experience and guidance for the situation. The SMART (Safely Managing Accidental Reactor Transients) procedures framework aims to fill the need for detailed guidance by creating a comprehensive probabilistic model, using a Dynamic Bayesian Network, to aid in the diagnosis of the reactor's state. In this paper, we explore the viability of the proposed SMART procedures approach by building a prototype Bayesian network that allows for the diagnosis of two types of accidents based on a comprehensive data set. We use Kullback-Leibler (K-L) divergence to gauge the relative importance of each of the plant's parameters. We compare accuracy and F-score measures across four different Bayesian networks: a baseline network that ignores observation variables, a network that ignores data from the observation variable with the highest K-L score, a network that ignores data from the variable with the lowest K-L score, and finally a network that includes all observation variable data. We conclude with an interpretation of these results for SMART procedures.
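A minimal sketch of the K-L screening step, with made-up histograms standing in for binned sensor readings under the two accident types:

```python
import numpy as np
from scipy.stats import entropy

# Rank a plant parameter by how much its distribution under accident A
# diverges from accident B; larger divergence = more diagnostic value.
p_accident_a = np.array([0.70, 0.20, 0.08, 0.02])   # hypothetical bins
p_accident_b = np.array([0.10, 0.25, 0.40, 0.25])

kl = entropy(p_accident_a, p_accident_b)  # D_KL(A || B), in nats
print(f"K-L divergence: {kl:.3f}")
```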
The present study focuses on laboratory testing of surrogate materials representing Waste Isolation Pilot Plant (WIPP) waste. The surrogate wastes correspond to a conservative estimate of the containers and transuranic waste materials emplaced at the WIPP. Testing consists of hydrostatic, triaxial, and uniaxial tests performed on surrogate waste recipes based on those previously developed by Hansen et al. (1997). These recipes represent actual waste by weight percent of each constituent and total density. Testing was performed on full-scale and 1/4-scale containers. Axial, lateral, and volumetric strain and axial and lateral stress measurements were made. Unique testing techniques were developed during the course of the experimental program. The first involves the use of a spirometer or precision flow meter to measure sample volumetric strain under the various stress conditions. Since the waste containers did not deform evenly when compressed, the volumetric and axial strains were used to determine the lateral strains. The second technique involved the development of unique coating procedures that also acted as jackets during hydrostatic, triaxial, and full-scale uniaxial testing; 1/4-scale uniaxial test specimens were not coated but wrapped with clay to maintain an airtight seal for volumetric strain measurement. During all testing methods, the coatings allowed the use of either a spirometer or precision flow meter to estimate the amount of air driven from the container as it crushed down, since the jacket adhered to the container and yet was flexible enough to remain airtight during deformation.
Research, the manufacture of knowledge, is currently practiced largely as an “art,” not a “science.” Just as science (understanding) and technology (tools) have revolutionized the manufacture of other goods and services, it is natural, perhaps inevitable, that they will ultimately also be applied to the manufacture of knowledge. In this article, we present an emerging perspective on opportunities for such application, at three different levels of the research enterprise. At the cognitive science level of the individual researcher, opportunities include: overcoming idea fixation and sloppy thinking, and balancing divergent and convergent thinking. At the social network level of the research team, opportunities include: overcoming strong links and groupthink, and optimally distributing divergent and convergent thinking between individuals and teams. At the research ecosystem level of the research institution and the larger national and international community of researchers, opportunities include: overcoming performance fixation, overcoming narrow measures of research impact, and overcoming (or harnessing) existential/social stress.
Analysts across national security domains are required to sift through large amounts of data to find and compile relevant information in a form that enables decision makers to take action in high-consequence scenarios. However, even the most experienced analysts cannot be 100% consistent and accurate across an entire dataset, cannot remain unbiased towards familiar documentation, and cannot synthesize and process large amounts of information in a small amount of time. Sandia National Laboratories has attempted to solve this problem by developing an intelligent web crawler called Huntsman. Huntsman acts as a personal research assistant by browsing the internet or offline datasets in a way similar to the human search process, only much faster (millions of documents per day), by submitting queries to search engines and assessing the usefulness of page results through analysis of full-page content with a suite of text analytics. This paper will discuss Huntsman’s capability to both mirror and enhance human analysts using intelligent web crawling with analysts-in-the-loop. The goal is to demonstrate how weaknesses in human cognitive processing can be compensated for by fusing human processes with text analytics and web crawling systems, which ultimately reduces analysts’ cognitive burden and increases mission effectiveness.
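A toy frontier-based crawler sketch of the pattern described above (fetch, score full-page text, expand promising links); this is not Huntsman itself, and the seed URL, keywords, and threshold are placeholders.

```python
import re
from collections import deque

import requests

KEYWORDS = {"radar", "imaging", "detection"}   # hypothetical mission terms

def score(text):
    # Crude relevance score: keyword density over the full page text.
    words = re.findall(r"[a-z]+", text.lower())
    return sum(words.count(k) for k in KEYWORDS) / max(len(words), 1)

def crawl(seed, max_pages=20, threshold=0.001):
    frontier, seen = deque([seed]), {seed}
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        if score(html) >= threshold:           # expand only useful pages
            for link in re.findall(r'href="(https?://[^"]+)"', html):
                if link not in seen:
                    seen.add(link)
                    frontier.append(link)
    return seen

# crawl("https://www.example.com")  # run with a real seed URL
```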
In this study, eye tracking metrics and visual saliency maps were used to assess analysts' interactions with synthetic aperture radar (SAR) imagery. Participants with varying levels of experience with SAR imagery completed a target detection task while their eye movements and behavioral responses were recorded. The resulting gaze maps were compared with maps of bottom-up visual saliency and with maps of automatically detected image features. The results showed striking differences between professional SAR analysts and novices in terms of how their visual search patterns related to the visual saliency of features in the imagery. They also revealed patterns that reflect the utility of various features in the images for the professional analysts. These findings have implications for system design and for the design and use of automatic feature classification algorithms.
Cognitive science is an interdisciplinary science which studies the human dimension, drawing from academic disciplines such as psychology, linguistics, philosophy, and computer modeling. Business management is controlling, leading, monitoring, organizing, and planning critical information to bring useful resources and capabilities to a viable market. Finally, the government sector has many roles, but one primary goal is to bring innovative solutions to maintain and enhance national security. There currently is a gap in the government sector between applied research and solutions applicable to the national security field. This is a deep problem, since the human dimension is a critical element of many national security issues and requires cognitive science approaches. One major cause of this gap is the separation between business management and cognitive science: scientific research is either not tailored to the mission need or not deployed at a time when it can best be absorbed by national security concerns. This paper addresses three major themes: (1) how cognitive science and business management benefit the government sector, (2) the current gaps that exist between cognitive science and business management, and (3) how cognitive science and business management may work together to address government sector, national security needs.
We present here an example of how a large, multi-dimensional unstructured data set, namely aircraft trajectories over the United States, can be analyzed using relatively straightforward unsupervised learning techniques. We begin by adding a rough structure to the trajectory data using the notion of distance geometry. This provides a very generic structure to the data that allows it to be indexed as an n-dimensional vector. We then do a clustering based on the HDBSCAN algorithm to both group flights with similar shapes and find outliers that have a relatively unique shape. Next, we expand the notion of geometric features to more specialized features and demonstrate the power of these features to solve specific problems. Finally, we highlight not just the power of the technique but also the speed and simplicity of the implementation by demonstrating them on very large data sets.
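A minimal sketch of the clustering step using the third-party hdbscan package (the fixed-length "signatures" below are made-up stand-ins for the distance-geometry vectors):

```python
import numpy as np
import hdbscan  # third-party package: pip install hdbscan

# Each trajectory is indexed as a fixed-length vector, then clustered;
# label -1 marks outliers, i.e., flights with relatively unique shapes.
rng = np.random.default_rng(6)
signatures = np.vstack([rng.normal(0, 0.1, (500, 10)),   # common shape
                        rng.normal(2, 0.1, (300, 10)),   # second shape
                        rng.uniform(-5, 5, (10, 10))])   # oddballs

labels = hdbscan.HDBSCAN(min_cluster_size=25).fit_predict(signatures)
print(np.unique(labels, return_counts=True))  # clusters plus -1 outliers
```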
A critical challenge in data science is conveying the meaning of data to human decision makers. While working with visualizations, decision makers are engaged in a visual search for information to support their reasoning process. As sensors proliferate and high performance computing becomes increasingly accessible, the volume of data decision makers must contend with is growing continuously and driving the need for more efficient and effective data visualizations. Consequently, researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles to assess the effectiveness of data visualizations. In this paper, we compare the performance of three different saliency models across a common set of data visualizations. This comparison establishes a performance baseline for assessment of new data visualization saliency models.
‘Big data’ is a phrase that has gained much traction recently. It has been defined as ‘a broad term for data sets so large or complex that traditional data processing applications are inadequate and there are challenges with analysis, searching and visualization’ [1]. Many domains struggle with providing experts accurate visualizations of massive data sets so that the experts can understand and make decisions about the data, e.g., [2, 3, 4, 5]. Abductive reasoning is the process of forming a conclusion that best explains observed facts and this type of reasoning plays an important role in process and product engineering. Throughout a production lifecycle, engineers will test subsystems for critical functions and use the test results to diagnose and improve production processes. This paper describes a value-driven evaluation study [7] for expert analyst interactions with big data for a complex visual abductive reasoning task. Participants were asked to perform different tasks using a new tool, while eye tracking data of their interactions with the tool was collected. The participants were also asked to give their feedback and assessments regarding the usability of the tool. The results showed that the interactive nature of the new tool allowed the participants to gain new insights into their data sets, and all participants indicated that they would begin using the tool in its current state.
This report presents computational analyses that simulate the structural response of crude oil storage caverns at the U.S. Strategic Petroleum Reserve (SPR) West Hackberry site in Louisiana. These analyses evaluate the geomechanical behavior of the 22 caverns at the West Hackberry SPR site for the current condition of the caverns and their wellbores, the effect of the caverns on surface facilities, and for potential enlargement related to drawdowns. These analyses represent a significant upgrade in modeling capability, as the following enhancements have been developed: a 6-million-element finite element model of the entire West Hackberry dome; cavern finite element mesh geometries fit to sonar measurements of those caverns; the full implementation of the multi-mechanism deformation (M-D) creep model; and the use of historic wellhead pressures to analyze the past geomechanical behavior of the caverns. The analyses examined the overall performance of the West Hackberry site by evaluating surface subsidence, horizontal surface strains, and axial well strains. This report presents a case study of how large-scale computational analyses may be used in conjunction with site data to make recommendations for safe depressurization and repressurization of oil storage caverns with unusual geometries and close proximity, and for the determination of the number of available drawdowns for a particular cavern.
ASME 2016 10th International Conference on Energy Sustainability, ES 2016, collocated with the ASME 2016 Power Conference and the ASME 2016 14th International Conference on Fuel Cell Science, Engineering and Technology
In an effort to increase thermal energy storage densities and turbine inlet temperatures in concentrating solar power (CSP) systems, focus on energy storage media has shifted from molten salts to solid particles. These solid particles are stable at temperatures far greater than that of molten salts, allowing the use of efficient high-temperature turbines in the power cycle. Furthermore, many of the solid particles under development store heat via reversible chemical reactions (thermochemical energy storage, TCES) in addition to the heat they store as sensible energy. The heat-storing reaction is often the thermal reduction of a metal oxide. If coupled to an Air-Brayton system, wherein air is used as the turbine working fluid, the subsequent extraction of both reaction and sensible heat, as well as the transfer of heat to the working fluid, can be accomplished in a direct-contact, counter-flow reoxidation reactor. However, there are several design challenges unique to such a reactor, such as maintaining requisite residence times for reactions to occur, particle conveying and mitigation of entrainment, and the balance of kinetics and heat transfer rates to achieve reactor outlet temperatures in excess of 1200 °C. In this paper, insights to addressing these challenges are offered, and design and operational tradeoffs that arise in this highly coupled system are introduced and discussed.
Organizing multivariate time series data for presentation to an analyst is a challenging task. Typically, a dataset contains hundreds or thousands of datapoints, and each datapoint consists of dozens of time series measurements. Analysts are interested in how the datapoints are related, which measurements drive trends and/or produce clusters, and how the clusters are related to available metadata. In addition, interest in particular time series measurements will change depending on what the analyst is trying to understand about the dataset. Rather than providing a monolithic single use machine learning solution, we have developed a system that encourages analyst interaction. This system, Dial-A-Cluster (DAC), uses multidimensional scaling to provide a visualization of the datapoints depending on distance measures provided for each time series. The analyst can interactively adjust (dial) the relative influence of each time series to change the visualization (and resulting clusters). Additional computations are provided which optimize the visualization according to metadata of interest and rank time series measurements according to their influence on analyst selected clusters. The DAC system is a plug-in for Slycat (slycat.readthedocs.org), a framework which provides a web server, database, and Python infrastructure. The DAC web application allows an analyst to keep track of multiple datasets and interact with each as described above. It requires no installation, runs on any platform, and enables analyst collaboration. We anticipate an open source release in the near future.
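A minimal sketch of the "dial" idea described above (not the Slycat plug-in itself): combine per-time-series distance matrices with analyst-chosen weights, then embed the datapoints with multidimensional scaling.

```python
import numpy as np
from sklearn.manifold import MDS

# Synthetic per-time-series distance matrices, one per measurement.
rng = np.random.default_rng(7)
n_points, n_series = 40, 3
D = rng.uniform(0, 1, (n_series, n_points, n_points))
D = (D + D.transpose(0, 2, 1)) / 2        # symmetrize each matrix
for k in range(n_series):
    np.fill_diagonal(D[k], 0.0)

weights = np.array([0.7, 0.2, 0.1])       # the analyst's dials
D_combined = np.tensordot(weights, D, axes=1)

xy = MDS(n_components=2, dissimilarity="precomputed",
         random_state=0).fit_transform(D_combined)
print(xy.shape)                           # 2-D coordinates to visualize
```

Turning a dial re-weights `D_combined` and re-embeds, which is what lets the analyst interactively reshape the clusters.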
Truly quantifying soot concentrations within turbulent flames is a difficult prospect. Laser extinction measurements are constrained by spatial resolution limitations and by uncertainty in the local soot extinction coefficient. Laser-induced incandescence (LII) measurements rely on calibration against extinction and thereby are plagued by uncertainty in the extinction coefficient. In addition, LII measurements are subject to signal trapping in flames with significant soot concentrations and/or flame widths. In the study reported here, a turbulent ethylene non-premixed jet flame (jet exit Reynolds number of 20,000) is investigated by a combination of LII and full-flame HeNe laser (633 nm) extinction measurements. The LII measurements have been calibrated against extinction measurements in a laminar ethylene flame. An extinction coefficient previously measured in laminar ethylene flames is used as the basis of the calibration. The time-averaged LII data in the turbulent flame have been corrected for signal trapping, which is shown to be significant in this flame, and then the line-of-sight extinction for a theoretical 633 nm light source has been calculated across the LII-determined soot concentration field. Comparison of the LII-based extinction with that actually measured along the flame centerline is favorable, showing an average deviation of approximately 10%. This lends credence to the measured values of soot concentrations in the flame and also gives a good indication of the level of uncertainty in the measured soot concentrations, subject to the additional uncertainty in the previously measured extinction coefficient, estimated to be ±15%.
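For reference, the line-of-sight extinction calculation referred to here follows the standard Beer-Lambert relation for soot, with $K_e$ the dimensionless soot extinction coefficient, $f_v(s)$ the local soot volume fraction along the path, and $\lambda$ the wavelength (633 nm in this case):

$$\frac{I}{I_0} = \exp\!\left(-\int_{L} \frac{K_e\, f_v(s)}{\lambda}\, ds\right)$$

Uncertainty in $K_e$ therefore maps directly onto the inferred soot volume fractions.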