Sachet, Edward; Shelton, Christopher T.; Harris, Joshua S.; Gaddy, Benjamin E.; Irving, Douglas L.; Curtarolo, Stefano; Donovan, Brian F.; Hopkins, Patrick E.; Sharma, Peter A.; Ihlefeld, Jon F.; Franzen, Stefan; Maria, Jon P.
The interest in plasmonic technologies surrounds many emergent optoelectronic applications, such as plasmon lasers, transistors, sensors and information storage. Although plasmonic materials for ultraviolet-visible and near-infrared wavelengths have been found, the mid-infrared range remains a challenge to address: few known systems can achieve subwavelength optical confinement with low loss in this range. With a combination of experiments and ab initio modelling, here we demonstrate an extreme peak of electron mobility in Dy-doped CdO that is achieved through accurate 'defect equilibrium engineering'. In so doing, we create a tunable plasmon host that satisfies the criteria for mid-infrared spectrum plasmonics, and overcomes the losses seen in conventional plasmonic materials. In particular, extrinsic doping pins the CdO Fermi level above the conduction band minimum and it increases the formation energy of native oxygen vacancies, thus reducing their populations by several orders of magnitude. The substitutional lattice strain induced by Dy doping is sufficiently small, allowing mobility values around 500 cm² V⁻¹ s⁻¹ for carrier densities above 10²⁰ cm⁻³. Our work shows that CdO:Dy is a model system for intrinsic and extrinsic manipulation of defects affecting electrical, optical and thermal properties, that oxide conductors are ideal candidates for plasmonic devices and that the defect engineering approach for property optimization is generally applicable to other conducting metal oxides.
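For context, the link between carrier density, mobility, and plasmonic loss invoked above can be summarized by the standard Drude relations (a textbook form quoted here for orientation, not an expression taken from this work):

    \[
    \omega_p^2 = \frac{n e^2}{\varepsilon_0 \varepsilon_\infty m^*}, \qquad \gamma = \frac{e}{m^* \mu},
    \]

so a higher mobility \mu directly lowers the Drude damping rate \gamma, while the carrier density n tunes the screened plasma frequency \omega_p into the mid-infrared.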
In the aerospace industry, hail strikes on a structure are a loading environment that must be considered when qualifying a product. Performing a physical test would require a setup that launches a fabricated hail stone at an expensive prototype; such a test may be difficult or impossible to execute and is destructive to the product. Instead of testing, a finite element model (FEM) may be used to simulate the damage and consequences of a hail strike. To use a FEM in this way, an accurate representation of the input force from a hail stone must be known. The purpose of this paper is to calculate the force that a hail stone imparts on an object using the inverse method SWAT-TEEM. This paper discusses the advantages of using SWAT-TEEM over other force identification methods and exercises the algorithm for a test series of hail strikes that includes multiple angles of attack and multiple velocities, including supersonic speeds.
Hattar, Khalid M.; Cheaito, Ramez; Gorham, Caroline S.; Misra, Amit; Hopkins, Patrick E.
The progressive build up of fission products inside different nuclear reactor components can lead to significant damage of the constituent materials. We demonstrate the use of time-domain thermoreflectance (TDTR), a nondestructive thermal measurement technique, to study the effects of radiation damage on material properties. We use TDTR to report on the thermal conductivity of optimized ZIRLO, a material used as fuel cladding in nuclear reactors. We find that the thermal conductivity of optimized ZIRLO is 10.7 ± 1.8 W m⁻¹ K⁻¹ at room temperature. Furthermore, we find that the thermal conductivities of copper-niobium nanostructured multilayers do not change with helium ion irradiation doses of 10¹⁵ cm⁻² and ion energy of 200 keV, demonstrating the potential of heterogeneous multilayer materials for radiation tolerant coatings. Finally, we compare the effect of ion doses and ion beam energies on the measured thermal conductivity of bulk silicon. Our results demonstrate that TDTR can be used to quantify depth dependent damage.
Geiger mode detectors fabricated in silicon are used to detect incident photons with high sensitivity. They are operated with large internal electric fields so that a single electron-hole pair can trigger an avalanche breakdown which generates a signal in an external circuit. We have applied a modified version of the ion beam induced charge technique in a nuclear microprobe system to investigate the application of Geiger mode detectors to the detection of discrete ion impacts. Our detectors are fabricated with an architecture based on the avalanche diode structure and operated with a transient bias voltage that activates the Geiger mode. In this mode, avalanche breakdown is triggered by ion impact followed by diffusion of an electron-hole pair into the sensitive volume. The avalanche breakdown is quenched by removal of the transient bias voltage, which is synchronized with a beam gate. An alternative operation mode is possible at lower bias voltages, where the avalanche process self-quenches and the device consequently exhibits linear charge gain. Incorporating such a device into a silicon substrate potentially allows the exceptional sensitivity of Geiger mode to register an electron-hole pair from sub-10 keV donor atom implants, enabling the deterministic construction of the shallow arrays of single atoms required for emerging quantum technologies. Our characterization system incorporates a fast electrostatic ion beam switcher gated by the transient device bias (duration 800 ns) with a time delay (duration 500 ns) that allows for both the ion time of flight and the diffusion of the electron-hole pairs in the substrate into the sensitive region of the device following ion impact from a scanned 1 MeV H microbeam. We compare micron-scale images mapping the response of the device to ion impact in both Geiger mode and avalanche (linear) mode for silicon devices engineered with this ultimate-sensitivity detector structure.
Matrix multiplication is a fundamental computation in many scientific disciplines. In this paper, we show that novel fast matrix multiplication algorithms can significantly outperform vendor implementations of the classical algorithm and Strassen's fast algorithm on modest problem sizes and shapes. Furthermore, we show that the best choice of fast algorithm depends not only on the size of the matrices but also on their shape. We develop a code generation tool to automatically implement multiple sequential and shared-memory parallel variants of each fast algorithm, including our novel parallelization scheme. This allows us to rapidly benchmark over 20 fast algorithms on several problem sizes. Furthermore, we discuss a number of practical implementation issues for these algorithms on shared-memory machines that can direct further research on making fast algorithms practical.
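For orientation, the sketch below shows a single recursion level of Strassen's algorithm in Python/NumPy (a textbook formulation assuming square matrices of even dimension; it is not the paper's generated code):

    import numpy as np

    def strassen_one_level(A, B):
        # One level of Strassen: 7 block multiplications instead of the classical 8.
        n = A.shape[0] // 2
        A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
        B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
        M1 = (A11 + A22) @ (B11 + B22)
        M2 = (A21 + A22) @ B11
        M3 = A11 @ (B12 - B22)
        M4 = A22 @ (B21 - B11)
        M5 = (A11 + A12) @ B22
        M6 = (A21 - A11) @ (B11 + B12)
        M7 = (A12 - A22) @ (B21 + B22)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                         [M2 + M4, M1 - M2 + M3 + M6]])

    A = np.random.rand(512, 512)
    B = np.random.rand(512, 512)
    assert np.allclose(strassen_one_level(A, B), A @ B)

The fast algorithms benchmarked in the paper generalize this idea to other base cases, recursion depths, and matrix shapes.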
A series of simulations and experiments to resolve questions about the operation of arrays of closely spaced small aspect ratio rod pinches has been performed. Design and postshot analysis of the experimental results are supported by 3-D particle-in-cell simulations. Both simulations and experiments support these conclusions. Penetration of current to the interior of the array appears to be efficient, as the current on the center rods is essentially equal to the current on the outer rods. Current loss in the feed due to the formation of magnetic nulls was avoided in these experiments by design of the feed surface of the cathode and control of the gap to keep the electric fields on the cathode below the emission threshold. Some asymmetry in the electron flow to the rod was observed, but the flow appeared to symmetrize as it reached the end of the rod. Interaction between the rod pinches can be controlled to allow the stable and consistent operation of arrays of rod pinches.
Following prior experimental evidence of electrostatic charge separation and of electric and magnetic fields produced by hypervelocity impact, we have developed a model of electrostatic charge separation based on plasma sheath theory and implemented it in the CTH shock physics code. Preliminary assessment of the model shows good qualitative and quantitative agreement between the model and prior experiments, at least in the hypervelocity regime for the porous carbonate material tested. Moreover, the model agrees with the scaling analysis of experimental data performed in the prior work, suggesting that electric charge separation and the resulting electric and magnetic fields can be a substantial effect at larger scales, higher impact velocities, or both.
Occasionally, our well-controlled cookoff experiments with Comp-B give anomalous results when venting conditions are changed. For example, a vented experiment may take longer to ignite than a sealed experiment. In the current work, we show the effect of venting on thermal ignition of Comp-B. We use Sandia's Instrumented Thermal Ignition (SITI) experiment with various headspace volumes in both vented and sealed geometries to study ignition of Comp-B. In some of these experiments, we have used a borescope to observe Comp-B as it melts and reacts. We propose that the mechanism for ignition involves TNT melting, dissolution of RDX, and complex bubbly liquid flow. High pressure inhibits bubble formation, and flow is significantly reduced; at low pressure, a vigorous dispersed bubble flow was observed.
In order to reliably simulate the energy yield of photovoltaic (PV) systems, it is necessary to have an accurate model of how the PV modules perform with respect to irradiance and cell temperature. Building on a previous study that addresses the irradiance dependence, two approaches to fit the temperature dependence of module power in PVsyst have been developed and are applied here to recent multi-irradiance and temperature data for a standard Yingli Solar PV module type. The results demonstrate that it is possible to match the measured irradiance and temperature dependence of PV modules in PVsyst. Improvements in energy yield prediction using the optimized models relative to the PVsyst standard model are considered significant for decisions about project financing.
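For readers unfamiliar with the temperature term being fitted, a commonly used simplified expression for module power (a generic form for orientation, not PVsyst's exact internal formulation) is

    \[
    P_{mpp}(G, T_c) \approx P_{STC}\,\frac{G}{G_{STC}}\left[1 + \gamma\,(T_c - 25\,^{\circ}\mathrm{C})\right],
    \]

where G is the in-plane irradiance, T_c the cell temperature, and \gamma the (negative) power temperature coefficient; the fitting approaches described here adjust the temperature-dependent loss parameters so that the effective \gamma reproduces the measured multi-irradiance, multi-temperature data.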
A notion of material homogeneity is proposed for peridynamic bodies with variable horizon but constant bulk properties. A relation is derived that scales the force state according to the position-dependent horizon while keeping the bulk properties unchanged. Using this scaling relation, if the horizon depends on position, artifacts called ghost forces may arise in a body under a homogeneous deformation. These artifacts depend on the second derivative of the horizon and can be reduced by employing a modified equilibrium equation using a new quantity called the partial stress. Bodies with piecewise constant horizon can be modeled without ghost forces by using a simpler technique called a splice. As a limiting case of zero horizon, both the partial stress and splice techniques can be used to achieve local-nonlocal coupling. Computational examples, including dynamic fracture in a one-dimensional model with local-nonlocal coupling, illustrate the methods.
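For reference, the state-based peridynamic equilibrium equation underlying this discussion is, in standard form,

    \[
    0 = \int_{\mathcal{H}_{\mathbf{x}}} \left\{ \underline{\mathbf{T}}[\mathbf{x}]\langle \mathbf{x}'-\mathbf{x}\rangle - \underline{\mathbf{T}}[\mathbf{x}']\langle \mathbf{x}-\mathbf{x}'\rangle \right\} dV_{\mathbf{x}'} + \mathbf{b}(\mathbf{x}),
    \]

where \mathcal{H}_{\mathbf{x}} is the neighborhood of radius \delta(\mathbf{x}) and \underline{\mathbf{T}} is the force state; the scaling relation, the partial stress, and the splice all modify how this integral is formed when the horizon \delta varies with position.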
We find for infrared wavelengths that there are broad ranges of particle sizes and refractive indices that represent fog and rain, where circular polarization can persist to longer ranges than linear polarization. Using polarization tracking Monte Carlo simulations for varying particle size, wavelength, and refractive index, we show that, for specific scene parameters, circular polarization outperforms linear polarization in maintaining the illuminating polarization state for large optical depths. This enhancement with circular polarization can be exploited to improve range and target detection in obscurant environments that are important in many critical sensing applications. Initially, researchers employed polarization-discriminating schemes, often using linearly polarized active illumination, to further distinguish target signals from the background noise. More recently, researchers have investigated circular polarization as a means to separate signal from noise even more. Specifically, we quantify both linearly and circularly polarized active illumination and show here that circular polarization persists better than linear for radiation fog in the short-wave infrared, for advection fog in the short-wave and long-wave infrared, and for large particle sizes of Sahara dust around the 4 μm wavelength. Conversely, we quantify where linear polarization persists better than circular polarization for some limited particle sizes of radiation fog in the long-wave infrared, small particle sizes of Sahara dust for wavelengths of 9-10.5 μm, and large particle sizes of Sahara dust through the 8-11 μm wavelength range in the long-wave infrared.
Hutchins, Margot J.; Bhinge, Raunak; Micali, Maxwell K.; Robinson, Stefanie L.; Sutherland, John W.; Dornfeld, David
Increasing connectivity, use of digital computation, and off-site data storage provide potential for dramatic improvements in manufacturing productivity, quality, and cost. However, there are also risks associated with the increased volume and pervasiveness of data that are generated and potentially accessible to competitors or adversaries. Enterprises have experienced cyber attacks that exfiltrate confidential and/or proprietary data, alter information to cause an unexpected or unwanted effect, and destroy capital assets. Manufacturers need tools to incorporate these risks into their existing risk management processes. This paper establishes a framework that considers the data flows within a manufacturing enterprise and throughout its supply chain. The framework provides several mechanisms for identifying generic and manufacturing-specific vulnerabilities and is illustrated with details pertinent to an automotive manufacturer. In addition to providing manufacturers with insights into their potential data risks, this framework addresses an outcome identified by the NIST Cybersecurity Framework.
Three-dimensional (3D) network structure has been envisioned as a superior architecture for lithium ion battery (LIB) electrodes, which enhances both ion and electron transport to significantly improve battery performance. Herein, a 3D carbon nano-network is fabricated through chemical vapor deposition of carbon on a scalably manufactured 3D porous anodic alumina (PAA) template. As a demonstration of the applicability of the 3D carbon nano-network for LIB electrodes, the low conductivity active material, TiO2, is then uniformly coated on the 3D carbon nano-network using atomic layer deposition. High power performance is demonstrated in the 3D C/TiO2 electrodes, where the parallel tubes and gaps in the 3D carbon nano-network facilitate fast Li ion transport. A large areal capacity of ~0.37 mAh cm⁻² is achieved due to the large TiO2 mass loading in the 60 μm-thick 3D C/TiO2 electrodes. At a test rate of C/5, the 3D C/TiO2 electrode with 18 nm-thick TiO2 delivers a high gravimetric capacity of ~240 mAh g⁻¹, calculated with the mass of the whole electrode. A long cycle life of over 1000 cycles with a capacity retention of 91% is demonstrated at 1C. The effects of the electrical conductivity of the carbon nano-network, ion diffusion, and the electrolyte permeability on the rate performance of these 3D C/TiO2 electrodes are systematically studied.
Benjamin B. Yang shares his views on photovoltaic (PV) failure analysis and reliability. He provides information about common failure mechanisms in the PV industry and the significant overlap with failure analysis (FA) techniques and methods used in microelectronics. The rapid growth and adoption of this technology means that microelectronics failure analysis and reliability experts may be called upon to address current and future challenges, and many of these failures can be analyzed and resolved by applying FA techniques and methods from microelectronics.
Deep Borehole Disposal (DBD) of radioactive waste has some clear advantages over mined repositories, including incremental construction and loading, enhanced natural barriers provided by deep continental crystalline basement, and reduced site characterization. Unfavorable features for a DBD site include upward vertical fluid potential gradients, presence of economically exploitable natural resources, presence of high permeability connection from the waste disposal zone to the shallow subsurface, and significant probability of future volcanic activity. Site characterization activities would encompass geomechanical (i.e., rock stress state, fluid pressure, and faulting), geological (i.e., both overburden and bedrock lithology), hydrological (i.e., quantity of fluid, fluid convection properties, and solute transport mechanisms), chemical (i.e., rock and fluid interaction), and socioeconomic (i.e., likelihood for human intrusion) aspects. For a planned Deep Borehole Field Test (DBFT), site features and/or physical processes would be evaluated using both direct (i.e., sampling and in-hole testing) and indirect (i.e., surface and borehole geophysical) methods for efficient and effective characterization. Surface-based characterization would be used to guide the exploratory drilling program, once a candidate DBFT site has been selected. Borehole based characterization will be used to determine the variability of system state (i.e., stress, pressure, temperature, petrology, and water chemistry) with depth, and to develop material and system parameters relevant for numerical simulation. While the site design of DBD could involve an array of disposal boreholes, it may not be necessary to characterize each borehole in detail. Characterization strategies will be developed in the DBFT that establish disposal system safety sufficient for licensing a disposal array.
The United States Department of Energy (DOE) is conducting research and development (R&D) activities within the Used Fuel Disposition Campaign to support the implementation of the DOE's 2013 Strategy for the Management and Disposal of Used Nuclear Fuel and High-Level Radioactive Waste. R&D activities focus on storage, transportation, and disposal of used nuclear fuel (UNF) and wastes generated by existing and future nuclear fuel cycles and are ongoing at nine national laboratories. Additional relevant R&D is conducted at multiple universities through the DOE's Nuclear Energy University Program. Within the storage and transportation areas, R&D continues to focus on technical gaps related to extended storage and subsequent transportation of UNF. Primary emphasis for FY15 is on experimental and analysis activities that support the DOE's dry cask demonstration confirmatory data project initiated at the North Anna Nuclear Power Plant in Virginia by the Electric Power Research Institute in collaboration with AREVA and Dominion Power. Within the disposal research area, current planning calls for a significant increase in R&D associated with evaluating the feasibility of deep borehole disposal of some waste forms, in addition to a continued emphasis on confirming the viability of generic mined disposal concepts in multiple geologic media. International collaborations that allow the U.S. program to benefit from experience and opportunities for research in other nations remain a high priority.
An excellent scientific understanding of salt reconsolidation mechanisms has been established from experimental results and observational microscopy. Thermal, mechanical, and fluid transport properties of reconsolidating granular salt are fundamental to the design, analysis, and performance assessment of potential salt repositories for heat-generating nuclear waste. Application of acquired knowledge to construction techniques could potentially achieve high-performance seal properties upon construction or during the repository operational period, which lessens reliance on modeling to argue for evolving engineering characteristics and attainment of sealing functions at some future time. The robust database could be augmented by select reconsolidation experiments with admixtures and analogue studies with appropriate documentation of microprocesses.
Amazon Mechanical Turk (AMT) has become a powerful tool for social scientists due to its inexpensiveness, ease of use, and ability to attract large numbers of workers. While the subject pool is diverse, there are numerous questions regarding the composition of the workers as a function of when the “Human Intelligence Task” (HIT) is posted. Given the “queue” nature of HITs and the disparity in geography of participants, it is natural to wonder whether HIT posting time/day can have an impact on the population that is sampled. We address this question using a panel survey on AMT and show (surprisingly) that, except for gender, there is no statistically significant difference in demographic characteristics as a function of HIT posting time.
The Navigation, Guidance, and Control (NGC) Department at Sandia National Laboratories conducts flight test programs where rapid prototyping is essential. To successfully maintain schedule it is critical to have high confidence in the NGC hardware and software prior to integration testing. The NGC Department has developed a V-Model diagram approach to ensure high confidence with hardware and software prior to integration testing within a rapid prototyping environment. The V-Model detailed design process flow describes a design approach for testing hardware and software early and often using hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) simulations.
Sandia’s Hypersonic Wind Tunnel (HWT) became operational in 1962, providing a test capability for the nation’s nuclear weapons complex. The first modernization program was completed in 1977. A blowdown facility with a 0.46-m diameter test section, the HWT operates at Mach 5, 8, and 14 with stagnation pressures to 21 MPa and temperatures to 1400 K. Minimal further alteration to the facility occurred until 2008, but in recent years the HWT has received considerable investment to ensure its viability for at least the next 25 years. This has included reconditioning of the vacuum spheres, replacement of the high-pressure air tanks for Mach 5, new compressors to provide the high-pressure air, upgrades to the cryogenic nitrogen source for Mach 8 and 14, an efficient high-pressure water cooling system for the nozzle throats, and refurbishment of the electric-resistance heaters. The HWT is now returning to operation following the largest of the modernization projects, in which the old variable transformer for the 3-MW electrical system powering the heaters was replaced with a silicon-controlled rectifier power system. The final planned upgrade is a complete redesign of the control console and much of the gas-handling equipment.
The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM-UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. With this tree-structured hash, we can trade-off reduction in computation with a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15 × while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
This work examines simulation requirements for ensuring accurate predictions of compressible cavity flows. Lessons learned from this study will be used in the future to study the effects of complex geometric features, representative of those found on real weapons bays, on compressible flow past open cavities. A hybrid RANS/LES simulation method is applied to a rectangular cavity with length-to-depth ratio of 7, in order to first validate the model for this class of flows. Detailed studies of mesh resolution, absorbing boundary condition formulation, and boundary zone extent are included and guidelines are developed for ensuring accurate prediction of cavity pressure fluctuations.
In this paper, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory-epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
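A minimal sketch of the nested aleatory-epistemic sampling idea is given below in Python; the model, distributions, and sample sizes are placeholders for illustration, not those of the challenge problem:

    import numpy as np

    rng = np.random.default_rng(0)

    def model(theta_e, theta_a):
        # Placeholder for the black-box model mapping parameters to a quantity of interest.
        return theta_e * np.sin(theta_a) + theta_a**2

    n_outer, n_inner = 100, 1000
    outputs = np.empty((n_outer, n_inner))
    for i in range(n_outer):                      # outer loop: epistemic samples (e.g., from a calibrated posterior)
        theta_e = rng.uniform(0.5, 1.5)
        theta_a = rng.normal(0.0, 1.0, n_inner)   # inner loop: aleatory samples
        outputs[i, :] = model(theta_e, theta_a)

    # Each outer sample yields one CDF of the output; the family of CDFs
    # separates the epistemic contribution from the aleatory one.
    probs_of_exceedance = (outputs > 2.0).mean(axis=1)
    print(probs_of_exceedance.min(), probs_of_exceedance.max())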
The primary problem in property testing is to decide whether a given function satisfies a certain property, or is far from any function satisfying it. This crucially requires a notion of distance between functions. The most prevalent notion is the Hamming distance over the uniform distribution on the domain. This restriction to uniformity is rather limiting, and it is important to investigate distances induced by more general distributions. In this paper, we give simple and optimal testers for bounded derivative properties over arbitrary product distributions. Bounded derivative properties include fundamental properties such as monotonicity and Lipschitz continuity. Our results subsume almost all known results (upper and lower bounds) on monotonicity and Lipschitz testing. We prove an intimate connection between bounded derivative property testing and binary search trees (BSTs). We exhibit a tester whose query complexity is the sum of expected depths of optimal BSTs for each marginal. Furthermore, we show this sum-of-depths is also a lower bound. A technical contribution of our work is an optimal dimension reduction theorem for all bounded derivative properties, which relates the distance of a function from the property to the distance of restrictions of the function to random lines. Such a theorem has been elusive even for monotonicity, and our theorem is an exponential improvement to the previous best known result.
Full-field axial deformation within molten-salt batteries was measured using x-ray imaging with a sampling moiré technique. This method worked for in situ testing of the batteries because of the inherent grid pattern of the battery layers when imaged with x-rays. High-speed x-ray imaging acquired movies of the layer deformation during battery activation. Numerical validation of the technique, as implemented in this paper, was done using synthetic and numerically shifted images. Typical results are shown for one battery test. Further validation and additional tests are in progress.
This work presents a technique for statistically modeling errors introduced by reduced-order models. The method employs Gaussian-process regression to construct a mapping from a small number of computationally inexpensive “error indicators” to a distribution over the true error. The variance of this distribution can be interpreted as the (epistemic) uncertainty introduced by the reduced-order model. To model normed errors, the method employs existing rigorous error bounds and residual norms as indicators; numerical experiments show that the method leads to a near-optimal expected effectivity in contrast to typical error bounds. To model errors in general outputs, the method uses dual-weighted residuals-which are amenable to uncertainty control-as indicators. Experiments illustrate that correcting the reduced-order-model output with this surrogate can improve prediction accuracy by an order of magnitude; this contrasts with existing “multifidelity correction” approaches, which often fail for reduced-order models and suffer from the curse of dimensionality. The proposed error surrogates also lead to a notion of “probabilistic rigor”; i.e., the surrogate bounds the error with specified probability.
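A minimal sketch of the error-surrogate construction, using scikit-learn's Gaussian-process regression with synthetic stand-in data (the indicator and error values below are placeholders, not the paper's experiments):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)

    # Cheap error indicators (e.g., residual norms) and the corresponding true ROM errors.
    indicators = rng.uniform(0.01, 1.0, size=(200, 1))
    true_errors = 0.8 * indicators[:, 0] + 0.05 * rng.normal(size=200)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(indicators, true_errors)

    # At a new query point the surrogate returns a distribution over the true error.
    mean, std = gp.predict(np.array([[0.3]]), return_std=True)
    print(f"predicted error {mean[0]:.3f} +/- {2*std[0]:.3f}")

The predictive variance plays the role of the epistemic uncertainty introduced by the reduced-order model.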
A materials study of high reliability electronics cleaning is presented here. In Phase 1, mixed type substrates underwent a condensed contaminants application to view a worst-case scenario for unremoved flux with cleaning agent residue for parts in a silicone oil filled environment. In Phase 2, fluxes applied to copper coupons and to printed wiring boards underwent gentle cleaning then accelerated aging in air at 65% humidity and 30 °C. Both sets were aged for 4 weeks. Contaminants were no-clean (ORL0), water soluble (ORH1 liquid and ORH0 paste), and rosin (RMA; ROL0) fluxes. Defluxing agents were water, solvents, and engineered aqueous defluxers. In the first phase, coupons had flux applied and heated, then were placed in vials of oil with a small amount of cleaning agent and additional coupons. In the second phase, pairs of copper coupons and PWB were hand soldered by application of each flux, using tin-lead solder in a strip across the coupon or a set of test components on the PWB. One of each pair was cleaned in each cleaning agent, the first with a typical clean, and the second with a brief clean. Ionic contamination residue was measured before accelerated aging. After aging, substrates were removed and a visual record of coupon damage made, from which a subjective rank was applied for comparison between the various flux and defluxer combinations; more corrosion equated to higher rank. The ORH1 water soluble flux resulted in the highest ranking in both phases, the RMA flux the least. For the first phase, in which flux and defluxer remained on coupons, the aqueous defluxers led to worse corrosion. The vapor phase cleaning agents resulted in the highest ranking in the second phase, in which there was no physical cleaning. Further study of cleaning and rinsing parameters will be required.
We develop a computationally less expensive alternative to the direct solution of a large sparse symmetric positive definite system arising from the numerical solution of elliptic partial differential equation models. Our method, substituted factorization, replaces the computationally expensive factorization of certain dense submatrices that arise during direct solution via sparse Cholesky factorization with one or more triangular solves using substitution. These substitutions fit into the tree structure commonly used by parallel sparse Cholesky, and reduce the initial factorization cost at the expense of a slight increase in the cost of solving for a right-hand side vector. Our analysis shows that substituted factorization reduces the number of floating-point operations for the model k × k 5-point finite-difference problem by 10%, and empirical tests show an execution time reduction of 24.4% on average. On a test suite of three-dimensional problems we observe execution time reductions as high as 51.7% and of 43.1% on average.
Resilience is a major challenge for large-scale systems. It is particularly important for iterative linear solvers, since they account for much of the runtime of many scientific applications. We show that single bit flip errors in the Flexible GMRES iterative linear solver can lead to high computational overhead or even failure to converge to the right answer. Informed by these results, we design and evaluate several strategies for fault tolerance in both inner and outer solvers appropriate across a range of error rates. We implement them, extending Trilinos’ solver library with the Global View Resilience (GVR) programming model, which provides multi-stream snapshots and multi-version data structures with portable and rich error checking/recovery. Experimental results validate correct execution with low performance overhead under varied error conditions.
Experimental dynamic substructures in both modal and frequency response domains using the transmission simulator method have been developed for several systems since 2007. The standard methodology couples the stiffness, mass and damping matrices of the experimental substructure to a finite element (FE) model of the remainder of the system through multi-point constraints, which can be somewhat awkward in the FE code. It is desirable to have an experimental substructure in the Craig-Bampton (CB) form to ease the implementation process, since many codes such as Nastran, ABAQUS, ANSYS and Sierra Structural Dynamics have CB as a substructure option, and many analysts are familiar with the CB form. A square transformation matrix is derived that produces a modified CB form that still requires multi-point constraints to couple to the rest of the FE model. Finally, the multi-point constraints are applied to the modified CB matrices to produce substructure matrices that fit the standard CB form. The physical boundary degrees-of-freedom (dof) of the experimental substructure matrices can be directly attached to physical dof in the remainder of the FE model. This paper derives the new experimental substructure that fits the CB form, and presents results from an analytical and an industrial example utilizing the new CB form.
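For reference, the target Craig-Bampton form, in terms of physical boundary DOF u_b and mass-normalized fixed-interface modal coordinates q, has the standard structure (a textbook statement of the CB partitions, not the paper's derived transformation):

    \[
    \mathbf{M}_{CB} = \begin{bmatrix} \hat{\mathbf{M}}_{bb} & \mathbf{M}_{bq} \\ \mathbf{M}_{qb} & \mathbf{I} \end{bmatrix}, \qquad
    \mathbf{K}_{CB} = \begin{bmatrix} \hat{\mathbf{K}}_{bb} & \mathbf{0} \\ \mathbf{0} & \boldsymbol{\Lambda} \end{bmatrix},
    \]

where \boldsymbol{\Lambda} is the diagonal matrix of fixed-interface eigenvalues and \hat{\mathbf{K}}_{bb} is the statically condensed boundary stiffness; it is this structure, with physical boundary DOF available for direct attachment, that the derived transformation recovers from the experimental substructure.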
This work was motivated by a desire to transform an experimental dynamic substructure derived using the transmission simulator method into the Craig-Bampton substructure form, which can easily be coupled with a finite element code having a Craig-Bampton option. Near the middle of that derivation, a modal Craig-Bampton form emerges. The modal Craig-Bampton (MCB) form was found to have several useful properties. The MCB matrices separate the response into convenient partitions related to (1) the fixed boundary modes of the substructure (a diagonal partition), (2) the modes of the fixture it is mounted upon, and (3) the coupling terms between the two sets of modes. Advantages of the MCB are addressed: (1) the impedance of the boundary condition for component testing, which is usually unknown, is quantified with simple terms; (2) the model is useful for both single-degree-of-freedom and multiple-degree-of-freedom shaker control systems; and (3) MCB provides an energy based framework for component specifications to reduce over-testing while still guaranteeing conservatism.
We report measurements of temperature and O2/N2 mole-fraction ratio in the vicinity of burning and decomposing carbon-epoxy composite aircraft material samples exposed to uniform heat fluxes of 48 and 69 kW/m². Controlled laboratory experiments were conducted with the samples suspended above a cone-type heater and enclosed in an optically accessible chimney. Noninvasive coherent anti-Stokes Raman scattering (CARS) measurements were performed on a single-laser-shot basis. The CARS data were acquired with both a traditional point measurement system and with a one-dimensional line imaging scheme that provides single-shot temperature and O2/N2 profiles, revealing the quantitative structure of the temperature and oxygen concentration profiles over the duration of the 30-40 minute events. The measured near-surface temperature and oxygen transport are important factors for exothermic chemistry and oxidation of char materials and the carbon fibers themselves in a fire scenario. These unique laser-diagnostic experiments provide new information on physical/chemical processes in a well-controlled environment, which may be useful for the development of heat- and mass-transfer models for the composite fire scenario.
As more and more high-consequence applications such as aerospace systems leverage computational models to support decisions, assessing the credibility of these models becomes a high priority. Two elements of the credibility assessment are verification and validation. The former focuses on convergence of the solution (i.e., solution verification) and the “pedigree” of the codes used to evaluate the model. The latter assesses the agreement of the model prediction with real data. The outcome of these elements should map to a statement of credibility about the predictions, and that credibility should be integrated into the decision-making process. In this paper, we present a perspective on how to integrate these elements into a decision-making process. The key challenge is to span the gap between physics-based codes, quantitative capability assessments (V&V/UQ), and qualitative risk-mitigation concepts.
High-performance radar operation, particularly in Ground Moving Target Indicator (GMTI) radar modes, is very sensitive to anomalous effects of system nonlinearities. System nonlinearities generate harmonic spurs that at best degrade performance and at worst generate false target detections. One significant source of nonlinear behavior is the Analog-to-Digital Converter (ADC). One measure of its undesired nonlinearity is its Integral Nonlinearity (INL) specification. In this paper we examine the relationship of INL to radar performance, in particular its manifestation in a range-Doppler map or image.
In this paper, the effect of two different turbine blade designs on the wake characteristics was investigated using large-eddy simulation with an actuator line model. For the two different designs, the total axial load is nearly the same but the spanwise (radial) distributions are different. The one with higher load near the blade tip is denoted as Design A; the other is Design B. From the computed results, we observed that the velocity deficit from Design B is higher than that from Design A. The intensity of turbulence kinetic energy in the far wake is also higher for Design B. The effect of blade load distribution on the wind turbine axial and tangential induction factors was also investigated.
In the summer of 2020, the National Aeronautics and Space Administration (NASA) plans to launch a spacecraft as part of the Mars 2020 mission. One option for the rover on the proposed spacecraft uses a Multi-Mission Radioisotope Thermoelectric Generator (MMRTG) to provide continuous electrical and thermal power for the mission. NASA has prepared an Environmental Impact Statement (EIS) in accordance with the National Environmental Policy Act. The EIS includes information on the risks of mission accidents to the general public and on-site workers at the launch complex. The Nuclear Risk Assessment (NRA) addresses the responses of the MMRTG option to potential accident and abort conditions during the launch opportunity for the Mars 2020 mission and the associated consequences. This information provides the technical basis for the radiological risks of the MMRTG option for the EIS. This paper provides a summary of the methods and results used in the NRA.
Light body armor development for the war fighter is based on trial-and-error testing of prototype designs against ballistic projectiles. Torso armor testing against blast is virtually nonexistent but necessary to ensure adequate mitigation against injury to the heart and lungs. In this paper, we discuss the development of a high-fidelity human torso model and the associated modeling & simulation (M&S) capabilities. Using this torso model, we demonstrate the advantage of virtual simulation in the investigation of wound injury as it relates to the war fighter experience. Here, we present the results of virtual simulations of blast loading and ballistic projectile impact to the torso with and without notional protective armor. Our intent here is to demonstrate the advantages of applying a modeling and simulation approach to the investigation of wound injury and relative merit assessments of protective body armor.
Recent cyber security events have demonstrated the need for algorithms that adapt to the rapidly evolving threat landscape of complex network systems. In particular, human analysts often fail to identify data exfiltration when it is encrypted or disguised as innocuous data. Signature-based approaches for identifying data types are easily fooled and analysts can only investigate a small fraction of network events. However, neural networks can learn to identify subtle patterns in a suitably chosen input space. To this end, we have developed a signal processing approach for classifying data files which readily adapts to new data formats. We evaluate the performance for three input spaces consisting of the power spectral density, byte probability distribution and sliding-window entropy of the byte sequence in a file. By combining all three, we trained a deep neural network to discriminate amongst nine common data types found on the Internet with 97.4% accuracy.
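A minimal sketch of the three input spaces for a single file is shown below (NumPy-based; the window length, FFT size, and file path handling are illustrative choices, not the trained system's exact parameters):

    import numpy as np

    def file_features(path, n_fft=1024, window=256):
        data = np.fromfile(path, dtype=np.uint8).astype(float)

        # 1. Power spectral density of the byte sequence (averaged periodogram).
        n_seg = len(data) // n_fft
        segs = data[:n_seg * n_fft].reshape(n_seg, n_fft)
        psd = np.mean(np.abs(np.fft.rfft(segs - segs.mean(axis=1, keepdims=True), axis=1))**2, axis=0)

        # 2. Byte probability distribution (256-bin histogram).
        byte_prob = np.bincount(data.astype(np.uint8), minlength=256) / len(data)

        # 3. Sliding-window Shannon entropy of the byte sequence.
        entropy = []
        for start in range(0, len(data) - window + 1, window):
            p = np.bincount(data[start:start + window].astype(np.uint8), minlength=256) / window
            p = p[p > 0]
            entropy.append(-np.sum(p * np.log2(p)))

        return psd, byte_prob, np.array(entropy)

In a setup like the one described, fixed-length versions of these three feature vectors would be combined and supplied as the input layer of the deep neural network classifier.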
The theme of the paper is that consolidated interim storage can provide an important integrating function between storage and disposal in the United States. Given the historical tension between consolidated interim storage and disposal in the United States, this paper articulates a rationale for consolidated interim storage. However, the paper concludes that more effort could be expended on developing the societal aspects of the rationale, in addition to the technical and operational aspects of using consolidated interim storage.
The relationship between the damage potential of a series of relatively low level shocks and a single high level shock that causes severe damage is complex and depends on many factors. Shock Response Spectra are the standard for describing mechanical shock events for aerospace vehicles, but are only applicable to single shocks. Energy response spectra are applicable to multiple shock events. This paper describes the results of an initial study that sought to gain insight into how energy response spectra of low amplitude shocks relate to energy response spectra of a high amplitude shock in which the component of interest fails. The study showed that maximum energy spectra of low level shocks cannot simply be summed to estimate the energy response spectra of a high level, failure causing single shock. A power law relationship between the energy spectra of a low amplitude shock and the energy spectra of the high amplitude shock was postulated. A range of values of the exponent was empirically determined from test data and found to be consistent with the values typically used in high-cycle fatigue S-N curves.
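One illustrative way to write such a power-law accumulation rule, by analogy with fatigue-equivalence relations and stated here only as an assumed form rather than the paper's exact expression, is

    \[
    \sum_{i} \bigl[E_i(f_n)\bigr]^{\,b} \;\approx\; \bigl[E_{fail}(f_n)\bigr]^{\,b},
    \]

where E_i are the maximum energy spectra of the individual low-level shocks at natural frequency f_n, E_fail is the spectrum of the single failure-causing shock, and b plays a role analogous to the S-N fatigue exponent.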
Numerous domains, ranging from medical diagnostics to intelligence analysis, involve visual search tasks in which people must find and identify specific items within large sets of imagery. These tasks rely heavily on human judgment, making fully automated systems infeasible in many cases. Researchers have investigated methods for combining human judgment with computational processing to increase the speed at which humans can triage large image sets. One such method is rapid serial visual presentation (RSVP), in which images are presented in rapid succession to a human viewer. While viewing the images and looking for targets of interest, the participant’s brain activity is recorded using electroencephalography (EEG). The EEG signals can be time-locked to the presentation of each image, producing event-related potentials (ERPs) that provide information about the brain’s response to those stimuli. The participants’ judgments about whether or not each set of images contained a target and the ERPs elicited by target and non-target images are used to identify subsets of images that merit close expert scrutiny [1]. Although the RSVP/EEG paradigm holds promise for helping professional visual searchers to triage imagery rapidly, it may be limited by the nature of the target items. Targets that do not vary a great deal in appearance are likely to elicit useable ERPs, but more variable targets may not. In the present study, we sought to extend the RSVP/EEG paradigm to the domain of aviation security screening, and in doing so to explore the limitations of the technique for different types of targets. Professional Transportation Security Officers (TSOs) viewed bag X-rays that were presented using an RSVP paradigm. The TSOs viewed bursts of images containing 50 segments of bag X-rays that were presented for 100 ms each. Following each burst of images, the TSOs indicated whether or not they thought there was a threat item in any of the images in that set. EEG was recorded during each burst of images and ERPs were calculated by time-locking the EEG signal to the presentation of images containing threats and matched images that were identical except for the presence of the threat item. Half of the threat items had a prototypical appearance and half did not. We found that the bag images containing threat items with a prototypical appearance reliably elicited a P300 ERP component, while those without a prototypical appearance did not. These findings have implications for the application of the RSVP/EEG technique to real-world visual search domains.
Proceedings of the ASME Design Engineering Technical Conference
Bonney, Matthew S.; Kammer, Daniel C.; Brake, M.R.W.
The uncertainty of a system is usually quantified with the use of sampling methods such as Monte Carlo or Latin hypercube sampling. These sampling methods require many evaluations of the model and may include re-meshing. The re-solving and re-meshing of the model is a very large computational burden. One way to greatly reduce this computational burden is to use a parameterized reduced order model: a model that contains the sensitivities of the desired results with respect to changing parameters such as Young's modulus. The typical method of computing these sensitivities is a finite difference technique, which gives an approximation that is subject to truncation error and subtractive cancellation due to the precision of the computer. One way of eliminating this error is to use hyper-dual numbers, which are able to generate exact sensitivities that are not subject to the precision of the computer. This paper uses the concept of hyper-dual numbers to parameterize a system that is composed of two substructures in the form of Craig-Bampton substructure representations, and to combine them using component mode synthesis. The synthesis transformations using other techniques require the use of a nominal transformation, while this approach allows for exact transformations when a perturbation is applied. This paper presents this technique for a planar motion frame and compares the use and accuracy of the approach against the true full system. This work lays the groundwork for performing component mode synthesis using hyper-dual numbers.
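A minimal Python sketch of hyper-dual arithmetic illustrates why the resulting derivatives are exact (a simplified scalar implementation for illustration, not the substructuring code itself):

    class HyperDual:
        # x = a + b*e1 + c*e2 + d*e1*e2, with e1**2 = e2**2 = 0 and e1*e2 != 0.
        def __init__(self, a, b=0.0, c=0.0, d=0.0):
            self.a, self.b, self.c, self.d = a, b, c, d

        def __add__(self, o):
            return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)

        def __mul__(self, o):
            return HyperDual(self.a * o.a,
                             self.a * o.b + self.b * o.a,
                             self.a * o.c + self.c * o.a,
                             self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)

    # f(x) = x*x*x evaluated at x = 2 with unit perturbations on e1 and e2:
    x = HyperDual(2.0, 1.0, 1.0, 0.0)
    f = x * x * x
    print(f.a, f.b, f.d)   # value 8, first derivative 12, second derivative 12 -- no truncation error

Because the perturbations never enter a subtraction of nearly equal quantities, the first and second derivatives carried in the e1 and e1*e2 parts are exact to machine precision.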
The potential for bias to affect the results of knowledge elicitation studies is well recognized. Researchers and knowledge engineers attempt to control for bias through careful selection of elicitation and analysis methods. Recently, the development of a wide range of physiological sensors, coupled with fast, portable and inexpensive computing platforms, has added an additional dimension of objective measurement that can reduce bias effects. In the case of an abductive reasoning task, bias can be introduced through design of the stimuli, cues from researchers, or omissions by the experts. We describe a knowledge elicitation methodology robust to various sources of bias, incorporating objective and cross-referenced measurements. The methodology was applied in a study of engineers who use multivariate time series data to diagnose the performance of devices throughout the production lifecycle. For visual reasoning tasks, eye tracking is particularly effective at controlling for biases of omission by providing a record of the subject’s attention allocation.
Developers and security analysts have been using static analysis for a long time to analyze programs for defects and vulnerabilities. Generally a static analysis tool is run on the source code for a given program, flagging areas of code that need to be further inspected by a human analyst. These tools tend to work fairly well; every year they find many important bugs. The tools are all the more impressive considering that they only examine the source code, which may be very complex. Now consider the amount of data available that these tools do not analyze. There are many additional pieces of information available that would prove useful for finding bugs in code, such as a history of bug reports, a history of all changes to the code, information about committers, etc. By leveraging all this additional data, it is possible to find more bugs with less user interaction, as well as track useful metrics such as the number and type of defects injected by each committer. This paper provides a method for leveraging development metadata to find bugs that would otherwise be difficult to find using standard static analysis tools. We showcase two case studies that demonstrate the ability to find new vulnerabilities in large and small software projects, by finding new vulnerabilities in the cpython and Roundup open source projects.
With an abundance of scientific information in hand, what are the remaining geomechanics issues for a salt repository for heat-generating nuclear waste disposal? The context of this question pertains to the development of a license application, rather than an exploration of the entire breadth of salt research. The technical foundation supporting a licensed salt repository has been developed in the United States and Germany since the 1960s. Although the level of effort has been inconsistent and discontinuous over the years, site characterization activities, laboratory testing, field-scale experiments, and advanced computational capability provide information and tools required for a license application, should any nation make that policy decision. Ample scientific bases exist to develop a safety case in the event a site is identified and governing regulations promulgated. Some of the key remaining geomechanics issues pertain to application of advanced computational tools to the repository class of problems, refinement of constitutive models and their validation, reduction of uncertainty in a few areas, operational elements, and less tractable requirements that may arise from regulators and stakeholders. This realm of issues as they pertain to salt repositories is being addressed in various research, development and demonstration activities in the United States and Germany, including extensive collaborations. Many research areas such as constitutive models and performance of geotechnical barriers have industry applications beyond repositories. And, while esoteric salt-specific phenomenology and micromechanical processes remain of interest, they will not be reviewed here. The importance of addressing geomechanics issues and their associated prioritization are a matter of discussion, though the discriminating criterion for considerations in this paper is a demonstrable tie to the salt repository safety case.
A three dimensional time-domain model, based on the Cummins equation, has been developed for an axisymmetric point absorbing wave energy converter (WEC) with an irregular cross section. This model incorporates a number of nonlinearities to accurately account for the dynamics of the device: hydrostatic restoring, motion constraints, saturation of the power take-off force, and kinematic nonlinearities. Here, an interpolation model of the hydrostatic restoring reaction is developed and compared with a surface integral based method. The effects of these nonlinear hydrostatic models on device dynamics are explored by comparing predictions against those of a linear model. For the studied WEC, the interpolation model offers a large improvement over a linear model and is roughly two orders of magnitude less computationally expensive than the surface integral based method.
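For orientation, the underlying Cummins-type equation of motion is written here in its standard single-degree-of-freedom form (the actual model is three dimensional and includes the nonlinear terms listed above):

    \[
    (m + A_\infty)\,\ddot{x}(t) + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,d\tau + F_{hs}(x) = F_{exc}(t) + F_{PTO}(t),
    \]

where A_\infty is the infinite-frequency added mass, K is the radiation impulse-response kernel, F_hs is the (here nonlinear) hydrostatic restoring force, F_exc the wave excitation force, and F_PTO the saturated power take-off force.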
Radar Intelligence, Surveillance, and Reconnaissance (ISR) does not always involve cooperative or even friendly environments or targets. The environment in general, and an adversary in particular, may offer numerous characteristics and impeding techniques to diminish the effectiveness of a radar ISR sensor. These generally fall under the banner of jamming, spoofing, or otherwise interfering with the Electromagnetic (EM) signals required by the radar sensor. Consequently mitigation techniques are often prudent to retain efficacy of the radar sensor. We discuss in general terms a number of mitigation techniques.
The interaction of Cs adatoms with mono- or bi-layered graphene (MLG and BLG), either free-standing or on a SiO2 substrate, was investigated using density functional theory. The most stable adsorption sites for Cs are found to be hollow sites on both graphene sheets and graphene-veiled SiO2(0001). Larger dipole moments are created when a MLG-veiled SiO2(0001) substrate is used for adsorption of Cs atoms compared to the adsorption on free-standing MLG, due to charge transfer occurring between the MLG and the SiO2 substrate. For the adsorption of Cs on BLG-veiled SiO2(0001) substrate, these differences are smoothed out and the binding energies corresponding to different sites are nearly degenerate; smaller dipole moments created by the Cs adatoms on BLG compared to MLG are also predicted.
Visual search data describe people’s performance on the common perceptual problem of identifying target objects in a complex scene. Technological advances in areas such as eye tracking now provide researchers with a wealth of data not previously available. The goal of this work is to support researchers in analyzing this complex and multimodal data and in developing new insights into visual search techniques. We discuss several methods drawn from the statistics and machine learning literature for integrating visual search data derived from multiple sources and performing exploratory data analysis. We ground our discussion in a specific task performed by officers at the Transportation Security Administration and consider the applicability, likely issues, and possible adaptations of several candidate analysis methods.
A helium leakage detection system was modified to measure gas permeability on extracted cores of nearly impermeable rock. Here we use a Helium-Mass-Spectrometry-Permeameter (HMSP) to conduct a constant pressure, steady state flow test through a sample using helium gas. Under triaxial stress conditions, the HMSP can measure flow and estimate permeability of rocks and geomaterials down to the nanodarcy scale (10⁻²¹ m²). In this study, measurements of flow through eight shale samples under hydrostatic conditions were in the range of 10⁻⁷ to 10⁻⁹ Darcy. We extend this flow measurement technology by dynamically monitoring the release of helium from a helium saturated shale sample during a triaxial deformation experiment. The helium flow, initially extremely low, consistent with the low permeability of shale, is observed to increase in advance of volume strain increase during deformation of the shale. This is perhaps the result of microfracture development and flow path linkage through the microfractures within the shale. Once microfracturing coalescence initiates, there is a large increase in helium release and flow. This flow rate increase is likely the result of development of a macrofracture in the sample, a flow conduit, later confirmed by post-test observations of the deformed sample. The release rate (flow) peaks and then diminishes slightly during subsequent deformation; however, the post-deformation flow rate is considerably greater than that of undeformed shale.
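For orientation, the steady-state relation used to convert a measured flow into a permeability estimate is of the standard Darcy form (quoted here in its simplest incompressible version; gas-flow corrections may also apply):

    \[
    k = \frac{Q\,\mu\,L}{A\,\Delta P},
    \]

where Q is the volumetric flow rate, \mu the gas viscosity, L and A the sample length and cross-sectional area, and \Delta P the imposed pressure difference.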
Modern high performance computers connect hundreds of thousands of endpoints and employ thousands of switches. This allows for a great deal of freedom in the design of the network topology. At the same time, due to the sheer numbers and complexity involved, it becomes more challenging to easily distinguish between promising and improper designs. With ever increasing line rates and advances in optical interconnects, there is a need for renewed design methodologies that comprehensively capture the requirements and expose tradeoffs expeditiously in this complex design space. We introduce a systematic approach, based on Generalized Moore Graphs, allowing one to quickly gauge the ideal level of connectivity required for a given number of end-points and traffic hypothesis, and to collect insight on the role of the switch radix in the topology cost. Based on this approach, we present a methodology for the identification of Pareto-optimal topologies. We apply our method to a practical case with 25,000 nodes and present the results.
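A small Python helper illustrates the Moore bound that motivates Generalized Moore Graphs, i.e. the maximum number of nodes reachable with switch degree d and network diameter k (a standard graph-theoretic bound, not the paper's full cost model):

    def moore_bound(d, k):
        # Maximum number of vertices in a graph of degree d and diameter k.
        if d == 2:
            return 2 * k + 1
        return 1 + d * ((d - 1)**k - 1) // (d - 2)

    # Example: with degree-32 switches and diameter 2, at most
    # 1 + 32 * (1 + 31) = 1025 switches are mutually reachable within two hops.
    print(moore_bound(32, 2))

Topologies that approach this bound make the most of a given switch radix, which is the intuition behind using Generalized Moore Graphs to gauge the ideal level of connectivity.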
Performance at Transportation Security Administration (TSA) airport checkpoints must be consistently high to skillfully mitigate national security threats and incidents. To accomplish this, Transportation Security Officers (TSOs) must perform exceptionally well in threat detection, interaction with passengers, and efficiency. It is difficult to measure the human attributes that contribute to high-performing TSOs because attributes such as cognitive ability (e.g., memory), personality, and competence are inherently latent variables. Cognitive scientists at Sandia National Laboratories have developed a methodology that links TSOs’ cognitive ability to their performance. This paper discusses how the methodology was developed using a strict quantitative process, its strengths and weaknesses, and how it could be generalized to other, non-TSA contexts. The scope of this project is to identify attributes that distinguish high and low TSO performance for the duties at the checkpoint that involve direct interaction with people going through the checkpoint.
In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower-dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problem are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. In this paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.
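As an illustration of the local representation described above (notation ours, not taken from the paper), the solution restricted to a subdomain D_k can be expanded in a polynomial chaos series over only the reduced set of random variables z^(k) that influence that subdomain,

\[
u(x,\omega)\Big|_{D_k} \;\approx\; \sum_{|\mathbf{i}|\le P} c^{(k)}_{\mathbf{i}}(x)\,\Phi_{\mathbf{i}}\!\left(z^{(k)}(\omega)\right),
\]

with the postprocessing stage enforcing the joint statistics of the z^(k) and the interface coupling conditions so that globally consistent samples of the solution are recovered.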
Optically pumped semiconductor disk lasers (SDLs) provide high beam quality with high average power at designer wavelengths. However, material choices are limited by the need for a distributed Bragg reflector (DBR), usually monolithically integrated with the active region. We demonstrate DBR-free SDL active regions, which have been lifted off and bonded to various transparent substrates. For an InGaAs multi-quantum-well sample bonded to a diamond window heat spreader, we achieved CW lasing with an output power of 2 W at 1150 nm with good beam quality.
We employ both the effective medium approximation (EMA) and Bloch theory to compare the dispersion properties of semiconductor hyperbolic metamaterials (SHMs) at mid-infrared frequencies and metallic hyperbolic metamaterials (MHMs) at visible frequencies. This analysis reveals the conditions under which the EMA can be safely applied for both MHMs and SHMs. We find that the combination of precise nanoscale layering and the longer infrared operating wavelengths puts the SHMs well within the effective medium limit and, in contrast to MHMs, allows for the attainment of very high photon momentum states. In addition, SHMs allow for new phenomena such as ultrafast creation of the hyperbolic manifold through optical pumping. In particular, we examine the possibility of achieving ultrafast topological transitions through optical pumping which can photo-dope appropriately designed quantum wells on the femtosecond time scale.
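For a two-constituent layered stack with fill fraction f of constituent 1, the effective-medium permittivities underlying this comparison are the textbook relations (not results specific to this paper)

\[
\varepsilon_{\parallel} = f\,\varepsilon_{1} + (1-f)\,\varepsilon_{2},
\qquad
\frac{1}{\varepsilon_{\perp}} = \frac{f}{\varepsilon_{1}} + \frac{1-f}{\varepsilon_{2}},
\]

where ε_∥ is the in-plane component and ε_⊥ the component along the stacking direction; the hyperbolic regime corresponds to ε_∥ ε_⊥ < 0.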
We present an improved deterministic method for analyzing transport problems in random media. In the original method realizations were generated by means of a product quadrature rule; transport calculations were performed on each realization and the results combined to produce ensemble averages. In the present work we recognize that many of these realizations yield identical transport problems. We describe a method to generate only unique transport problems with the proper weighting to produce identical ensemble-averaged results at reduced computational cost. We also describe a method to ignore relatively unimportant realizations in order to obtain nearly identical results with further reduction in costs. Our results demonstrate that these changes allow for the analysis of problems of greater complexity than was practical for the original algorithm.
Sabotage of spent nuclear fuel casks remains a concern nearly forty years after attacks against shipment casks were first analyzed and has a renewed relevance in the post-9/11 environment. A limited number of full-scale tests and supporting efforts using surrogate materials, typically depleted uranium dioxide (DUO2), have been conducted in the interim to more definitively determine the source term from these postulated events. In all the previous studies, the postulated attack of greatest interest was by a conical shaped charge (CSC) that focuses the explosive energy much more efficiently than bulk explosives. However, the validity of these large-scale results remains in question due to the lack of a defensible Spent Fuel Ratio (SFR), defined as the amount of respirable aerosol generated by an attack on a mass of spent fuel compared to that of an otherwise identical surrogate. Previous attempts to define the SFR in the 1980s resulted in estimates ranging from 0.42 to 12 and suffered from suboptimal experimental techniques and data comparisons. Because of the large uncertainty surrounding the SFR, estimates of releases from security-related events may be unnecessarily conservative. Credible arguments exist that the SFR does not exceed a value of unity. A defensible determination of the SFR in this lower range would greatly reduce the calculated risk associated with the transport and storage of spent nuclear fuel in dry cask systems. In the present work, the CTH shock physics code is used to simulate spent nuclear fuel (SNF) and DUO2 targets impacted by a CSC jet at ambient temperature. These preliminary results are used to illustrate an approach to estimate the respirable release fraction for each type of material and, ultimately, an estimate of the SFR.
In this paper we explore simulated responses of electromagnetic (EM) signals relative to in situ field surveys and quantify the effects that different values of conductivity in sea ice have on the EM fields. We compute EM responses of ice types with a three-dimensional (3-D) finite-volume discretization of Maxwell's equations and present 2-D sliced visualizations of their associated EM fields at discrete frequencies. Several interesting observations result: First, since the simulator computes the fields everywhere, each gridcell acts as a receiver within the model volume, and captures the complete, coupled interactions between air, snow, sea ice and sea water as a function of their conductivity; second, visualizations demonstrate how 1-D approximations near deformed ice features are violated. But the most important new finding is that changes in conductivity affect EM field response by modifying the magnitude and spatial patterns (i.e. footprint size and shape) of current density and magnetic fields. These effects are demonstrated through a visual feature we define as 'null lines'. Null line shape is affected by changes in conductivity near material boundaries as well as transmitter location. Our results encourage the use of null lines as a planning tool for better ground-truth field measurements near deformed ice types.
We investigate through numerical simulation the usefulness of DC resistivity data for characterizing subsurface fractures with elevated electrical conductivity by considering a geophysical experiment consisting of a grounded current source deployed in a steel-cased borehole. In doing so, the borehole casing behaves electrically as a spatially extended line source, efficiently energizing the fractures with a steady current. Finite element simulations of this experiment for a horizontal well intersecting a small set of vertical fractures indicate that the fractures manifest electrically in (at least) two ways: a local perturbation in the electric potential proximal to the fracture set, with limited far-field expression; and an overall reduction in the electric potential along the entire length of the borehole casing due to enhanced current flow through the fractures into the surrounding formation. The change in casing potential results in a measurable effect that can be observed far from the fractures themselves, at distances where the local perturbations in the electric potential around the fractures are imperceptible. Under these conditions, our results suggest that far-field, time-lapse measurements of DC potentials surrounding a borehole casing can be reasonably interpreted by simple, linear inversion for a Coulomb charge distribution along the borehole path, including a local charge perturbation due to the fractures. Such an approach offers an inexpensive method for detecting and monitoring the time evolution of electrically conducting fractures while ultimately providing an estimate of their effective conductivity, the latter being a measure of fracture shape, size, and hydraulic connectivity that is independent of seismic methods.
Recent synthetic advances have made available very monodisperse zincblende CdSe/CdS quantum dots having near-unity photoluminescence quantum yields. Because of the absence of nonradiative decay pathways, accurate values of the radiative lifetimes can be obtained from time-resolved PL measurements. Radiative lifetimes can also be obtained from the Einstein relations, using the static absorption spectra and the relative thermal populations in the angular momentum sublevels. One of the inputs into these calculations is the shell thickness, and it is useful to be able to determine shell thickness from spectroscopic measurements. We use an empirically corrected effective mass model to produce a "map" of exciton wavelength as a function of core size and shell thickness. These calculations use an elastic continuum model and the known lattice and elastic constants to include the effect of lattice strain on the band gap energy. The map is in agreement with the known CdSe sizing curve and with the shell thicknesses of zincblende core/shell particles obtained from TEM images. If selenium-sulfur diffusion is included and lattice strain is omitted from the calculation, then the resulting map is appropriate for wurtzite CdSe/CdS quantum dots synthesized at high temperatures, and this map is very similar to one previously reported (J. Am. Chem. Soc. 2009, 131, 14299). Radiative lifetimes determined from time-resolved measurements are compared to values obtained from the Einstein relations and found to be in excellent agreement. For a specific core size (2.64 nm diameter, in the present case), radiative lifetimes are found to decrease with increasing shell thickness. This is similar to the size dependence of one-component CdSe quantum dots and in contrast to the size dependence in type-II quantum dots.
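The Einstein-relation route mentioned above rests on the standard connection between spontaneous and stimulated emission rates (a textbook relation, quoted here only for orientation and ignoring refractive-index and local-field corrections),

\[
A_{21} = \frac{8\pi h \nu^{3}}{c^{3}}\,B_{21},
\qquad
\tau_{\mathrm{rad}} = \frac{1}{A_{21}},
\]

with the B coefficient estimated from the static absorption spectrum of the band-edge transition and, as described in the abstract, weighted by the relative thermal populations of the angular momentum sublevels.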
Interference and interference mitigation techniques degrade synthetic aperture radar (SAR) coherent data products. Radars utilizing stretch processing present a unique challenge for many mitigation techniques because the interference signal itself is modified through stretch processing from its original signal characteristics. Many sources of interference, including constant tones, are only present within the fast-time sample data for a limited number of samples, depending on the radar and interference bandwidth. Adaptive filtering algorithms that estimate and remove the interference signal by assuming stationary interference signal characteristics can therefore be ineffective. An effective mitigation method, called notching, forces the value of the data samples containing interference to zero. However, as the number of data samples set to zero increases, image distortion and loss of resolution degrade both the image product and any second-order image products. Techniques to repair image distortions [1] are effective for point-like targets. However, these techniques are not designed to model and repair distortions in SAR image terrain. Good terrain coherence is important for SAR second-order image products because terrain occupies the majority of many scenes. For the case of coherent change detection, it is the terrain coherence itself that determines the quality of the change detection image. This paper proposes a unique equalization technique that improves coherence over existing notching techniques. First, the proposed algorithm limits mitigation to only the samples containing interference, unlike adaptive filtering algorithms, so the remaining samples are not modified. Additionally, the mitigation adapts to changing interference power such that the resulting correction equalizes the power across the data samples. The result is reduced distortion and improved coherence for the terrain. SAR data demonstrate improved coherence from the proposed equalization correction over existing notching methods for chirped interference sources.
The dynamic wake meandering (DWM) model is a common wake model used for fast prediction of wind farm power and loads. This model is compared to higher-fidelity vortex method (VM) and actuator line large eddy simulation (AL-LES) model results. By examining the steady wake deficit model of the DWM independently, and by performing a more rigorous comparison than averaged-result comparisons alone can provide, the models and their physical processes can be compared. The DWM and VM wake deficit results agree best in the mid-wake region due to the consistent recovery prior to wake breakdown predicted in the VM results. DWM and AL-LES results agree best in the far wake due to the low recovery of the laminar flow field AL-LES simulation. The physical process of wake recovery in the DWM model differed from the higher-fidelity models and resulted solely from wake expansion downstream, with no momentum recovery up to 10 diameters. Sensitivity to DWM model input boundary conditions and their effects are shown, with the greatest sensitivity to the rotor loading and to the turbulence model.
ASME 2015 9th International Conference on Energy Sustainability, ES 2015, collocated with the ASME 2015 Power Conference, the ASME 2015 13th International Conference on Fuel Cell Science, Engineering and Technology, and the ASME 2015 Nuclear Forum
Falling particle receivers are being evaluated as an alternative to conventional fluid-based solar receivers to enable higher temperatures and higher efficiency power cycles with direct storage for concentrating solar power applications. This paper presents studies of the particle mass flow rate, velocity, particle-curtain opacity and density, and other characteristics of free-falling ceramic particles as a function of different discharge slot apertures. The methods to characterize the particle flow are described, and results are compared to theoretical and numerical models for unheated conditions.
ASME 2015 9th International Conference on Energy Sustainability, ES 2015, collocated with the ASME 2015 Power Conference, the ASME 2015 13th International Conference on Fuel Cell Science, Engineering and Technology, and the ASME 2015 Nuclear Forum
This paper evaluates cost and performance tradeoffs of alternative supercritical carbon dioxide (s-CO2) closed-loop Brayton cycle configurations with a concentrated solar heat source. Alternative s-CO2 power cycle configurations include simple, recompression, cascaded, and partial cooling cycles. Results show that the simple closed-loop Brayton cycle yielded the lowest power-block component costs while allowing variable temperature differentials across the s-CO2 heating source, depending on the level of recuperation. Lower temperature differentials led to higher sensible storage costs, but cycle configurations with lower temperature differentials (higher recuperation) yielded higher cycle efficiencies and lower solar collector and receiver costs. The cycles with higher efficiencies (simple recuperated, recompression, and partial cooling) yielded the lowest overall solar and power-block component costs for a prescribed power output.
The complex coherence function describes information that is necessary to create maps from interferometric synthetic aperture radar (InSAR). This coherence function is complicated by building layover. This paper presents a mathematical model for this complex coherence in the presence of building layover and shows how it can describe intriguing phenomena observed in real interferometric SAR data.
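The complex coherence referred to here is the standard InSAR definition (not specific to this paper),

\[
\gamma \;=\; \frac{E\!\left[s_{1}\,s_{2}^{*}\right]}{\sqrt{E\!\left[|s_{1}|^{2}\right]\,E\!\left[|s_{2}|^{2}\right]}},
\]

where s_1 and s_2 are the co-registered complex images from the two interferometric apertures; the layover model describes how ground and building returns mixed within a resolution cell modify both the magnitude and the phase of γ.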
Thermoelectric (TE) generators have very important applications, such as emerging automotive waste heat recovery and cooling applications. However, reliable transport property characterization techniques are needed in order to scale up module production and thermoelectric generator design. DOE round-robin testing found that literature values for the figure of merit (ZT) are sometimes not reproducible, in part because of the lack of standardization of transport property measurements. At Sandia National Laboratories (SNL), we have been optimizing transport property measurement techniques for TE materials and modules. We have been using commercial and custom-built instruments to analyze the performance of TE materials and modules. We developed a reliable procedure to measure the thermal conductivity, Seebeck coefficient, and resistivity of TE materials to calculate ZT as a function of temperature. We use NIST standards to validate our procedures and measure multiple samples of each specific material to establish consistency. Using these developed thermoelectric capabilities, we studied transport properties of Bi2Te3-based alloys thermally aged for up to 2 years. In parallel with analytical and microscopy studies, we correlated transport property changes with chemical changes. We have also developed a resistance mapping setup to measure, in a non-destructive way, the contact resistance of Au contacts on TE materials and of TE modules as a whole. The development of novel yet reliable characterization techniques has been fundamental to better understanding TE materials as a function of aging time, temperature, and environmental conditions.
High-temperature solid-state sodium (23Na) magic angle spinning (MAS) NMR spin-lattice relaxation times (T1) were evaluated for a series of NASICON (Na3Zr2Si2PO12) materials to directly determine Na jump rates. Simulations of the T1 temperature variations that incorporated distributions in Na jump activation energies, or a distribution of jump rates, improved the agreement with experiment. The 23Na NMR T1 relaxation results revealed that distributions in the Na dynamics were present for all of the NASICON materials investigated here. The 23Na relaxation experiments also showed that small differences in material composition and/or changes in the processing conditions impacted the distributions in the Na dynamics. The extent of the distribution was related to the presence of a disordered or glassy phosphate phase in these different sol-gel processed materials. The 23Na NMR T1 relaxation experiments are a powerful tool for directly probing Na jump dynamics and provide additional molecular-level details that could impact transport phenomena.
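A common starting point for such T1 simulations is a BPP-type expression (quoted here only as background; the paper's analysis additionally averages over a distribution of activation energies),

\[
\frac{1}{T_{1}} \;\propto\; \frac{\tau_{c}}{1+\omega_{0}^{2}\tau_{c}^{2}} + \frac{4\,\tau_{c}}{1+4\,\omega_{0}^{2}\tau_{c}^{2}},
\qquad
\tau_{c} = \tau_{0}\exp\!\left(\frac{E_{a}}{k_{B}T}\right),
\]

where τ_c is the correlation time of the Na jumps and ω_0 is the 23Na Larmor frequency; averaging over a distribution g(E_a) broadens and flattens the T1 minimum relative to the single-barrier case.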
Measurements are presented from a two-beam structure with several bolted interfaces in order to characterize the nonlinear damping introduced by the joints. The measurements (all at force levels below macroslip) reveal that each underlying mode of the structure is well approximated by a single degree-of-freedom (SDOF) system with a nonlinear mechanical joint. At low enough force levels, the measurements show dissipation that scales as the second power of the applied force, agreeing with theory for a linear viscously damped system. This is attributed to linear viscous behavior of the material and/or damping provided by the support structure. At larger force levels, the damping is observed to behave nonlinearly, suggesting that damping from the mechanical joints is dominant. A model is presented that captures these effects, consisting of a spring and viscous damping element in parallel with a four-parameter Iwan model. The parameters of this model are identified for each mode of the structure and comparisons suggest that the model captures the stiffness and damping accurately over a range of forcing levels.
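Schematically (symbols ours, not taken from the paper), each mode is modeled as a single degree-of-freedom oscillator with a linear spring and viscous damper acting in parallel with a four-parameter Iwan element,

\[
m\,\ddot{q} + c\,\dot{q} + k\,q + F_{J}(q,t) = f(t),
\]

where F_J is the force carried by the Iwan element; at low amplitudes the Iwan element contributes only additional linear stiffness, so dissipation scales with the square of the applied force, while at higher amplitudes microslip within the element produces the amplitude-dependent damping observed in the measurements.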
A radome, or radar dome, protects a radar system from exposure to the elements. Unfortunately, radomes can affect the radiation pattern of the enclosed antenna. The co-design of a platform's radome and radar is ideal to mitigate any deleterious effects of the radome. However, maintaining structural integrity and other platform flight requirements, particularly when integrating a new radar onto an existing platform, often limits radome electrical design choices. Radars that rely heavily on phase measurements such as monopulse, interferometric, or coherent change detection (CCD) systems require particular attention be paid to components, such as the radome, that might introduce loss and phase variations as a function of the antenna scan angle. Material properties, radome wall construction, overall dimensions, and shape characteristics of a radome can impact insertion loss and phase delay, antenna beamwidth and sidelobe level, polarization, and ultimately the impulse response of the radar, among other things, over the desired radar operating parameters. The precision-guided munitions literature has analyzed radome effects on monopulse systems for well over half a century. However, to the best of our knowledge, radome-induced errors on CCD performance have not been described. The impact of radome material and wall construction, shape, dimensions, and antenna characteristics on CCD is examined herein for select radar and radome examples using electromagnetic simulations.
Proceedings of SPIE - The International Society for Optical Engineering
Johnson, Timothy J.; Sweet, Lucas E.; Meier, David E.; Mausolf, Edward J.; Kim, Eunja; Weck, Philippe F.; Buck, Edgar C.; Mcnamara, Bruce K.
Uranyl nitrate is a key species in the nuclear fuel cycle, but is known to exist in different states of hydration, including the hexahydrate [UO2(NO3)2(H2O)6] (UNH) and the trihydrate [UO2(NO3)2(H2O)3] (UNT) forms. Their stabilities depend on both relative humidity and temperature. Both phases have previously been studied by infrared transmission spectroscopy, but the data were limited by both instrumental resolution and the ability to prepare the samples as pellets without desiccating them. We report time-resolved infrared (IR) measurements using an integrating sphere that allow us to observe the transformation from the hexahydrate to the trihydrate simply by flowing dry nitrogen gas over the sample. Hexahydrate samples were prepared and confirmed via known XRD patterns, then measured in reflectance mode. The hexahydrate has a distinct uranyl asymmetric stretch band at 949.0 cm⁻¹ that shifts to shorter wavelengths and broadens as the sample dehydrates and recrystallizes to the trihydrate, first as a blue-edge shoulder but ultimately resulting in a doublet band with reflectance peaks at 966 and 957 cm⁻¹. The data are consistent with transformation from UNH to UNT, since UNT has two non-equivalent UO22+ sites. The dehydration of UO2(NO3)2(H2O)6 to UO2(NO3)2(H2O)3 is both a morphological and structural change in which the lustrous lime-green crystals change to the dull greenish yellow of the trihydrate. Crystal structures and the phase transformation were confirmed theoretically using DFT calculations and experimentally via microscopy methods. Both methods showed a transformation with two distinct sites for the uranyl cation in the trihydrate, as opposed to a single crystallographic site in the hexahydrate.
The efficiency of discrete-ordinates transport sweeps depends on the scheduling algorithm, domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10⁵ processor cores.
The objective of this study was to explore an approach for measuring fatigue crack growth rates (da/dN) for Cr-Mo pressure vessel steels in high-pressure hydrogen gas over a broad cyclic stress intensity factor (ΔK) range while limiting test duration, which could serve as an alternative to the method prescribed in ASME BPVC VIII-3, Article KD-10. Fatigue crack growth rates were measured for SA-372 Grade J and 34CrMo4 steels in hydrogen gas as a function of ΔK, load-cycle frequency (f), and gas pressure. The da/dN vs. ΔK relationships measured for the Cr-Mo steels in hydrogen gas at 10 Hz indicate that capturing data at lower ΔK is valuable when these relationships serve as inputs into design-life analyses of hydrogen pressure vessels, since in this ΔK range crack growth rates in hydrogen gas approach rates in air. The da/dN vs. f data measured for the Cr-Mo steels in hydrogen gas at selected constant-ΔK levels demonstrate that crack growth rates at 10 Hz do not represent upper-bound behavior, since da/dN generally increases as f decreases. Consequently, although fatigue crack growth testing at 10 Hz can efficiently measure da/dN over a wide ΔK range, these da/dN vs. ΔK relationships at 10 Hz cannot be considered reliable inputs into design-life analyses. A possible hybrid approach to efficiently establishing the fatigue crack growth rate relationship in hydrogen gas without compromising data quality is to measure the da/dN vs. ΔK relationship at 10 Hz and then apply a correction based on the da/dN vs. f data. The reliability of such a hybrid approach depends on the adequacy of the da/dN vs. f data, i.e., the data are measured at appropriate constant-ΔK levels and include upper-bound crack growth rates.
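Design-life relationships of this kind are typically expressed in Paris-law form, and the hybrid approach can be written schematically as (notation ours, not taken from the paper)

\[
\frac{da}{dN} = C\,(\Delta K)^{m},
\qquad
\left.\frac{da}{dN}\right|_{f} \;\approx\; \left.\frac{da}{dN}\right|_{10\,\mathrm{Hz}} \times
\frac{\left(da/dN\right)_{f,\;\Delta K^{*}}}{\left(da/dN\right)_{10\,\mathrm{Hz},\;\Delta K^{*}}},
\]

where the correction factor is evaluated from the constant-ΔK frequency scans at one or more reference levels ΔK*.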
The Department of Energy (DOE) is laying the groundwork for implementing the Administration's Strategy for the Management and Disposal of Used Nuclear Fuel and High-Level Radioactive Waste, which calls for a consent-based siting process. Potential destinations for an interim storage facility or repository have yet to be identified. The purpose of this study is to evaluate how planning for future transportation of spent nuclear fuel as part of a waste management system may be affected by different choices and strategies. The transportation system is modeled using TOM (Transportation Operations Model), a computer code developed at the Oak Ridge National Laboratory (ORNL). The simulations include scenarios with and without an interim storage facility (ISF) and employing different at-reactor management practices. Various operational start times for the ISF and repository were also considered. The results of the cost analysis provide Rough Order of Magnitude (ROM) capital, operational, and maintenance costs of the transportation system and the corresponding spending profiles as well as information regarding the size of the transportation fleet, distance traveled (consist and cask miles), and fuel age and burnup during the transportation. This study provides useful insights regarding the role of the transportation as an integral part of the waste management system.
For long-term storage, spent nuclear fuel (SNF) is placed in dry storage cask systems, commonly consisting of welded stainless steel containers enclosed in ventilated cement or steel overpacks. At near-marine sites, failure by chloride-induced stress corrosion cracking (SCC) due to deliquescence of deposited salt aerosols is a major concern. This paper presents a preliminary probabilistic performance assessment model to assess canister penetration by SCC. The model first determines whether conditions for salt deliquescence are present at any given location on the canister surface, using an abstracted waste package thermal model and site-specific weather data (ambient temperature and absolute humidity). As the canister cools and aqueous conditions become possible, corrosion is assumed to initiate and is modeled as pitting (initiation and growth). With increasing penetration, pits convert to SCC and a crack growth model is implemented. The SCC growth model includes rate dependencies on temperature and crack tip stress intensity factor. The amount of penetration represents the summed effect of corrosion during time steps when aqueous conditions are predicted to occur. Model results and sensitivity analyses provide information on the impact of model assumptions and parameter values on predicted storage canister performance, and provide guidance for further research to reduce uncertainties.
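A minimal sketch of the time-stepping logic described above is given below. All thresholds, rate laws, and parameter values (deliquescence relative-humidity threshold, constant pit growth rate, Arrhenius crack growth) are hypothetical placeholders chosen for illustration only, not the report's calibrated model.

```python
import numpy as np

def canister_damage(time_h, temp_K, rh, rh_crit=0.4,
                    pit_rate=1e-6, pit_to_crack_depth=5e-5,
                    A=1e-3, Q=40e3, R=8.314, wall=0.016):
    """Accumulate pit depth, then SCC depth, only during aqueous time steps.

    time_h : step sizes (hours); temp_K, rh : surface temperature and relative
    humidity per step. All rates and constants are illustrative placeholders.
    Returns the depth history (m) and whether the wall thickness is penetrated.
    """
    depth = 0.0
    history = []
    for dt, T, h in zip(time_h, temp_K, rh):
        if h >= rh_crit:                      # aqueous (deliquescent) conditions
            if depth < pit_to_crack_depth:    # pitting stage
                depth += pit_rate * dt
            else:                             # SCC stage with Arrhenius temperature dependence
                depth += A * np.exp(-Q / (R * T)) * dt
        history.append(depth)
    return np.array(history), depth >= wall

# Example with synthetic weather: 10 years of hourly steps
rng = np.random.default_rng(0)
n = 10 * 365 * 24
hist, breached = canister_damage(np.ones(n),
                                 300 + 10 * rng.standard_normal(n),
                                 rng.uniform(0.2, 0.8, n))
print(breached, hist[-1])
```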
The impact of automation on human performance has been studied by human factors researchers for over 35 years. One unresolved facet of this research is measurement of the level of automation across and within engineered systems. Repeatable methods of observing, measuring, and documenting the level of automation are critical to the creation and validation of generalized theories of automation's impact on the reliability and resilience of human-in-the-loop systems. Numerous qualitative scales for measuring automation have been proposed. However, these methods require subjective assessments based on the researcher's knowledge and experience, or expert knowledge elicitation involving highly experienced individuals from each work domain. More recently, quantitative scales have been proposed, but they have yet to be widely adopted, likely due to the difficulty associated with obtaining a sufficient number of empirical measurements from each system component. Our research suggests the need for a quantitative method that enables rapid measurement of a system's level of automation, is applicable across domains, and can be used by human factors practitioners in field studies or by system engineers as part of their technical planning processes. In this paper we present our research methodology and early research results from studies of electricity grid distribution control rooms. Using a system analysis approach based on quantitative measures of the level of automation, we provide an illustrative analysis of select grid modernization efforts. This measure of the level of automation can be displayed as either a static, historical view of the system's automation dynamics (the dynamic interplay between human and automation required to maintain system performance) or it can be incorporated into real-time visualization systems already present in control rooms.
This paper describes technical, logistical, and sociopolitical factors to be considered in the development of guidelines for siting a facility for deep borehole disposal of radioactive waste. Technical factors include geological, hydro-geochemical, and geophysical characteristics that are related to the suitability of the site for drilling and borehole construction, waste emplacement activities, waste isolation, and long-term safety of the deep borehole disposal system. Logistical factors to be considered during site selection include: The local or regional availability of drilling contractors (equipment, services, and materials) capable of drilling a large-diameter borehole to approximately 5 km depth; the legal and regulatory requirements associated with drilling, construction of surface facilities, waste handling and emplacement, and postclosure safety; and access to transportation systems. Social and political factors related to site selection include the distance from population centers and the support or opposition of local and state entities and other stakeholders to the facility and its operations. These considerations are examined in the context of the siting process and guidelines for a deep borehole field test, designed to evaluate the feasibility of siting and operating a deep borehole disposal facility.
Options for disposal of the spent nuclear fuel and high-level radioactive waste that are projected to exist in the United States in 2048 were studied. The options included four different disposal concepts: mined repositories in salt, clay/shale rocks, and crystalline rocks; and deep boreholes in crystalline rocks. Some of the results of this study are that all waste forms, with the exception of untreated sodium-bonded spent nuclear fuel, can be disposed of in any of the mined disposal concepts, although with varying degrees of confidence; salt allows for more flexibility in managing high-heat waste in mined repositories than other media; small waste forms are potentially attractive candidates for deep borehole disposal; and disposal of commercial SNF in existing dual-purpose canisters is potentially feasible but could pose significant challenges both in repository operations and in demonstrating confidence in long-term performance. Questions addressed by this study include: "Is a 'one-size-fits-all' repository a good strategic option for disposal?" and "Do some disposal concepts perform significantly better with or without specific waste types or forms?" The study provides the bases for answering these questions by evaluating potential impacts of waste forms on the feasibility and performance of representative generic concepts for geologic disposal.
Current techniques for building detection in Synthetic Aperture Radar (SAR) imagery can be computationally expensive and/or enforce stringent requirements for data acquisition. We present a technique that is effective and efficient at determining an approximate building location from multi-pass single-pol SAR imagery. This approximate location provides focus-of-attention to specific image regions for subsequent processing. The proposed technique assumes that, for the desired image, a preprocessing algorithm has detected and labeled bright lines and shadows. Because we observe that buildings produce bright lines and shadows with predetermined relationships, our algorithm uses a graph clustering technique to find groups of bright lines and shadows that together indicate a building. The nodes of the graph represent bright line and shadow regions, while the arcs represent the relationships between the bright lines and shadows. Constraints based on the angle of depression and the relationships between connected bright lines and shadows are applied to remove unrelated arcs. Once the related bright lines and shadows are grouped, their locations are combined to provide an approximate building location. Experimental results are presented to demonstrate the outcome of this technique.
Stochastic media transport problems have long posed challenges for accurate modeling. Brute-force Monte Carlo or deterministic sampling of realizations can be expensive when the desired accuracy must be achieved. The well-known Levermore-Pomraning (LP) closure is very simple and inexpensive, but is inaccurate in many circumstances. We propose a generalization of the LP closure that may help bridge the gap between the two approaches. Our model consists of local calculations that approximately determine the relationship between ensemble-averaged angular fluxes and the corresponding averages at material interfaces. The expense and accuracy of the method are related to how "local" the model is and how much local detail it contains. We show through numerical results that our approach is more accurate than LP for benchmark problems, provided that we capture enough local detail. Thus we identify two approaches to using ensemble calculations for stochastic media calculations: direct averaging of ensemble results for transport quantities of interest, or indirect use via a generalized LP equation to determine those same quantities; in some cases the latter method is more efficient. However, the method is subject to creating ill-posed problems if insufficient local detail is included in the model.
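For reference, the binary-mixture LP equations that the proposed model generalizes take the literature form (purely absorbing Markovian mixture, standard notation)

\[
\boldsymbol{\Omega}\cdot\nabla\!\left(p_{i}\psi_{i}\right) + \sigma_{t,i}\,p_{i}\psi_{i}
= p_{i}S_{i} + \frac{p_{j}\psi_{j}}{\lambda_{j}} - \frac{p_{i}\psi_{i}}{\lambda_{i}},
\qquad i \ne j,
\]

where p_i is the volume fraction of material i, ψ_i the conditional ensemble-averaged angular flux, and λ_i the mean chord length. The LP closure assumes the interface flux equals the conditional ensemble average; the generalization described above replaces that assumption with relationships extracted from local calculations.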
A Monte Carlo solution method for the system of deterministic equations arising in the application of stochastic collocation (SCM) and stochastic Galerkin (SGM) methods in radiation transport computations with uncertainty is presented for an arbitrary number of materials each containing two uncertain random cross sections. Moments of the resulting random flux are calculated using an intrusive and a non-intrusive Monte Carlo based SCM and two different SGM implementations each with two different truncation methods and compared to the brute force Monte Carlo sampling approach. For the intrusive SCM and SGM, a single set of particle histories is solved and weight adjustments are used to produce flux moments for the stochastic problem. Memory and runtime scaling of each method is compared for increased complexity in stochastic dimensionality and moment truncation. Results are also compared for efficiency in terms of a statistical figure-of-merit. The memory savings for the total-order truncation method prove significant over the full-tensor-product truncation. Scaling shows relatively constant cost per moment calculated of SCM and tensor-product SGM. Total-order truncation may be worthwhile despite poorer runtime scaling by achieving better accuracy at lower cost. The figure-of-merit results show that all of the intrusive methods can improve efficiency for calculating low-order moments, but the intrusive SCM approach is the most efficient for calculating high-order moments.
In this paper, we describe the use of various methods of one-dimensional spectral compression by variable selection as well as principal component analysis (PCA) for compressing multi-dimensional sets of spectral data. We have examined methods of variable selection such as wavelength spacing, spectral derivatives, and spectral integration error. After variable selection, reduced transmission spectra must be decompressed for use. Here we examine various methods of interpolation, e.g., linear, cubic spline and piecewise cubic Hermite interpolating polynomial (PCHIP) to recover the spectra prior to estimating at-sensor radiance. Finally, we compressed multi-dimensional sets of spectral transmittance data from moderate resolution atmospheric transmission (MODTRAN) data using PCA. PCA seeks to find a set of basis spectra (vectors) that model the variance of a data matrix in a linear additive sense. Although MODTRAN data are intricate and are used in nonlinear modeling, their base spectra can be reasonably modeled using PCA yielding excellent results in terms of spectral reconstruction and estimation of at-sensor radiance. The major finding of this work is that PCA can be implemented to compress MODTRAN data with great effect, reducing file size, access time and computational burden while producing high-quality transmission spectra for a given set of input conditions.
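A minimal numpy sketch of the PCA compression and reconstruction step is shown below; it is generic (variable names and the synthetic test spectra are ours), not the authors' code or the MODTRAN data themselves.

```python
import numpy as np

def pca_compress(spectra, k):
    """Compress rows of `spectra` (n_spectra x n_wavelengths) to k PCA scores."""
    mean = spectra.mean(axis=0)
    U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
    basis = Vt[:k]                        # k basis spectra (principal components)
    scores = (spectra - mean) @ basis.T   # k coefficients per spectrum
    return mean, basis, scores

def pca_reconstruct(mean, basis, scores):
    """Recover approximate transmission spectra from the compressed form."""
    return mean + scores @ basis

# Example: 500 synthetic smooth spectra on 2000 wavelength bins, kept to 10 components
rng = np.random.default_rng(1)
wl = np.linspace(0.0, 1.0, 2000)
data = np.array([1 - a * np.exp(-((wl - c) / 0.05) ** 2)
                 for a, c in rng.uniform(0.1, 0.9, (500, 2))])
mean, basis, scores = pca_compress(data, k=10)
approx = pca_reconstruct(mean, basis, scores)
print("max reconstruction error:", np.abs(approx - data).max())
```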
Researchers at Sandia National Laboratories are integrating qualitative and quantitative methods from anthropology, human factors and cognitive psychology in the study of military and civilian intelligence analyst workflows in the United States’ national security community. Researchers who study human work processes often use qualitative theory and methods, including grounded theory, cognitive work analysis, and ethnography, to generate rich descriptive models of human behavior in context. In contrast, experimental psychologists typically do not receive training in qualitative induction, nor are they likely to practice ethnographic methods in their work, since experimental psychology tends to emphasize generalizability and quantitative hypothesis testing over qualitative description. However, qualitative frameworks and methods from anthropology, sociology, and human factors can play an important role in enhancing the ecological validity of experimental research designs.
We present simulation results that show circularly polarized light persists through scattering environments better than linearly polarized light. Specifically, we show persistence is enhanced through many scattering events in an environment with a size parameter representative of advection fog at infrared wavelengths. Utilizing polarization tracking Monte Carlo simulations we show a larger persistence benefit for circular polarization versus linear polarization for both forward and backscattered photons. We show the evolution of the incident polarization states after various scattering events which highlight the mechanism leading to circular polarization's superior persistence.
The construction of the Grand Ethiopian Renaissance Dam (GERD) has generated tensions between Egypt and Ethiopia over control of the Nile River in Northern Africa. However, tensions within Egypt have also been pronounced, leading up to and following the Arab Spring uprising of 2011. This study used the Behavior Influence Assessment (BIA) framework to simulate a dynamic hypothesis regarding how tensions within Egypt may evolve given the impacts of the GERD. Primarily, we addressed the interplay between four parties over an upcoming ten-year period: the Egyptian Regime, the Military-Elite, the Militant population, and the non-Militant population. The core tenet of the hypothesis is that rising food prices were a strong driver of the unrest leading up to the Arab Spring events and that this same type of economic stress could be driven by the GERD, albeit with different political undertones. Namely, the GERD offers the Regime a target for inciting nationalism, and while this may buy the Regime time to fix the underlying economic impacts, ultimately there exists a tipping point beyond which exponentially increasing unrest is unavoidable without implementing strong measures, such as state militarization.
Attitude diffusion occurs when "attitudes" (general, relatively enduring evaluative responses to a topic) spread through a population. Attitudes play an incredibly important role in human decision making and are a critical part of social psychology. However, existing models of diffusion do not account for key differentiating aspects of attitudes. We develop the "Multi-Agent, Multi-Attitude" (MAMA) model, which incorporates several of these key factors: (1) multiple, interacting attitudes; (2) social influence between individuals; and (3) media influence. All three components have strong support from the social science community. Using the MAMA model, we study influence maximization in an attitude diffusion setting where media influence is possible; we show that strategic manipulation of the media can lead to statistically significant decreases in the diffusion of attitudes.
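A heavily simplified toy model in the spirit of the description above is sketched below. The update rule, parameter names, and network construction are our own illustrative assumptions intended only to show the three ingredients (interacting attitudes, social influence, media influence); they are not the MAMA model's actual equations.

```python
import numpy as np

def step(attitudes, adjacency, coupling, media, alpha=0.3, beta=0.2, gamma=0.1):
    """One synchronous update of an n_agents x n_attitudes matrix in [-1, 1].

    adjacency: row-normalized social network weights (n x n)
    coupling : attitude-attitude interaction matrix (m x m)
    media    : broadcast attitude vector (m,)
    """
    social = adjacency @ attitudes          # neighbors' weighted positions
    internal = attitudes @ coupling.T       # interaction among an agent's own attitudes
    new = (1 - alpha - beta - gamma) * attitudes \
        + alpha * social + beta * internal + gamma * media
    return np.clip(new, -1.0, 1.0)

# Example: 100 agents, 3 interacting attitudes, sparse random network
rng = np.random.default_rng(2)
A = rng.random((100, 100)) < 0.05
A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalize
attitudes = rng.uniform(-1, 1, (100, 3))
coupling = np.array([[1.0, 0.2, 0.0], [0.2, 1.0, -0.3], [0.0, -0.3, 1.0]])
media = np.array([0.8, 0.0, -0.5])                    # strategically chosen broadcast
for _ in range(50):
    attitudes = step(attitudes, A, coupling, media)
print(attitudes.mean(axis=0))
```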
A microscale model of the brain was developed in order to understand the details of intracranial fluid cavitation and the damage mechanisms associated with cavitation bubble collapse due to blast-induced traumatic brain injury (TBI). Our macroscale model predicted cavitation in regions of high concentration of cerebrospinal fluid (CSF) and blood. The results from this macroscale simulation directed the development of the microscale model of the superior sagittal sinus (SSS) region. The microscale model includes layers of scalp, skull, dura, superior sagittal sinus, falx, arachnoid, subarachnoid spacing, pia, and gray matter. We conducted numerical simulations to understand the effects of a blast load applied to the scalp with the pressure wave propagating through the layers and eventually causing the cavitation bubbles to collapse. Collapse of these bubbles creates spikes in pressure and von Mises stress downstream from the bubble locations. We investigate the influence of cavitation bubble size, compressive wave amplitude, and internal bubble pressure. The results indicate that these factors may contribute to a greater downstream pressure and von Mises stress which could lead to significant tissue damage.
We describe the computational simulations and damage assessments that we provided in support of a tabletop exercise (TTX) at the request of NASA's Near-Earth Objects Program Office. The overall purpose of the exercise was to assess leadership reactions, information requirements, and emergency management responses to a hypothetical asteroid impact with Earth. The scripted exercise consisted of discovery, tracking, and characterization of a hypothetical asteroid; inclusive of mission planning, mitigation, response, impact to population, infrastructure and GDP, and explicit quantification of uncertainty. Participants at the meeting included representatives of NASA, Department of Defense, Department of State, Department of Homeland Security/Federal Emergency Management Agency (FEMA), and the White House. The exercise took place at FEMA headquarters. Sandia's role was to assist the Jet Propulsion Laboratory (JPL) in developing the impact scenario, to predict the physical effects of the impact, and to forecast the infrastructure and economic losses. We ran simulations using Sandia's CTH hydrocode to estimate physical effects on the ground, and to produce contour maps indicating damage assessments that could be used as input for the infrastructure and economic models. We used the FASTMap tool to provide estimates of infrastructure damage over the affected area, and the REAcct tool to estimate the potential economic severity expressed as changes to GDP (by nation, region, or sector) due to damage and short-term business interruptions.
Transmission electron microscopy (TEM) is a valuable methodology for investigating radiation-induced microstructural changes and elucidating the underlying mechanisms involved in the aging and degradation of nuclear reactor materials. However, the use of electrons for imaging may result in several inadvertent effects that can potentially change the microstructure and the mechanisms active in the material being investigated. In this study, in situ TEM characterization is performed on nanocrystalline nickel samples under self-ion irradiation and post-irradiation annealing. During annealing, voids form around 200 °C only in the area illuminated by the electron beam. Based on diffraction pattern analyses, it is hypothesized that the electron beam enhanced the growth of a NiO layer, resulting in a decrease of vacancy mobility during annealing. The electron beam used to investigate self-ion irradiation thus ultimately had a significant effect on the type of defects formed and on the final defect microstructure.