Ethylene carbonate (EC) and propylene carbonate (PC) are widely used solvents in lithium (Li)-ion batteries and supercapacitors. Ion dissolution and diffusion in these media are correlated with the solvents' dielectric responses. Here, we use all-atom molecular dynamics simulations of the pure solvents to calculate dielectric constants, relaxation times, and molecular mobilities. The computed results are compared with the limited available experimental data to support more exhaustive studies of these important characteristics. The observed agreement is encouraging and provides guidance for further validation of force-field simulation models for EC and PC solvents.
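As an illustration of the kind of post-processing such a study involves (a generic sketch, not the authors' code; the array names, units, and boundary-condition assumption are ours), the static dielectric constant can be estimated from fluctuations of the total box dipole moment under conducting (tin-foil) boundary conditions, and the relaxation time from the decay of the dipole autocorrelation function:

import numpy as np

kB = 1.380649e-23          # J/K
eps0 = 8.8541878128e-12    # F/m

def static_dielectric(M, V, T):
    """M: (n_frames, 3) total box dipole moments in C*m; V in m^3; T in K."""
    fluct = np.mean(np.sum(M**2, axis=1)) - np.sum(np.mean(M, axis=0)**2)
    return 1.0 + fluct / (3.0 * eps0 * V * kB * T)

def dipole_acf(M):
    """Normalized autocorrelation of M(t); its decay time gives the Debye relaxation time."""
    M0 = M - M.mean(axis=0)
    n = len(M0)
    acf = np.array([np.mean(np.sum(M0[:n - k] * M0[k:], axis=1)) for k in range(n // 2)])
    return acf / acf[0]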
Low- and high-voltage soliton waves were produced and used to demonstrate collision and compression on diode-based nonlinear transmission lines. Experiments demonstrate soliton addition and compression using homogeneous nonlinear lines. We built the nonlinear lines using commercially available diodes, chosen after their capacitance-versus-voltage dependence was incorporated into a model and the line design characteristics were calculated and simulated. Nonlinear ceramic capacitors were then used to demonstrate high-voltage pulse amplification and compression. The line is designed such that a simple capacitor-discharge input signal develops soliton trains in as few as 12 stages. We also demonstrated output voltages in excess of 40 kV using Y5V-based commercial capacitors. The results show some key features that determine efficient production of soliton trains in the kilovolt range.
Soil from an excavated test area was being surveyed and fresh DU fragments removed so that the soil could be returned to the excavation hole. Sandia RCTs discovered two locations where highly oxidized DU was present just below the undisturbed lake-bed surface. The oxidized DU was removed for disposal as radioactive waste; approximately one cubic foot of oxidized DU mixed with soil was removed. The Line Manager, Senior Manager, and Acting Director were all notified of this event.
This paper presents results from one- and two-dimensional direct numerical simulations under Reactivity Controlled Compression Ignition (RCCI) conditions of a primary reference fuel (PRF) mixture consisting of n-heptane and iso-octane. RCCI uses in-cylinder blending of two fuels with different autoignition characteristics to control combustion phasing and the rate of heat release. These simulations employ an improved model of compression heating through mass source/sink terms developed in a previous work by Bhagatwala et al. (2014), which incorporates feedback from the flow to follow a predetermined experimental pressure trace. Two-dimensional simulations explored parametric variations with respect to temperature stratification, pressure profiles, and n-heptane concentration. Statistics derived from analysis of diffusion/reaction balances locally normal to the flame surface were used to elucidate combustion characteristics for the different cases. Both deflagration and spontaneous ignition fronts were observed to coexist; however, it was found that higher n-heptane concentration produced a greater degree of flame propagation, whereas lower n-heptane concentration (higher fraction of iso-octane) resulted in more spontaneous ignition fronts. A significant finding was that simulations initialized with a uniform initial temperature and a stratified n-heptane concentration field resulted in a large fraction of combustion occurring through flame propagation. It was also found that the proportion of spontaneous ignition fronts increased at higher pressures due to shorter ignition delay when other factors were held constant. For the same pressure and fuel concentration, the contribution of flame propagation to the overall combustion was found to depend on the level of thermal stratification, with higher initial temperature gradients resulting in more deflagration and lower gradients generating more ignition fronts. Statistics of ignition delay were computed to assess the Zel'dovich (1980) theory for the mode of combustion propagation based on ignition-delay gradients.
Previous experiments have shown a link between oxidation and strength changes in single-crystal silicon nanostructures but provided no clues as to the mechanisms behind this relationship. Using atomic force microscope-based fracture strength experiments, molecular dynamics modeling, and measurement of oxide development with angle-resolved x-ray spectroscopy, we study the evolution of strength of silicon (111) surfaces as they oxidize and with fully developed oxide layers. We find that strength drops with partial oxidation but recovers when a fully developed oxide is formed, and that surfaces intentionally oxidized from the start maintain their high initial strengths. MD simulations show that strength decreases with the height of atomic layer steps on the surface. These results are corroborated by a completely separate line of testing using micro-scale polysilicon devices and the slack-chain method, in which strength recovers over a long period of exposure to the atmosphere. Combining our results with insights from prior experiments, we conclude that the previously described strength decrease is a result of oxidation-induced roughening of an initially flat silicon (111) surface and that this effect is transient, a result consistent with the observation that surfaces flatten upon full oxidation.
2015 International 3D Systems Integration Conference, 3DIC 2015
Wyers, Eric J.; Harris, T.R.; Pitts, W.S.; Massad, Jordan; Franzon, Paul D.
The stress impact of the CMOS and III-V heterogeneous integration environment on device electrical performance is being characterized. Measurements from a partial heterogeneous integration fabrication run will be presented to provide insight into how the backside source vias, alternatively referred to as through-silicon-carbide vias (TSCVs), used within the heterogeneous integration environment impact GaN HEMT device-level DC performance.
Osborn, David L.; Gozem, Samer; Gunina, Anastasia O.; Ichino, Takatoshi; Stanton, John F.; Krylov, Anna I.
The calculation of absolute total cross sections requires accurate wave functions of the photoelectron and of the initial and final states of the system. The essential information contained in the latter two can be condensed into a Dyson orbital. We employ correlated Dyson orbitals and test approximate treatments of the photoelectron wave function, that is, plane and Coulomb waves, by comparing computed and experimental photoionization and photodetachment spectra. We find that in anions, a plane wave treatment of the photoelectron provides a good description of photodetachment spectra. For photoionization of neutral atoms or molecules with one heavy atom, the photoelectron wave function must be treated as a Coulomb wave to account for the interaction of the photoelectron with the +1 charge of the ionized core. For larger molecules, the best agreement with experiment is often achieved by using a Coulomb wave with a partial (effective) charge smaller than unity. This likely derives from the fact that the effective charge at the centroid of the Dyson orbital, which serves as the origin of the spherical wave expansion, is smaller than the total charge of a polyatomic cation. The results suggest that accurate molecular photoionization cross sections can be computed with a modified central potential model that accounts for the nonspherical charge distribution of the core by adjusting the charge in the center of the expansion.
Dumitrescu, Cosmin E.; Polonowski, Christopher J.; Fisher, Brian T.; Lilik, Gregory K.; Mueller, Charles J.
In this study, elastic scattering was employed to investigate diesel fuel property effects on the liquid length (i.e., the maximum extent of in-cylinder liquid-phase fuel penetration) using select research fuels: an ultralow-sulfur #2 diesel emissions-certification fuel (CF) and four of the Coordinating Research Council (CRC) Fuels for Advanced Combustion Engines (FACE) diesel fuels (F1, F2, F6, and F8). The experiments were performed in a single-cylinder heavy-duty optical compression-ignition engine under time-varying, noncombusting conditions to minimize the influence of chemical heat release on the liquid-length measurement. The FACE diesel fuel and CF liquid lengths under combusting conditions were also predicted using the Siebers scaling law and pressure data from previous work using the same fuels at similar in-cylinder conditions. The objective was to observe whether the liquid length under noncombusting or combusting conditions provides additional insights into the relationships among the main fuel properties (i.e., cetane number (CN), the 90 vol % distillation recovery temperature (T90), and aromatic content) and smoke emissions. Results suggest that liquid-length values are best correlated to fuel distillation characteristics measured with ASTM D2887 (simulated distillation method). This work also studied the relationship between liquid length and lift-off length, H (i.e., the distance from the fuel-injector orifice exit to the position where the standing premixed autoignition zone stabilizes during mixing-controlled combustion). Two possible cases were identified based on the relative magnitudes of the liquid length under combusting conditions (Lc) and H. The low-CN fuels are representative of the first case, Lc < H, in which the fuel is always fully vaporized at H. The high-CN fuels are mostly representative of the second case, Lc ≥ H, in which there is still liquid fuel at H. Lc ≥ H would suggest higher smoke emissions, but there is not enough evidence in this work to support a compounding effect of a longer liquid length on top of the aromatic-content effect on smoke emissions for fuels with similar CN, supporting previous findings in the literature that lift-off length plays a more important role than liquid length in diesel combustion. At the same time, the experimental results suggest a decrease in the fuel-jet spreading angle, i.e., a decrease in the entrainment rate into the jet at and downstream of H, under combusting conditions, that is not accounted for in the model used to predict the values of φ(H), the cross-sectional average equivalence ratio at the lift-off length. As a result, Lc may be of interest for accurate predictions of φ(H), especially for combustion strategies designed to lower in-cylinder soot by operating near or below the nonsooting φ(H) value (i.e., φ(H) ≈ 2).
Illumination by a narrow-band laser has been shown to enable photoelectrochemical (PEC) etching of InGaN thin films into quantum dots (QDs) with sizes controlled by the laser wavelength. Here, we investigate and elucidate the influence of solution pH on such a quantum-size-controlled PEC etch process. We find that although a pH above 5 is often used for PEC etching of GaN-based materials, oxides (In2O3 and/or Ga2O3) form that interfere with quantum dot formation. At pH below 3, however, oxide-free QDs with self-terminated sizes can be successfully realized.
The P,P-chelated heteroleptic complex bis[bis(diisopropylphosphino)amido]indium chloride [(i-Pr2P)2N]2InCl was prepared in high yield by treating InCl3 with 2 equiv of (i-Pr2P)2NLi in Et2O/tetrahydrofuran solution. Samples of [(i-Pr2P)2N]2InCl in a pentane slurry, a CH2Cl2 solution, or in the solid state were exposed to CO2, resulting in the insertion of CO2 into two of the four M-P bonds to produce [O2CP(i-Pr2)NP(i-Pr2)]2InCl in each case. Compounds were characterized by multinuclear NMR and IR spectroscopy, as well as single-crystal X-ray diffraction. ReactIR solution studies show that the reaction is complete in less than 1 min at room temperature in solution and in less than 2 h in the solid-gas reaction. The CO2 complex is stable up to at least 60°C under vacuum, but the starting material is regenerated with concomitant loss of carbon dioxide upon heating above 75°C. The compound [(i-Pr2P)2N]2InCl also reacts with CS2 to give a complicated mixture of products, one of which was identified as the CS2 cleavage product {[S=P(i-Pr2)NP(i-Pr2)]2InCl}2(μ-Cl)[μ-(i-Pr2P)2N].
Kinetic, time-dependent, electromagnetic, particle-in-cell simulations of the inductive current divider are presented. The inductive current divider is a passive method for controlling the trajectory of an intense, hollow electron beam using a vacuum structure that inductively splits the beam’s return current. The current divider concept was proposed and studied theoretically in a previous publication [Phys. Plasmas 22, 023107 (2015)]. A central post carries a portion of the return current (I1) while the outer conductor carries the remainder (I2), with the injected beam current given by Ib = I1 + I2. The simulations are in agreement with the theory, which predicts that the total force on the beam trajectory is proportional to (I2 - I1) and the force on the beam envelope is proportional to Ib. For a fixed central post, the beam trajectory is controlled by varying the outer conductor radius, which changes the inductance in the return-current path. The simulations show that the beam emittance is approximately constant as the beam propagates through the current divider to the target. Independent control over both the current density and the beam angle at the target is therefore possible by choosing the appropriate return-current geometry.
The formation of thin film superlattices consisting of alternating layers of nitrogen-doped SiC (SiC:N) and C is reported. Periodically terminating the SiC:N surface with a graphitic C boundary layer and controlling the SiC:N/C thickness ratio yield nanocrystalline SiC grains ranging in size from 365 to 23 nm. Frequency domain thermo-reflectance is employed to determine the thermal conductivity, which is found to vary from 35.5 W m-1 K-1 for monolithic undoped α-SiC films to 1.6 W m-1 K-1 for a SiC:N/C superlattice with a 47 nm period and a SiC:N/C thickness ratio of 11. A series conductance model is employed to explain the dependence of the thermal conductivity on the superlattice structure. The results indicate that the thermal conductivity is more dependent on the SiC:N/C thickness ratio than the SiC:N grain size, indicative of strong boundary layer phonon scattering.
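A minimal sketch of a series-conductance estimate of this kind (the parameter names and example values below are illustrative assumptions, not the paper's fitted quantities): each superlattice period contributes the intrinsic resistance of the SiC:N and C layers plus a boundary resistance at each SiC:N/C interface.

import numpy as np

def k_effective(d_sic, d_c, k_sic, k_c, R_int):
    """d_* in m, k_* in W/m/K, R_int in m^2 K/W; returns effective conductivity in W/m/K."""
    period = d_sic + d_c
    R_total = d_sic / k_sic + d_c / k_c + 2.0 * R_int   # two interfaces per period
    return period / R_total

# Example: ~47 nm period with a SiC:N/C thickness ratio near 11 (values illustrative only).
print(k_effective(d_sic=43e-9, d_c=4e-9, k_sic=35.0, k_c=2.0, R_int=1e-9))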
The study of mineral-water interfaces is of great importance to a variety of applications including oil and gas extraction, gas subsurface storage, environmental contaminant treatment, and nuclear waste repositories. Understanding the fundamentals of that interface is key to the success of those applications. Confinement of water in the interlayer of smectite clay minerals provides a unique environment to examine the interactions among water molecules, interlayer cations, and clay mineral surfaces. Smectite minerals are characterized by a relatively low layer charge that allows the clay to swell with increasing water content. Montmorillonite and beidellite varieties of smectite were investigated to compare the impact of the location of layer charge on the interlayer structure and dynamics. Inelastic neutron scattering of hydrated and dehydrated cation-exchanged smectites was used to probe the dynamics of the interlayer water (200-900 cm-1 spectral region) and identify the shift in the librational edge as a function of the interlayer cation. Molecular dynamics simulations of equivalent phases and power spectra, derived from the resulting molecular trajectories, indicate a general shift in the librational behavior with interlayer cation that is generally consistent with the neutron scattering results for the monolayer hydrates. Both neutron scattering and power spectra exhibit librational structures affected by the location of layer charge and by the charge of the interlayer cation. Divalent cations (Ba2+ and Mg2+) characterized by large hydration enthalpies typically exhibit multiple broad librational peaks compared to monovalent cations (Cs+ and Na+), which have relatively small hydration enthalpies.
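A generic sketch of how such a librational power spectrum can be obtained from an MD trajectory (an illustrative workflow, not the authors' code; the array shapes and time step are assumed inputs): the spectrum is taken as the Fourier transform of the water hydrogen velocity autocorrelation function.

import numpy as np

def power_spectrum(v, dt):
    """v: (n_frames, n_atoms, 3) hydrogen velocities; dt: time step in s.
    Returns frequencies in cm^-1 and the (unnormalized) spectral intensity."""
    n = v.shape[0]
    # velocity autocorrelation averaged over atoms and Cartesian components
    vacf = np.array([np.mean(np.sum(v[:n - k] * v[k:], axis=2)) for k in range(n // 2)])
    vacf /= vacf[0]
    spectrum = np.abs(np.fft.rfft(vacf))
    freq_cm1 = np.fft.rfftfreq(len(vacf), d=dt) / 2.99792458e10   # Hz -> cm^-1
    return freq_cm1, spectrum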
Proceedings of ISAV 2015: 1st International Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, Held in conjunction with SC 2015: The International Conference for High Performance Computing, Networking, Storage and Analysis
We present an architecture for high-performance computers that integrates in situ analysis of hardware and system monitoring data with application-specific data to reduce application runtimes and improve overall platform utilization. Large-scale high-performance computing systems typically use monitoring as a tool unrelated to application execution. Monitoring data flows from sampling points to a centralized off-system machine for storage and post-processing when root-cause analysis is required. Along the way, it may also be used for instantaneous threshold-based error detection. Applications can know their own state and possibly the state of their allocated resources, but they typically have no insight into the state of globally shared resources that may affect their execution. By analyzing performance data in situ rather than off-line, we enable applications to make real-time decisions about their resource utilization. We address the particular case of in situ network congestion analysis and its potential to improve task placement and data partitioning. We present several design and analysis considerations.
Application resilience is a key challenge that has to be addressed to realize the exascale vision. Online recovery, even when it involves all processes, can dramatically reduce the overhead of failures as compared to the more traditional approach where the job is terminated and restarted from the last checkpoint. In this paper we explore how local recovery can be used for certain classes of applications to further reduce overheads due to resilience. Specifically we develop programming support and scalable runtime mechanisms to enable online and transparent local recovery for stencil-based parallel applications on current leadership class systems. We also show how multiple independent failures can be masked to effectively reduce the impact on the total time to solution. We integrate these mechanisms with the S3D combustion simulation, and experimentally demonstrate (using the Titan Cray-XK7 system at ORNL) the ability to tolerate high failure rates (i.e., node failures every 5 seconds) with low overhead while sustaining performance, at scales up to 262144 cores.
Herein, we show how introducing a small amount of gas can completely change the motion of a solid object in a viscous liquid during vibration. We analyze an idealized system exhibiting this behavior: a piston moving in a liquid-filled housing, where the gaps between the piston and the housing are narrow and depend on the piston position. Recent experiments have shown that vibration causes some gas to move below the piston and the piston to subsequently move downward and compress its supporting spring. Here, we analyze the analogous but simpler situation in which the gas regions are replaced by bellows with similar pressure-volume relationships. We show that these bellows form a spring (analogous to the pneumatic spring formed by the gas regions) which enables the piston and the liquid to oscillate in a mode that does not exist without this spring. This mode is referred to here as the Couette mode because the liquid in the gaps moves essentially in Couette flow (i.e., with almost no component of Poiseuille flow). Since Couette flow by itself produces extremely low damping, the Couette mode has a strong resonance. We show that, near this resonance, the dependence of the gap geometry on the piston position produces a large rectified (net) force on the piston during vibration. This rectified force can be much larger than the piston weight and the strength of its supporting spring, and it acts in the direction that decreases the flow resistance of the gap geometry.
We consider techniques to improve the performance of parallel sparse triangular solution on non-uniform memory architecture multicores by extending earlier coloring and level-set schemes for single-core multiprocessors. We develop STS-k, where k represents a small number of transformations for latency reduction from increased spatial and temporal locality of data accesses. We propose a graph model of data reuse to inform the development of STS-k and to prove that computing an optimal-cost schedule is NP-complete. We observe significant speed-ups with STS-3 on 32-core Intel Westmere-EX and 24-core AMD 'Magny-Cours' processors. Incremental gains solely from the 3-level transformations in STS-3 for a fixed ordering correspond to reductions in execution times by factors of 1.4 (Intel) and 1.5 (AMD) for level sets and 2 (Intel) and 2.2 (AMD) for coloring. On average, execution times are reduced by a factor of 6 (Intel) and 4 (AMD) for STS-3 with coloring compared to a reference implementation using level sets.
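For context, the classic level-set scheduling idea that STS-k extends can be summarized in a few lines (a simplified sketch, not the STS-k implementation): rows of a lower-triangular system are grouped into levels such that all rows within a level can be solved in parallel.

import numpy as np
from scipy.sparse import csr_matrix

def level_sets(L):
    """L: square lower-triangular CSR matrix. Returns a list of row lists (levels)."""
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [j for j in L.indices[L.indptr[i]:L.indptr[i + 1]] if j < i]
        level[i] = 1 + max((level[j] for j in deps), default=-1)
    sets = [[] for _ in range(int(level.max()) + 1)]
    for i in range(n):
        sets[level[i]].append(i)
    return sets

# Tiny example: rows 0 and 1 are independent; row 2 depends on 0; row 3 on 2.
L = csr_matrix(np.array([[1., 0, 0, 0],
                         [0, 1., 0, 0],
                         [2., 0, 1., 0],
                         [0, 0, 3., 1.]]))
print(level_sets(L))   # -> [[0, 1], [2], [3]]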
Proceedings of ISAV 2015: 1st International Workshop on In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization, Held in conjunction with SC 2015: The International Conference for High Performance Computing, Networking, Storage and Analysis
Next-generation architectures necessitate a shift away from traditional workflows in which the simulation state is saved at prescribed frequencies for post-processing analysis. While the need to shift to in situ workflows has been acknowledged for some time, much of the current research is focused on static workflows, where the analysis that would have been done as a post-process is performed concurrently with the simulation at user-prescribed frequencies. More recently, research efforts have been striving to enable adaptive workflows, in which the frequency, composition, and execution of computational and data manipulation steps dynamically depend on the state of the simulation. Adapting the workflow to the state of the simulation in such a data-driven fashion puts extremely strict efficiency requirements on the analysis capabilities that are used to identify the transitions in the workflow. In this paper we build upon earlier work on trigger detection using sublinear techniques to drive adaptive workflows, and we propose a methodology to detect the time when sudden heat release occurs in simulations of turbulent combustion. Our proposed method provides an alternative metric that can be used along with our former metric to increase the robustness of trigger detection. We show the effectiveness of our metric empirically for predicting heat release for two use cases.
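As a purely illustrative sketch of a trigger of this general kind (this is not the sublinear metric proposed in the paper; the sampling strategy, statistic, and threshold below are assumptions), one can monitor a cheaply computed statistic of the heat-release-rate field and flag a sudden jump between workflow steps:

import numpy as np

def heat_release_trigger(hrr_prev, hrr_curr, threshold=2.0):
    """hrr_*: 1-D arrays of heat release rate sampled at a sparse subset of grid points.
    Returns True if a sudden rise is detected between consecutive workflow steps."""
    prev = np.percentile(hrr_prev, 99)
    curr = np.percentile(hrr_curr, 99)
    return curr > threshold * max(prev, 1e-30)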
Proceedings of E2SC 2015: 3rd International Workshop on Energy Efficient Supercomputing - Held in conjunction with SC 2015: The International Conference for High Performance Computing, Networking, Storage and Analysis
Power consumption of extreme-scale supercomputers has become a key performance bottleneck, yet current practices do not leverage power management opportunities, instead running at maximum power. This is not sustainable. Future systems will need to manage power as a critical resource, directing it to where it has the greatest benefit. Power capping is one mechanism for managing power budgets; however, its behavior is not well understood. This paper presents an empirical evaluation of several key HPC workloads running under a power cap on a Cray XC40 system and provides a comparison of this technique with p-state control, demonstrating the performance differences of each. These results show that: (1) maximum performance requires ensuring the cap is not reached; (2) performance slowdown under a cap can be attributed to cascading delays, which result in unsynchronized performance variability across nodes; and (3) due to lag in reaction time, considerable time is spent operating above the set cap. This work provides a timely and much-needed comparison of HPC application performance under a power cap and attempts to enable users and system administrators to understand how best to optimize application performance on power-constrained HPC systems.
The SunShot Initiative is focused on reducing cost to improve competitiveness with respect to other electricity generation options. The goal of the Sandia Transmission Grid Integration (TGI) program is to reduce grid access barriers for solar generation. Sandia’s three-year TGI work was divided into five objectives.
Knowledge of nanoscale heteroepitaxy is continually evolving as advances in material synthesis reveal new mechanisms that have not been theoretically predicted and are different than what is known about planar structures. In addition to a wide range of potential applications, core/shell nanowire structures offer a useful template to investigate heteroepitaxy at the atomistic scale. We show that the growth of a Ge shell on a Si core can be tuned from the theoretically predicted island growth mode to a conformal, crystalline, and smooth shell by careful adjustment of growth parameters in a narrow growth window that has not been explored before. In the latter growth mode, Ge adatoms preferentially nucleate islands on the {113} facets of the Si core, which outgrow over the {220} facets. Islands on the low-energy {111} facets appear to have a nucleation delay compared to the {113} islands; however, they eventually coalesce to form a crystalline conformal shell. Synthesis of epitaxial and conformal Si/Ge/Si core/multishell structures enables us to fabricate unique cylindrical ring nanowire field-effect transistors, which we demonstrate to have steeper on/off characteristics than conventional core/shell nanowire transistors.
It is challenging to obtain scalable HPC performance on real applications, especially for data science applications with irregular memory access and computation patterns. To drive co-design efforts in architecture, system, and application design, we are developing miniapps representative of data science workloads. These in turn stress the state of the art in Graph BLAS-like Graph Algorithm Building Blocks (GABB). In this work, we outline a Graph BLAS-like, linear algebra based approach to miniTri, one such miniapp. We describe a task-based prototype implementation and give initial scalability results.
Low- and high-energy proton experimental data and error rate predictions are presented for many bulk Si and SOI circuits from the 20-90 nm technology nodes to quantify how much low-energy protons (LEPs) can contribute to the total on-orbit single-event upset (SEU) rate. Every effort was made to predict LEP error rates that are conservatively high; even secondary protons generated in the spacecraft shielding have been included in the analysis. Across all the environments and circuits investigated, and when operating within 10% of the nominal operating voltage, LEPs were found to increase the total SEU rate by up to a factor of 4.3 relative to the rate in the absence of LEPs. Therefore, the best approach to account for LEP effects may be to calculate the total error rate from high-energy protons and heavy ions, and then multiply it by a safety margin of 5. If that error rate can be tolerated, then our findings suggest that it is justified to waive LEP tests in certain situations. Trends were observed in the LEP angular responses of the circuits tested: grazing angles were the worst case for the SOI circuits, whereas the worst-case angle was at or near normal incidence for the bulk circuits.
Carbon nanodots (CDs) have generated enormous excitement because of their superior water solubility, chemical inertness, low toxicity, ease of functionalization, and resistance to photobleaching. Here we report a facile thermal pyrolysis route to prepare CDs with high quantum yield (QY) using citric acid as the carbon source and ethylene diamine derivatives (EDAs), including triethylenetetramine (TETA), tetraethylenepentamine (TEPA), and polyene polyamine (PEPA), as the passivation agents. We find that the CDs prepared from EDAs such as TETA, TEPA, and PEPA show relatively high photoluminescence (PL) QY (11.4, 10.6, and 9.8%, respectively) at an excitation wavelength (λex) of 465 nm. The cytotoxicity of the CDs has been investigated through in vitro and in vivo bio-imaging studies. The results indicate that these CDs possess low toxicity and good biocompatibility. The unique properties of the resulting CDs, such as the high PL QY at long excitation wavelength and their low toxicity, make them promising fluorescent nanoprobes for applications in optical bio-imaging and biosensing.
Nanostructuring has been proposed as a method to enhance radiation tolerance, but many metallic systems are rejected due to significant concerns regarding long term grain boundary and interface stability. This work utilized recent advancements in transmission electron microscopy (TEM) to quantitatively characterize the grain size, texture, and individual grain boundary character in a nanocrystalline gold model system before and after in situ TEM ion irradiation with 10 MeV Si. The initial experimental measurements were fed into a mesoscale phase field model, which incorporates the role of irradiation-induced thermal events on boundary properties, to directly compare the observed and simulated grain growth with varied parameters. The observed microstructure evolution deviated subtly from previously reported normal grain growth in which some boundaries remained essentially static. In broader terms, the combined experimental and modeling techniques presented herein provide future avenues to enhance quantification and prediction of the thermal, mechanical, or radiation stability of grain boundaries in nanostructured crystalline systems.
Water and oxygen electrochemistry lies at the heart of interfacial processes controlling energy transformations in fuel cells, electrolyzers, and batteries. Here, by comparing results for the oxygen reduction reaction (ORR) obtained in alkaline aqueous media to those obtained in ultradry organic electrolytes with known amounts of H2O added intentionally, we propose a new rationale in which water itself plays an important role in determining the reaction kinetics. This effect derives from the formation of HOad···H2O (aqueous solutions) and LiO2···H2O (organic solvents) complexes that place water in a configurationally favorable position for proton transfer to weakly adsorbed intermediates. We also find that, even at low concentrations (<10 ppm), water acts simultaneously as a promoter and as a catalyst in the production of Li2O2, regenerating itself through a sequence of steps that include the formation and recombination of H+ and OH-. We conclude that, although the binding energy between metal surfaces and oxygen intermediates is an important descriptor in electrocatalysis, understanding the role of water as a proton-donor reactant may explain many anomalous features in electrocatalysis at metal-liquid interfaces.
The moon-forming impact and the subsequent evolution of the proto-Earth are strongly dependent on the properties of materials at the extreme conditions generated by this violent collision. We examine the high-pressure behavior of MgO, one of the dominant constituents in Earth's mantle, using high-precision, plate impact shock compression experiments performed on Sandia National Laboratories' Z Machine and extensive quantum calculations using density functional theory (DFT) and quantum Monte Carlo (QMC) methods. The combined data span from ambient conditions to 1.2 TPa and 42 000 K, showing solid-solid and solid-liquid phase boundaries. Furthermore, our results indicate that under shock compression the solid and liquid phases coexist over a range of more than 100 GPa, pushing complete melting to pressures in excess of 600 GPa. The high pressure required for complete shock melting has implications for a broad range of planetary collision events.
This work represents the first complete analysis of the use of a racetrack resonator to measure the insertion loss of efficient, compact photonic components. Beginning with an in-depth analysis of potential error sources and a discussion of the calibration procedure, the technique is used to estimate the insertion loss of waveguide width tapers of varying geometry with a resulting 95% confidence interval of 0.007 dB. The work concludes with a performance comparison of the analyzed tapers with results presented for four taper profiles and three taper lengths.
Carrier lifetime and dark current measurements are reported for a mid-wavelength infrared InAs0.91Sb0.09 alloy nBn photodetector. Minority carrier lifetimes are measured using a non-contact time-resolved microwave technique on unprocessed portions of the nBn wafer and the Auger recombination Bloch function parameter is determined to be |F1F2|=0.292. The measured lifetimes are also used to calculate the expected diffusion dark current of the nBn devices and are compared with the experimental dark current measured in processed photodetector pixels from the same wafer. Excellent agreement is found between the two, highlighting the important relationship between lifetimes and diffusion currents in nBn photodetectors.
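For context, a commonly used textbook relation connecting the measured minority-carrier lifetime to the diffusion-limited dark current density in an n-type absorber is quoted below as a generic sketch; the paper's exact expression and the absorber thickness $d$, doping $N_d$, and intrinsic concentration $n_i$ are not reproduced from the source:
$$ J_{\mathrm{diff}} \approx \frac{q\, n_i^{2}\, d}{N_d\, \tau}, $$
so a longer measured lifetime $\tau$ directly implies a lower expected diffusion dark current, which is the comparison made above.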
Semiconducting nanowires have been explored for a number of applications in optoelectronics such as photodetectors and solar cells. Currently, there is ample interest in identifying the mechanisms that lead to photoresponse in nanowires in order to improve and optimize performance. However, distinguishing among the different mechanisms, including photovoltaic, photothermoelectric, photoemission, bolometric, and photoconductive, is often difficult using purely optoelectronic measurements. In this work, we present an approach for performing combined and simultaneous thermoelectric and optoelectronic measurements on the same individual nanowire. We apply the approach to GaN/AlGaN core/shell and GaN/AlGaN/GaN core/shell/shell nanowires and demonstrate the photothermoelectric nature of the photocurrent observed at the electrical contacts at zero bias, for above- and below-bandgap illumination. Furthermore, the approach allows for the experimental determination of the temperature rise due to laser illumination, which is often obtained indirectly through modeling. We also show that under bias, both above- and below-bandgap illumination leads to a photoresponse in the channel with signatures of persistent photoconductivity due to photogating. Finally, we reveal the concomitant presence of photothermoelectric and photogating phenomena at the contacts in scanning photocurrent microscopy under bias by using their different temporal responses. More broadly, our approach is applicable to a broad range of nanomaterials to elucidate their fundamental optoelectronic and thermoelectric properties.
We report that methane, CH4, can be used as an efficient F-state quenching gas for trapped ytterbium ions. The quenching rate coefficient is measured to be (2.8 ± 0.3) × 10^6 s^-1 Torr^-1. For applications that use microwave hyperfine transitions of the ground-state 171Yb ions, the CH4-induced frequency shift coefficient and the decoherence rate coefficient are measured as δν/ν = (-3.6 ± 0.1) × 10^-6 Torr^-1 and 1/T2 = (1.5 ± 0.2) × 10^5 s^-1 Torr^-1. In our buffer-gas-cooled 171Yb+ microwave clock system, we find that only ≤10^-8 Torr of CH4 is required under normal operating conditions to efficiently clear the F-state and maintain ≥85% of trapped ions in the ground state with insignificant pressure shift and collisional decoherence of the clock resonance.
This document presents the facility-recommended characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the Annular Core Research Reactor (ACRR) for the central cavity free-field environment with the 32-inch pedestal at the core centerline. The designation for this environment is ACRR-FF-CC-32-cl. The neutron, prompt gamma-ray, and delayed gamma-ray energy spectra, uncertainties, and covariance matrices are presented as well as radial and axial neutron and gamma-ray fluence profiles within the experiment area of the cavity. Recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulse operations are presented with conversion examples.
Maksud, M.; Yoo, J.; Harris, Charles T.; Palapati, N.K.R.; Subramanian, A.
This paper reports a diameter-independent Young's modulus (YM) of 91.9 ± 8.2 GPa for [111] germanium nanowires (Ge NWs). When the surface oxide layer is accounted for using a core-shell NW approximation, the YM of the Ge core approaches a near-theoretical value of 147.6 ± 23.4 GPa. The ultimate strength of a NW device was measured at 10.9 GPa, which represents a very high experimental-to-theoretical strength ratio of ∼75%. With increasing interest in this material system as a high-capacity lithium-ion battery anode, the presented data provide inputs that are essential for predicting its lithiation-induced stress fields and fracture behavior.
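One simple way such a core-shell correction can be made (a generic axial rule-of-mixtures sketch; the paper's actual composite treatment, e.g., a flexural-rigidity weighting for bending-type tests, may differ) is
$$ E_{\mathrm{meas}}\, r_{\mathrm{NW}}^{2} = E_{\mathrm{core}}\, r_{\mathrm{core}}^{2} + E_{\mathrm{ox}}\left(r_{\mathrm{NW}}^{2} - r_{\mathrm{core}}^{2}\right), \qquad r_{\mathrm{NW}} = r_{\mathrm{core}} + t_{\mathrm{ox}}, $$
which is solved for $E_{\mathrm{core}}$ once the oxide thickness $t_{\mathrm{ox}}$ and an assumed oxide modulus $E_{\mathrm{ox}}$ are known.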
Reliable methods for tin whisker mitigation are needed for applications that utilize tin-plated commercial components. Tin can grow whiskers that can lead to electrical shorting, possibly causing critical systems to fail catastrophically. The mechanisms of tin whisker growth are unclear and this makes prediction of the lifetimes of critical components uncertain. The development of robust methods for tin whisker mitigation is currently the best approach to eliminating the risk of shorting. Current mitigation methods are based on unfilled polymer coatings that are not impenetrable to tin whiskers. In this paper we report tin whisker mitigation results for several filled polymer coatings. The whisker-penetration resistance of the coatings was evaluated at elevated temperature and high humidity and under temperature cycling conditions. The composite coatings comprised Ni and MgF2-coated Al/Ni/Al platelets in epoxy resin or silicone rubber. In addition to improved whisker mitigation, these platelet composites have enhanced thermal conductivity and dielectric constant compared with unfilled polymers.
Imaging systems that include a specific source, imaging concept, geometry, and detector have unique properties such as signal-to-noise ratio, dynamic range, spatial resolution, distortions, and contrast. Some of these properties are inherently connected, particularly dynamic range and spatial resolution. It must be emphasized that spatial resolution is not a single number but must be seen in the context of dynamic range and consequently is better described by a function or distribution. We introduce the "dynamic granularity" G_dyn as a standardized, objective relation between a detector's spatial resolution (granularity) and dynamic range for complex imaging systems in a given environment rather than the widely found characterization of detectors such as cameras or films by themselves. This relation can partly be explained through consideration of the signal's photon statistics, background noise, and detector sensitivity, but a comprehensive description including some unpredictable data such as dust, damages, or an unknown spectral distribution will ultimately have to be based on measurements. Measured dynamic granularities can be objectively used to assess the limits of an imaging system's performance including all contributing noise sources and to qualify the influence of alternative components within an imaging system. This article explains the construction criteria to formulate a dynamic granularity and compares measured dynamic granularities for different detectors used in the X-ray backlighting scheme employed at Sandia's Z-Backlighter facility.
Antenna apertures that are tapered for sidelobe control can also be parsed into subapertures for Direction of Arrival (DOA) measurements. However, the aperture tapering complicates phase center location for the subapertures, knowledge of which is critical for proper DOA calculation. In addition, tapering affects subaperture gains, making gain dependent on subaperture position. Techniques are presented to calculate subaperture phase center locations, and algorithms are given for equalizing subapertures’ gains. Sidelobe characteristics and mitigation are also discussed.
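A minimal sketch of one common way to estimate a tapered subaperture's phase center (illustrative only; the variable names and example taper are assumptions, and the paper's techniques may differ): approximate each subaperture's phase center as the amplitude-weighted centroid of the taper over that subaperture.

import numpy as np

def subaperture_phase_center(x, w, lo, hi):
    """x: element positions (m), w: taper amplitudes, [lo, hi): element index range."""
    xs, ws = x[lo:hi], w[lo:hi]
    return np.sum(xs * ws) / np.sum(ws)

# Example: cosine-on-pedestal taper split into left/right subapertures.
x = np.linspace(-0.5, 0.5, 64)
w = 0.3 + 0.7 * np.cos(np.pi * x) ** 2
print(subaperture_phase_center(x, w, 0, 32),    # pulled toward the aperture center
      subaperture_phase_center(x, w, 32, 64))   # relative to the geometric centers at ±0.25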
The transient degradation of semiconductor device performance under irradiation has long been an issue of concern. A single high-energy charged particle can degrade or permanently destroy a microelectronic component, potentially altering the course or function of the systems it serves. Disruption of the crystalline structure through the introduction of quasi-stable defect structures can change material properties from semiconducting to conducting. Typically, the initial defect formation phase is followed by a recovery phase in which defect-defect or defect-dopant interactions modify the characteristics of the damaged structure. In this LDRD Express project, in-situ ion irradiation transmission electron microscopy (TEM) experiments combined with atomistic simulations have been conducted to determine the feasibility of imaging and characterizing the defect structure resulting from a single cascade in silicon. The in-situ TEM experiments demonstrate that a single ion strike can be observed in Si thin films with nanometer resolution in real time using the in-situ ion irradiation transmission electron microscope (I3TEM). Parallel to this experimental effort, ion implantation has been numerically simulated using molecular dynamics (MD). This numerical framework provides detailed predictions of the damage and follows its evolution during the first nanoseconds. The experimental results demonstrate that a single ion strike can be observed in prototypical semiconductors.
This report examines the technical elements necessary to evaluate EBS concepts and perform thermal analysis of DOE-managed SNF and HLW in the disposal settings of primary interest – argillite, crystalline, salt, and deep borehole. As a disposal design concept is composed of the waste inventory, the geologic setting, and the engineered concept of operation, the engineered barrier system (EBS) falls under the last component, the engineered concept of operation. The waste inventory for DOE-managed HLW and SNF is closely examined, with specific attention to the number of waste packages, the size of waste packages, and the thermal output per package. As expected, the DOE-managed HLW and SNF inventory has a much smaller volume, and hence a smaller number of canisters, as well as a lower thermal output, relative to a waste inventory that would include commercial spent nuclear fuel (CSNF). A survey of available data and methods from previous studies of thermal analysis indicates that, in some cases, thermo-hydrologic modeling will be necessary to appropriately address the problem. This report also outlines the scope for FY16 work; a key challenge identified is developing a methodology to effectively and efficiently evaluate EBS performance in each disposal setting on the basis of thermal analysis results.
Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach, which allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.
Light body armor development for the warfighter is based on trial-and-error testing of prototype designs against ballistic projectiles. Torso armor testing against blast is nonexistent but necessary to protect the heart and lungs. In tests against ballistic projectiles, protective apparel is placed over ballistic clay and the projectiles are fired into the armor/clay target. The clay represents the human torso and its behind-armor, permanent deflection is the principal metric used to assess armor protection. Although this approach provides relative merit assessment of protection, it does not examine the behind-armor blunt trauma to crucial torso organs. We propose a modeling and simulation (M&S) capability for wound injury scenarios to the head, neck, and torso of the warfighter. We will use this toolset to investigate the consequences of, and mitigation against, blast exposure, blunt force impact, and ballistic projectile penetration leading to damage of critical organs comprising the central nervous, cardiovascular, and respiratory systems. We will leverage Sandia codes and our M&S expertise on traumatic brain injury to develop virtual anatomical models of the head, neck, and torso and the simulation methodology to capture the physics of wound mechanics. Specifically, we will investigate virtual wound injuries to the head, neck, and torso without and with protective armor to demonstrate the advantages of performing injury simulations for the development of body armor. The proposed toolset constitutes a significant advance over current methods by providing a virtual simulation capability to investigate wound injury and optimize armor design without the need for extensive field testing.
This Executive Summary provides highlights from the company's full report quantifying the link between health conditions and their business outcomes based on 828 employee survey responses (8% of the workforce) to the HPQ-Select employee questionnaire. These highlights provide key findings on the magnitude of lost productivity, the prevalence of key chronic conditions, their treatment, key conditions driving lost productivity and the potential business impacts of improvements. Details on each of these dimensions can be found in the full report.
People use social media resources like Twitter, Facebook, forums etc. to share and discuss various activities or topics. By aggregating topic trends across many individuals using these services, we seek to construct a richer profile of a person’s activities and interests as well as provide a broader context of those activities. This profile may then be used in a variety of ways to understand groups as a collection of interests and affinities and an individual’s participation in those groups. Our approach considers that much of these data will be unstructured, free-form text. By analyzing free-form text directly, we may be able to gain an implicit grouping of individuals with shared interests based on shared conversation, and not on explicit social software linking them. In this paper, we discuss a proof-of-concept application called Grandmaster built to pull short sections of text, a person’s comments or Twitter posts, together by analysis and visualization to allow a gestalt understanding of the full collection of all individuals: how groups are similar and how they differ, based on their text inputs.
The United States Department of Energy, Office of Nuclear Energy, Fuel Cycle Technology Program sponsors nuclear fuel cycle research and development. As part of its Fuel Cycle Options campaign, the DOE has established the Nuclear Fuel Cycle Options Catalog. The catalog is intended for use by the Fuel Cycle Technologies Program in planning its research and development activities and disseminating information regarding nuclear energy to interested parties. The purpose of this report is to document the improvements and additions that have been made to the Nuclear Fuel Cycle Options Catalog in the 2015 fiscal year.
Sandia National Laboratories will provide technical assistance, within time and budget, to the Requester on testing and analyzing a microneedle-based electrolyte sensing platform. Hollow microneedles will be fabricated at Sandia and integrated with a fluidic chip using plastic laminate prototyping technology available at Sandia. In conjunction with commercial ion-selective electrodes, the sensing platform will be tested for detection of electrolytes (sodium and/or potassium) within physiologically relevant concentration ranges.
The Western National Robot Rodeo & CAPEX (Capability Exercise) is a technical competition for military and civilian bomb squads and emergency responders that puts teams through ten to twelve challenging scenarios ranging from operator skill to full mission planning, execution, and TTPs (tactics, techniques, and procedures). “The goal of the event is to make good robot operators into great robot operators,” says Jake Deuel of Sandia National Laboratories, who co-hosts the event each year with Chris Ory of Los Alamos National Laboratory. This year’s competition was held at Sandia Labs in Albuquerque, New Mexico.
We fabricated optically pumped and electrically injected ultraviolet (UV) lasers on reduced-threading-dislocation-density (reduced-TDD) AlGaN templates. The overgrowth of sub-micron-wide mesas in the Al0.32Ga0.68N templates enabled a tenfold reduction in TDD, to (2-3) × 10^8 cm^-2. Optical pumping of AlGaN heterostructures grown on the reduced-TDD templates yielded a low lasing threshold of 34 kW/cm^2 at 346 nm. Room-temperature pulsed operation of laser diodes at 353 nm was demonstrated, with a threshold of 22.5 kA/cm^2. Reduced-TDD templates have been developed across the entire range of AlGaN compositions, presenting a promising approach for extending laser diodes into the deep UV.
This report details experimental testing and constitutive modeling of sandy soil deformation under quasi-static conditions. This is driven by the need to understand constitutive response of soil to target/component behavior upon impact. An experimental and constitutive modeling program was followed to determine elastic-plastic properties and a compressional failure envelope of dry soil. One hydrostatic, one unconfined compressive stress (UCS), nine axisymmetric compression (ACS), and one uniaxial strain (US) test were conducted at room temperature. Elastic moduli, assuming isotropy, are determined from unload/reload loops and final unloading for all tests pre-failure and increase monotonically with mean stress. Very little modulus degradation was discernable from elastic results even when exposed to mean stresses above 200 MPa. The failure envelope and initial yield surface were determined from peak stresses and observed onset of plastic yielding from all test results. Soil elasto-plastic behavior is described using the Brannon et al. (2009) Kayenta constitutive model. As a validation exercise, the ACS-parameterized Kayenta model is used to predict response of the soil material under uniaxial strain loading. The resulting parameterized and validated Kayenta model is of high quality and suitable for modeling sandy soil deformation under a range of conditions, including that for impact prediction.
This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.
The objective of this thesis is to create a program to quickly estimate the radioactivity and decay of experiments conducted inside the Annular Core Research Reactor at Sandia National Laboratories and to eliminate the need for users to write code. This is achieved by modeling the neutron fluxes in the reactor’s central cavity, where experiments are conducted, for 4 different neutron spectra using MCNP. The desired neutron spectrum, experiment material composition, and reactor power level are then input into the CINDER2008 burnup code to obtain activation and decay information for every isotope generated. DREAD creates all of the files required for CINDER2008 through user-selected inputs in a graphical user interface, executes the program for the user, and displays the resulting dose-rate estimates at various distances. The DREAD program was validated by weighing and measuring various experiments in the different spectra, collecting dose-rate information after they were irradiated, and comparing it to the dose rates that DREAD predicted. On average, the program's estimates are 17% higher than the measured values, and it takes seconds to execute.
We report on the progress made to date for a Laboratory Directed Research and Development (LDRD) project aimed at diagnosing magnetic flux compression on the Z pulsed-power accelerator (0-20 MA in 100 ns). Each experiment consisted of an initially solid Be or Al liner (cylindrical tube), which was imploded using the Z accelerator's drive current (0-20 MA in 100 ns). The imploding liner compresses a 10-T axial seed field, B_z(0), supplied by an independently driven Helmholtz coil pair. Assuming perfect flux conservation, the axial field amplification should be well described by B_z(t) = B_z(0) × [R(0)/R(t)]^2, where R is the liner's inner surface radius. With perfect flux conservation, B_z(t) and dB_z/dt values exceeding 10^4 T and 10^12 T/s, respectively, are expected. These large values, the diminishing liner volume, and the harsh environment on Z make it particularly challenging to measure these fields. We report on our latest efforts to do so using three primary techniques: (1) micro B-dot probes to measure the fringe fields associated with flux compression, (2) streaked visible Zeeman absorption spectroscopy, and (3) fiber-based Faraday rotation. We also mention two new techniques that make use of the neutron diagnostics suite on Z. These techniques were not developed under this LDRD, but they could influence how we prioritize our efforts to diagnose magnetic flux compression on Z in the future. The first technique is based on the yield ratio of secondary DT to primary DD reactions. The second technique makes use of the secondary DT neutron time-of-flight energy spectra. Both of these techniques have been used successfully to infer the degree of magnetization at stagnation in fully integrated Magnetized Liner Inertial Fusion (MagLIF) experiments on Z [P. F. Schmit et al., Phys. Rev. Lett. 113, 155004 (2014); P. F. Knapp et al., Phys. Plasmas 22, 056312 (2015)]. Finally, we present some recent developments for designing and fabricating novel micro B-dot probes to measure B_z(t) inside of an imploding liner. In one approach, the micro B-dot loops were fabricated on a printed circuit board (PCB). The PCB was then soldered to off-the-shelf 0.020-inch-diameter semi-rigid coaxial cables, which were terminated with standard SMA connectors. These probes were recently tested using the COBRA pulsed power generator (0-1 MA in 100 ns) at Cornell University. In another approach, we are planning to use new multi-material 3D printing capabilities to fabricate novel micro B-dot packages. In the near future, we plan to 3D print these probes and then test them on the COBRA generator. With successful operation demonstrated at 1 MA, we will then make plans to use these probes on a 20-MA Z experiment.
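For orientation, the flux-conservation relation above implies the following convergence requirement (an illustrative check using only the numbers quoted in this abstract):
$$ B_z(t) = B_z(0)\left[\frac{R(0)}{R(t)}\right]^{2} \;\;\Rightarrow\;\; B_z(t) \ge 10^{4}\,\mathrm{T} \text{ with } B_z(0) = 10\,\mathrm{T} \text{ requires } \frac{R(0)}{R(t)} \ge \sqrt{10^{3}} \approx 32. $$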
Two major sections were drafted (each with several subsections) for the IAEA dealing with designing and implementing a Physical Protection System (PPS). Areas addressed were Search Systems and the evaluation of PPS effectiveness.
There is a need in many fields, such as nuclear medicine, non-proliferation, energy exploration, national security, homeland security, and nuclear energy, for miniature thermal neutron detectors. Until recently, thermal neutron detection has required physically large devices to provide sufficient neutron interaction and transduction signal. Miniaturization would allow broader use in the fields just mentioned and potentially open up other applications. Recent research shows promise in creating smaller neutron detectors through the combination of high-neutron-cross-section converter materials and solid-state devices. Yet, until recently, it has been difficult to measure low neutron fluxes by solid-state means, given the need for optimized converter materials (purity, chemical composition, and thickness) and a lack of designs capable of efficient transduction of the neutron conversion products (x-rays, electrons, gamma rays). Gadolinium-based semiconductor heterojunctions have detected electrons produced by Gd-neutron reactions, but only at high neutron fluxes. One of the main limitations of this type of approach is the use of thin converter layers and the inability to utilize all the conversion products. In this LDRD we have optimized the converter material thickness and chemical composition to improve capture of conversion electrons and have detected thermal neutrons with high fidelity at low flux. We are also examining different semiconductor materials and converter materials to attempt to capture a greater percentage of the conversion electrons, both the low- and higher-energy varieties. We have studied detector size and bias scaling, and cross-sensitivity to x-rays, and shown that we can detect low fluxes of thermal neutrons in less than 30 minutes with high selectivity by our approach. We are currently studying improvements in performance with direct placement of the Gd converter on the detector. The advancement of sensitive, miniature neutron detectors will have benefits in energy production, nonproliferation, and medicine.
As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are that: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
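A minimal, generic illustration of the compression-based CR idea (not the article's framework; the checkpoint payload and compressor settings are placeholders): compress the checkpoint bytes before writing them, trading CPU time spent compressing against reduced I/O volume.

import time
import zlib
import numpy as np

# Stand-in "checkpoint": an array of doubles, as an application state might be.
state = np.sin(np.linspace(0.0, 100.0, 1_000_000))
raw = state.tobytes()

t0 = time.perf_counter()
packed = zlib.compress(raw, level=6)   # generic, general-purpose compressor
t1 = time.perf_counter()

# Ratio and time for this synthetic payload (real checkpoint data will differ).
print(f"ratio={len(raw) / len(packed):.2f}x, compress_time={t1 - t0:.2f}s")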
High hole concentrations in AlxGa1−xN become increasingly difficult to obtain as the Al mole fraction increases. The problem is believed to be related to compensation, extended defects, and the band gap of the alloy. Whereas electrical measurements are commonly used to measure hole density, in this work we used electron paramagnetic resonance (EPR) spectroscopy to investigate a defect related to the neutral Mg acceptor. The amount and symmetry of neutral Mg in MOCVD-grown AlxGa1−xN with x = 0 to 0.28 was monitored for films with different dislocation densities and surface conditions. EPR measurements indicated that the amount of neutral Mg decreased by 60% in 900°C-annealed AlxGa1−xN films for x = 0.18 and 0.28 as compared with x = 0.00 and 0.08. A decrease in the angular dependence of the EPR signal accompanied the increased x, suggesting a change in the local environment of the Mg. Neither dislocation density nor annealing conditions contribute to the reduced amount of neutral Mg in samples with the higher Al concentration. Rather, compensation is the simplest explanation of the observations, because a donor could both reduce the number of neutral acceptors and cause the variation in the angular dependence.
This report describes specific GDSA activities in fiscal year 2015 (FY2015) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code (Hammond et al., 2011) and the Dakota uncertainty sampling and propagation code (Adams et al., 2013). Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through the engineered barriers and natural geologic barriers to a well location in an overlying or underlying aquifer. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.