Hansen, Nils; Skeen, S.A.; Adamson, B.D.; Ahmed, M.
This paper provides experimental evidence for the chemical structures of aliphatically substituted and bridged polycyclic aromatic hydrocarbon (PAH) species in gas-phase combustion environments. The identification of these single- and multicore aromatic species, which have been hypothesized to be important in PAH growth and soot nucleation, was made possible through a combination of sampling gaseous constituents from an atmospheric pressure inverse coflow diffusion flame of ethylene and high-resolution tandem mass spectrometry (MS-MS). In these experiments, the flame-sampled components were ionized using a continuous VUV lamp at 10.0 eV and the ions were subsequently fragmented through collisions with Ar atoms in a collision-induced dissociation (CID) process. The resulting fragment ions, which were separated using a reflectron time-of-flight mass spectrometer, were used to extract structural information about the sampled aromatic compounds. The high-resolution mass spectra revealed the presence of alkylated single-core aromatic compounds, and the fragment ions that were observed correspond to the loss of saturated and unsaturated units containing up to a total of 6 carbon atoms. Furthermore, the aromatic structures that form the foundational building blocks of the larger PAHs were identified to be smaller single-ring and pericondensed aromatic species with repetitive structural features. For demonstrative purposes, details are provided for the CID of molecular ions at masses 202 and 434. Insights into the role of the aliphatically substituted and bridged aromatics in the reaction network of PAH growth chemistry were obtained from spatially resolved measurements of the flame. The experimental results are consistent with a growth mechanism in which alkylated aromatics are oxidized to form pericondensed ring structures or react and recombine with other aromatics to form larger, potentially three-dimensional, aliphatically bridged multicore aromatic hydrocarbons.
Modern digital hardware and software designs are increasingly complex but are themselves only idealizations of a real system that is instantiated in, and interacts with, an analog physical environment. Insights from physics, formal methods, and complex systems theory can aid in extending reliability and security measures from pure digital computation (itself a challenging problem) to the broader cyber-physical and out-of-nominal arena. Example applications to design and analysis of high-consequence controllers and extreme-scale scientific computing illustrate the interplay of physics and computation. In particular, we discuss the limitations of digital models in an analog world, the modeling and verification of out-of-nominal logic, and the resilience of computational physics simulation. A common theme is that robustness to failures and attacks is fostered by cyber-physical system designs that are constrained to possess inherent stability or smoothness. This chapter contains excerpts from previous publications by the authors.
Sandia National Laboratories performed a 6-month effort to stand up a "zero-entry" cyber range environment for the purpose of providing self-directed practice to augment transmedia learning across diverse media and/or devices that may be part of a loosely coupled, distributed ecosystem. This 6-month effort leveraged Minimega, an open-source Emulytics™ (emulation + analytics) tool for launching and managing virtual machines in a cyber range. The proof of concept addressed a set of learning objectives for cybersecurity operations by providing three short "zero-entry" exercises for beginner, intermediate, and advanced levels in network forensics, social engineering, penetration testing, and reverse engineering. Learners provided answers to problems they explored in networked virtual machines. The hands-on environment, Cyber Scorpion, participated in a preliminary demonstration in April 2017 at Ft. Bragg, NC. The present chapter describes the learning experience research and software development effort for a cybersecurity use case and subsequent lessons learned. It offers general recommendations for challenges that may be present in future learning ecosystems.
Mixed, augmented, and virtual reality holds promise for many security-related applications, including physical security systems. When combined with models of a site, an augmented reality (AR) approach can be designed to enhance knowledge and understanding of the status of the facility. The present chapter describes how improved modeling and simulation will increase situational awareness by blurring the lines among the use of tools for analysis, rehearsal, and training, especially when coupled with immersive interaction experiences offered by augmented reality. We demonstrate how the notion of a digital twin can blur these lines. We conclude with challenges that must be overcome when applying digital twins, advanced modeling, and augmented reality to the design and development of next-generation physical security systems.
In the present study, three boundary-layer stability codes are compared based on hypersonic high-enthalpy boundary-layer flows around a blunted 7 deg half-angle cone. The code-to-code comparison is conducted between the following codes: the Nonlocal Transition analysis code of the DLR, German Aerospace Center (DLR); the Stability and Transition Analysis for hypersonic Boundary Layers code of VirtusAero LLC; and the VKI Extensible Stability and Transition Analysis code of the von Kármán Institute for Fluid Dynamics. The comparison focuses on the role of real-gas effects on the second-mode instability, in particular the disturbance frequency, and addresses the question of how far not accounting for real-gas effects compromises the stability analysis. Here, the experimental test cases for the comparison are provided by the DLR High Enthalpy Shock Tunnel Göttingen and the Japan Aerospace Exploration Agency High Enthalpy Shock Tunnel. The focus of the comparison between the stability results and the measurements is, besides real-gas effects, the influence of uncertainties in the mean flow on the stability analysis.
Deep neural networks are often computationally expensive, during both the training stage and the inference stage. Training is always expensive because back-propagation requires high-precision floating-point multiplication and addition. However, various mathematical optimizations may be employed to reduce the computational cost of inference. Optimized inference is important for reducing power consumption and latency and for increasing throughput. This chapter introduces the central approaches for optimizing deep neural network inference: pruning "unnecessary" weights, quantizing weights and inputs, sharing weights between layer units, compressing weights before transferring from main memory, distilling large high-performance models into smaller models, and decomposing convolutional filters to reduce multiply and accumulate operations. In this chapter, using a unified notation, we provide a mathematical and algorithmic description of the aforementioned deep neural network inference optimization methods.
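One of the listed techniques, quantizing weights, can be sketched in a few lines. The following is our own minimal illustration of an affine uint8 scheme in plain NumPy, not the chapter's notation; the function names and tensor shapes are illustrative.

```python
import numpy as np

def quantize_uint8(w):
    """Affine quantization of a float tensor to uint8.

    Stores w approximately as scale * (q - zero_point), so inference can
    run on 8-bit integers and dequantize on the fly.
    """
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0
    zero_point = int(round(-w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale, zp = quantize_uint8(w)
max_err = float(np.abs(dequantize(q, scale, zp) - w).max())
assert max_err <= scale  # reconstruction error bounded by one quantization step
```

The 4x memory reduction (float32 to uint8) also shrinks the weight traffic from main memory, which connects this technique to the weight-compression approach mentioned above.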
Herein we present details of the design, simulation, and performance of a 100-GW linear transformer driver (LTD) cavity at Sandia National Laboratories. The cavity consists of 20 "bricks." Each brick comprises two 80 nF, 100 kV capacitors connected electrically in series with a custom, 200 kV, three-electrode, field-distortion gas switch. The brick capacitors are bipolar charged to ±100 kV for a total switch voltage of 200 kV. Typical brick circuit parameters are 40 nF capacitance (two 80 nF capacitors in series) and 160 nH inductance. The switch electrodes are fabricated from a WCu alloy and are operated with breathable air. Over the course of 6,556 shots the cavity generated a peak electrical current and power of 1.03 MA (±1.8%) and 106 GW (±3.1%). Experimental results are consistent (to within uncertainties) with circuit simulations for both normal operation and expected failure modes, including prefire and late-fire events. New features of this development that are reported here in detail include: (1) 100 ns, 1 MA, 100-GW output from a 2.2 m diameter LTD into a 0.1 Ω load, (2) high-impedance solid charging resistors that are optimized for this application, and (3) evaluation of maintenance-free trigger circuits using capacitive coupling and inductive isolation.
An implicit, low-dissipation, low-Mach, variable density control volume finite element formulation is used to explore foundational understanding of numerical accuracy for large-eddy simulation applications on hybrid meshes. Detailed simulation comparisons are made between low-order hexahedral, tetrahedral, pyramid, and wedge/prism topologies against a third-order, unstructured hexahedral topology. Using smooth analytical and manufactured low-Mach solutions, design-order convergence is established for the hexahedral, tetrahedral, pyramid, and wedge element topologies using a new open boundary condition based on energy-stable methodologies previously deployed within a finite-difference context. A wide range of simulations demonstrate that low-order hexahedral- and wedge-based element topologies behave nearly identically in both computed numerical errors and overall simulation timings. Moreover, low-order tetrahedral and pyramid element topologies also display nearly the same numerical characteristics. Although the superiority of the hexahedral-based topology is clearly demonstrated for trivial laminar, principally-aligned flows, e.g., a 1x2x10 channel flow with specified pressure drop, this advantage is reduced for non-aligned, turbulent flows including the Taylor–Green Vortex, turbulent plane channel flow (Reτ = 395), and buoyant flow past a heated cylinder. With the order of accuracy demonstrated for both homogeneous and hybrid meshes, it is shown that solution verification for the selected complex flows can be established for all topology types. Although the number of elements in a mesh of like spacing comprised of tetrahedral, wedge, or pyramid elements increases as compared to the hexahedral counterpart, for wall-resolved large-eddy simulation, the increased assembly and residual evaluation computational time for non-hexahedral topologies is offset by more efficient linear solver times.
Lastly, most simulation results indicate that modest polynomial promotion provides a significant increase in solution accuracy.
The Center for Computing Research (CCR) at Sandia National Laboratories organizes an active and productive summer program each year, in coordination with the Computer Science Research Institute (CSRI) and Cyber Engineering Research Institute (CERI). CERI focuses on open, exploratory research in cyber security in partnership with academia, industry, and government, and provides collaborators an accessible portal to Sandia's cybersecurity experts and facilities. Moreover, CERI provides an environment for visionary, threat-informed research on national cyber challenges. CSRI brings university faculty and students to Sandia National Laboratories for focused collaborative research on DOE computer and computational science problems. CSRI provides a mechanism by which university researchers learn about problems in computer and computational science at DOE Laboratories. Participants conduct leading-edge research, interact with scientists and engineers at the laboratories, and help transfer the results of their research to programs at the labs.
Post-polymerization reactions of Diels-Alder polyphenylene with ring-substituted benzoyl chloride derivatives, using triflic acid as the catalyst, effected selective Friedel-Crafts acylation of the lateral phenyl groups attached to the polyphenylene backbone. Using 4-(trifluoromethyl)benzoyl chloride gave a polymer with increased hydrophobicity. Using 4-fluorobenzoyl chloride afforded lateral 4-(fluorobenzoyl)phenyl substituents, which were further functionalized by nucleophilic aromatic substitution of the reactive fluoro substituent by 4-methoxyphenol.
A forensics investigation after a breach often uncovers network and host indicators of compromise (IOCs) that can be deployed to sensors to allow early detection of the adversary in the future. Over time, the adversary will change tactics, techniques, and procedures (TTPs), which will also change the data generated. If the IOCs are not kept up-to-date with the adversary's new TTPs, the adversary will no longer be detected once all of the IOCs become invalid. Tracking the Known (TTK) is the problem of keeping IOCs, in this case regular expressions (regexes), up-to-date with a dynamic adversary. Our framework solves the TTK problem in an automated, cyclic fashion to bracket a previously discovered adversary. This tracking is accomplished through a data-driven approach of self-adapting a given model based on its own detection capabilities. In our initial experiments, we found that the true positive rate (TPR) of the adaptive solution degrades much less significantly over time than that of the naïve solution, suggesting that self-updating the model allows the continued detection of positives (i.e., adversaries). The cost for this performance is in the false positive rate (FPR), which increases over time for the adaptive solution but remains constant for the naïve solution. However, the difference in overall detection performance, as measured by the area under the curve (AUC), between the two methods is negligible. This result suggests that self-updating the model over time should be done in practice to continue to detect known, evolving adversaries.
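The idea of "bracketing" an evolving artifact with a self-updating regex can be illustrated with a toy sketch. This is our own construction, not the paper's framework: the IOC strings are hypothetical, and the longest-common-prefix/suffix heuristic stands in for whatever generalization rule a real system would use.

```python
import os
import re

def bracket_regex(samples):
    """Generalize detected adversary strings into one bracketing regex.

    Toy heuristic: keep the longest common prefix and suffix of the
    observed samples and allow a bounded wildcard core between them.
    """
    prefix = os.path.commonprefix(samples)
    suffix = os.path.commonprefix([s[::-1] for s in samples])[::-1]
    core = max(0, max(len(s) - len(prefix) - len(suffix) for s in samples))
    return re.compile(re.escape(prefix) + f".{{0,{core}}}" + re.escape(suffix))

# hypothetical IOC strings recovered from two successive detections
iocs = ["evil-c2.example.net", "evil-c7.example.net"]
rx = bracket_regex(iocs)
assert rx.fullmatch("evil-c9.example.net")   # a new TTP variant is still caught
assert not rx.fullmatch("mail.example.org")  # an unrelated host is not matched
```

As the abstract notes, this kind of generalization is exactly what trades TPR for FPR: the wildcard core keeps catching variants but can also begin to match benign strings.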
In this project we studied undoped Ge/SiGe heterostructure field-effect transistors, which had a very wide hole density range, from 1 × 10¹⁰ cm⁻² to 3.5 × 10¹¹ cm⁻², tunable by (negative) gate voltage. At low temperatures a reasonably high carrier mobility of about 3.4 × 10⁵ cm²/Vs was achieved.
The use of self-assembling, pre-polymer materials in 3D printing is rare, due to the difficulties of facilitating printing with low molecular weight species and preserving their reactivity and/or functions on the macroscale. Akin to 3D printing of small molecules, examples of extrusion-based printing of pre-polymer thermosets are uncommon, arising from their limited rheological tuneability and slow reaction kinetics. The direct ink write (DIW) 3D printing of a two-part resin, Epon 828 and Jeffamine D230, using a self-assembly approach is reported. Through the addition of self-assembling, ureidopyrimidinone-modified Jeffamine D230 and nanoclay filler, suitable viscoelastic properties are obtained, enabling 3D printing of the epoxy-amine pre-polymer resin. A significant increase in viscosity is observed, with an infinite-shear-rate viscosity approximately two orders of magnitude higher than that of control resins, in addition to an increase in yield strength and thixotropic behavior. As a result, printing of simple geometries is demonstrated, with parts showing excellent interlayer adhesion unachievable using control resins.
With current lithium-ion batteries optimized for performance under relatively low charge-rate conditions, implementation of extreme fast charging (XFC) has been hindered by drawbacks including Li plating, kinetic polarization, and heat dissipation. This project will utilize model-informed design of 3-D hierarchical electrodes to tune key XFC-related variables: (1) bulk porosity/tortuosity; (2) vertical pore diameter, spacing, and lattice; (3) crystallographic orientation of graphite particles relative to exposed surfaces; (4) interfacial chemistry of the graphite surfaces through "artificial SEI" formation using ALD; and (5) current collector surface roughness (aspect ratio, roughness factor, etc.). A key aspect of implementing novel electrodes is characterizing them in relevant settings. This project, ultimately led out of the University of Michigan by Neil Dasgupta, includes both coin cell and 2+ Ah pouch cell testing, as well as comparison testing against baselines. Sandia National Labs will conduct detailed cell characterization on iterative versions/improvements of the model-based hierarchical electrodes, as well as on COTS cells for baseline comparisons. Key metrics include performance under fast-charge conditions and the absence or degree of lithium plating. Sandia will use its unique high-precision cycling and rapid EIS capabilities to accurately characterize performance and any lithium plating during 6C charging and beyond, coupling electrochemical observations with cell teardown. Sandia will also design custom fixturing to cool cells during rapid charge, to decouple any kinetic effects brought about by cell heating and to allow comparisons between different cells and charge rates. Using these techniques, Sandia will assess HOH electrodes from the University of Michigan, as well as aid in iterative model and electrode design.
The uncontrolled interaction of a quantum system with its environment is detrimental for quantum coherence. For quantum bits in the solid state, decoherence from thermal vibrations of the surrounding lattice can typically only be suppressed by lowering the temperature of operation. Here, we use a nano-electro-mechanical system to mitigate the effect of thermal phonons on a spin qubit - the silicon-vacancy colour centre in diamond - without changing the system temperature. By controlling the strain environment of the colour centre, we tune its electronic levels to probe, control, and eventually suppress the interaction of its spin with the thermal bath. Strain control provides both large tunability of the optical transitions and significantly improved spin coherence. Finally, our findings indicate the possibility to achieve strong coupling between the silicon-vacancy spin and single phonons, which can lead to the realisation of phonon-mediated quantum gates and nonlinear quantum phononics.
Luk, Ting S.; De Ceglia, Domenico; Scalora, Michael; Vincenti, Maria A.; Campione, Salvatore; Kelley, Kyle; Maria, Jon P.; Keeler, Gordon A.
Optical nonlocalities are elusive and hardly observable in traditional plasmonic materials like noble and alkali metals. Here we report experimental observation of viscoelastic nonlocalities in the infrared optical response of epsilon-near-zero nanofilms made of low-loss doped cadmium-oxide. The nonlocality is detectable thanks to the low damping rate of conduction electrons and the virtual absence of interband transitions at infrared wavelengths. We describe the motion of conduction electrons using a hydrodynamic model for a viscoelastic fluid, and find excellent agreement with experimental results. The electrons' elasticity blue-shifts the infrared plasmonic resonance associated with the main epsilon-near-zero mode, and triggers the onset of higher-order resonances due to the excitation of electron-pressure modes above the bulk plasma frequency. We also provide evidence of the existence of nonlocal damping, i.e., viscosity, in the motion of optically-excited conduction electrons using a combination of spectroscopic ellipsometry data and predictions based on the viscoelastic hydrodynamic model.
Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. However, genomes can have radically different GC content, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.
Methanol is a benchmark for understanding tropospheric oxidation, but is underpredicted by up to 100% in atmospheric models. Recent work has suggested this discrepancy can be reconciled by the rapid reaction of hydroxyl and methylperoxy radicals with a methanol branching fraction of 30%. However, for fractions below 15%, methanol underprediction is exacerbated. Theoretical investigations of this reaction are challenging because of intersystem crossing between singlet and triplet surfaces – ∼45% of reaction products are obtained via intersystem crossing of a pre-product complex – which demands experimental determinations of product branching. Here we report direct measurements of methanol from this reaction. A branching fraction below 15% is established, consequently highlighting a large gap in the understanding of global methanol sources. These results support the recent high-level theoretical work and substantially reduce its uncertainties.
U-Pu-Zr alloys are considered ideal metallic fuels for experimental breeder reactors because of their superior material properties and potential for increased burnup performance. However, significant constituent redistribution has been observed in these alloys when irradiated, or subject to a thermal gradient, resulting in inhomogeneity of both composition and phase, which, in turn, alters the fuel performance. The hybrid Potts-phase field method is reformulated for ternary alloys in a thermal gradient and utilized to simulate and predict constituent redistribution and phase transformations in the U-Pu-Zr nuclear fuel system. Simulated evolution profiles for the U-16Pu-23Zr (at. pct) alloy show concentric zones that are compared with published experimental results; discrepancies in zone size are attributed to thermal profile differences and assumptions related to the diffusivity values used. Twenty-one alloys, over the entire ternary compositional spectrum, are also simulated to investigate the effects of alloy composition on constituent redistribution and phase transformations. The U-40Pu-20Zr (at. pct) alloy shows the most potential for compositional uniformity and phase homogeneity, throughout a thermal gradient, while remaining in the compositional range of feasible alloys.
We study semiconductor hyperbolic metamaterials (SHMs) at the quantum limit experimentally using spectroscopic ellipsometry as well as theoretically using a new microscopic theory. The theory is a combination of microscopic density matrix approach for the material response and Green’s function approach for the propagating electric field. Our approach predicts absorptivity of the full multilayer system and for the first time allows the prediction of in-plane and out-of-plane dielectric functions for every individual layer constructing the SHM as well as effective dielectric functions that can be used to describe a homogenized SHM.
Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on using an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. We use a number of synthetic functions to build insight into behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
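The functional tensor-train machinery of the paper is beyond the scope of an abstract, but the core idea, gradient-based learning of a low-multilinear-rank model, can be sketched in two dimensions, where the coefficient tensor is a matrix W = U V^T and the factors are learned by plain gradient descent. This is our own illustrative sketch; the dimensions, rank, step size, and iteration count are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, n = 8, 2, 400

# synthetic low-rank ground truth: y = x1' W* x2 with W* = U* V*'
U_true, V_true = rng.normal(size=(d, r)), rng.normal(size=(d, r))
W_true = U_true @ V_true.T
X1, X2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))
y = np.einsum("ni,ij,nj->n", X1, W_true, X2)

# learn the low-rank factors directly by gradient descent on squared error;
# the factored parameterization stores 2*d*r numbers instead of d*d
U, V = 0.1 * rng.normal(size=(d, r)), 0.1 * rng.normal(size=(d, r))
lr = 0.01
for _ in range(3000):
    resid = np.einsum("ni,ij,nj->n", X1, U @ V.T, X2) - y
    G = np.einsum("ni,n,nj->ij", X1, resid, X2) / n  # dLoss/dW
    U, V = U - lr * (G @ V), V - lr * (G.T @ U)      # chain rule into factors

mse = float(np.mean(resid ** 2))
assert mse < 0.1 * float(np.mean(y ** 2))  # the low-rank fit explains the data
```

The same pattern, differentiating the loss with respect to each low-rank factor and updating with a first-order method, is what carries over to higher-order tensor formats, where the savings over the full coefficient tensor grow with the number of dimensions.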
Calcite (CaCO3) is one of the most abundant minerals in the Earth’s crust, and it is susceptible to subcritical chemically-driven fracturing. Understanding chemical processes at individual fracture tips, and how they control the development of fractures and fracture networks in the subsurface, is critical for carbon and nuclear waste storage, resource extraction, and predicting earthquakes. Chemical processes controlling subcritical fracture in calcite are poorly understood. We demonstrate a novel approach to quantify the coupled chemical-mechanical effects on subcritical fracture. The calcite surface was indented using a Vickers-geometry indenter tip, which resulted in repeatable micron-scale fractures propagating from the indent. Individual indented samples were submerged in an array of aqueous fluids and an optical microscope was used to track the fracture growth in situ. The fracture propagation rate varied from 1.6 × 10−8 m s−1 to 2.4 × 10−10 m s−1. The rate depended on the type of aqueous ligand present, and did not correlate with the measured dissolution rate of calcite or trends in zeta-potential. We postulate that chemical complexation at the fracture tip in calcite controls the growth of subcritical fracture. Previous studies indirectly pointed to the zeta-potential being the most critical factor, while our work indicates that variation in the zeta-potential has a secondary effect.
Energy research is critical to continuing advances in human productivity and welfare. In this Commentary, we raise for debate and discussion what in our view is a growing mis-control and mis-protection of U.S. energy research. This flawed approach originates in natural human tendencies exacerbated by an historical misunderstanding of research and development, science and technology, and the relationships between them. We outline the origin of the mis-control and mis-protection, and propose two guiding principles to mitigate them and instead nurture research: (1) focus on people, not projects; and (2) culturally insulate research from development, but not science from technology. Our hope is to introduce these principles into the discourse now, so they can help guide policy changes in U.S. energy research and development that are currently being driven by powerful geopolitical winds.
Hinks, J.A.; Hibberd, F.; Hattar, Khalid M.; Ilinov, A.; Bufford, Daniel C.; Djurabekova, F.; Greaves, G.; Kuronen, A.; Donnelly, S.E.; Nordlund, K.
Nanostructures may be exposed to irradiation during their manufacture, their engineering and whilst in service. The consequences of such bombardment can be vastly different from those seen in the bulk. In this paper, we combine transmission electron microscopy with in situ ion irradiation and complementary computer modelling techniques to explore the physics governing the effects of 1.7 MeV Au ions on gold nanorods. Phenomena surrounding the sputtering and associated morphological changes caused by the ion irradiation have been explored. In both the experiments and the simulations, large variations in the sputter yields from individual nanorods were observed. These sputter yields have been shown to correlate with the strength of channelling directions close to the direction in which the ion beam was incident. Craters decorated by ejecta blankets were found to form due to cluster emission, thus explaining the high sputter yields.
By combining optical imaging, Raman spectroscopy, Kelvin probe force microscopy (KPFM), and photoemission electron microscopy (PEEM), we show that graphene's layer orientation, as well as layer thickness, measurably changes the surface potential (Φ). Detailed mapping of variable-thickness, rotationally-faulted graphene films allows us to correlate Φ with specific morphological features. Using KPFM and PEEM we measure ΔΦ up to 39 mV for layers with different twist angles, while ΔΦ ranges from 36-129 mV for different layer thicknesses. The surface potential between different twist angles or layer thicknesses is measured at the KPFM instrument resolution of ≤ 200 nm. The PEEM-measured work function of 4.4 eV for graphene is consistent with doping levels on the order of 10¹² cm⁻². We find that Φ scales linearly with Raman G-peak wavenumber shift (slope = 22.2 mV per cm⁻¹) for all layers and twist angles, which is consistent with doping-dependent changes to graphene's Fermi energy in the 'high' doping limit. Our results emphasize that layer orientation is as important as layer thickness when designing multilayer two-dimensional systems where surface potential is considered.
This paper describes the theoretical and experimental investigation of interdigitated transducers capable of producing focused acoustical beams in thin-film piezoelectric materials. A mathematical formalism describing focused acoustical beams, Lamb beams, is presented and related to their optical counterparts in two and three dimensions. A novel Fourier-domain transducer design methodology is developed and utilized to produce near-diffraction-limited focused beams within a thin-film AlN membrane. The properties of the acoustic beam formed by the transducer were studied by means of Doppler vibrometry implemented with a scanning confocal balanced homodyne interferometer. The Fourier-domain modal analysis confirmed that 83% of the acoustical power was delivered to the targeted focused beam, which was constituted from the lowest-order symmetric mode, while 1% was delivered unintentionally to the beam formed from the anti-symmetric mode, and the remaining power was isotropically scattered. The transmission properties of the acoustic beams as they interact with devices with wavelength-scale features were also studied, demonstrating minimal insertion loss for devices in which subwavelength and pinhole apertures were included. [2018-0059]
Backprojection techniques are a class of methods for detecting and locating events that have been successfully implemented at local scales for dense networks. This article develops the framework for applying a backprojection method to detect and locate a range of event sizes across a heterogeneous regional network. This article extends previous work on the development of a backprojection method for local and regional seismic event detection, the Waveform Correlation Event Detection System (WCEDS). The improvements outlined here make the technique much more flexible for regional earthquake or explosion monitoring. We first explore how the backprojection operator can be formulated using either a travel-time model or a stack of full waveforms, showing that the former approach is much more flexible and can lead to the detection of smaller events, and to significant improvements in the resolution of event parameters. Second, we discuss the factors that influence the grid of event hypotheses used for backprojection, and develop an algorithm for generating suitable grids for networks with variable density. Third, we explore the effect of including different phases in the backprojection operator, showing that the best results for the study region can be obtained using only the Pg phase, and by including terms for penalizing early arrivals when evaluating the fit for a given event hypothesis. Fourth, we incorporate two parallel backprojection computations with different distance thresholds to enable the robust detection of both network-wide and small (sub-network-only) events. The set of improvements are outlined by applying WCEDS to four example events on the University of Utah Seismograph Stations (UUSS) network.
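The travel-time formulation of the backprojection operator can be sketched schematically. The toy below is our own 1-D, constant-velocity construction, not the WCEDS implementation: station positions, the velocity, and the event parameters are made-up values, and picks are scanned over a grid of (location, origin-time) hypotheses.

```python
import numpy as np

# toy 1-D network: stations on a line, constant P velocity (assumed values)
stations = np.array([0.0, 40.0, 90.0])            # km
v = 6.0                                           # km/s
true_x, true_t0 = 55.0, 3.0                       # event location and origin time
arrivals = true_t0 + np.abs(stations - true_x) / v

# backproject the picks onto a grid of (location, origin-time) hypotheses
grid_x = np.linspace(0.0, 100.0, 201)
grid_t = np.linspace(0.0, 10.0, 201)
tt = np.abs(grid_x[:, None] - stations[None, :]) / v          # (nx, nsta)
pred = grid_t[None, :, None] + tt[:, None, :]                 # (nx, nt, nsta)
misfit = np.abs(arrivals[None, None, :] - pred).sum(axis=2)   # stack over stations
ix, it = np.unravel_index(np.argmin(misfit), misfit.shape)
best_x, best_t0 = float(grid_x[ix]), float(grid_t[it])
assert abs(best_x - true_x) < 1.0 and abs(best_t0 - true_t0) < 0.1
```

A real regional system replaces the absolute-distance travel times with phase-specific model predictions (e.g., Pg), adds penalty terms for early arrivals, and adapts the hypothesis grid to the network density, as described above.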
High-intensity lasers interacting with solid foils produce copious numbers of relativistic electrons, which in turn create strong sheath electric fields around the target. The proton beams accelerated in such fields have remarkable properties, enabling ultrafast radiography of plasma phenomena or isochoric heating of dense materials. In view of longer-term multidisciplinary purposes (e.g., spallation neutron sources or cancer therapy), the current challenge is to achieve proton energies well in excess of 100 MeV, which is commonly thought to be possible by raising the on-target laser intensity. Here we present experimental and numerical results demonstrating that magnetostatic fields self-generated on the target surface may pose a fundamental limit to sheath-driven ion acceleration at high enough laser intensities. These fields can be strong enough (~10⁵ T at laser intensities of ~10²¹ W cm⁻²) to magnetize the sheath electrons and deflect protons off the accelerating region, thereby degrading the maximum energy the latter can acquire.
Following the ISRM Suggested Method on Failure Criteria, ‘A failure criterion for rocks based on true triaxial testing’ by Chang and Haimson (2012), we attempted to obtain experiment-based Nadai (1950) and Mogi (1971) failure criteria for the aforementioned four sandstones: TCDP (Oku et al. 2007), Coconino, Bentheim (Ma and Haimson 2016; Ma et al. 2017a), and Castlegate (Ingraham et al. 2013). The current work extends beyond the scope of Chang and Haimson (2012) by comparing σ1 at failure (i.e., σ1,peak) from test data with predictions based on the experimentally generated Nadai and Mogi criteria. The applicability of the Nadai and Mogi criteria to porous sandstones is then evaluated and discussed, considering failure mode evolution in these rocks.
A frequency mixer is a nonlinear device that combines electromagnetic waves to create waves at new frequencies. Mixers are ubiquitous components in modern radio-frequency technology and microwave signal processing. The development of versatile frequency mixers for optical frequencies remains challenging: such devices generally rely on weak nonlinear optical processes and, thus, must satisfy phase-matching conditions. Here we utilize a GaAs-based dielectric metasurface to demonstrate an optical frequency mixer that concurrently generates eleven new frequencies spanning the ultraviolet to near-infrared. The even and odd order nonlinearities of GaAs enable our observation of second-harmonic, third-harmonic, and fourth-harmonic generation, sum-frequency generation, two-photon absorption-induced photoluminescence, four-wave mixing and six-wave mixing. The simultaneous occurrence of these seven nonlinear processes is assisted by the combined effects of strong intrinsic material nonlinearities, enhanced electromagnetic fields, and relaxed phase-matching requirements. Such ultracompact optical mixers may enable a plethora of applications in biology, chemistry, sensing, communications, and quantum optics.
Compressible jet-in-crossflow interactions are difficult to simulate accurately using Reynolds-averaged Navier-Stokes (RANS) models. This could be due to simplifications inherent in RANS or the use of inappropriate RANS constants estimated by fitting to experiments of simple or canonical flows. Our previous work on Bayesian calibration of a k - ϵ model to experimental data had led to a weak hypothesis that inaccurate simulations could be due to inappropriate constants more than model-form inadequacies of RANS. In this work, Bayesian calibration of k - ϵ constants to a set of experiments that span a range of Mach numbers and jet strengths has been performed. The variation of the calibrated constants has been checked to assess the degree to which parametric estimates compensate for RANS's model-form errors. An analytical model of jet-in-crossflow interactions has also been developed, and estimates of k - ϵ constants that are free of any conflation of parametric and RANS's model-form uncertainties have been obtained. It has been found that the analytical k - ϵ constants provide mean-flow predictions that are similar to those provided by the calibrated constants. Further, both of them provide predictions that are far closer to experimental measurements than those computed using "nominal" values of these constants simply obtained from the literature. It can be concluded that the lack of predictive skill of RANS jet-in-crossflow simulations is mostly due to parametric inadequacies, and our analytical estimates may provide a simple way of obtaining predictive compressible jet-in-crossflow simulations.
Islam, Zahabul; Wang, Baoming; Hattar, Khalid M.; Gao, Huajian; Haque, Aman
Strength and ductility are typically mutually exclusive in metallic materials. To break this trade-off, we start with nanocrystalline zirconium, which has very high strength and low ductility. We then ion-irradiate the specimens to introduce vacancies, which promote diffusional plasticity without reducing strength. Mechanical tests inside the transmission electron microscope reveal an approximately 300% increase in plastic strain after self-ion irradiation. Molecular dynamics simulations show that a 4.3% increase in vacancies near the grain boundaries can result in about a 60% increase in plastic strain. Both experimental and computational results support our hypothesis that vacancies may enhance plasticity through higher atomic diffusivity at the grain boundaries.
We have used several configurations of the Sandia Instrumented Thermal Ignition (SITI) experiment to develop a pressure-dependent, four-step ignition model for a plastic bonded explosive (PBX 9407) consisting of 94 wt.% RDX (hexahydro-1,3,5-trinitro-1,3,5-triazine) and 6 wt.% VCTFE binder (vinyl chloride/chlorotrifluoroethylene copolymer). The four steps are desorption of water, decomposition of RDX to form equilibrium products, pressure-dependent decomposition of RDX forming equilibrium products, and decomposition of the binder to form hydrogen chloride and a nonvolatile residue (NVR). We address drying, binder decomposition, and decomposition of the RDX component from the pristine state through the melt and into ignition. We used Latin hypercube sampling (LHS) of the parameters to determine the sensitivity of the model to variation in the parameters. We also successfully validated the model using one-dimensional time-to-explosion (ODTX and P-ODTX) data from a different laboratory. Our SITI test matrix included 1) densities ranging from 0.7 to 1.63 g/cm³, 2) free gas volumes ranging from 1.2 to 38 cm³, and 3) boundary temperatures ranging from 170 to 190 °C. We measured internal temperatures using embedded thermocouples at various radial locations, as well as pressure using tubing that connected the free gas volume (ullage) to a pressure gauge. We also measured gas flow from our vented experiments. A borescope was included to obtain in situ video during some SITI experiments. We observed significant changes in the explosive volume prior to ignition. Our model, in conjunction with the data, implies that internal accumulation of decomposition gases in high-density PBX 9407 (90% of the theoretical maximum density) can contribute to significant strain whether the experiment is vented or sealed.
Miller, Nicholas C.; Grupen, Matt; Beckwith, Kristian; Smithe, David; Albrecht, John D.
A detailed description and analysis of the Fermi kinetics transport (FKT) equations for simulating charge transport in semiconductor devices are presented. The fully coupled nonlinear discrete FKT equations are elaborated, as are the solution methods and workflow for the simulation of RF electronic devices under large-signal conditions. The importance of full-wave electromagnetics is discussed in the context of high-speed device simulation, and the meshing requirements to integrate the full-wave solver with the transport equations are given in detail. The method includes full semiconductor band structure effects to capture the scattering details for the Boltzmann transport equation. The method is applied to high-speed gallium nitride devices. Finally, numerical convergence and stability examples provide insight into the mesh convergence behavior of the deterministic solver.
Brener, Igal; Nami, Mohsen; Stricklin, Isaac E.; Davico, Kenneth M.; Mishkat-Ul-Masabih, Saadat; Rishinaramangalam, Ashwin K.; Brueck, S.R.J.; Feezell, Daniel F.
In this work, we demonstrate high-performance electrically injected GaN/InGaN core-shell nanowire-based LEDs grown using selective-area epitaxy and characterize their electro-optical properties. To assess the quality of the quantum wells, we measure the internal quantum efficiency (IQE) using conventional low temperature/room temperature integrated photoluminescence. The quantum wells show a peak IQE of 62%, which is among the highest reported values for nanostructure-based LEDs. Time-resolved photoluminescence (TRPL) is also used to study the carrier dynamics and response times of the LEDs. TRPL measurements yield carrier lifetimes in the range of 1-2 ns at high excitation powers. To examine the electrical performance of the LEDs, current density-voltage (J-V) and light-current density-voltage (L-J-V) characteristics are measured. We also estimate the peak external quantum efficiency (EQE) to be 8.3% from a single side of the chip with no packaging. The LEDs have a turn-on voltage of 2.9 V and low series resistance. Based on FDTD simulations, the LEDs exhibit a relatively directional far-field emission pattern confined to within ±15°. This work demonstrates that it is feasible for electrically injected nanowire-based LEDs to achieve the performance levels needed for a variety of optical device applications.
When a material that contains precipitates is deformed, the precipitates and the matrix may strain plastically by different amounts causing stresses to build up at the precipitate-matrix interfaces. If premature failure is to be avoided, it is therefore essential to reduce the difference in the plastic strain between the two phases. Here, we conduct nanoscale digital image correlation to measure a new variable that quantifies this plastic strain difference and show how its value can be used to estimate the associated interfacial stresses, which are found to be approximately three times greater in an Fe-Ni2AlTi steel than in the more ductile Ni-based superalloy CMSX-4®. It is then demonstrated that decreasing these stresses significantly improves the ability of the Fe-Ni2AlTi microstructure to deform under tensile loads without loss in strength.
In this work, we provide a method for enhancing stochastic Galerkin moment calculations for the linear elliptic equation with random diffusivity using an ensemble of Monte Carlo solutions. This hybrid approach combines the accuracy of low-order stochastic Galerkin methods and the computational efficiency of Monte Carlo methods to provide statistical moment estimates which are significantly more accurate than those obtained by either method individually. The hybrid approach involves computing a low-order stochastic Galerkin solution, after which Monte Carlo techniques are used to estimate the residual. We show that the combined stochastic Galerkin solution and residual estimate is superior in both time and accuracy for a one-dimensional test problem and a more computationally intensive two-dimensional linear elliptic problem, for both the mean and variance.
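The hybrid idea above — take the mean of a cheap low-order surrogate exactly, then correct it with a Monte Carlo estimate of the residual — can be illustrated on a scalar toy problem. Here `u` is a stand-in for the expensive model and `u_sg` for a low-order surrogate; both are illustrative choices, not the paper's elliptic problem or its stochastic Galerkin discretization.

```python
import numpy as np

rng = np.random.default_rng(0)

def u(xi):
    # "expensive" model response at a standard-normal input xi
    return np.exp(xi)

def u_sg(xi):
    # crude low-order surrogate (truncated expansion), standing in
    # for an inexpensive low-order stochastic Galerkin solution
    return 1.0 + xi + 0.5 * xi**2

sg_mean = 1.5                    # exact mean of u_sg under xi ~ N(0, 1)
exact = np.exp(0.5)              # exact mean of u: E[exp(xi)] = e^(1/2)

xi = rng.standard_normal(2000)
mc_mean = u(xi).mean()                              # plain Monte Carlo
hybrid_mean = sg_mean + (u(xi) - u_sg(xi)).mean()   # surrogate + MC residual

# The residual u - u_sg has much smaller variance than u itself, so the
# Monte Carlo error of the correction term is correspondingly smaller.
```

The same construction applies per spatial degree of freedom of the elliptic solution; an analogous identity (with cross terms) gives the variance estimate.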
High-resolution simulation of viscous fingering can offer an accurate and detailed prediction for subsurface engineering processes involving fingering phenomena. The fully implicit discontinuous Galerkin (DG) method has been shown to be an accurate and stable method for modeling viscous fingering with high Peclet number and mobility ratio. In this paper, we present two techniques to speed up large-scale simulations of this kind. The first technique relies on a simple p-adaptive scheme in which high-order basis functions are employed only in elements near the finger fronts, where the concentration changes sharply. As a result, the number of degrees of freedom is significantly reduced and the simulation yields almost identical results to the more expensive simulation with uniform high-order elements throughout the mesh. The second technique involves improving solver efficiency. We present an algebraic multigrid (AMG) preconditioner which allows the DG matrix to leverage the robust AMG preconditioners designed for the continuous Galerkin (CG) finite element method. The resulting preconditioner works effectively for fixed-order DG as well as p-adaptive DG problems. With the improvements provided by p-adaptivity and AMG preconditioning, we can perform the high-resolution three-dimensional viscous fingering simulations required for miscible displacement with high Peclet number and mobility ratio in greater detail than previously possible for well injection problems.
Deformation mechanisms in bcc metals, especially in dynamic regimes, show unusual complexity, which complicates their use in high-reliability applications. Here, we employ novel high-velocity cylinder impact experiments to explore plastic anisotropy in single-crystal specimens under high-rate loading. The bcc tantalum single crystals exhibit unusually high deformation localization and strong plastic anisotropy compared to polycrystalline samples. Several impact orientations, [100], [110], [111], and [149], are characterized over a range of impact velocities to examine orientation-dependent mechanical behavior versus strain rate. Moreover, the anisotropy and localized plastic strain seen in the recovered cylinders exhibit strong axial symmetries that differ according to lattice orientation: two-, three-, and four-fold symmetries are observed. We propose a simple crystallographic argument, based on the Schmid law, to explain the observed symmetries. These tests are the first to explore the role of single-crystal orientation in Taylor impact tests, and they clearly demonstrate the importance of crystallography in high-strain-rate and high-temperature deformation regimes. These results provide critical data for dramatically improved high-rate crystal plasticity models and should spur renewed interest in the role of crystallography in deformation under dynamic regimes.
This article describes a new method of seismic signal detection that improves upon the conventional waveform correlation method. Recent studies suggested that a significant limiting factor in the application of waveform correlation to regional and global scale monitoring is the false alarm rate. The false alarms do not originate from detections on noise but rather from seismic arrivals with unrelated source locations. This article presents results from an approach to waveform correlation that exploits techniques from signal processing and machine learning to improve the accuracy of detecting seismic arrivals. We modify the detection model for waveform correlation such that transient signals from noncollocated seismicity are considered when designing the detectors. The new approach uses waveform templates from known catalog events to train a supervised machine learning algorithm that derives a new set of detectors to represent the unique characteristics of the template waveforms; these new detectors maximize the likelihood of detecting only the desired events, thereby minimizing false alarms. We train a waveform correlation template library for a single three-component seismic monitoring station. We then review results from applying the new detectors, known as alternate null hypothesis correlation (ANCorr) templates, to a test set of seismic waveforms. We compare ANCorr results with those from application of the conventional waveform correlation matched filter technique.
Dermal interstitial fluid (ISF) is an underutilized, information-rich biofluid potentially useful in health status monitoring applications, but its contents remain challenging to characterize. Here, we present a facile microneedle approach for dermal ISF extraction with minimal pain and no blistering for human subjects and rats. Extracted ISF volumes were sufficient for determining transcriptome and proteome signatures. We noted similar profiles in ISF, serum, and plasma samples, suggesting that ISF can be a proxy for direct blood sampling. Dynamic transcriptome changes under induced hypoxia were recorded in ISF by RNA-seq. Finally, we report the first isolation and characterization, to our knowledge, of exosomes from dermal ISF. The ISF exosome concentration is 12-13 times more enriched than in plasma and serum, and ISF thus represents a previously unexplored biofluid for exosome isolation. This minimally invasive extraction approach can enable mechanistic studies of ISF and demonstrates the potential of ISF for real-time health monitoring applications.
The silicon metal-oxide-semiconductor (MOS) material system is a technologically important implementation of spin-based quantum information processing. However, the MOS interface is imperfect, leading to concerns about 1/f trap noise and variability in the electron g-factor due to spin-orbit (SO) effects. Here we advantageously use interface-SO coupling for a critical control axis in a double-quantum-dot singlet-triplet qubit. The magnetic field orientation dependence of the g-factors is consistent with Rashba and Dresselhaus interface-SO contributions. The resulting all-electrical, two-axis control is also used to probe the MOS interface noise. The measured inhomogeneous dephasing time, T2*, of 1.6 μs is consistent with 99.95% ²⁸Si enrichment. Furthermore, when tuned to be sensitive to exchange fluctuations, a quasi-static charge noise detuning variance of 2 μeV is observed, competitive with low-noise reports in other semiconductor qubits. This work therefore demonstrates that the MOS interface inherently provides the properties needed for two-axis qubit control, without increasing noise relative to other material choices.
Li+ transport within a solid electrolyte interphase (SEI) in lithium-ion batteries has challenged molecular dynamics (MD) studies due to limited compositional control of that layer. In recent years, experiments and ab initio simulations have identified dilithium ethylene dicarbonate (Li2EDC) as the dominant component of SEI layers. Here, we adopt a parameterized, non-polarizable MD force field for Li2EDC to study transport characteristics of Li+ in this model SEI layer at moderate temperatures over long times. The observed correlations are consistent with recent MD results using a polarizable force field, suggesting that this non-polarizable model is effective for our purposes of investigating Li+ dynamics. Mean-squared displacements distinguish three distinct Li+ transport regimes in EDC: ballistic, trapping, and diffusive. Compared to liquid ethylene carbonate (EC), the nanosecond trapping times in EDC are significantly longer and naturally decrease at higher temperatures. New materials developed for fast-charging Li-ion batteries should have a smaller trapping region. The analyses implemented in this paper can be used for testing transport of Li+ ions in novel battery materials. Non-Gaussian features of van Hove self-correlation functions for Li+ in EDC, along with the mean-squared displacements, are consistent in describing EDC as a glassy material compared with liquid EC. Vibrational modes of the Li+ ion, identified by MD, characterize the trapping and are further validated by electronic structure calculations. Some of this work appeared in an extended abstract and has been reproduced with permission from ECS Transactions, 77, 1155-1162 (2017). Copyright 2017, Electrochemical Society, Inc.
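The mean-squared-displacement analysis used above to separate ballistic, trapping, and diffusive regimes reduces to a short time-origin-averaged computation. The sketch below is generic, not the paper's analysis code, and the trajectory array layout is an assumption.

```python
import numpy as np

def msd(traj):
    """Mean-squared displacement from an unwrapped trajectory.

    traj : (n_frames, n_particles, 3) array of positions
    Returns out[k] = < |r(t + k) - r(t)|^2 >, averaged over all
    particles and all available time origins t.
    """
    n = traj.shape[0]
    out = np.zeros(n)
    for k in range(1, n):
        d = traj[k:] - traj[:-k]              # displacements over lag k
        out[k] = np.mean(np.sum(d * d, axis=-1))
    return out
```

On a log-log plot of MSD versus lag time, slopes near 2, near 0 (a plateau), and near 1 mark the ballistic, trapping, and diffusive regimes, respectively.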
Metallic nanoparticles, such as gold and silver nanoparticles, can self-assemble into highly ordered arrays known as supercrystals, with potential applications in areas such as optics, electronics, and sensor platforms. Here we report the formation of self-assembled 3D faceted gold nanoparticle supercrystals with controlled nanoparticle packing and unique facet-dependent optical properties, grown by a binary solvent diffusion method. The nanoparticle packing structures from specific facets of the supercrystals are characterized by small/wide-angle X-ray scattering for detailed reconstruction of nanoparticle translation and shape orientation, from the mesoscale to the atomic scale, within the supercrystals. We discover that the binary diffusion results in hexagonal close-packed supercrystals whose size and quality are determined by the initial nanoparticle concentration and diffusion speed. The supercrystal solids display unique facet-dependent surface plasmonic and surface-enhanced Raman characteristics. The ease of growing large supercrystal solids facilitates essential correlations between the structure and properties of nanoparticle solids for practical integration.
Venezuelan equine encephalitis virus (VEEV) poses a major public health risk due to its amenability for use as a bioterrorism agent and its severe health consequences in humans. ML336 is a recently developed chemical inhibitor of VEEV, shown to effectively reduce VEEV infection in vitro and in vivo. However, its limited solubility and stability could hinder its clinical translation. To overcome these limitations, lipid-coated mesoporous silica nanoparticles (LC-MSNs) were employed. The large surface area of the MSN core promotes hydrophobic drug loading while the liposome coating retains the drug and enables enhanced circulation time and biocompatibility, providing an ideal ML336 delivery platform. LC-MSNs loaded 20 ± 3.4 μg ML336/mg LC-MSN and released 6.6 ± 1.3 μg/mg ML336 over 24 hours. ML336-loaded LC-MSNs significantly inhibited VEEV in vitro in a dose-dependent manner as compared to unloaded LC-MSNs controls. Moreover, cell-based studies suggested that additional release of ML336 occurs after endocytosis. In vivo safety studies were conducted in mice, and LC-MSNs were not toxic when dosed at 0.11 g LC-MSNs/kg/day for four days. ML336-loaded LC-MSNs showed significant reduction of brain viral titer in VEEV infected mice compared to PBS controls. Overall, these results highlight the utility of LC-MSNs as drug delivery vehicles to treat VEEV.
The limited flux and selectivities of current carbon dioxide membranes and the high costs associated with conventional absorption-based CO2 sequestration call for alternative CO2 separation approaches. Here we describe an enzymatically active, ultra-thin, biomimetic membrane enabling CO2 capture and separation under ambient pressure and temperature conditions. The membrane comprises a ~18-nm-thick close-packed array of 8 nm diameter hydrophilic pores that stabilize water by capillary condensation and precisely accommodate the metalloenzyme carbonic anhydrase (CA). CA catalyzes the rapid interconversion of CO2 and water into carbonic acid. By minimizing diffusional constraints, stabilizing and concentrating CA within the nanopore array to a concentration 10× greater than achievable in solution, our enzymatic liquid membrane separates CO2 at room temperature and atmospheric pressure at a rate of 2600 GPU with CO2/N2 and CO2/H2 selectivities as high as 788 and 1500, respectively, the highest combined flux and selectivity yet reported for ambient condition operation.
Kaufman, Jonas L.; Tan, Scott H.; Lau, Kirklann; Shah, Ashka; Gambee, Robert G.; Gage, Chris; Macintosh, Lupe; Dato, Albert; Saeta, Peter N.; Haskell, Richard C.; Monson, Todd
The size dependence of the dielectric constants of barium titanate or other ferroelectric particles can be explored by embedding particles into an epoxy matrix whose dielectric constant can be measured directly. However, to extract the particle dielectric constant requires a model of the composite medium. We compare a finite element model for various volume fractions and particle arrangements to several effective medium approximations, which do not consider particle arrangement explicitly. For a fixed number of particles, the composite dielectric constant increases with the degree of agglomeration, and we relate this increase to the number of regions of enhanced electric field along the applied field between particles in an agglomerate. Additionally, even for dispersed particles, we find that the composite method of assessing the particle dielectric constant may not be effective if the particle dielectric constant is too high compared to the background medium dielectric constant.
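As a concrete example of the effective-medium route discussed above (which, unlike the finite element model, ignores particle arrangement), the classical Maxwell-Garnett approximation for spherical inclusions can be written in a few lines. The numerical values below are illustrative only, not fits to the finite element results.

```python
def maxwell_garnett(eps_p, eps_m, f):
    """Maxwell-Garnett effective permittivity for spherical particles of
    permittivity eps_p at volume fraction f in a matrix of permittivity
    eps_m (dilute, well-dispersed limit)."""
    d = eps_p - eps_m
    return eps_m * (eps_p + 2 * eps_m + 2 * f * d) / (eps_p + 2 * eps_m - f * d)

# Saturation at high particle permittivity: at f = 0.3 in an eps_m = 3
# matrix, raising eps_p by a factor of 100 barely moves the composite
# value, which is why back-solving for a large particle permittivity
# from a composite measurement is ill-conditioned.
lo = maxwell_garnett(100.0, 3.0, 0.3)     # ~6.4
hi = maxwell_garnett(10000.0, 3.0, 0.3)   # ~6.9
```

This saturation illustrates the abstract's conclusion: once the particle dielectric constant is much larger than that of the background medium, the composite response becomes nearly insensitive to it.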
Numerous methods are used to measure contact angles (θ) in multiphase systems. The wettability and θ are primary controls on CO2 residual trapping during Geologic Carbon Storage (GCS), and determining these values within rock pores is paramount to increasing storage efficiency. One traditional experimental approach is the sessile drop method, which involves θ measurements on a single image of droplets. More recent developments utilize X-ray micro-computed tomography (CT) scans, which provide the resolutions necessary to image in situ θ of fluids at representative conditions; however, experimental micro-CT data are limited and varied. To further examine θ distributions in supercritical-CO2-brine-sandstone systems, a combination of manual and automated θ measurement methods was utilized to measure θ using both sessile drop and micro-CT images of two sandstone cores. The purpose of this work was threefold: (1) compare two current and two new θ measuring methods using micro-CT images of scCO2-brine-sandstone systems; (2) determine how traditional experimental method (sessile drop) θ results compare to in situ θ results (micro-CT); and (3) determine if the Matlab Contact Angle Algorithm (MCAA) from Klise et al. (2016) can be used to measure θ in scCO2-brine-sandstone systems. One of the two new methods, which utilizes open-source software, resulted in average θ and θ ranges comparable to the primary manual measuring method (Andrew et al., 2014b) reported in the literature, which requires commercial software. An additional new method involves immersive interaction with micro-CT image volumes that no other software currently provides. Both processes are promising for future work. θ measured using micro-CT images at in situ conditions results in a broader θ distribution than θ measured using sessile drop images. These findings suggest some pores are intermediate-wet in an in situ sandstone system and that factors other than interfacial tension influence trapping.
Lastly, MCAA θ results consistently produced broader θ distributions and higher average θ than the manual θ measurements. This is a result of some automated measurements incorrectly identifying directional quantities leading to skewed results. MCAA is still promising for future work with careful attention to data interpretation.
Predictive analysis of complex computational models, such as uncertainty quantification (UQ), must often rely on using an existing database of simulation runs. In this paper we consider the task of performing low-multilinear-rank regression on such a database. Specifically we develop and analyze an efficient gradient computation that enables gradient-based optimization procedures, including stochastic gradient descent and quasi-Newton methods, for learning the parameters of a functional tensor-train (FT). We compare our algorithms with 22 other nonparametric and parametric regression methods on 10 real-world data sets and show that for many physical systems, exploiting low-rank structure facilitates efficient construction of surrogate models. We use a number of synthetic functions to build insight into behavior of our algorithms, including the rank adaptation and group-sparsity regularization procedures that we developed to reduce overfitting. Finally we conclude the paper by building a surrogate of a physical model of a propulsion plant on a naval vessel.
One of the most confounding controversies in the ductile fracture community is the large discrepancy between predicted and experimentally observed strain-to-failure values during shear-dominant loading. Currently proposed solutions focus on better accounting for how the deviatoric stress state influences void growth or on measuring strain at the microscale rather than the macroscale. While these approaches are useful, they do not address a significant aspect of the problem: the only rupture micromechanisms that are generally considered are void nucleation, growth, and coalescence (for tensile-dominated loading), and shear-localization and void coalescence (for shear-dominated loading). Current phenomenological models have thus focused on predicting the competition between these mechanisms based on the stress state and the strain-hardening capacity of the material. However, in the present study, we demonstrate that there are at least five other failure mechanisms. Because these have long been ignored, little is known about how all seven mechanisms interact with one another or the factors that control their competition. These questions are addressed by characterizing the fracture process in three high-purity face-centered cubic (FCC) metals of medium-to-high stacking fault energy: copper, nickel, and aluminum. These data demonstrate that, for a given stress state and material, several mechanisms frequently work together in a sequential manner to cause fracture. The selection of a failure mechanism is significantly affected by the plasticity-induced microstructural evolution that occurs before tearing begins, which can create or eliminate sites for void nucleation. At the macroscale, failure mechanisms that do not involve cracking or pore growth were observed to facilitate subsequent void growth and coalescence processes. 
While the focus of this study is on damage accumulation in pure metals, these results are also applicable to understanding failure in engineering alloys.
We present a Matlab implementation of topology optimization for compliance minimization on unstructured polygonal finite element meshes that efficiently accommodates many materials and many volume constraints. Leveraging the modular structure of the educational code, PolyTop, we extend it to the multi-material version, PolyMat, with only a few modifications. First, a design variable for each candidate material is defined in each finite element. Next, we couple a Discrete Material Optimization interpolation with the existing penalization and introduce a new parameter such that we can employ continuation and smoothly transition from a convex problem without any penalization to a non-convex problem in which material mixing and intermediate densities are penalized. Mixing that remains due to the density filter operation is eliminated via continuation on the filter radius. To accommodate flexibility in the volume constraint definition, the constraint function is modified to compute multiple volume constraints and the design variable update is modified in accordance with the Zhang-Paulino-Ramos Jr. (ZPR) update scheme, which updates the design variables associated with each constraint independently. The formulation allows for volume constraints controlling any subset of the design variables, i.e., they can be defined globally or locally for any subset of the candidate materials. Borrowing ideas for mesh generation on complex domains from PolyMesher, we determine which design variables are associated with each local constraint of arbitrary geometry. A number of examples are presented to demonstrate the many material capability, the flexibility of the volume constraint definition, the ease with which we can accommodate passive regions, and how we may use local constraints to break symmetries or achieve graded geometries.
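The Discrete Material Optimization (DMO) interpolation referred to above weights each candidate material's stiffness so that mixtures of materials are penalized whenever the exponent exceeds one, which is what allows the continuation from a convex to a penalized problem. A minimal sketch follows, written in Python rather than the paper's Matlab; the function name and numerical values are hypothetical.

```python
def dmo_stiffness(rho, E, p=3.0):
    """Discrete Material Optimization interpolation for one element.

    rho : per-element design variables, one per candidate material
    E   : candidate material stiffnesses
    p   : penalization exponent (p = 1 leaves mixtures unpenalized;
          p > 1 drives the optimizer toward a single material)
    """
    n = len(rho)
    w = [rho[i] ** p for i in range(n)]
    total = 0.0
    for i in range(n):
        # weight for material i: its own penalized density times the
        # complement of every other candidate's penalized density
        wi = w[i]
        for j in range(n):
            if j != i:
                wi *= 1.0 - w[j]
        total += wi * E[i]
    return total
```

For a pure selection (one design variable at 1, the rest at 0) the interpolation returns that material's stiffness exactly, while a 50/50 mixture at p > 1 yields a stiffness well below either candidate, making mixing uneconomical for the optimizer.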
The U.S. National Quantum Initiative places quantum computer scaling in the same category as Moore's law. While the technical basis of semiconductor scale up is well known, the equivalent principle for quantum computers is still being developed. Let's explore these new ideas.
Osborn, David L.; Shaw, Miranda F.; Sztaray, Balint; Whalley, Lisa K.; Heard, Dwayne E.; Jordan, Meredith J.T.; Kable, Scott H.
Organic acids play a key role in the troposphere, contributing to atmospheric aqueous-phase chemistry, aerosol formation, and precipitation acidity. Atmospheric models currently account for less than half the observed, globally averaged formic acid loading. Here we report that acetaldehyde photo-tautomerizes to vinyl alcohol under atmospherically relevant pressures of nitrogen, in the actinic wavelength range, λ = 300-330 nm, with measured quantum yields of 2-25%. Recent theoretical kinetics studies show hydroxyl-initiated oxidation of vinyl alcohol produces formic acid. Adding these pathways to an atmospheric chemistry box model (Master Chemical Mechanism) increases modeled formic acid concentrations by a factor of ∼1.7 in the polluted troposphere and a factor of ∼3 under pristine conditions. Incorporating this mechanism into the GEOS-Chem 3D global chemical transport model reveals an estimated 7% contribution to worldwide formic acid production, with up to 60% of the total modeled formic acid production over oceans arising from photo-tautomerization.
Time-resolved visualization of fast processes using high-speed digital video cameras has been widely used in most fields of scientific research for over a decade. In many applications, high-speed imaging is used not only to record the time history of a phenomenon but also to quantify it, hence requiring dependable equipment. Important aspects of two-dimensional imaging instrumentation used to qualitatively or quantitatively assess fast-moving scenes include sensitivity, linearity, and signal-to-noise ratio (SNR). Under certain circumstances, the weaknesses of commercially available high-speed cameras, i.e., sensitivity, linearity, image lag, etc., render an experiment complicated and uncertain. Our study evaluated two advanced CMOS-based, continuous-recording, high-speed cameras available at the time of writing. Various parameters, potentially important for accurate time-resolved measurements and photonic quantification, were measured under controlled conditions on the bench using scientific instrumentation. Testing procedures to measure sensitivity, linearity, SNR, shutter accuracy, and image lag are proposed and detailed. The results of the tests, comparing the two high-speed cameras under study, are also presented and discussed. Results show that, with careful implementation and understanding of their performance and limitations, these high-speed cameras are reasonable alternatives to scientific CCD cameras, while also delivering time-resolved imaging data.
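The linearity and SNR characterizations named above are standard photometric bench tests. As an illustration only, not the authors' actual protocol, the Python sketch below computes a linearity metric (coefficient of determination of a linear fit of mean signal versus exposure) and a temporal SNR from a stack of flat-field frames; the data shapes and variable names are assumptions.

```python
import numpy as np

def linearity_r2(exposure, signal):
    """R^2 of a least-squares line through mean signal vs. exposure time.

    A perfectly linear sensor response yields R^2 = 1.
    """
    coeffs = np.polyfit(exposure, signal, 1)
    fit = np.polyval(coeffs, exposure)
    ss_res = np.sum((signal - fit) ** 2)
    ss_tot = np.sum((signal - np.mean(signal)) ** 2)
    return 1.0 - ss_res / ss_tot

def temporal_snr(frames):
    """Per-pixel temporal SNR from a (n_frames, h, w) flat-field stack,
    averaged over the sensor; zero-variance pixels contribute SNR = 0."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0, ddof=1)
    return np.mean(mean / np.where(std == 0, np.inf, std))
```

In practice such tests are run with a stabilized, uniform light source and varying shutter times, so that deviations from the fitted line can be attributed to the sensor rather than the illumination.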
A surface-emitting distributed feedback (DFB) laser with second-order gratings typically excites an antisymmetric mode that has low radiative efficiency and a double-lobed far-field beam. The radiative efficiency can be increased by using curved and chirped gratings for infrared diode lasers, plasmon-assisted mode selection for mid-infrared quantum cascade lasers (QCLs), and graded photonic structures for terahertz QCLs. Here, we demonstrate a new hybrid grating scheme that uses a superposition of second- and fourth-order Bragg gratings that excites a symmetric mode with much greater radiative efficiency. The scheme is implemented for terahertz QCLs with metallic waveguides. Peak power output of 170 mW with a slope efficiency of 993 mW A⁻¹ is detected with robust single-mode, single-lobed emission for a 3.4 THz QCL operating at 62 K. The hybrid grating scheme is arguably simpler to implement than the aforementioned DFB schemes and could be used to increase power output for surface-emitting DFB lasers at any wavelength.
We present and analyze three powerful long-term historical trends in energy, particularly electrical energy, as well as the opportunities and challenges associated with these trends. The first trend is from a world containing a diversity of energy currencies to one whose predominant currency is electricity, driven by electricity’s transportability, exchangeability, and steadily decreasing cost. The second trend is from electricity generated from a diversity of sources to electricity generated predominantly by free-fuel sources, driven by their steadily decreasing cost and long-term abundance. These trends necessitate a just-emerging third trend: from a grid in which electricity is transported unidirectionally, traded at near-static prices, and consumed under direct human control; to a grid in which electricity is transported bidirectionally, traded at dynamic prices, and consumed under human-tailored artificial agential control. Together, these trends point toward a future in which energy is not costly, scarce, or inefficiently deployed but instead is affordable, abundant, and efficiently deployed, with major economic, geopolitical, and environmental benefits to humanity.
Brocato, Terisse A.; Coker, Eric N.; Durfee, Paul N.; Lin, Yu S.; Townson, Jason; Wyckoff, Edward F.; Cristini, Vittorio; Brinker, C.J.; Wang, Zhihui
Nanoparticles have shown great promise in improving cancer treatment efficacy while reducing toxicity and treatment side effects. Predicting the treatment outcome for nanoparticle systems by measuring nanoparticle biodistribution has been challenging due to the commonly unmatched, heterogeneous distribution of nanoparticles relative to free drug distribution. We here present a proof-of-concept study that uses mathematical modeling together with experimentation to address this challenge. Individual mice with 4T1 breast cancer were treated with either nanoparticle-delivered or free doxorubicin; the results demonstrate improved cancer-kill efficacy of doxorubicin-loaded nanoparticles in comparison to free doxorubicin. We then developed a mathematical theory to render model predictions from measured nanoparticle biodistribution, as determined using graphite furnace atomic absorption. Model analysis finds that treatment efficacy increases exponentially with increased nanoparticle accumulation within the tumor, emphasizing the significance of developing new ways to optimize the delivery efficiency of nanoparticles to the tumor microenvironment.
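An exponential dependence of efficacy on intratumoral accumulation lends itself to a simple fitting exercise. The Python sketch below is a generic illustration with hypothetical data, not the study's actual model: it fits efficacy = A·exp(k·accumulation) by linear regression in log space.

```python
import numpy as np

def fit_exponential(accumulation, efficacy):
    """Fit efficacy = A * exp(k * accumulation) by least squares in log space.

    Returns (A, k). Assumes all efficacy values are positive, so the
    logarithm is defined and the fit reduces to a straight line.
    """
    k, log_a = np.polyfit(accumulation, np.log(efficacy), 1)
    return np.exp(log_a), k
```

Fitting in log space is convenient because the exponential model becomes linear there; for noisy data it implicitly down-weights large efficacy values, which may or may not be appropriate for a given dataset.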
Meeting technology-based policy goals without sufficient lead time may present several technology, regulatory, and market-based challenges due to the speed of technological adoption in existing and emerging markets. Installing incremental amounts of technologies, e.g., cleaner fossil, renewable, or transformative energy technologies, throughout the coming decades may prove a more attainable goal than a radical and immediate change in the year before a policy goal takes effect. This notion of steady installation growth, rather than acute last-minute installation, is the core topic of this research. We operationalize the notion by developing the theoretical underpinnings of regulatory and market acceptance delays, building upon the common Technology Readiness Level (TRL) framework and offering two new additions to the research community: the Regulatory Readiness Level (RRL) and the Market Readiness Level (MRL). These components, collectively called the Technology, Regulatory, and Market (TRM) readiness level framework, allow one to build new constraints into existing Integrated Assessment Models (IAMs). A system dynamics model was developed to illustrate the TRM framework. The framework helps identify the factors, and specifically the rate at which we must support technology development, necessary to meet our desired technical and policy goals in the coming decades.
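The idea of chained readiness delays lends itself to a minimal stock-and-flow illustration. The Python sketch below is purely illustrative and not the paper's system dynamics model: it assumes hypothetical first-order lags in which regulatory readiness follows technology readiness and market readiness follows regulatory readiness, with installations accruing in proportion to market readiness; all parameter names and values are invented.

```python
def trm_adoption(t_end, trl_rate=0.2, rrl_tau=3.0, mrl_tau=2.0, dt=0.1):
    """Toy TRM-style stock-and-flow model, integrated with forward Euler.

    t_end    : simulated horizon in years
    trl_rate : ramp rate of technology readiness (fraction per year)
    rrl_tau  : regulatory acceptance lag time constant (years)
    mrl_tau  : market acceptance lag time constant (years)
    Returns (tech, reg, mkt, installed) at t_end.
    """
    tech = reg = mkt = installed = 0.0
    t = 0.0
    while t < t_end:
        tech = min(1.0, tech + trl_rate * dt)   # technology readiness ramps up
        reg += (tech - reg) / rrl_tau * dt      # regulation lags technology
        mkt += (reg - mkt) / mrl_tau * dt       # market lags regulation
        installed += mkt * dt                   # installations track market readiness
        t += dt
    return tech, reg, mkt, installed
```

Even this toy version exhibits the qualitative point of the abstract: the regulatory and market lags delay installations well past the date at which the technology itself is ready, so late starts compress the achievable cumulative deployment.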
This report describes progress on the validation of the MELCOR Sodium Chemistry (NAC) package. The primary focus of this report is to ensure that the implementation of the CONTAIN-LMR sodium models into MELCOR is correct; thus, the verification consists of code-to-code comparisons between MELCOR and CONTAIN-LMR. Last year we reported the development of the NAC package, which included three sodium models: spray fire, pool fire, and atmospheric chemistry. The first two models were completed, and additional improvements were made to them this year to allow upward spray capability and to better model the pool fire experiments, respectively. This year, the atmospheric chemistry implementation has progressed to the point of testing in the presence of water vapor (modeled as an ideal gas) as part of the two-condensable option model in CONTAIN-LMR. The user's guide and reference manual for the NAC package, including these improvements, are described in a separate document being published as part of the MELCOR 2.2 release. In this report, we discuss the experimental validation using the implemented spray fire and pool fire models, and a code-to-code comparison with CONTAIN-LMR is described for a spray fire experiment. Note that the atmospheric chemistry model has not been fully implemented due to the absence of the two-condensable option; only the chemical reactions between sodium aerosol and water vapor can be modeled. ACKNOWLEDGEMENTS This work was overseen and managed by Matthew R. Denman (Sandia National Laboratories). In addition, we thank Chris Faucett for developing the experimental data and providing the initial input decks as part of the MELCOR assessment report development for the U.S. Nuclear Regulatory Commission's project. This work is supported by the Office of Nuclear Energy of the U.S. Department of Energy under work package numbers AT-17SN170204 and NT-185N05030102.