The proposed Aero-MINE technology will extract energy from wind without any exterior moving parts. Aero-MINEs can be integrated into buildings or function stand-alone, and are scalable. This gives them advantages similar to solar panels, but with the added benefit of operation in cloudy or dark conditions. Furthermore, compared to solar panels, Aero-MINEs can be manufactured at lower cost and with less environmental impact. Power generation is isolated internally through the pneumatic transmission of air, and the outlet air-jet nozzles amplify the effectiveness. Multiple units can be connected to one centrally located electric generator. Aero-MINEs are ideal for the built environment, with numerous possible configurations ranging from architectural integration to modular bolt-on products. Traditional wind turbines suffer from many fundamental challenges. The fast-moving blades produce significant aero-acoustic noise, visual disturbances, and light-induced flickering, and impose wildlife-mortality risks. The conversion of massive mechanical torque to electricity is a challenge for gears, generators, and power-conversion electronics. In addition, the installation, operation, and maintenance of wind turbines must be performed at significant height. Furthermore, wind farms are often in remote locations far from dense regions of electricity customers. These technical and logistical challenges add significantly to the cost of the electricity produced by utility-scale wind farms. In contrast, distributed wind energy eliminates many of the logistical challenges. However, solutions such as micro-turbines produce relatively small amounts of energy due to their reduced swept area and still suffer from the motion-related disadvantages of utility-scale turbines. Aero-MINEs combine the best features of distributed generation while eliminating these disadvantages.
Sandia National Laboratories, California (SNL/CA) is a Department of Energy (DOE) facility. The management and operations of the facility are under a contract with the DOE’s National Nuclear Security Administration (NNSA). On May 1, 2017, the name of the management and operating contractor changed from Sandia Corporation to National Technology and Engineering Solutions of Sandia, LLC (NTESS). The DOE, NNSA, Sandia Field Office administers the contract and oversees contractor operations at the site. This Site Environmental Report for 2016 was prepared in accordance with DOE Order 231.1B, Environment, Safety and Health Reporting (DOE 2012). The report provides a summary of environmental monitoring information and compliance activities that occurred at SNL/CA during calendar year 2016, unless noted otherwise. General site and environmental program information is also included.
Predicting the performance of radiation detection systems at field sites based on measured performance acquired under controlled conditions at test locations, e.g., the Nevada National Security Site (NNSS), remains an unresolved, longstanding issue within DNDO’s testing methodology. Detector performance can be defined in terms of the system’s ability to detect and/or identify a given source or set of sources, and depends on the signal generated by the detector for the given measurement configuration (i.e., source strength, distance, time, surrounding materials, etc.) and on the quality of the detection algorithm. Detector performance is usually evaluated in the performance and operational testing phases, where the measurement configurations are selected to represent radiation source and background configurations of interest to security applications.
The U.S. Strategic Petroleum Reserve (SPR) is a stockpile of emergency crude oil to be tapped if a disruption in the nation's oil supply occurs. The SPR comprises four salt dome sites. Subsidence surveys have been conducted either annually or biennially at all four sites over the life of the program. Monitoring of surface behavior is a first line of defense in detecting possible subsurface cavern integrity issues. Over the life of the Bryan Mound site, subsidence rates over abandoned Cavern 3 have consistently been the highest at the site. To better understand the subsurface dynamics, specifically over Bryan Mound Cavern 3, historical interferometric synthetic aperture radar (InSAR) data were acquired and processed by TRE Altamira. InSAR involves the processing of multiple satellite synthetic aperture radar scenes acquired over the same location of the Earth's surface at different times to map surface deformation. The analysis can detect millimeters of motion spanning days, months, years, and decades across specific sites. The intent at the Bryan Mound site was (1) to confirm the higher subsidence rates recorded over abandoned Cavern 3 indicated by land survey and (2) to understand the regional surface behavior. This report describes the InSAR analysis results, how those results compare to the historical collection of land survey data, and what additional information the data have provided toward understanding the response recorded at the surface.
As one recipient of the Consortium for Verification Technology (CVT) Fellowship, I spent eight days as a visiting scientist at the University of Michigan, Department of Nuclear Engineering and Radiological Sciences (NERS). During this time, I participated in multiple department and research group meetings and presentations, met with individual faculty and students, toured multiple laboratories, and taught one-half of a one-unit class on Risk Analysis in Nuclear Arms Control (six 1.5-hour lectures). The following report describes some of the interactions I had during my visit, as well as a brief discussion of the impact of this fellowship on members of the consortium and on my own and my laboratory’s technical knowledge and network.
Plenoptic imaging is a promising emerging technology for single-camera, 3D diagnostics of particle fields. In this work, recent developments towards quantitative measurements of particle size, positions, and velocities are discussed. First, the technique is proven viable with measurements of the particle field generated by the impact of a water drop on a thin film of water. Next, well-controlled experiments are used to verify diagnostic uncertainty. Finally, an example is presented of 3D plenoptic imaging of a laboratory scale, explosively generated fragment field.
The Federal Radiological Monitoring and Assessment Center (FRMAC) Assessment Manual is the tool used to organize and guide activities of the FRMAC Assessment Division. The mission of the FRMAC Assessment Division in a radiological emergency is to interpret radiological data and predict worker and public doses. This information is used by Decision Makers to recommend protective actions in accordance with Protective Action Guides (PAGs) issued by government agencies. This manual integrates many health physics tools and techniques used to make these assessments.
Today’s international nuclear safeguards inspectors have access to an increasing volume of supplemental information about the facilities under their purview, including commercial satellite imagery, nuclear trade data, open source information, and results from previous safeguards activities. In addition to completing traditional in-field safeguards activities, inspectors are now responsible for being able to act upon this growing corpus of supplemental safeguards-relevant data and for maintaining situational awareness of unusual activities taking place in their environment. However, cognitive science research suggests that maintaining too much information can be detrimental to a user’s understanding, and externalizing information (for example, to a mobile device) to reduce cognitive burden can decrease cognitive function related to memory, navigation, and attention. Given this dichotomy, how can international nuclear safeguards inspectors better synthesize information to enhance situational awareness, decision making, and performance in the field? This paper examines literature from the fields of cognitive science and human factors in the areas of wayfinding, situational awareness, equipment and technical assistance, and knowledge transfer, and describes the implications for the provision of, and interaction with, safeguards-relevant information for international nuclear safeguards inspectors working in the field.
The Sandia National Laboratories (Sandia Labs) tribal cyber infrastructure assurance team comprises world-class energy science and cyber engineering expertise. The team is led by technical staff who are American Indian tribal members with interdisciplinary technical expertise.
Objectives: (1) design a GUI that can convert .mov video files into .mp4 and/or .wmv files; (2) design a GUI that can extract and save a small portion of a large video file; (3) develop a widget that is modular and can be easily integrated into any future project.
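The conversion and clip-extraction objectives above can be sketched as thin wrappers around a command-line encoder. The report does not name its toolchain, so the use of ffmpeg and every function name below are assumptions made for illustration; a GUI layer (e.g., tkinter) would simply collect file paths and hand the finished command to subprocess.

```python
# Hypothetical sketch: build ffmpeg command lines for the two tasks
# described above (format conversion and clip extraction). ffmpeg is
# an assumed toolchain; the report does not specify one.

def build_convert_cmd(src, dst):
    """Convert src (e.g., a .mov file) to the container implied by dst."""
    return ["ffmpeg", "-y", "-i", src, dst]

def build_clip_cmd(src, dst, start_sec, duration_sec):
    """Extract duration_sec seconds of video starting at start_sec,
    stream-copying to avoid re-encoding."""
    return ["ffmpeg", "-y", "-ss", str(start_sec), "-i", src,
            "-t", str(duration_sec), "-c", "copy", dst]
```

A GUI wrapper would run, e.g., `subprocess.run(build_convert_cmd(src, dst), check=True)` behind a button callback, keeping the command construction modular and testable.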
We review the 9th NLTE code comparison workshop, which was held on the Jussieu campus, Paris, from November 30th to December 4th, 2015. This time, the workshop was mainly focused on a systematic investigation of iron NLTE steady-state kinetics and emissivity over a broad range of temperature and density. Through these comparisons, topics such as the modeling of dielectronic processes, density effects, or the effect of an external radiation field were addressed. The K-shell spectroscopy of iron plasmas was also addressed, notably through the interpretation of tokamak and laser experimental spectra.
The international nuclear safeguards community is faced with a host of challenges in the coming years, many of which have been outlined but have not been described in terms of their urgency. Literature regarding safeguards challenges is either broad and devoid of any reference to prioritization or tailored to a specific problem and removed from the overall goals of the safeguards community. For example, developing new methods of environmental sampling, improving containment and surveillance (C/S) technologies to increase efficiency and decrease inspection time, advancing nuclear material accountancy (NMA) techniques, and planning safeguards approaches for new types of nuclear facilities are all important. They have not, however, been distinctly prioritized at a high level within the safeguards community. Based on a review of existing literature and interviews with experts on these upcoming challenges, this paper offers a high-level summary of present and future priorities in safeguards, with attention both to what is feasible and to what is most imperative. In doing so, the paper addresses the potential repercussions of failing to prioritize, with a focus on the risk of diversion of nuclear material. Within the context of shifts in the American political landscape, and keeping in mind that nonproliferation issues may take a backseat to others in the near future, a prioritized view of safeguards objectives will be vital. In the interest of expanding upon this work, the paper offers several potential conceptual models for prioritization that can be explored in greater depth in future research.
In this paper, we assert the importance of uncertainty quantification for machine learning and sketch an initial research agenda. We define uncertainty in the context of machine learning, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues with an image analysis example. The paper concludes by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics in general.
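As a concrete, hedged illustration of quantifying predictive uncertainty (the paper does not prescribe a specific estimator), a bootstrap ensemble can report the spread of its members' predictions as an uncertainty estimate. All function names and data below are invented for this sketch:

```python
# Illustrative sketch: bootstrap-ensemble uncertainty for a simple
# regression model. This is one common UQ technique, not the paper's
# specific method; the data and names are fabricated for demonstration.
import numpy as np

def bootstrap_ensemble_predict(x_train, y_train, x_query, n_models=50, seed=0):
    """Fit n_models straight lines to bootstrap resamples; return the
    ensemble mean prediction and its standard deviation at x_query."""
    rng = np.random.default_rng(seed)
    preds = []
    n = len(x_train)
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)            # resample with replacement
        slope, intercept = np.polyfit(x_train[idx], y_train[idx], 1)
        preds.append(slope * x_query + intercept)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 200)       # noisy linear data
mean, std = bootstrap_ensemble_predict(x, y, np.array([0.5, 10.0]))
# The ensemble spread grows when extrapolating far outside the training range.
```

The key point for the research agenda above: the estimator separates a prediction (the mean) from a statement about its trustworthiness (the spread), and the latter correctly inflates outside the training distribution.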
This report analyzes the data from multi-arm caliper (MAC) surveys taken at the West Hackberry SPR site to determine any casing deviations from the as-built drawings. Radial arm data from MAC surveys were used to calculate the approximate wall thickness of each well. Results from this study indicate that well casings can have larger or smaller internal diameters, which can lead to thinner or thicker walls, respectively. Wells 009B, 106A, 109, and 115 exhibit potentially dangerous deviations from the as-built drawings, as some casings have thinner, and therefore weaker, walls. In addition, the only well surveyed more than once, WH 011B, shows inconsistent data between surveys. Additional analysis is suggested to determine the accuracy and repeatability of MAC surveys.
This National Emission Standards for Hazardous Air Pollutants (NESHAP) Annual Report has been prepared in a format to comply with the reporting requirements of 40 CFR 61.94 and the April 5, 1995 Memorandum of Agreement (MOA) between the Department of Energy (DOE) and the Environmental Protection Agency (EPA). According to the EPA approved NESHAP Monitoring Plan for the Tonopah Test Range (TTR), 40 CFR 61, subpart H, and the MOA, no additional monitoring or measurements are required at TTR in order to demonstrate compliance with the NESHAP regulation.
To demonstrate and validate the performance of the wide-area damping control system, the project plans to conduct closed-loop tests on the PDCI in spring/summer 2017. A test plan details the open and closed loop tests to be conducted on the PDCI using the wide-area damping control system. To ensure the appropriate level of preparedness, simulations were performed in order to predict and evaluate any possible unsafe operations before hardware experiments are attempted. This report contains the results from these simulations using the power system dynamics software PSLF (Power System Load Flow, trademark of GE). The simulations use the WECC (Western Electricity Coordinating Council) 2016 light summer and heavy summer base cases and the 2014 dual export base case. Because of the large volume of plots, the results were divided into three reports corresponding to the three base cases. This report contains results from the 2016 light summer base case.
To demonstrate and validate the performance of the wide-area damping control system, the project plans to conduct closed-loop tests on the PDCI in spring/summer 2017. A test plan details the open and closed loop tests to be conducted on the PDCI using the wide-area damping control system. To ensure the appropriate level of preparedness, simulations were performed in order to predict and evaluate any possible unsafe operations before hardware experiments are attempted. This report contains the results from these simulations using the power system dynamics software PSLF (Power System Load Flow, trademark of GE). The simulations use the WECC (Western Electricity Coordinating Council) 2016 light summer and heavy summer base cases and the 2014 dual export base case. Because of the large volume of plots, the results were divided into three reports corresponding to the three base cases. This report contains results from the 2016 heavy summer base case.
To demonstrate and validate the performance of the wide-area damping control system, the project plans to conduct closed-loop tests on the PDCI in spring/summer 2017. A test plan details the open and closed loop tests to be conducted on the PDCI using the wide-area damping control system. To ensure the appropriate level of preparedness, simulations were performed in order to predict and evaluate any possible unsafe operations before hardware experiments are attempted. This report contains the results from these simulations using the power system dynamics software PSLF (Power System Load Flow, trademark of GE). The simulations use the WECC (Western Electricity Coordinating Council) 2016 light summer and heavy summer base cases and the 2014 dual export base case. Because of the large volume of plots, the results were divided into three reports corresponding to the three base cases. This report contains results from the 2014 dual export base case.
Matrix-vector products are ubiquitous in high-performance scientific applications and have a growing set of occurrences in advanced data analysis activities. Achieving high performance for these kernels is therefore paramount, in part, because these operations can consume vast amounts of application execution time. In this report we document the development of several sparse matrix-vector product kernel implementations using a variety of programming models and approaches. Each kernel is run on a broad set of matrices selected to demonstrate the wide variety of matrix structure and sparsity that is possible with a single, generic kernel. For benchmarking and performance analysis, we utilize leading computing architectures for the NNSA/ASC program including Intel's Knights Landing processor and IBM's POWER8.
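For readers unfamiliar with the kernel itself, a minimal compressed sparse row (CSR) matrix-vector product looks like the following. This didactic Python sketch only mirrors the structure of the benchmarked kernels; the report's implementations target HPC programming models and architectures, not pure Python.

```python
# Didactic CSR sparse matrix-vector product: y = A @ x.
# CSR stores only nonzeros (vals), their column indices (col_idx),
# and the start offset of each row (row_ptr).
import numpy as np

def csr_matvec(vals, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # accumulate the nonzero terms of row i
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# CSR storage of [[10, 0, 2], [0, 3, 0], [0, 0, 5]]
vals    = [10.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 2]
row_ptr = [0, 2, 3, 4]
y = csr_matvec(vals, col_idx, row_ptr, np.array([1.0, 1.0, 1.0]))
# y == [12.0, 3.0, 5.0]
```

The irregular inner loop over `col_idx` is exactly what makes SpMV performance so sensitive to matrix structure and memory architecture, which is why the report benchmarks across many matrices and processors.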
Sampling activities for this reporting period included two semiannual monitoring events each for groundwater, soil-vapor, and radon. Annual soil-moisture monitoring was conducted in April 2016, annual tritium surface soil sampling was conducted in August 2016, and annual biota sampling (metals and radionuclides) was conducted in September 2016. All monitoring activities were conducted in accordance with LTMMP requirements and no monitoring results exceeded LTMMP trigger levels. All monitoring results were consistent with historical MWL monitoring data. Inspections of the MWL final cover system, storm-water diversion structures, compliance monitoring systems, and security fence were performed in accordance with LTMMP requirements. Required maintenance and repairs were minor and were generally performed during the inspections. The Evapotranspirative (ET) Cover continues to meet successful revegetation criteria and is in excellent condition with even coverage of mature, native perennial grasses. Maintenance was performed during the reporting period as a best management practice for ET Cover vegetation, and required a lower level of effort than in previous years. The purpose of ongoing ET Cover maintenance efforts is to promote the growth and health of the desired native grass species by reducing competition with weedy species for limited moisture and nutrients. Based on previous inspections, additional best management practice activities were conducted during this reporting period to improve the site and reduce long-term ET Cover and site maintenance. These activities included improvements to site access and drainage (i.e., improvements to the access and perimeter road) and the installation of erosion and burrow control measures at the ten perimeter monitoring well locations.
As concerns with cyber security and network protection increase, there is a greater need for organizations to deploy state-of-the-art technology to keep their cyber information safe. However, foolproof cyber security and network protection are a difficult feat since a security breach can be caused simply by a single employee who unknowingly succumbs to a cyber threat. It is critical for an organization’s workforce to holistically adopt cyber technologies that enable enhanced protection, help ward off cyber threats, and are efficient at encouraging human behavior towards safer cyber practices. It is also crucial for the workforce, once they have adopted cyber technologies, to remain consistent and thoughtful in their use of these technologies to keep resistance strong against cyber threats and vulnerabilities. Adoption of cyber technology can be difficult. Many organizations struggle with their workforce adopting newly-introduced cyber technologies, even when the technologies themselves have proven to be worthy solutions. Research, especially in the domain of cognitive science and the human dimension, has sought to understand how technology adoption works and can be leveraged. This paper reviews what empirical literature has found regarding cyber technology adoption, the current research gaps, and how non-research based efforts can influence adoption. Focusing on current efforts accomplished by a government-sponsored activity entitled “ACT” (Adoption of Cybersecurity Technologies), the aim of this paper is to empirically study cyber technology adoption to better understand how to influence operational adoption across the government sector, as well as what can be done to develop a model that enables cyber technology adoption.
IEEE Transactions on Molecular, Biological, and Multi-Scale Communications
Pinar, Ali P.; Quinn, Christopher J.; Kiyavash, Negar
We propose a method to find optimal sparse connected approximations for large complex networks of nodes interacting over time. Optimality is measured by Kullback-Leibler divergence. The sparsity is controlled by the user through specifying the in-degrees. The approximations have spanning tree subgraphs, enabling them to depict flow through a network. They can also depict feedback. The approximations can capture both significant statistical dependencies in, and preserve salient topological features of, the full (unknown) network. The proposed method can be applied to networks without restrictions on the dynamics beyond stationarity. We analyze computational and sample complexity. We demonstrate their efficacy using a large-scale simulation of the turtle visual cortex and experimental data from the primate motor cortex.
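The method above is based on directed information with user-specified in-degrees; as a simpler, related illustration of KL-optimal tree-structured approximation, the classic Chow-Liu construction picks the maximum-weight spanning tree under pairwise mutual information. The sketch below is a stand-in under that simplification, not the paper's algorithm, and all names and data are invented:

```python
# Illustrative Chow-Liu tree: among all tree-structured approximations
# of a joint distribution, the KL-optimal one is the maximum-weight
# spanning tree with pairwise mutual information as edge weights.
import numpy as np

def mutual_information(a, b):
    """Plug-in mutual information (nats) between two discrete sample vectors."""
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(samples):
    """Maximum-weight spanning tree (Prim's algorithm) over pairwise MI.
    samples: array of shape (n_observations, n_variables)."""
    n_vars = samples.shape[1]
    w = np.zeros((n_vars, n_vars))
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            w[i, j] = w[j, i] = mutual_information(samples[:, i], samples[:, j])
    in_tree, edges = {0}, []
    while len(in_tree) < n_vars:
        i, j = max(((i, j) for i in in_tree for j in range(n_vars)
                    if j not in in_tree), key=lambda e: w[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 500)
x1 = x0 ^ (rng.random(500) < 0.1).astype(int)   # x1 copies x0 with 10% flips
x2 = rng.integers(0, 2, 500)                    # independent of x0, x1
tree = chow_liu_tree(np.column_stack([x0, x1, x2]))
```

The strong statistical dependence between `x0` and `x1` makes that edge the heaviest, so the recovered tree connects them directly, which is the flavor of structure recovery the paper generalizes to directed, feedback-capable approximations.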
The Enhanced Data Authentication System (EDAS) is a means to securely branch information from an existing measurement system or data stream to a secondary observer. In an international nuclear safeguards context, the EDAS connects to operator instrumentation, and provides a cryptographically secure copy of the information for a safeguards inspectorate. This novel capability could be a complement to inspector-owned safeguards instrumentation, offering context that is valuable for anomaly resolution and contingency. Sandia National Laboratories gathered operator and inspector requirements, and designed, developed, and fabricated prototype EDAS software and hardware. In partnership with Euratom, we performed an extended EDAS field trial at the Westinghouse Springfields nuclear fuel manufacturing facility in the United Kingdom. We inserted EDAS prototypes in operator instrumentation lines for a barcode scanner and weight scale at a portal where UF6 cylinders enter and exit the facility. The goal of the field trial was to demonstrate the utility of secure branching of operator instrumentation for nuclear safeguards, identify any unforeseen implementation and application issues, and confirm whether the approach is compatible with operator concerns and constraints. During the field trial, the data streams were collected for nine months, and the EDASs branched 698 barcode and 663 weight scale events. Our analysis found that both EDAS units accurately branched 100% of the data that flowed through the instrumentation lines when we compared them to the recorded operator data. With multiple deployed EDASs we found that it is possible to correlate the branched data and create a more holistic narrative of facility activities. Euratom reported the field trial as a full success due to the continuous, correct, and secure branching of safeguards relevant data. At the same time, the operator is satisfied that EDAS did not interfere with plant operations in any way. 
The success of this field trial is an important step toward illustrating the potential and utility of EDAS as a safeguards tool.
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This paper explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. The result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
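For context, the single-species baseline can be probed with a small Monte Carlo experiment: equal disks dropped uniformly in a unit square, joined when they overlap, with left-to-right spanning checked by union-find. This sketch is illustrative only; the paper's contribution is an analytical excluded-volume extension to binary (two-size) mixtures, which this code does not implement.

```python
# Monte Carlo percolation check for equal overlapping disks in a unit
# square. Two disks join one cluster when their centers are closer than
# 2*radius; spanning is tested with union-find via virtual wall nodes.
import numpy as np

def percolates(n_disks, radius, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.random((n_disks, 2))
    parent = list(range(n_disks + 2))       # two extra nodes for the walls
    LEFT, RIGHT = n_disks, n_disks + 1
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i in range(n_disks):
        if pts[i, 0] < radius:
            union(i, LEFT)                  # disk touches the left wall
        if pts[i, 0] > 1.0 - radius:
            union(i, RIGHT)                 # disk touches the right wall
        # vectorized overlap test against all later disks
        d2 = np.sum((pts[i + 1:] - pts[i]) ** 2, axis=1)
        for j in np.nonzero(d2 < (2.0 * radius) ** 2)[0]:
            union(i, i + 1 + int(j))
    return find(LEFT) == find(RIGHT)
```

Sweeping `n_disks` at fixed `radius` and averaging over seeds locates the critical volume (area) fraction numerically, which is exactly the slow procedure the paper's analytical expression is meant to replace for complex polydisperse mixtures.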
Niobium and niobium nitride thin films are transitioning from fundamental research toward wafer-scale manufacturing, with technology drivers that include superconducting circuits and electronics, optical single-photon detectors, logic, and memory. Successful microfabrication requires precise control over the properties of sputtered superconducting films, including oxidation. Previous work has demonstrated the mechanism of oxidation in Nb and how film structure can have deleterious effects upon the superconducting properties. This study provides an examination of the atmospheric oxidation of NbN films. By examination of the room-temperature sheet resistance of NbN, bulk oxidation was identified and confirmed by secondary ion mass spectrometry. Meissner magnetic measurements confirmed bulk oxidation not observed with simple cryogenic resistivity measurements.
High-speed, time-resolved particle image velocimetry with a pulse-burst laser was used to measure the gas-phase velocity upstream and downstream of a shock wave-particle curtain interaction at three shock Mach numbers (1.22, 1.40, and 1.45) at a repetition rate of 37.5 kHz. The particle curtain was formed from free-falling soda-lime particles resulting in volume fractions of 9% or 23% at mid-height, depending on particle diameter (106-125 and 300-355 μm, respectively). Following impingement by a shock wave, a pressure difference was created between the upstream and downstream sides of the curtain, which accelerated flow through the curtain. Jetting of flow through the curtain was observed downstream once deformation of the curtain began, demonstrating a long-term unsteady effect. Using a control volume approach, the unsteady drag on the curtain was estimated from velocity and pressure data. The drag imposed on the curtain has a strong volume-fraction dependence, with prolonged unsteadiness following initial shock impingement. In addition, the data suggest that the pressure difference following the propagation of the reflected and transmitted shock waves is the primary component of curtain drag.
A 7.2 kW (electric input) solar simulator was designed in order to perform accelerated testing on absorber materials for concentrating solar power (CSP) technologies. Computer-aided design (CAD) software integrating a ray-tracing tool was used to select appropriate components and optimize their positioning in order to achieve the desired concentration. The simulator comprises four identical units, each made out of an ellipsoidal reflector, a metal halide lamp, and an adjustable holding system. A single unit was characterized and shows an experimental average irradiance of 257 kW m-2 on a 25.4 mm (1 in) diameter spot. Shape, spot size, and average irradiance are in good agreement with the model predictions, provided the emitting arc element model is realistic. The innovative four-lamp solar simulator potentially demonstrates peak irradiance of 1140 kW m-2 and average irradiance of 878 kW m-2 over a 25.4 mm diameter area. The electric-to-radiative efficiency is about 0.86. The costs per radiative and electric watt are calculated at $2.31 W-1 and $1.99 W-1, respectively. An upgraded installation including a sturdier structure, computer-controlled lamps, a more reliable lamp holding system, and safety equipment yields a cost per electric watt of about $3.60 W-1, excluding labor costs.
Electrochemical atomic layer deposition (E-ALD) is a method for the formation of nanofilms of materials, one atomic layer at a time. It uses the galvanic exchange of a less noble metal, deposited using underpotential deposition (UPD), to produce an atomic layer of a more noble element by reduction of its ions. This process is referred to as surface limited redox replacement and can be repeated in a cycle to grow thicker deposits. It was previously performed on nanoparticles and planar substrates. In the present report, E-ALD is applied for coating a submicron-sized powder substrate, making use of a new flow cell design. E-ALD is used to coat a Pd powder substrate with different thicknesses of Rh by exchanging it for Cu UPD. Cyclic voltammetry and X-ray photoelectron spectroscopy indicate an increasing Rh coverage with increasing numbers of deposition cycles performed, in a manner consistent with the atomic layer deposition (ALD) mechanism. Cyclic voltammetry also indicated faster kinetics of H sorption and desorption into and out of the Pd powder with Rh present, relative to unmodified Pd.
Infection with Mycobacterium tuberculosis represents a significant threat to people with immune disorders, such as HIV-positive individuals, and can result in significant health complications or death if not diagnosed and treated early. We present a centrifugal microfluidic platform for multiplexed detection of tuberculosis and HIV biomarkers in human whole blood with minimal sample preparation and a sample-to-answer time of 30 minutes. This multiplexed assay was developed for the detection of two M. tuberculosis secreted proteins, whose secretion represents an active and ongoing infection, as well as for detection of HIV p24 protein and human anti-p24 antibodies. The limit of detection for this multiplex assay is in the pg/mL range for both HIV and M. tuberculosis proteins, making the assay potentially useful in the clinical diagnosis of proteins indicative of active HIV and tuberculosis infection. For HIV antigen detection, the assay sensitivity was 89% and the specificity 85%. Serological detection had 100% sensitivity and specificity for the limited sample pool. The centrifugal microfluidic platform presented here offers the potential for a portable, fast, and inexpensive multiplexed diagnostic device that can be used in resource-limited settings for diagnosis of TB and HIV.
We demonstrate a silicon photonic transceiver circuit for high-speed discrete variable quantum key distribution that employs a common structure for transmit and receive functions. The device is intended for use in polarization-based quantum cryptographic protocols, such as BB84. Our characterization indicates that the circuit can generate the four BB84 states (TE/TM/45°/135° linear polarizations) with >30 dB polarization extinction ratios and gigabit per second modulation speed, and is capable of decoding any polarization bases differing by 90° with high extinction ratios.
We have examined graphene absorption in a range of graphene-based infrared devices that combine either monolayer or bilayer graphene with three different gate dielectrics. Electromagnetic simulations show that the optical absorption in graphene in these devices, an important factor in a functional graphene-based detector, is strongly dielectric-dependent. These simulations reveal that plasmonic excitation in graphene can significantly influence the percentage of light absorbed in the entire device, as well as the graphene layer itself, with graphene absorption exceeding 25% in regions where plasmonic excitation occurs. Notably, the dielectric environment of graphene has a dramatic influence on the strength and wavelength range over which the plasmons can be excited, making dielectric choice paramount to final detector tunability and sensitivity.
Devices based on GaN have shown great promise for high power electronics, including their potential use as radiation tolerant components. An important step to realizing high power diodes is the design and implementation of an edge termination to mitigate field crowding, which can lead to premature breakdown. However, little is known about the effects of radiation on edge termination functionality. We experimentally examine the effects of proton irradiation on multiple field ring edge terminations in high power vertical GaN p-i-n diodes using in operando electron beam induced current (EBIC) imaging. We find that exposure to proton irradiation influences field spreading in the edge termination as well as carrier transport near the anode. By using depth-dependent EBIC measurements of hole diffusion length in homoepitaxial n-GaN, we demonstrate that the carrier transport effect is due to a reduction in hole diffusion length following proton irradiation.
This work represents a thorough investigation of the thermal conductivity (κ) in both thin film and bulk PbZr1-xTixO3 (PZT) across the compositional phase diagram. Given the technological importance of PZT as a superb piezoelectric and ferroelectric material in devices and systems impacting a wide array of industries, this research serves to fill the gap in knowledge regarding its thermal properties. The thermal conductivities of both thin film and bulk PZT are found to vary by a considerable margin as a function of composition x. Additionally, we observe a discontinuity in κ in the vicinity of the morphotropic phase boundary (MPB, x = 0.48), where there is a 20%-25% decrease in κ in our thin film data, similar to that found in literature data for bulk PZT. The comparison between bulk and thin film materials highlights the sensitivity of κ to size effects such as film thickness and grain size, even in disordered alloy/solid-solution materials. A model for the thermal conductivity of PZT as a function of composition, κ(x), is presented, which enables the application of the virtual crystal approximation to alloy-type material systems with very different crystal structures and, consequently, differing temperature trends for κ. We show that in the case of crystalline solid solutions where the thermal conductivity of one of the parent materials exhibits glass-like temperature trends, the compositional dependence of thermal conductivity is relatively constant for most values of x. This is in stark contrast with the typical trend of thermal conductivity with x in alloys, where the thermal conductivity increases dramatically as the composition of the alloy or solid solution approaches that of a pure parent material (i.e., as x approaches 0 or 1).
The controlled creation of defect centre-nanocavity systems is one of the outstanding challenges for efficiently interfacing spin quantum memories with photons for photon-based entanglement operations in a quantum network. Here we demonstrate direct, maskless creation of atom-like single silicon vacancy (SiV) centres in diamond nanostructures via focused ion beam implantation with ∼32 nm lateral precision and <50 nm positioning accuracy relative to a nanocavity. We determine the Si+ ion to SiV centre conversion yield to be ∼2.5% and observe a 10-fold conversion yield increase by additional electron irradiation. Low-temperature spectroscopy reveals inhomogeneously broadened ensemble emission linewidths of ∼51 GHz and close to lifetime-limited single-emitter transition linewidths down to 126±13 MHz, corresponding to ∼1.4 times the natural linewidth. This method for the targeted generation of nearly transform-limited quantum emitters should facilitate the development of scalable solid-state quantum information processors.
A novel concept for coupling a thermochemical cycle with an electrochemical separation device for the generation of hydrogen from steam is reported, and a thermodynamic analysis of the system is presented. In a conventional thermochemical cycle, an oxygen carrier material is thermally reduced, cooled, and then reoxidized in steam, thereby generating hydrogen. However, this process often requires high temperatures (>1700 K) and/or low oxygen partial pressures (<0.001 atm) in order to meet thermodynamic requirements. Such extreme conditions can adversely affect the stability of the reactive oxides, reactor materials, and system efficiency. In our proposed technology, we seek to decrease the required reduction temperature by several hundred kelvin by relaxing the requirement for a spontaneous oxidation reaction at atmospheric pressure. This is accomplished by incorporating a proton-conducting membrane (PCM) to separate hydrogen produced at equilibrium concentrations from reactant steam. We also suggest the use of mixed ionic-electronic conducting (MIEC) oxygen carrier materials that reduce through a continuum of oxidation states at lower temperatures (∼1200 °C). This concept allows the generation of a high-quality hydrogen stream while avoiding the challenging high temperatures/low partial pressures required in conventional water-splitting reaction schemes.
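To see why extracting hydrogen at equilibrium concentrations matters, a back-of-envelope equilibrium estimate for steam splitting, H2O(g) ⇌ H2 + ½O2, can be sketched as follows. The constant-ΔH/ΔS approximation (room-temperature values) and the chosen conditions are assumptions for illustration, not values from the paper's analysis:

```python
# Back-of-envelope sketch: equilibrium H2/H2O ratio for steam splitting.
# K(T) = exp(-dG/RT) with dG ~ dH - T*dS; dH and dS are approximated by
# their room-temperature values (an assumption for illustration only).
import math

R = 8.314      # J/(mol K)
dH = 241.8e3   # J/mol, enthalpy of H2O(g) = H2 + 1/2 O2 (approx., assumed constant)
dS = 44.4      # J/(mol K), entropy change (approx., assumed constant)

def h2_to_h2o_ratio(T, p_o2):
    """Equilibrium p_H2 / p_H2O at temperature T (K) and O2 partial pressure (atm)."""
    K = math.exp(-(dH - T * dS) / (R * T))
    return K / math.sqrt(p_o2)

# Even at 1400 K and low p_O2, only a trace of H2 exists at equilibrium,
# which is why continuously separating H2 through a membrane drives the
# oxidation step forward.
ratio = h2_to_h2o_ratio(1400.0, 1e-5)  # well below 1
```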
Modern systems, such as physical security systems, are often designed to involve complex interactions of technological and human elements. Evaluation of the performance of these systems often overlooks the human element. A method is proposed here to expand the concept of sensitivity—as denoted by d’—from signal detection theory (Green & Swets 1966; Macmillan & Creelman 2005), which came out of the field of psychophysics, to cover not only human threat detection but also other human functions plus the performance of technical systems in a physical security system, thereby including humans in the overall evaluation of system performance. New in this method is the idea that probabilities of hits (accurate identification of threats) and false alarms (saying “threat” when there is not one), which are used to calculate d’ of the system, can be applied to technologies and, furthermore, to different functions in the system beyond simple yes-no threat detection. At the most succinct level, the method returns a single number that represents the effectiveness of a physical security system; specifically, the balance between the handling of actual threats and the distraction of false alarms. The method can be automated, and the constituent parts revealed, such that given an interaction graph that indicates the functional associations of system elements and the individual probabilities of hits and false alarms for those elements, it will return the d’ of the entire system as well as d’ values for individual parts. The method can also return a measure of the response bias of the system. One finding of this work is that the d’ for a physical security system can be relatively poor in spite of having excellent d’ values for each of its individual functional elements.
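The underlying signal-detection computation being generalized can be sketched directly; the hit and false-alarm probabilities below are hypothetical, not drawn from any evaluated system:

```python
# Standard signal-detection-theory quantities:
#   d' = z(P_hit) - z(P_fa)        (sensitivity)
#   c  = -(z(P_hit) + z(P_fa)) / 2 (response bias)
# where z is the inverse standard-normal CDF (probit).
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def d_prime(p_hit, p_fa):
    """Sensitivity of an element given hit and false-alarm probabilities."""
    return z(p_hit) - z(p_fa)

def bias_c(p_hit, p_fa):
    """Response bias: 0 = neutral, >0 = conservative, <0 = liberal."""
    return -0.5 * (z(p_hit) + z(p_fa))

# A hypothetical element that flags 90% of threats with a 10% false-alarm rate:
sensitivity = d_prime(0.90, 0.10)  # ~2.56
```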
Electron transport through a nanostructure can be characterized in part using concepts from classical fluid dynamics. Hence, it is natural to ask how far the analogy can be taken and whether the electron liquid can exhibit nonlinear dynamical effects such as turbulence. Here we present an ab initio study of the electron dynamics in nanojunctions which reveals that the latter indeed exhibits behavior quite similar to that of a classical fluid. In particular, we find that a transition from laminar to turbulent flow occurs with increasing current, corresponding to increasing Reynolds numbers. These findings reveal unexpected features of electron dynamics and shed new light on our understanding of transport properties of nanoscale systems.
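For the classical side of the analogy, the Reynolds number is the standard ratio of inertial to viscous effects; the sketch below uses arbitrary classical-fluid values, not the electronic quantities from the ab initio study:

```python
# Classical Reynolds number: Re = rho * v * L / mu, where rho is density,
# v a characteristic velocity, L a characteristic length, mu the viscosity.
# Values below are an arbitrary water-in-a-channel illustration.
def reynolds(rho, v, L, mu):
    return rho * v * L / mu

re = reynolds(rho=1000.0, v=1.0, L=0.05, mu=1.0e-3)
# Re ~ 5e4, well above the ~4000 threshold usually taken to mark
# fully turbulent flow in classical pipe geometries.
```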
We investigate the formation of extended defects during molecular-dynamics (MD) simulations of GaN and InGaN growth on (0001) and (112̄0) wurtzite-GaN surfaces. The simulated growths are conducted on an atypically large scale by sequentially injecting nearly a million individual vapor-phase atoms towards a fixed GaN surface; we apply time-and-position-dependent boundary constraints that vary the ensemble treatments of the vapor-phase, the near-surface solid-phase, and the bulk-like regions of the growing layer. The simulations employ newly optimized Stillinger-Weber In-Ga-N-system potentials, wherein multiple binary and ternary structures are included in the underlying density-functional-theory training sets, allowing improved treatment of In-Ga-related atomic interactions. To examine the effect of growth conditions, we study a matrix of >30 different MD-growth simulations for a range of InxGa1-xN-alloy compositions (0 ≤ x ≤ 0.4) and homologous growth temperatures [0.50 ≤ T/T*m(x) ≤ 0.90], where T*m(x) is the simulated melting point. Growths conducted on polar (0001) GaN substrates exhibit the formation of various extended defects including stacking faults/polymorphism, associated domain boundaries, surface roughness, dislocations, and voids. In contrast, selected growths conducted on semi-polar (112̄0) GaN, where the wurtzite-phase stacking sequence is revealed at the surface, exhibit the formation of far fewer stacking faults. We discuss variations in the defect formation with the MD growth conditions, and we compare the resulting simulated films to existing experimental observations in InGaN/GaN. While the palette of defects observed by MD closely resembles those observed in the past experiments, further work is needed to achieve truly predictive large-scale simulations of InGaN/GaN crystal growth using MD methodologies.
In this paper, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, which we construct specifically to capture much of the information which can be gained by using multidimensional persistence over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.
We observe that suitably located energy storage systems are able to collect significant revenue through spatiotemporal arbitrage in congested transmission networks. However, transmission capacity expansion can significantly reduce or eliminate this source of revenue. Investment decisions by merchant storage operators must, therefore, account for the consequences of potential investments in transmission capacity by central planners. This paper presents a tri-level model to co-optimize merchant electrochemical storage siting and sizing with centralized transmission expansion planning. The upper level takes the merchant storage owner's perspective and aims to maximize the lifetime profits of the storage, while ensuring a given rate of return on investments. The middle level optimizes centralized decisions about transmission expansion. The lower level simulates market clearing. The proposed model is recast as a bi-level equivalent, which is solved using the column-and-constraint generation technique. A case study based on a 240-bus, 448-line testbed of the Western Electricity Coordinating Council interconnection demonstrates the usefulness of the proposed tri-level model.
We report results for two sets of impact experiments. To assist with model development using the impact data reported, the materials are also mechanically characterized using a series of standard experiments. The first set of impact data comes from a series of coefficient of restitution experiments, in which a 2-meter-long pendulum is used to study "in context" measurements of the coefficient of restitution for eight different materials (6061-T6 Aluminum, Phosphor Bronze alloy 510, Hiperco, Nitronic 60A, Stainless Steel 304, Titanium, Copper, and Annealed Copper). The coefficient of restitution is measured via two different techniques: digital image correlation and laser Doppler vibrometry. Due to the strong agreement between the two methods, only results from the digital image correlation are reported. The coefficient of restitution experiments are "in context" in that the scales of the geometry and impact velocities are representative of common features in the motivating application for this research. The second set of data comes from a series of compliance measurements on the same set of materials, conducted using both nano-indentation and micro-indentation machines, providing sub-nm displacement resolution and µN force resolution. Good agreement is seen for load levels spanned by both machines. As the transition from elastic to plastic behavior occurs at contact displacements on the order of 30 nm, this data set provides unique insight into the transitional region.
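The headline quantity of the first experiment set reduces to a simple velocity ratio; the velocities below are made-up placeholders for what the digital image correlation or vibrometry traces would provide:

```python
# Coefficient of restitution from a pendulum impact test:
#   e = |v_rebound| / |v_approach|
# The velocity values here are hypothetical, for illustration only.
def coefficient_of_restitution(v_approach, v_rebound):
    """e in [0, 1]: 1 = perfectly elastic, 0 = perfectly plastic impact."""
    return abs(v_rebound) / abs(v_approach)

# Hypothetical DIC-derived velocities just before and after impact (m/s):
e = coefficient_of_restitution(0.50, 0.38)  # e = 0.76
```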