Advantage of Machine Learning over Maximum Likelihood in Limited-Angle Low-Photon X-Ray Tomography
Abstract not provided.
Semiconductor quantum dot devices can be challenging to configure into a regime where they are suitable for qubit operation. This challenge arises from variations in gate control of quantum dot electron occupation and tunnel coupling between quantum dots on a single device or across several devices. Furthermore, a single control gate usually has capacitive coupling to multiple quantum dots and to the tunnel barriers between dots. If the device operator, be it human or machine, has quantitative knowledge of how gates control the electrostatic and dynamic properties of multiqubit devices, the operator can more quickly and easily navigate the multidimensional gate space to find a qubit operating regime. We have developed and applied image analysis techniques to quantitatively detect where charge offsets from different quantum dots intersect, so-called anticrossings. In this document we outline the details of our algorithm for detecting single anticrossings, which has been used to fine-tune the inter-dot tunnel rates for a three quantum dot system. Additionally, we show that our algorithm can detect multiple anticrossings in the same dataset, which can aid in the coarse tuning of the electron occupation of multiple quantum dots. We also include an application of cross correlation to the imaging of magnetic fields using nitrogen-vacancy centers.
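As a rough illustration of the kind of image analysis described above (a hedged sketch under assumed conventions, not the algorithm from this work), normalized cross correlation of a charge stability diagram against a small synthetic template can flag candidate anticrossings; the template construction, the peak spacing, and the 0.7 threshold below are illustrative choices.

    # Hypothetical sketch: flag candidate anticrossings in a charge stability
    # image by normalized cross correlation with a small synthetic template.
    import numpy as np
    from skimage.feature import match_template, peak_local_max

    def make_anticrossing_template(size=15, angle=np.pi / 4, width=1.5):
        """Toy template: two charge-transition lines meeting near an anticrossing."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        line1 = np.exp(-((x * np.cos(angle) + y * np.sin(angle)) ** 2) / (2 * width ** 2))
        line2 = np.exp(-((x * np.cos(angle) - y * np.sin(angle)) ** 2) / (2 * width ** 2))
        return np.maximum(line1, line2)

    def find_anticrossings(stability_image, threshold=0.7):
        """Return (row, col) candidates where the correlation score exceeds the threshold."""
        template = make_anticrossing_template()
        score = match_template(stability_image.astype(float), template, pad_input=True)
        return peak_local_max(score, min_distance=5, threshold_abs=threshold), score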
Abstract not provided.
Applied Surface Science
HfC has shown promise as a material for field emission due to the low work function of its (100) surface and its high melting point. Recently, HfC tips have exhibited unexpected failure after field emission at 2200 K. Characterization of the HfC tips identified faceting of the parabolic tip dominated by coexisting (100) and (111) surfaces. To investigate this phenomenon, we used density functional theory (DFT) simulations to identify the role of defects and impurities (Ta, N, O) in HfC surface properties. Carbon vacancies increased the surface energy of the (100) surface from 2.35 J/m² to 4.75 J/m² and decreased the surface energy of the carbon-terminated (111) surface from 8.75 J/m² to 3.48 J/m². Once 60% of the carbon on the (100) surface has been removed, the hafnium-terminated (111) surface becomes the lowest-energy surface, suggesting that carbon depletion may cause these surfaces to coexist. The addition of Ta and N impurities to the surface is energetically favorable and decreases the work function, making them candidate impurities for improving field emission at high temperatures. Overall, DFT simulations have demonstrated the importance of understanding the role of defects in the surface structure and properties of HfC.
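For context, surface energies of the kind quoted above are typically extracted from DFT slab calculations; the generic expressions below (a hedged reminder of the standard approach, not this paper's specific setup) relate the slab total energy to the surface energy.

    % Symmetric, stoichiometric slab: E_slab is the relaxed slab total energy,
    % N the number of bulk formula units in the slab, E_bulk the bulk energy
    % per formula unit, and A the area of one slab face.
    \gamma = \frac{E_{\mathrm{slab}} - N\,E_{\mathrm{bulk}}}{2A}
    % Nonstoichiometric terminations (e.g., C-depleted (100) or Hf- vs. C-terminated
    % (111)) additionally require chemical potentials \mu_i for the n_i atoms of species i:
    \gamma = \frac{1}{2A}\left( E_{\mathrm{slab}} - \sum_i n_i \mu_i \right)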
Applied Physics Letters
As with any quantum computing platform, semiconductor quantum dot devices require sophisticated hardware and controls for operation. The increasing complexity of quantum dot devices necessitates the advancement of automated control software and image recognition techniques for rapidly evaluating charge stability diagrams. We use an image analysis toolbox developed in Python to automate the calibration of virtual gates, a process that previously involved a large amount of user intervention. Moreover, we show that straightforward feedback protocols can be used to simultaneously tune multiple tunnel couplings in a triple quantum dot in a computer-automated fashion. Finally, we adopt the use of a "tunnel coupling lever arm" to model the interdot barrier gate response and discuss how it can be used to more rapidly tune interdot tunnel couplings to the gigahertz values that are compatible with exchange gates.
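As an illustration of how a lever-arm model of the barrier gate response can be used (a hedged sketch with an assumed exponential form and made-up numbers, not the paper's implementation), one can fit t_c(V_B) = t_0 exp(alpha (V_B - V_ref)) to measured couplings and invert the fit to predict the barrier voltage that gives a target coupling.

    # Hedged sketch: fit an assumed exponential barrier-gate response and invert
    # it to predict the voltage needed for a target tunnel coupling.
    import numpy as np

    def fit_tc_lever_arm(v_barrier, t_c, v_ref=0.0):
        """Fit log(t_c) = log(t0) + alpha * (V_B - v_ref); return (t0, alpha)."""
        alpha, log_t0 = np.polyfit(v_barrier - v_ref, np.log(t_c), 1)
        return np.exp(log_t0), alpha

    def barrier_voltage_for_target(t_target, t0, alpha, v_ref=0.0):
        """Invert the fitted model to get the barrier voltage giving t_target."""
        return v_ref + np.log(t_target / t0) / alpha

    # Example with made-up numbers (couplings in GHz, voltages in mV):
    v_b = np.array([0.0, 10.0, 20.0, 30.0])
    t_c = np.array([0.5, 1.1, 2.3, 4.8])
    t0, alpha = fit_tc_lever_arm(v_b, t_c)
    v_for_3ghz = barrier_voltage_for_target(3.0, t0, alpha)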
Abstract not provided.
Statistical Methods for Materials Science: The Data Science of Microstructure Characterization
This chapter considers the collection of sparse samples in electron microscopy, either by modification of the sampling methods utilized on existing microscopes, or with new microscope concepts that are specifically designed and optimized for the collection of sparse samples. It explores potential embodiments of a multi-beam compressive sensing electron microscope. Sparse measurement matrices offer the advantage of efficient image recovery, since each iteration of the recovery process becomes a simple multiplication by a sparse matrix. Electron microscopy is well suited to compressed or sparse sampling due to the difficulty of building electron microscopes that can accurately record more than one electron signal at a time. Sparse sampling in electron microscopy has been considered for dose reduction, for improving three-dimensional reconstructions, and for accelerating data acquisition. For sparse sampling, variations of scanning transmission electron microscopy (STEM) are typically used. In STEM, the electron probe is scanned across the specimen, and the detector measurement is recorded as a function of probe location.
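A minimal sketch of the sparse-matrix point above (hedged: this is not the chapter's reconstruction algorithm, and pixel-domain soft thresholding stands in for a proper transform-domain sparsity model): with the sampling operator stored as a scipy.sparse matrix, each ISTA-style iteration costs only two sparse matrix-vector products.

    # Hedged sketch: iterative recovery from sparse samples where each iteration
    # is dominated by multiplications with a sparse sampling matrix Phi.
    import numpy as np
    import scipy.sparse as sp

    def ista_recover(Phi, y, lam=0.05, step=1.0, n_iter=200):
        """Recover x from y ~ Phi @ x with an l1 penalty (soft thresholding)."""
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            grad = Phi.T @ (Phi @ x - y)          # two sparse mat-vec products
            x = x - step * grad
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
        return x

    # Toy sampling mask: 1024 single-pixel measurements of a 64 x 64 image.
    n_pix, n_meas = 64 * 64, 1024
    rng = np.random.default_rng(0)
    cols = rng.choice(n_pix, size=n_meas, replace=False)
    Phi = sp.csr_matrix((np.ones(n_meas), (np.arange(n_meas), cols)), shape=(n_meas, n_pix))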
Abstract not provided.
TMS Annual Meeting
Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image square centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, imaging time is proportional to the number of measurements taken of each sample; in a traditional SEM, large collections can lead to weeks of around-the-clock imaging time. We previously reported a single-beam sparse sampling approach that we have demonstrated on an operational SEM for collecting "smooth" images. In this paper, we analyze how measurements from a hypothetical multi-beam system would compare to the single-beam approach in a compressed sensing framework. To that end, multi-beam measurements are synthesized on a single-beam SEM, and the fidelity of the reconstructed images is compared to that of the previously demonstrated approach. Since taking fewer measurements comes at the cost of reduced SNR, image fidelity as a function of undersampling ratio is reported.
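One hedged way to picture the synthesis step described above (an illustrative toy, not the paper's measurement model) is to treat each multi-beam measurement as the noisy sum of the single-beam pixel values hit by a fixed pattern of beam offsets:

    # Hedged sketch: synthesize multi-beam measurements from a fully sampled
    # single-beam image as sums over an assumed pattern of beam offsets.
    import numpy as np

    def synthesize_multibeam(image, beam_offsets, n_meas, noise_sigma=0.0, rng=None):
        """Return measurement values and the (row, col) anchor of each measurement."""
        rng = np.random.default_rng() if rng is None else rng
        h, w = image.shape
        anchors = np.column_stack([rng.integers(0, h, n_meas), rng.integers(0, w, n_meas)])
        y = np.zeros(n_meas)
        for k, (r, c) in enumerate(anchors):
            rows = (r + beam_offsets[:, 0]) % h   # wrap at edges for simplicity
            cols = (c + beam_offsets[:, 1]) % w
            y[k] = image[rows, cols].sum()
        return y + rng.normal(0.0, noise_sigma, n_meas), anchors

    # Example: a 2 x 2 pattern of four beams separated by 8 pixels.
    offsets = np.array([[0, 0], [0, 8], [8, 0], [8, 8]])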
Abstract not provided.
Line-of-sight jitter in staring sensor data combined with scene information can obscure critical information for change analysis or target detection. Consequently, before data analysis, the jitter effects must be significantly reduced. Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation; however, PCA requires image frames that contain the jitter variation that is to be modeled. Since jitter is usually chaotic and asymmetric, a data set containing all of the variation without the changes to be detected is typically not available. An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene. It then computes derivatives of that image in the horizontal and vertical directions. The basis set for estimation of the background and the jitter consists of the image and its derivative factors. This approach has several advantages, including: (1) only a small number of images are required to develop the model, (2) the model can estimate backgrounds with jitter different from that in the input training images, (3) the method is particularly effective for sub-pixel jitter, and (4) the model can be developed from images acquired before the change detection process. In addition, the scores from projecting the factors onto the background provide estimates of the jitter magnitude and direction for registration of the images. In this paper we will present a discussion of the theoretical basis for this technique, provide examples of its application, and discuss its limitations.
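A minimal sketch of the projection step described above (our hedged reading, not the authors' code): the reference image and its horizontal and vertical derivatives form the basis, a least-squares fit of each frame onto that basis gives the background estimate, and, to first order in a Taylor expansion of a shifted scene, the ratios of the derivative coefficients to the image coefficient approximate the sub-pixel jitter in each direction.

    # Hedged sketch: estimate background and sub-pixel jitter by projecting each
    # frame onto [reference image, d/dx, d/dy] (first-order shifted-scene model).
    import numpy as np

    def build_basis(reference):
        """Stack the reference image and its x/y derivatives as basis vectors."""
        dy, dx = np.gradient(reference.astype(float))
        return np.column_stack([reference.ravel(), dx.ravel(), dy.ravel()])

    def estimate_background_and_jitter(frame, basis):
        """Least-squares fit of a frame; returns (background, (jitter_x, jitter_y))."""
        coeffs, *_ = np.linalg.lstsq(basis, frame.ravel().astype(float), rcond=None)
        a, b, c = coeffs
        background = (basis @ coeffs).reshape(frame.shape)
        return background, (b / a, c / a)   # small-shift approximation, in pixels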
Abstract not provided.
Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphics processing unit (GPU) as a fast multithreaded co-processor. In this paper, we present an implementation of the difference-of-Gaussians feature extractor, based on the CUDA system of GPU programming developed by NVIDIA and implemented on their hardware. For a 2000 x 2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.
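For orientation, the sketch below is a plain CPU reference (in Python with SciPy, not the paper's CUDA implementation) of what a difference-of-Gaussians extractor computes: the image is blurred at successive scales, adjacent blurs are subtracted, and local extrema of the result across space and scale are taken as keypoint candidates; the scale spacing and threshold are illustrative.

    # Hedged CPU reference for the difference-of-Gaussians computation.
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def dog_stack(image, sigma0=1.6, k=2 ** 0.5, n_levels=5):
        """Return a stack of difference-of-Gaussian images at increasing scale."""
        blurs = [gaussian_filter(image.astype(float), sigma0 * k ** i) for i in range(n_levels)]
        return np.stack([blurs[i + 1] - blurs[i] for i in range(n_levels - 1)])

    def dog_keypoints(dog, threshold=0.03):
        """Crude keypoint candidates: local maxima of |DoG| across space and scale."""
        mag = np.abs(dog)
        local_max = maximum_filter(mag, size=3) == mag
        scale, rows, cols = np.nonzero(local_max & (mag > threshold))
        return list(zip(scale, rows, cols))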
Reliability Engineering and System Safety
The Waste Isolation Pilot Plant (WIPP) is a US Department of Energy (DOE) facility for the permanent disposal of defense-related transuranic (TRU) waste. US Environmental Protection Agency (EPA) regulations specify that the DOE must demonstrate on a sound basis that the WIPP disposal system will effectively contain long-lived alpha-emitting radionuclides within its boundaries for 10,000 years following closure. In 1996, the DOE submitted the "40 CFR Part 191 Compliance Certification Application for the Waste Isolation Pilot Plant" (CCA) to the EPA. The CCA proposed that the WIPP site complies with EPA's regulatory requirements. Contained within the CCA are descriptions of the scientific research conducted to characterize the properties of the WIPP site and of the probabilistic performance assessment (PA) conducted to predict the containment properties of the WIPP disposal system. In May 1998, the EPA certified that TRU waste disposal at the WIPP complies with its regulations. Waste disposal operations at WIPP commenced on March 28, 1999. The 1996 WIPP PA model of the disposal system included conceptual and mathematical representations of key hydrologic and geochemical processes. These key processes were identified over a 22-year period involving data collection, data interpretation, computer models, and sensitivity studies to evaluate the importance of uncertainty and of processes that were difficult to evaluate by other means. Key developments in the area of geochemistry were the evaluation of gas generation mechanisms in the repository; the development of a model of chemical conditions in the repository and of actinide concentrations in brine; the selection of MgO backfill and the experimental demonstration of its effects; and the determination of the chemical retardation capability of the Culebra. Key developments in the area of hydrology were the evaluation of the potential for groundwater to dissolve the Salado Formation (the repository host formation); the development of a regional model for hydrologic conditions; the development of a stochastic, probabilistic representation of hydraulic properties in the Culebra Member of the Rustler Formation; the characterization of physical transport in the Culebra; and the evaluation of brine and gas flow in the Salado. Additional confidence in the conceptual models used in the 1996 WIPP PA was gained through independent peer review at many stages of their development.
The Waste Isolation Pilot Plant (WIPP) is a US Department of Energy (DOE) facility for the permanent disposal of transuranic waste from defense activities. In 1996, the DOE submitted the Title 40 CFR Part 191 Compliance Certification Application for the Waste Isolation Pilot Plant (CCA) to the US Environmental Protection Agency (EPA). The CCA included a probabilistic performance assessment (PA) conducted by Sandia National Laboratories to establish compliance with the quantitative release limits defined in 40 CFR 191.13. An experimental program to collect data relevant to the actinide source term began around 1989 and eventually supported the 1996 CCA PA actinide source term model. The actinide source term provided an estimate of mobile dissolved and colloidal Pu, Am, U, Th, and Np concentrations in their stable oxidation states, and accounted for the effects of uncertainty in the chemistry of brines in waste disposal areas. The experimental program and the actinide source term included in the CCA PA underwent an EPA review lasting more than a year. Experiments were initially conducted to develop data relevant to the wide range of potential future conditions in waste disposal areas. Interim, preliminary performance assessments and actinide source term models provided insight that allowed refinement of the experiments and models. Expert peer review provided additional feedback and confidence in the evolving experimental program. By 1995, the chemical database and PA predictions of WIPP performance were considered reliable enough to support the decision to add an MgO backfill to the waste rooms to control chemical conditions and reduce uncertainty in actinide concentrations, especially for Pu and Am. Important lessons learned through the characterization, PA modeling, and regulatory review of the actinide source term are that (1) experimental characterization and PA should evolve together, with neither activity completely dominating the other; (2) the understanding of physical processes required to develop conceptual models is greater than can be represented in PA models; (3) experimentalists should be directly involved in model and parameter abstraction and simplification for PA; and (4) external expert review should be incorporated early in a project to increase confidence long before regulatory reviews begin.