Publications

Auto-Curation of Seismic Event Data for Signal Denoising

Fox, Dylan T.; Hammond, Patrick H.; Gonzales, Antonio G.; Lewis, Phillip J.

Denoising contaminated seismic signals for later processing is a fundamental problem in seismic signal analysis. Neural network approaches have shown success denoising local signals when trained on short-time Fourier transform spectrograms. One challenge of this approach is the onerous process of hand-labeling event signals for training. By leveraging the SCALODEEP seismic event detector, we develop an automated set of techniques for labeling event data. Despite region-specific challenges, training the neural network denoiser on machine-curated events shows performance comparable to the neural network trained on hand-curated events. We showcase our technique with two experiments, one using Utah regional data and one using regional data from the Korean Peninsula.
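
As a rough illustration (not the authors' code), the short-time Fourier transform spectrograms used as network input can be computed as below; the sampling rate, window length, and random placeholder trace are assumptions for the sketch.

```python
# Hedged sketch: STFT spectrogram of a seismic trace as denoiser input.
# The sampling rate, window parameters, and the trace itself are illustrative.
import numpy as np
from scipy.signal import stft

fs = 100.0                      # assumed sampling rate in Hz
trace = np.random.randn(3000)   # placeholder for a recorded waveform

# Short-window STFT; the magnitude is typically fed to the network while the
# phase is retained to reconstruct the denoised waveform afterwards.
f, t, Z = stft(trace, fs=fs, nperseg=64, noverlap=48)
magnitude, phase = np.abs(Z), np.angle(Z)
print(magnitude.shape)  # (frequency bins, time frames)
```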

Semi-supervised Bayesian Low-shot Learning

Adams, Jason R.; Goode, Katherine J.; Michalenko, Joshua J.; Lewis, Phillip J.; Ries, Daniel R.

Deep neural networks (NNs) typically outperform traditional machine learning (ML) approaches for complicated, non-linear tasks. It is expected that deep learning (DL) should offer superior performance for the important non-proliferation task of predicting explosive device configuration from an observed optical signature, a task with which human experts struggle. However, supervised machine learning is difficult to apply in this mission space because most recorded signatures are not associated with the corresponding device description, or “truth labels.” This is challenging for NNs, which traditionally require many samples for strong performance. Semi-supervised learning (SSL), low-shot learning (LSL), and uncertainty quantification (UQ) for NNs are emerging approaches that could bridge the mission gaps of few labels and rare samples of importance. NN explainability techniques are important for gaining insight into the inferential feature importance of such a complex model. In this work, SSL, LSL, and UQ are merged into a single framework, a combination not previously demonstrated. Exponential Average Adversarial Training (EAAT) and Pairwise Neural Networks (PNNs) are used as the SSL and LSL methods, respectively. Permutation feature importance (PFI) for functional data provides explainability via the Variable importance Explainable Elastic Shape Analysis (VEESA) pipeline. A variety of uncertainty quantification approaches are explored: Bayesian Neural Networks (BNNs), ensemble methods, concrete dropout, and evidential deep learning. Two final approaches, one utilizing ensemble methods and one utilizing evidential learning, are constructed and compared using a well-quantified synthetic 2D dataset along with the DIRSIG Megascene.
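
To make the ensemble route to uncertainty quantification concrete, the sketch below (scikit-learn models on a synthetic 2D dataset; all names and settings are assumptions, not the report's implementation) trains several independently initialized networks and treats their disagreement as an uncertainty estimate.

```python
# Hedged sketch: ensemble-style uncertainty on a synthetic 2D problem.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Train several small networks that differ only in their random initialization.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                  random_state=seed).fit(X, y)
    for seed in range(5)
]

probs = np.stack([m.predict_proba(X) for m in ensemble])  # (members, samples, classes)
mean_prob = probs.mean(axis=0)            # ensemble prediction
uncertainty = probs.var(axis=0).sum(-1)   # disagreement as a simple epistemic proxy
print(mean_prob.shape, uncertainty.shape)
```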

Evaluating Scalograms for Seismic Event Denoising

Lewis, Phillip J.; Gonzales, Antonio G.; Hammond, Patrick H.

Denoising contaminated seismic signals for later processing is a fundamental problem in seismic signal analysis. The most straightforward denoising approach, spectral filtering, is not effective when noise and seismic signal occupy the same frequency range. Neural network approaches have shown success denoising local signals when trained on short-time Fourier transform spectrograms (Zhu et al. 2018; Tibi et al. 2021). Scalograms, a wavelet-based alternative, achieved roughly 15% better reconstruction than spectrograms on a seismic waveform test set, as measured by dynamic time warping, suggesting their use as an alternative input for denoising. We train a deep neural network on a scalogram dataset derived from waveforms recorded by the University of Utah Seismograph Stations network. We find that initial results are no better than the spectrogram approach, with additional overhead imposed by the significantly larger size of scalograms. A robust exploration of neural network hyperparameters and network architecture was not performed and could be done in follow-on work.
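
For orientation, a scalogram is the magnitude of a continuous wavelet transform; the sketch below (Morlet wavelet, scale range, and sampling rate are assumptions, not the paper's configuration) shows how one can be built with PyWavelets.

```python
# Hedged sketch: building a scalogram from a waveform via the continuous
# wavelet transform; note the (scales x samples) image is much larger than
# a comparable STFT spectrogram.
import numpy as np
import pywt

fs = 100.0                      # assumed sampling rate in Hz
trace = np.random.randn(3000)   # placeholder for a recorded waveform

scales = np.arange(1, 128)      # finer scales resolve higher frequencies
coeffs, freqs = pywt.cwt(trace, scales, "morl", sampling_period=1.0 / fs)

scalogram = np.abs(coeffs)      # magnitude image used as denoiser input
print(scalogram.shape)          # (127, 3000)
```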

Image Processing Algorithms for Tuning Quantum Devices and Nitrogen-Vacancy Imaging

Monical, Cara P.; Lewis, Phillip J.; Agron, Abrielle; Larson, K.W.; Mounce, Andrew M.

Semiconductor quantum dot devices can be challenging to configure into a regime where they are suitable for qubit operation. This challenge arises from variations in gate control of quantum dot electron occupation and in tunnel coupling between quantum dots on a single device or across several devices. Furthermore, a single control gate usually has capacitive coupling to multiple quantum dots and to the tunnel barriers between dots. If the device operator, be it human or machine, has quantitative knowledge of how gates control the electrostatic and dynamic properties of multiqubit devices, the operator can more quickly and easily navigate the multidimensional gate space to find a qubit operating regime. We have developed and applied image analysis techniques to quantitatively detect where charge offsets from different quantum dots intersect, so-called anticrossings. In this document we outline the details of our algorithm for detecting single anticrossings, which has been used to fine-tune the inter-dot tunnel rates for a three-quantum-dot system. Additionally, we show that our algorithm can detect multiple anticrossings in the same dataset, which can aid in coarse tuning of the electron occupation of multiple quantum dots. We also include an application of cross correlation to the imaging of magnetic fields using nitrogen vacancies.
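
The generic operation behind both the anticrossing search and the nitrogen-vacancy application is template matching by cross correlation; the sketch below (random placeholder image, made-up template and threshold, not the report's algorithm) shows the basic idea.

```python
# Hedged sketch: mean-removed 2D cross correlation of an image against a
# small template, with peaks above a threshold taken as candidate features.
import numpy as np
from scipy.signal import correlate2d

image = np.random.rand(200, 200)     # placeholder for a charge stability diagram
template = np.ones((5, 5)) / 25.0    # illustrative feature template

# Remove the means so bright backgrounds do not dominate the correlation.
response = correlate2d(image - image.mean(), template - template.mean(), mode="same")

# Candidate feature locations are peaks well above the background response.
peaks = np.argwhere(response > response.mean() + 3 * response.std())
print(len(peaks), "candidate locations")
```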

Computer-automated tuning procedures for semiconductor quantum dot arrays

Applied Physics Letters

Mills, A.R.; Feldman, M.M.; Monical, Cara P.; Lewis, Phillip J.; Larson, K.W.; Mounce, Andrew M.; Petta, J.R.

As with any quantum computing platform, semiconductor quantum dot devices require sophisticated hardware and controls for operation. The increasing complexity of quantum dot devices necessitates the advancement of automated control software and image recognition techniques for rapidly evaluating charge stability diagrams. We use an image analysis toolbox developed in Python to automate the calibration of virtual gates, a process that previously involved a large amount of user intervention. Moreover, we show that straightforward feedback protocols can be used to simultaneously tune multiple tunnel couplings in a triple quantum dot in a computer-automated fashion. Finally, we adopt the use of a "tunnel coupling lever arm" to model the interdot barrier gate response and discuss how it can be used to more rapidly tune interdot tunnel couplings to the gigahertz values that are compatible with exchange gates.
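
As background on virtual gates (the matrix below is invented for illustration and is not taken from the paper), calibration amounts to measuring a cross-capacitance matrix and inverting it so that each virtual gate moves only its target dot to first order.

```python
# Hedged sketch: defining virtual gates from an assumed cross-capacitance
# (lever-arm) matrix G, where G[i, j] is the response of dot i to gate j.
import numpy as np

G = np.array([
    [1.00, 0.30, 0.10],
    [0.25, 1.00, 0.28],
    [0.08, 0.32, 1.00],
])

M = np.linalg.inv(G)  # virtual-gate transformation

# A unit step on virtual gate 1 maps to a combination of physical gate
# voltages that, to first order, leaves the other dots' occupations fixed.
delta_virtual = np.array([1.0, 0.0, 0.0])
delta_physical = M @ delta_virtual
print(delta_physical)
```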

Application-specific compression: final report

Melgaard, David K.; Lewis, Phillip J.; Lee, David S.; Carlson, Jeffrey J.; Byrne, Raymond H.; Harrison, Carol D.

With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to conclude that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory, whereas conventional lossless techniques achieved levels of less than 3.
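
The thresholding step described above can be sketched as follows (wavelet family, decomposition level, and the 80% cutoff are assumptions chosen to mirror the text, not the study's exact settings).

```python
# Hedged sketch: wavelet compression by zeroing small coefficients, then
# reconstructing; the sparse coefficients would be entropy coded losslessly.
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.3 * np.random.randn(1024)

coeffs = pywt.wavedec(signal, "db4", level=5)
flat, slices = pywt.coeffs_to_array(coeffs)

# Zero the smallest 80% of coefficients by magnitude (the noise-dominated part).
threshold = np.quantile(np.abs(flat), 0.80)
flat_thresh = np.where(np.abs(flat) >= threshold, flat, 0.0)

reconstructed = pywt.waverec(
    pywt.array_to_coeffs(flat_thresh, slices, output_format="wavedec"), "db4"
)
print("fraction zeroed:", np.mean(flat_thresh == 0.0))
```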
