Publications

Results 1–25 of 73

Semisupervised learning for seismic monitoring applications

Seismological Research Letters

Linville, Lisa L.; Anderson, Dylan Z.; Michalenko, Joshua J.; Galasso, Jennifer G.; Draelos, Timothy J.

The impressive performance that deep neural networks demonstrate on a range of seismic monitoring tasks depends largely on the availability of event catalogs that have been manually curated over many years or decades. However, the quality, duration, and availability of seismic event catalogs vary significantly across the range of monitoring operations, regions, and objectives. Semisupervised learning (SSL) enables learning from both labeled and unlabeled data and provides a framework to leverage the abundance of unreviewed seismic data for training deep neural networks on a variety of target tasks. We apply two SSL algorithms (mean-teacher and virtual adversarial training) as well as a novel hybrid technique (exponential average adversarial training) to seismic event classification to examine how unlabeled data with SSL can enhance model performance. In general, we find that SSL can perform as well as supervised learning with fewer labels. We also observe in some scenarios that almost half of the benefits of SSL are the result of the meaningful regularization enforced through SSL techniques and may not be attributable to unlabeled data directly. Lastly, the benefits from unlabeled data scale with the difficulty of the predictive task when we evaluate the use of unlabeled data to characterize sources in new geographic regions. In geographic areas where supervised model performance is low, SSL significantly increases the accuracy of source-type classification using unlabeled data.
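
A minimal sketch of the consistency-training idea behind the mean-teacher algorithm mentioned in the abstract is shown below. It assumes PyTorch, a generic student classifier, and a teacher copy of the same architecture updated by an exponential moving average (EMA); the noise model, loss weighting, and decay rate are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.99):
    """Exponential moving average of the student's weights into the teacher."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def mean_teacher_step(student, teacher, optimizer,
                      x_labeled, y_labeled, x_unlabeled,
                      consistency_weight=1.0, noise_std=0.05):
    """One illustrative training step combining supervised and consistency losses."""
    # Supervised cross-entropy on the labeled batch.
    sup_loss = F.cross_entropy(student(x_labeled), y_labeled)

    # Consistency term: the student's predictions on perturbed unlabeled
    # inputs should match the (frozen) teacher's predictions.
    noisy = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_unlabeled), dim=1)
    student_probs = F.softmax(student(noisy), dim=1)
    consistency_loss = F.mse_loss(student_probs, teacher_probs)

    loss = sup_loss + consistency_weight * consistency_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # the teacher only tracks the student via EMA
    return loss.item()

# The teacher starts as a frozen copy of the student, e.g. copy.deepcopy(student),
# and is never updated by backpropagation, only by the EMA step.
```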

Deep Learning Models Augment Analyst Decisions for Event Discrimination

Geophysical Research Letters

Linville, Lisa L.; Pankow, Kristine; Draelos, Timothy J.

Long-term seismic monitoring networks are well positioned to leverage advances in machine learning because of the abundance of labeled training data that curated event catalogs provide. We explore the use of convolutional and recurrent neural networks to discriminate between explosive and tectonic sources at local distances. Using a 5-year event catalog generated by the University of Utah Seismograph Stations, we train models to produce automated event labels from 90-s event spectrograms recorded on three-component and single-channel sensors. Both network architectures replicate analyst labels with better than 98% accuracy. Model error is most commonly the result of label error (70% of cases). Accounting for mislabeled events (~1% of the catalog), accuracy for both models increases to above 99%. Classification accuracy remains above 98% for shallow tectonic events, indicating that spectral characteristics controlled by event depth do not play a dominant role in event discrimination.
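
A minimal sketch of the kind of convolutional spectrogram classifier described above, assuming PyTorch and single-channel spectrogram inputs; the input shape, layer widths, and two-class output are illustrative placeholders rather than the architecture used in the paper.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Toy two-class discriminator over single-channel event spectrograms."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size features regardless of input size
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        # x: (batch, 1, freq_bins, time_bins) spectrogram tensor
        return self.classifier(self.features(x).flatten(1))

# Example: logits for a batch of four hypothetical 128x256 spectrograms.
logits = SpectrogramCNN()(torch.randn(4, 1, 128, 256))
```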

Posters for AA/CE Reception

Kuether, Robert J.; Allensworth, Brooke M.; Backer, Adam B.; Chen, Elton Y.; Dingreville, Remi P.; Forrest, Eric C.; Knepper, Robert; Tappan, Alexander S.; Marquez, Michael P.; Vasiliauskas, Jonathan G.; Rupper, Stephen G.; Grant, Michael J.; Atencio, Lauren C.; Hipple, Tyler J.; Maes, Danae M.; Timlin, Jerilyn A.; Ma, Tian J.; Garcia, Rudy J.; Danford, Forest L.; Patrizi, Laura P.; Galasso, Jennifer G.; Draelos, Timothy J.; Gunda, Thushara G.; Venezuela, Otoniel V.; Brooks, Wesley A.; Anthony, Stephen M.; Carson, Bryan C.; Reeves, Michael J.; Roach, Matthew R.; Maines, Erin M.; Lavin, Judith M.; Whetten, Shaun R.; Swiler, Laura P.

Abstract not provided.

Dynamic tuning of seismic signal detector trigger levels for local networks

Bulletin of the Seismological Society of America

Draelos, Timothy J.; Peterson, Matthew G.; Knox, Hunter A.; Lawry, Benjamin J.; Phillips-Alonge, Kristin E.; Ziegler, Abra E.; Chael, Eric P.; Young, Christopher J.; Faust, Aleksandra

The quality of automatic signal detections from sensor networks depends on individual detector trigger levels (TLs) for each sensor. The largely manual process of identifying effective TLs is painstaking and does not guarantee optimal configuration settings, yet achieving superior automatic detection of signals and, ultimately, events is closely tied to these parameters. We present a Dynamic Detector Tuning (DDT) system that automatically adjusts effective TL settings for signal detectors to the current state of the environment by leveraging cooperation within a local neighborhood of network sensors. After a stabilization period, the DDT algorithm can adapt in near-real time to changing conditions and automatically tune a signal detector to detect signals only from events of interest. Our current work focuses on reducing false signal detections early in the seismic signal processing pipeline, which leads to fewer false events and significantly reduces analyst time and effort. This system provides an important new method to automatically tune detector TLs for a network of sensors and is applicable both to boosting the performance of existing sensors and to deploying new sensors. With ground truth on detections from a local neighborhood of seismic sensors within a network monitoring the Mount Erebus volcano in Antarctica, we show that DDT reduces the number of false detections by 18% and the number of missed detections by 11% compared with optimal fixed TLs for all sensors.
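
The neighborhood-cooperation idea can be illustrated with a short sketch: if a sensor triggers when most of its neighbors do not, its trigger level is raised; if it stays quiet while the neighborhood agrees an event occurred, its trigger level is lowered. The consensus rule, step sizes, and bounds below are illustrative placeholders, not the published DDT algorithm.

```python
def update_trigger_levels(trigger_levels, detections, consensus_fraction=0.5,
                          step=0.05, tl_min=1.5, tl_max=20.0):
    """One illustrative tuning pass over a neighborhood of sensors.

    trigger_levels: dict mapping sensor id -> current trigger level (TL)
    detections: dict mapping sensor id -> bool, whether the sensor triggered
                in the current time window
    """
    n_triggered = sum(detections.values())
    neighborhood_event = n_triggered >= consensus_fraction * len(detections)

    updated = {}
    for sensor, tl in trigger_levels.items():
        if detections[sensor] and not neighborhood_event:
            # Likely false trigger: raise the TL to make the detector less sensitive.
            tl = min(tl * (1.0 + step), tl_max)
        elif not detections[sensor] and neighborhood_event:
            # Likely missed detection: lower the TL to make the detector more sensitive.
            tl = max(tl * (1.0 - step), tl_min)
        updated[sensor] = tl
    return updated
```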

Temporal Cyber Attack Detection

Ingram, Joey; Draelos, Timothy J.; Sahakian, Meghan A.; Doak, Justin E.

Rigorous characterization of the performance and generalization ability of cyber defense systems is extremely difficult, making it hard to gauge uncertainty and, thus, confidence. This difficulty largely stems from a lack of labeled attack data that fully explores the potential adversarial space. Currently, the performance of cyber defense systems is typically evaluated qualitatively, by manually inspecting the results of the system on live data and adjusting as needed. Additionally, machine learning has shown promise in deriving models that automatically learn indicators of compromise that are more robust than analyst-derived detectors. However, to generate these models, most algorithms require large amounts of labeled data (i.e., examples of attacks). Algorithms that do not require annotated data to derive models are similarly at a disadvantage, because labeled data is still necessary when evaluating performance. In this work, we explore the use of temporal generative models to learn cyber attack graph representations and automatically generate data for experimentation and evaluation. Training and evaluating cyber systems and machine learning models requires significant amounts of annotated data, which is typically collected and labeled by hand for one-off experiments. Automatically generating such data helps derive and evaluate detection models and ensures reproducibility of results. Experimentally, we demonstrate the efficacy of generative sequence analysis techniques in learning the structure of attack graphs, based on a realistic example. These derived models can then be used to generate more data. Additionally, we provide a roadmap for future research efforts in this area.
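
One way to realize such a temporal generative model is a recurrent next-step predictor over tokenized attack events that can also sample synthetic sequences. The sketch below assumes PyTorch and that each attack-graph step has already been mapped to an integer event ID; the vocabulary size, layer dimensions, and sampling loop are illustrative, not the system described above.

```python
import torch
import torch.nn as nn

class AttackSequenceModel(nn.Module):
    """Toy LSTM that predicts the next attack event and samples sequences."""
    def __init__(self, n_event_types=50, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_event_types)

    def forward(self, event_ids, state=None):
        # event_ids: (batch, seq_len) integer event tokens
        h, state = self.lstm(self.embed(event_ids), state)
        return self.head(h), state  # logits over the next event type

    @torch.no_grad()
    def sample(self, start_id, length=20):
        """Generate a synthetic event sequence one token at a time."""
        seq, state = [start_id], None
        token = torch.tensor([[start_id]])
        for _ in range(length):
            logits, state = self.forward(token, state)
            probs = torch.softmax(logits[:, -1], dim=-1)
            token = torch.multinomial(probs, 1)
            seq.append(token.item())
        return seq
```

Trained on sequences extracted from labeled attack traces or a hand-built attack graph, a model of this form can be sampled to produce additional synthetic sequences for evaluation, which is the role the abstract describes for its generative models.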
