Publications

Results 1–25 of 42

Using advanced data structures to enable responsive security monitoring

Cluster Computing

Kroeger, Thomas M.; Vorobyeva, Janet; Delayo, Daniel R.; Bender, Michael A.; Farach-Colton, Martin; Pandey, Prashant; Phillips, Cynthia A.; Singh, Shikha; Thomas, Eric D.

Write-optimized data structures (WODS) offer the potential to keep up with cyberstream event rates and give sub-second query response for key items like IP addresses. These data structures organize logs as the events are observed. To work in a real-world environment and not fill up the disk, WODS must efficiently expire older events. As the basis for our research into organizing security monitoring data, we implemented a tool, called Diventi, to index IP addresses in connection logs using RocksDB (a write-optimized LSM tree). We extended Diventi to automatically expire data as part of the data structures’ normal operations. We guarantee that Diventi always tracks the N most recent events and tracks no more than N + k events for a parameter k < N, while ensuring the index is opportunistically pruned. To test Diventi at scale in a controlled environment, we used anonymized traces of IP communications collected at SuperComputing 2019. We synthetically extended the 2.4 billion connection events to 100 billion events. We tested Diventi vs. Elasticsearch, a common log indexing tool. In our test environment, Elasticsearch saw an ingestion rate of at best 37,000 events/s while Diventi sustained ingestion rates greater than 171,000 events/s. Our query response times were as much as 100 times faster, typically answering queries in under 80 ms. Furthermore, we saw no noticeable degradation in Diventi from expiration. We have deployed Diventi for many months, where it has performed well and supported new security analysis capabilities.
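
The expiration guarantee (always the most recent N events, never more than N + k) can be illustrated with a small in-memory sketch. The class and method names below are our own and purely illustrative; the real Diventi builds its index on RocksDB rather than Python dictionaries, and expires data inside the LSM tree's normal operations.

from collections import deque

class RecentEventIndex:
    """Toy index over connection events keyed by IP address (hypothetical names).

    Keeps at least the N most recent events and never more than N + k:
    expiration runs in batches of k and piggybacks on inserts rather than
    running as a separate maintenance pass.
    """

    def __init__(self, n, k):
        assert 0 < k < n
        self.n, self.k = n, k
        self.events = deque()   # (seq, ip) in arrival order, oldest first
        self.by_ip = {}         # ip -> list of (seq, record), oldest first
        self.seq = 0

    def insert(self, ip, record):
        self.seq += 1
        self.events.append((self.seq, ip))
        self.by_ip.setdefault(ip, []).append((self.seq, record))
        if len(self.events) >= self.n + self.k:
            self._expire_batch()

    def _expire_batch(self):
        # Drop the k globally oldest events; the index shrinks back to N entries,
        # so its size stays between N and N + k once it has warmed up.
        for _ in range(self.k):
            _seq, ip = self.events.popleft()
            bucket = self.by_ip[ip]
            bucket.pop(0)       # the oldest entry for this IP is at the front
            if not bucket:
                del self.by_ip[ip]

    def query(self, ip):
        """Return retained events for an IP, oldest first."""
        return [record for _seq, record in self.by_ip.get(ip, [])]

idx = RecentEventIndex(n=1000, k=100)
idx.insert("10.0.0.1", {"dst": "10.0.0.2", "bytes": 512})
print(idx.query("10.0.0.1"))    # [{'dst': '10.0.0.2', 'bytes': 512}]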

Timely Reporting of Heavy Hitters Using External Memory

ACM Transactions on Database Systems

Singh, Shikha; Pandey, Prashant; Bender, Michael A.; Berry, Jonathan W.; Farach-Colton, Martin; Johnson, Rob; Kroeger, Thomas M.; Phillips, Cynthia A.

Given an input stream S of size N, a φ-heavy hitter is an item that occurs at least φN times in S. The problem of finding heavy hitters is extensively studied in the database literature. We study a real-time heavy-hitters variant in which an element must be reported shortly after we see its T = φN-th occurrence (and hence it becomes a heavy hitter). We call this the Timely Event Detection (TED) Problem. The TED problem models the needs of many real-world monitoring systems, which demand accurate (i.e., no false negatives) and timely reporting of all events from large, high-speed streams with a low reporting threshold (high sensitivity). Like the classic heavy-hitters problem, solving the TED problem without false positives requires large space (ω(N) words). Thus in-RAM heavy-hitters algorithms typically sacrifice accuracy (i.e., allow false positives), sensitivity, or timeliness (i.e., use multiple passes). We show how to adapt heavy-hitters algorithms to external memory to solve the TED problem on large high-speed streams while guaranteeing accuracy, sensitivity, and timeliness. Our data structures are limited only by I/O-bandwidth (not latency) and support a tunable tradeoff between reporting delay and I/O overhead. With a small bounded reporting delay, our algorithms incur only a logarithmic I/O overhead. We implement and validate our data structures empirically using the Firehose streaming benchmark. Multi-threaded versions of our structures can scale to process 11M observations per second before becoming CPU bound. In comparison, a naive adaptation of the standard heavy-hitters algorithm to external memory would be limited by the storage device's random I/O throughput, i.e., ≈100K observations per second.
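
The problem statement translates directly into a baseline: an exact, timely detector counts every distinct item and reports on the T = φN-th occurrence. The sketch below is our own illustration, not the paper's data structure; it is accurate, sensitive, and timely, but it keeps one counter per distinct item, which is exactly the space cost the paper's external-memory structures are designed to pay on disk rather than in RAM.

from collections import defaultdict

def timely_event_detection(stream, phi, n):
    """Exact Timely Event Detection: yield (index, item) the moment an item
    reaches its T = phi * n-th occurrence.

    No false positives or negatives and no reporting delay, but space grows
    with the number of distinct items in the stream.
    """
    threshold = max(1, int(phi * n))
    counts = defaultdict(int)
    for i, item in enumerate(stream):
        counts[item] += 1
        if counts[item] == threshold:   # report exactly once, at the T-th occurrence
            yield i, item

# Example: phi = 0.2 over a stream of length 10, so T = 2.
stream = ["a", "b", "a", "c", "b", "a", "d", "b", "a", "a"]
print(list(timely_event_detection(stream, phi=0.2, n=len(stream))))
# -> [(2, 'a'), (4, 'b')]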

Timely Reporting of Heavy Hitters using External Memory

Proceedings of the ACM SIGMOD International Conference on Management of Data

Pandey, Prashant; Singh, Shikha; Bender, Michael A.; Berry, Jonathan W.; Farach-Colton, Martin; Johnson, Rob; Kroeger, Thomas M.; Phillips, Cynthia A.

Given an input stream S of size N, a φ-heavy hitter is an item that occurs at least φN times in S. The problem of finding heavy hitters is extensively studied in the database literature. We study a real-time heavy-hitters variant in which an element must be reported shortly after we see its T = φN-th occurrence (and hence becomes a heavy hitter). We call this the Timely Event Detection (TED) Problem. The TED problem models the needs of many real-world monitoring systems, which demand accurate (i.e., no false negatives) and timely reporting of all events from large, high-speed streams with a low reporting threshold (high sensitivity). Like the classic heavy-hitters problem, solving the TED problem without false positives requires large space (ω(N) words). Thus in-RAM heavy-hitters algorithms typically sacrifice accuracy (i.e., allow false positives), sensitivity, or timeliness (i.e., use multiple passes). We show how to adapt heavy-hitters algorithms to external memory to solve the TED problem on large high-speed streams while guaranteeing accuracy, sensitivity, and timeliness. Our data structures are limited only by I/O-bandwidth (not latency) and support a tunable trade-off between reporting delay and I/O overhead. With a small bounded reporting delay, our algorithms incur only a logarithmic I/O overhead. We implement and validate our data structures empirically using the Firehose streaming benchmark. Multi-threaded versions of our structures can scale to process 11M observations per second before becoming CPU bound. In comparison, a naive adaptation of the standard heavy-hitters algorithm to external memory would be limited by the storage device's random I/O throughput, i.e., ∼100K observations per second.

Quantifying Uncertainty in Emulations (LDRD Report)

Crussell, Jonathan C.; Brown, Aaron B.; Jennings, Jeremy K.; Kavaler, David; Kroeger, Thomas M.; Phillips, Cynthia A.

This report summarizes the work performed under the project “Quantifying Uncertainty in Emulations.” Emulation can be used to model real-world systems, typically using virtualization to run the real software on virtualized hardware. Emulations are increasingly used to answer mission-oriented questions, but how well they represent real-world systems is still an open area of research. The goal of the project was to quantify where and how emulations differ from the real world. To do so, we ran a representative workload on both the emulated and the real-world systems, and collected and compared metrics to identify differences. We aimed to capture behavioral differences rather than performance differences, as the latter are better understood in the literature.

Virtually the Same: Comparing Physical and Virtual Testbeds

2019 International Conference on Computing, Networking and Communications, ICNC 2019

Crussell, Jonathan C.; Kroeger, Thomas M.; Brown, Aaron B.; Phillips, Cynthia A.

Network designers, planners, and security professionals increasingly rely on large-scale testbeds based on virtualization to emulate networks and make decisions about real-world deployments. However, there has been limited research on how well these virtual testbeds match their physical counterparts. Specifically, does the virtualization that these testbeds depend on actually capture real-world behaviors sufficiently well to support decisions? As a first step, we perform simple experiments on both physical and virtual testbeds to begin to understand where and how the testbeds differ. We set up a web service on one host and run ApacheBench against this service from a different host, instrumenting each system during these tests. We define an initial repeatable methodology (algorithm) to quantitatively compare physical and virtual testbeds. Specifically, we compare the testbeds at three levels of abstraction: application, operating system (OS) and network. For the application level, we use the ApacheBench results. For OS behavior, we compare patterns of system call orderings using Markov chains. This provides a unique visual representation of the workload and OS behavior in our testbeds. We also drill down into read-system-call behaviors and show how at one level both systems are deterministic and identical, but as we move up in abstraction that consistency declines. Finally, we use packet captures to compare network behaviors and performance. We reconstruct flows and compare per-flow and per-experiment statistics. From these comparisons, we find that the behavior of the workload in the testbeds is similar but that the underlying processes to support it do vary. The low-level network behavior can vary quite widely in packetization depending on the virtual network driver. While these differences can be important, and knowing about them will help experiment designers, the core application and OS behaviors still represent similar processes.
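
The system-call comparison can be made concrete with a small sketch: build a first-order Markov chain from each trace by counting transitions between consecutive system calls, then compare the two transition matrices. The function names and the distance metric below are illustrative assumptions, not the paper's exact implementation.

from collections import defaultdict

def syscall_markov_chain(trace):
    """Estimate first-order transition probabilities from a syscall trace.

    Returns {current_call: {next_call: probability}}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(trace, trace[1:]):
        counts[cur][nxt] += 1
    chain = {}
    for cur, nexts in counts.items():
        total = sum(nexts.values())
        chain[cur] = {nxt: c / total for nxt, c in nexts.items()}
    return chain

def chain_distance(a, b):
    """Total absolute difference between two chains' transition probabilities
    (an illustrative metric; many other comparisons would work here)."""
    d = 0.0
    for s in set(a) | set(b):
        nexts = set(a.get(s, {})) | set(b.get(s, {}))
        for t in nexts:
            d += abs(a.get(s, {}).get(t, 0.0) - b.get(s, {}).get(t, 0.0))
    return d

physical = ["read", "read", "write", "read", "poll", "read", "write"]
virtual = ["read", "write", "read", "poll", "poll", "read", "write"]
print(chain_distance(syscall_markov_chain(physical), syscall_markov_chain(virtual)))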

Lessons learned from 10k experiments to compare virtual and physical testbeds

12th USENIX Workshop on Cyber Security Experimentation and Test, CSET 2019, co-located with USENIX Security 2019

Crussell, Jonathan C.; Kroeger, Thomas M.; Kavaler, David; Brown, Aaron B.; Phillips, Cynthia A.

Virtual testbeds are a core component of cyber experimentation as they allow for fast and relatively inexpensive modeling of computer systems. Unlike simulations, virtual testbeds run real software on virtual hardware, which allows them to capture unknown or complex behaviors. However, virtualization is known to increase latency and decrease throughput. Could these and other artifacts from virtualization undermine the experiments that we wish to run? For the past three years, we have attempted to quantify where and how virtual testbeds differ from their physical counterparts to address this concern. While performance differences have been widely studied, we aim to uncover behavioral differences. We have run over 10,000 experiments and processed over half a petabyte of data. Complete details of our methodology and our experimental results from applying that methodology are published in previous work. In this paper, we describe our lessons learned in the process of constructing and instrumenting both physical and virtual testbeds and analyzing the results from each.

Advanced Data Structures for Improved Cyber Resilience and Awareness in Untrusted Environments: LDRD Report

Bender, Michael A.; Berry, Jonathan W.; Farach-Colton, Martin; Jacobs, Justin; Johnson, Rob; Kroeger, Thomas M.; Mayer, Tyler; Mccauley, Samuel; Pandey, Prashant; Phillips, Cynthia A.; Porter, Alexandra; Singh, Shikha; Raizes, Justin; Xu, Helen; Zage, David

This report summarizes the work performed under the project "Advanced Data Structures for Improved Cyber Resilience and Awareness in Untrusted Environments." The goal of the project was to design, analyze, and test new data structures for cybersecurity applications. We had two major thrusts: 1) using new/improved write-optimized data structures and/or algorithms to better manage and analyze high-speed massive-volume cyberstreams, and 2) adding security features to data structures at minimum cost. Write optimization allows data structures to better use secondary memory to store and search a larger amount of useful information. Secondary memory is large compared to main memory, but data movement to and from secondary memory must be carefully managed to run quickly enough to keep up with fast streams. The first thrust included managing cyberstreams in parallel, both multi-threaded and distributed, and improving the benchmarking infrastructure for testing new streaming data structures. We considered both (near) real-time discovery of particular patterns and improved logging for forensics. We considered two kinds of security-feature problems. The first was high-performance history-independent external-memory data structures. These provide certain protections to data if a disk is stolen. We also prove some trade-offs between speed and security in this setting. The second data-security problem is more secure data look-up in secret-shared databases. This report summarizes the project's major accomplishments, with the background to understand these accomplishments. It gathers the abstracts and references for the six refereed publications that have appeared as part of this work. We summarize several accomplishments that will be submitted for publication. We then archive one piece of partial work that is not likely to be published in the near future: validation of history-independent data structure implementations.
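
The write-optimization idea behind the first thrust can be sketched in a few lines: inserts accumulate in an in-memory buffer and are flushed to secondary storage in large sorted batches, so the per-insert I/O cost is amortized across the whole batch. The sketch below is our own LSM-flavored illustration of that principle, not code from the project.

import bisect

class WriteOptimizedLog:
    """Minimal write-optimized index: buffer keys in RAM, flush sorted runs.

    Each flush writes one large sequential batch instead of one random write
    per insert, which is the core reason write-optimized structures can keep
    up with high-rate streams.
    """

    def __init__(self, buffer_capacity=4):
        self.buffer_capacity = buffer_capacity
        self.buffer = []    # unsorted recent inserts (in RAM)
        self.runs = []      # sorted runs (stand-ins for on-disk batches)

    def insert(self, key):
        self.buffer.append(key)
        if len(self.buffer) >= self.buffer_capacity:
            self.flush()

    def flush(self):
        self.runs.append(sorted(self.buffer))   # one sequential batch write
        self.buffer = []

    def contains(self, key):
        # Queries check the RAM buffer plus every run: the classic
        # write-optimization trade of slightly slower reads for fast writes.
        if key in self.buffer:
            return True
        for run in self.runs:
            i = bisect.bisect_left(run, key)
            if i < len(run) and run[i] == key:
                return True
        return False

log = WriteOptimizedLog(buffer_capacity=4)
for k in [17, 3, 42, 8, 25, 1]:
    log.insert(k)
print(log.contains(42), log.contains(99))   # True False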

Anti-persistence on persistent storage: History-independent sparse tables and dictionaries

Proceedings of the ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems

Bender, Michael A.; Berry, Jonathan W.; Johnson, Rob; Kroeger, Thomas M.; Mccauley, Samuel; Phillips, Cynthia A.; Simon, Bertrand; Singh, Shikha; Zage, David J.

We present history-independent alternatives to a B-tree, the primary indexing data structure used in databases. A data structure is history independent (HI) if it is impossible to deduce any information by examining the bit representation of the data structure that is not already available through the API. We show how to build a history-independent cache-oblivious B-tree and a history-independent external-memory skip list. One of the main contributions is a data structure we build on the way - a history-independent packed-memory array (PMA). The PMA supports efficient range queries, one of the most important operations for answering database queries. Our HI PMA matches the asymptotic bounds of prior non-HI packed-memory arrays and sparse tables. Specifically, a PMA maintains a dynamic set of elements in sorted order in a linear-sized array. Inserts and deletes take an amortized O(log^2 N) element moves with high probability. Simple experiments with our implementation of HI PMAs corroborate our theoretical analysis. Comparisons to regular PMAs give preliminary indications that the practical cost of adding history independence is not too large. Our HI cache-oblivious B-tree bounds match those of prior non-HI cache-oblivious B-trees. Searches take O(log_B N) I/Os; inserts and deletes take O((log^2 N)/B + log_B N) amortized I/Os with high probability; and range queries returning k elements take O(log_B N + k/B) I/Os. Our HI external-memory skip list achieves optimal bounds with high probability, analogous to in-memory skip lists: O(log_B N) I/Os for point queries and amortized O(log_B N) I/Os for inserts/deletes. Range queries returning k elements run in O(log_B N + k/B) I/Os. In contrast, the best possible high-probability bound for inserting into the folklore B-skip list, which promotes elements with probability 1/B, is just Θ(log N) I/Os. This is no better than the bounds one gets from running an in-memory skip list in external memory.
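
To make the packed-memory array concrete: it keeps a sorted set in an array a constant factor larger than the set, with gaps spread through it so most inserts shift only a few elements toward a nearby gap. The sketch below is a deliberately simplified, non-history-independent version that respreads elements evenly when space runs out; the paper's PMA instead rebalances geometrically sized windows chosen by density thresholds to achieve the O(log^2 N) amortized element moves quoted above, and adds randomization for history independence.

class SimplePMA:
    """Simplified packed-memory array: sorted keys stored in an array with gaps.

    An insert shifts the keys between its position and the nearest gap on the
    right by one slot; when the array gets too dense (or no gap is available),
    the keys are respread evenly, restoring uniform gaps.
    """

    def __init__(self, capacity=8, max_density=0.75):
        self.slots = [None] * capacity
        self.count = 0
        self.max_density = max_density

    def insert(self, key):
        if self.count + 1 > self.max_density * len(self.slots):
            self._respread(2 * len(self.slots))   # grow and re-space gaps
        pos = self._rank_slot(key)
        gap = self._gap_at_or_after(pos)
        if gap is None:                           # no gap to the right:
            self._respread(len(self.slots))       # re-space, then retry
            pos = self._rank_slot(key)
            gap = self._gap_at_or_after(pos)
        # Shift the block [pos, gap) one slot right and drop the key in place.
        self.slots[pos + 1:gap + 1] = self.slots[pos:gap]
        self.slots[pos] = key
        self.count += 1

    def _rank_slot(self, key):
        # Slot just after the last occupied slot holding a value <= key.
        pos = 0
        for i, x in enumerate(self.slots):
            if x is not None and x <= key:
                pos = i + 1
        return pos

    def _gap_at_or_after(self, pos):
        for i in range(pos, len(self.slots)):
            if self.slots[i] is None:
                return i
        return None

    def _respread(self, capacity):
        keys = [x for x in self.slots if x is not None]
        self.slots = [None] * capacity
        step = capacity / max(1, len(keys))       # even spacing keeps gaps uniform
        for i, k in enumerate(keys):
            self.slots[int(i * step)] = k

    def range_query(self, lo, hi):
        # Keys sit in sorted left-to-right order, so a range query is a scan.
        return [x for x in self.slots if x is not None and lo <= x <= hi]

pma = SimplePMA()
for k in [50, 10, 30, 70, 20, 60, 40]:
    pma.insert(k)
print(pma.range_query(20, 60))   # [20, 30, 40, 50, 60]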
