Publications

Results 1–25 of 32

Implementing transition-edge sensors in a tabletop x-ray CT system for imaging applications

Alpert, Bradley; Becker, Daniel; Bennett, Douglas; Doriese, W.; Durkin, Malcolm; Fowler, Joseph; Gard, Johnathon; Imrek, Jozsef; Levine, Zachary; Mates, John; Miaja-Avila, Luis; Morgan, Kelsey; Nakamura, Nathan; O'Neil, Galen; Ortiz, Nathan; Reintsema, Carl; Schmidt, Daniel; Swetz, Daniel; Szypryt, Paul; Ullom, Joel; Vale, Leila; Weber, Joel; Wessels, Abigail; Dagel, Amber L.; Dalton, Gabriella D.; Laros, James H.; Jimenez, Edward S.; McArthur, Daniel M.; Thompson, Kyle R.; Walker, Christopher W.; Wheeler, Jason W.; Ablerto, Julien; Griveau, Damien; Silvent, Jeremie

Abstract not provided.

Characterizing Human Performance: Detecting Targets at High False Alarm Rates

Proceedings of the 2021 International Topical Meeting on Probabilistic Safety Assessment and Analysis, PSA 2021

Speed, Ann S.; Wheeler, Jason W.; Russell, John L.; Oppel, Fred; Sanchez, Danielle; Silva, Austin R.; Chavez, Anna

The prevalence effect is the observation that, in visual search tasks, as the signal (target) to noise (non-target) ratio becomes smaller, humans are more likely to miss the target when it does occur. Studied extensively in the basic literature [e.g., 1, 2], this effect has implications for real-world settings such as security guards monitoring physical facilities for attacks. Importantly, what seems to drive the effect is the development of a response bias based on learned sensitivity to the statistical likelihood of a target [e.g., 3-5]. This paper presents results from two experiments aimed at understanding how target prevalence impacts the ability of individuals to detect a target on the 1,000th trial of a series of 1,000 trials. The first experiment employed the traditional prevalence effect paradigm, which involves searching for a perfect capital letter T amidst imperfect Ts. In a between-subjects design, our subjects experienced target prevalence rates of 50/50, 1/10, 1/100, or 1/1000. In all conditions, the final trial was always a target. The second (ongoing) experiment replicates this design using a notional physical facility in a mod/sim environment. The simulation enables simulated characters and events (e.g., people, animals, weather) to trigger different intrusion detection sensors. In this experiment, subjects viewed 1,000 "alarm" events and were asked to characterize each as either a nuisance alarm (e.g., set off by an animal) or an attack. As with the basic visual search study, the final trial was always an attack.
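The response-bias account described in the abstract can be illustrated with a toy signal-detection simulation. This is a hedged sketch, not the authors' experiment: the drift rate, evidence distributions, and criterion values below are invented for illustration, and the only element taken from the paper is that the final trial is always a target.

```python
import random

def run_block(prevalence, n_trials=1000, drift=0.0005, seed=0):
    """Toy signal-detection simulation of the prevalence effect: the
    observer's 'report target' criterion drifts conservative after each
    non-target trial, so rare targets tend to be missed more often.
    All numeric parameters are illustrative assumptions."""
    rng = random.Random(seed)
    criterion = 0.5          # evidence threshold for reporting a target
    targets = misses = 0
    for trial in range(n_trials):
        # as in both experiments, the final trial is always a target
        is_target = (trial == n_trials - 1) or rng.random() < prevalence
        evidence = rng.gauss(1.0 if is_target else 0.0, 0.5)
        if is_target:
            targets += 1
            if evidence <= criterion:
                misses += 1
        else:
            criterion += drift   # learned bias toward 'no target'
    return misses / targets

# Miss rates for a high-prevalence and a 1/1000-prevalence block
print(run_block(0.5), run_block(0.001))
```

Under this model the criterion climbs fastest in low-prevalence blocks, which is the mechanism the cited literature [3-5] proposes for rising miss rates.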

Sparse Sampling in Microscopy

Statistical Methods for Materials Science: The Data Science of Microstructure Characterization

Larson, K.W.; Anderson, Hyrum; Wheeler, Jason W.

This chapter considers the collection of sparse samples in electron microscopy, either by modification of the sampling methods utilized on existing microscopes, or with new microscope concepts that are specifically designed and optimized for collection of sparse samples. It explores potential embodiments of a multi-beam compressive sensing electron microscope. Sparse measurement matrices offer an advantage of efficient image recovery, since each iteration of the process becomes a simple multiplication by a sparse matrix. Electron microscopy is well suited to compressed or sparse sampling due to the difficulty of building electron microscopes that can accurately record more than one electron signal at a time. Sparse sampling in electron microscopy has been considered for dose reduction, improving three-dimensional reconstructions and accelerating data acquisition. For sparse sampling, variations of scanning transmission electron microscopy (STEM) are typically used. In STEM, the electron probe is scanned across the specimen, and the detector measurement is recorded as a function of probe location.
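The chapter's point that sparse measurement matrices make each recovery iteration "a simple multiplication by a sparse matrix" can be sketched with iterative soft-thresholding (ISTA). This is an illustrative toy, not the chapter's implementation: the problem sizes, the binary measurement matrix, and the regularization weight are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: recover a sparse signal x from m < n measurements
# y = A @ x, where each row of A sums a handful of pixels, loosely
# mimicking a multi-beam compressive measurement.
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

A = np.zeros((m, n))
for row in A:                        # 10 nonzeros per row -> 5% dense
    row[rng.choice(n, 10, replace=False)] = 1.0
y = A @ x_true

# ISTA: every iteration is just multiplications by A and A.T, which is
# cheap when A is stored as a sparse matrix.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
lam = 0.05
for _ in range(500):
    x = x - step * (A.T @ (A @ x - y))               # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In a real sparse-sampling microscope the multiplications would use a sparse-matrix format (e.g., CSR) so that cost scales with the number of nonzeros rather than with m x n.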

Sparse coding for N-gram feature extraction and training for file fragment classification

IEEE Transactions on Information Forensics and Security

Wang, Felix W.; Quach, Tu-Thach Q.; Wheeler, Jason W.; Aimone, James B.; James, Conrad D.

File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features, such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, contiguous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers, such as support vector machines, over multiple file types. Experimentally, we achieved significantly better classification results than existing methods, especially when the features were used as a supplement to existing hand-engineered features.
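The pipeline the abstract describes (slide an n-byte window over a fragment, encode each n-gram against a learned dictionary, pool the codes into a feature vector) can be sketched as below. This is a minimal sketch under assumptions: the `ngrams` and `sparse_code` helpers are hypothetical, the dictionary here is random rather than learned, and the paper's actual encoder and pooling may differ.

```python
import numpy as np

def ngrams(fragment: bytes, n: int) -> np.ndarray:
    """Stack all overlapping n-byte windows of the fragment as rows,
    scaled to [0, 1]. (Hypothetical helper for illustration.)"""
    rows = [list(fragment[i:i + n]) for i in range(len(fragment) - n + 1)]
    return np.asarray(rows, dtype=float) / 255.0

def sparse_code(X, D, lam=0.1, iters=100):
    """Encode each n-gram row of X as a sparse combination of the rows
    of dictionary D via iterative soft-thresholding, then pool the
    absolute codes into one feature per atom."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    Z = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(iters):
        Z -= step * (Z @ D - X) @ D.T                     # gradient step
        Z = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)
    return np.abs(Z).mean(axis=0)

rng = np.random.default_rng(0)
D = rng.standard_normal((32, 4))      # 32 atoms over 4-grams (untrained)
frag = bytes(rng.integers(0, 256, 512))
features = sparse_code(ngrams(frag, 4), D)
print(features.shape)                 # one feature vector per fragment
```

The resulting fixed-length vector is what would be fed to a standard classifier such as an SVM, alongside hand-engineered features like byte histograms.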

Compressed sensing for fast electron microscopy

TMS Annual Meeting

Anderson, Hyrum A.; Wheeler, Jason W.; Larson, K.W.

Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image square centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, imaging time is proportional to the number of measurements taken of each sample; in a traditional SEM, large collections can lead to weeks of around-the-clock imaging time. We previously reported a single-beam sparse sampling approach that we have demonstrated on an operational SEM for collecting "smooth" images. In this paper, we analyze how measurements from a hypothetical multi-beam system would compare to the single-beam approach in a compressed sensing framework. To that end, multi-beam measurements are synthesized on a single-beam SEM, and the fidelity of reconstructed images is compared to the previously demonstrated approach. Since taking fewer measurements comes at the cost of reduced SNR, image fidelity as a function of undersampling ratio is reported.
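The time/SNR trade-off the abstract invokes can be made concrete with a toy shot-noise model: for a Poisson electron signal at rate r collected for dwell time t, SNR scales as sqrt(r * t), so splitting a fixed time budget T across M measurements gives each one SNR proportional to sqrt(T / M). The rate and budget below are invented numbers, not values from the paper.

```python
import math

def per_measurement_snr(M, total_time=1.0, rate=1e6):
    """Shot-noise-limited SNR of one measurement when a fixed imaging
    budget `total_time` (s) is split evenly across M measurements of a
    Poisson signal at `rate` electrons/s. Illustrative model only."""
    dwell = total_time / M
    return math.sqrt(rate * dwell)

# Undersampling buys time (or measurement count) at the cost of
# per-measurement SNR staying fixed only if dwell time grows:
for M in (10_000, 2_500, 625):        # 1x, 4x, 16x undersampling
    print(M, per_measurement_snr(M))  # 10000 -> 10.0, 625 -> 40.0
```

This is why the paper reports reconstruction fidelity as a function of undersampling ratio: fewer measurements at longer dwell give cleaner samples, and the compressed sensing reconstruction must make up for the missing coverage.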
