Publications

Seascape Interface Control Document

Moore, Emily R.; Pitts, Todd A.; Laros, James H.; Qiu, Henry Q.; Ross, Leon C.; Danford, Forest L.; Pitts, Christopher W.

This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are to (1) provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications), and (2) provide an automated structure for specifying, running, and reporting on algorithm performance tests. Seascape uses the open-source tools GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue jobs that run the algorithms under test against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.
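
A minimal sketch of the workflow this abstract describes (queue jobs, run algorithms against curated labeled data, gather results, generate a report) is given below. All names here (the DATASETS and ALGORITHMS registries, the job and report formats) are hypothetical illustrations of the pattern, not the actual Seascape API.

```python
import json
import queue

# Hypothetical registry of curated, labeled data sets: each entry maps a
# data set name to (sample, adjudicated_label) pairs.
DATASETS = {
    "pings_v1": [({"ping": 0.42}, "target"), ({"ping": 0.07}, "clutter")],
}

# Hypothetical registry of algorithms under test.
ALGORITHMS = {
    "threshold_detector": lambda s: "target" if s["ping"] > 0.3 else "clutter",
}

# Queue one job per (algorithm, data set) pairing.
jobs = queue.Queue()
for algo_name in ALGORITHMS:
    for ds_name in DATASETS:
        jobs.put((algo_name, ds_name))

# Run each job against the stored labels and gather the results into a report.
report = []
while not jobs.empty():
    algo_name, ds_name = jobs.get()
    algo, labeled = ALGORITHMS[algo_name], DATASETS[ds_name]
    correct = sum(1 for sample, label in labeled if algo(sample) == label)
    report.append({"algorithm": algo_name, "dataset": ds_name,
                   "accuracy": correct / len(labeled)})

# Seascape stores generated reports in the Nexus artifact server; here we
# simply serialize the report.
print(json.dumps(report, indent=2))
```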

Seascape Interface Control Document

Moore, Emily R.; Pitts, Todd A.; Laros, James H.; Qiu, Henry Q.; Ross, Leon C.; Danford, Forest L.; Pitts, Christopher W.

This manual describes the components, installation, and usage of the Seascape system. It outlines step-by-step processes for setting up a local instance of Seascape, incorporating new datasets and algorithms into Seascape, and using the system itself. A brief overview of Seascape is provided in Section 1.2. System components and the roles of the system's intended users are described in Section 1.3. How each role uses Seascape is explained in Section 2.1. The steps to incorporate data into Seascape-DB and an algorithm into Seascape-VV are outlined in Sections 2.2 and 2.3, respectively. Steps to set up an instance of Seascape can be found in Appendix A.1. The appendix also includes code examples, frequently asked questions, terminology, and a list of acronyms.

Machine Learning for Correlated Intelligence. LDRD SAND Report

Moore, Emily R.; Proudfoot, Oliver S.; Qiu, Henry Q.; Ganter, Tyler G.; Lemon, Brandon; Pitts, Todd A.; Moon, Todd K.

The Machine Learning for Correlated Intelligence Laboratory Directed Research & Development (LDRD) project explored competing a variety of machine learning (ML) classification techniques against a known, open-source dataset through a rapid, automated algorithm research and development (R&D) infrastructure. The approach relied heavily on building an infrastructure that provides a pipeline for automatic target recognition (ATR) ML algorithm competition. Results are presented for nine ML classifiers run against a primary dataset using the pipeline infrastructure developed for this project. New approaches to feature set extraction are also presented and discussed.
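
As a sketch of the classifier-competition pattern described in the abstract (the report's nine classifiers and its primary dataset are not reproduced here), the following uses scikit-learn's bundled digits dataset and three stand-in classifiers, each trained and scored on an identical split so the comparison is repeatable:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A fixed random_state makes the split, and therefore the competition,
# repeatable from run to run.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in candidates; a real competition would register each entry's
# classifiers here instead.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "svm_rbf": SVC(),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Train and score every candidate on the same held-out data.
for name, clf in candidates.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")
```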

Seascape: A Due-Diligence Framework For Algorithm Acquisition

Proceedings of SPIE - The International Society for Optical Engineering

Pitts, Christopher W.; Danford, Forest L.; Moore, Emily R.; Marchetto, William; Qiu, Henry Q.; Ross, Leon C.; Pitts, Todd A.

Any program tasked with the evaluation and acquisition of algorithms for use in deployed scenarios must have an impartial, repeatable, and auditable means of benchmarking both candidate and fielded algorithms. Success in this endeavor requires a body of representative sensor data, data labels indicating the proper algorithmic response to the data as adjudicated by subject matter experts, a means of executing algorithms under review against the data, and the ability to automatically score and report algorithm performance. Each of these capabilities should be constructed in support of program and mission goals. By curating and maintaining data, labels, tests, and scoring methodology, a program can understand and continually improve the relationship between benchmarked and fielded performance of acquired algorithms. A system supporting these program needs, deployed in an environment with sufficient computational power and the necessary security controls, is a powerful tool for ensuring due diligence in the evaluation and acquisition of mission-critical algorithms. This paper describes the Seascape system and its place in such a process.
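
One concrete piece of such a process is automated, auditable scoring. The sketch below is a hypothetical illustration (the function, field names, and scoring metric are invented for exposition, not Seascape's actual schema): it scores an algorithm's responses against SME-adjudicated labels and records enough provenance that the exact benchmark can be re-identified later.

```python
import hashlib
import json
from datetime import datetime, timezone

def benchmark(algorithm_id: str, responses: list[str], labels: list[str]) -> dict:
    """Score responses against adjudicated labels; return an audit record."""
    assert len(responses) == len(labels), "one response per labeled sample"
    correct = sum(r == l for r, l in zip(responses, labels))
    # Hash the label set so the benchmark data can be re-identified in audits.
    label_digest = hashlib.sha256(json.dumps(labels).encode()).hexdigest()
    return {
        "algorithm": algorithm_id,
        "accuracy": correct / len(labels),
        "label_sha256": label_digest,
        "scored_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: a candidate algorithm scored on two adjudicated samples.
print(benchmark("candidate_v3", ["target", "clutter"], ["target", "target"]))
```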

Seascape Interface Control Document (V.2)

Moore, Emily R.; Pitts, Todd A.; Laros, James H.; Qiu, Henry Q.; Ross, Leon C.; Danford, Forest L.; Pitts, Christopher W.

This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are to (1) provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications), and (2) provide an automated structure for specifying, running, and reporting on algorithm performance tests. Seascape uses the open-source tools GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue jobs that run the algorithms under test against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.

Seascape Interface Control Document (V.1)

Moore, Emily R.; Pitts, Todd A.; Laros, James H.; Qiu, Henry Q.; Ross, Leon C.; Danford, Forest L.; Pitts, Christopher W.

This paper serves as the Interface Control Document (ICD) for the Seascape automated test harness developed at Sandia National Laboratories. The primary purposes of the Seascape system are to (1) provide a place for accruing large, curated, labeled data sets useful for developing and evaluating detection and classification algorithms (including, but not limited to, supervised machine learning applications), and (2) provide an automated structure for specifying, running, and reporting on algorithm performance tests. Seascape uses the open-source tools GitLab, Nexus, Solr, and Banana, together with code written in Python, to automatically provision and configure computational nodes, queue jobs that run the algorithms under test against the stored data sets, gather the results, and generate reports, which are then stored in the Nexus artifact server.
