Publications

4 Results

Search results

Implementing wide baseline matching algorithms on a graphics processing unit

Myers, Daniel S.; Gonzales, Antonio G.; Rothganger, Fredrick R.; Larson, K.W.

Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphics processing unit (GPU) as a fast multithreaded co-processor. In this paper, we present an implementation of the difference-of-Gaussians feature extractor, built on NVIDIA's CUDA GPU programming system and run on NVIDIA hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.

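The abstract does not give implementation details, so the following is only a minimal sketch, under assumed kernel names, filter radius, and launch configuration, of how one difference-of-Gaussians level might map onto CUDA kernels: a separable Gaussian blur pass plus a pixel-wise subtraction of two blurred images. None of it is taken from the paper.

```cuda
// Minimal sketch of one difference-of-Gaussians (DoG) level on the GPU.
// Kernel names, the fixed filter radius, and the launch configuration are
// illustrative assumptions, not details from the paper.
#include <cuda_runtime.h>

#define RADIUS 4                               // assumed Gaussian kernel radius
__constant__ float d_taps[2 * RADIUS + 1];     // 1-D Gaussian filter taps
                                               // (uploaded with cudaMemcpyToSymbol)

// Horizontal pass of a separable Gaussian blur; a matching vertical pass
// (omitted here) completes the 2-D convolution.
__global__ void blurRows(const float* src, float* dst, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float sum = 0.0f;
    for (int k = -RADIUS; k <= RADIUS; ++k) {
        int xk = min(max(x + k, 0), w - 1);    // clamp at the image border
        sum += d_taps[k + RADIUS] * src[y * w + xk];
    }
    dst[y * w + x] = sum;
}

// One DoG level: the pixel-wise difference of two Gaussian-blurred images.
__global__ void dogLevel(const float* blurFine, const float* blurCoarse,
                         float* dog, int w, int h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h) {
        int i = y * w + x;
        dog[i] = blurCoarse[i] - blurFine[i];
    }
}
```

A host-side launch such as blurRows<<<grid, block>>>(d_src, d_tmp, w, h), with a 16x16 thread block and a grid covering the 2000x2000 image, would process one scale; repeating the blur at successive scales and differencing adjacent levels would yield a full DoG pyramid.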

The synaptic morphological perceptron

Proceedings of SPIE - The International Society for Optical Engineering

Myers, Daniel S.

In recent years, several researchers have constructed novel neural network models based on lattice algebra. Because their computations resemble the operations of image morphology, these models are often called morphological neural networks. One neural model that has been successfully applied to many pattern recognition problems is the single-layer morphological perceptron with dendritic structure (SLMP). In this model, the fundamental computations are performed at dendrites connected to the body of a single neuron. Current training algorithms for the SLMP work by enclosing the target patterns in a set of hyperboxes orthogonal to the axes of the data space. This work introduces an alternate model of the SLMP, dubbed the synaptic morphological perceptron (SMP). In this model, each dendrite has one or more synapses that receive connections from the inputs. The SMP can learn any region of space determined by an arbitrary configuration of hyperplanes, and is not restricted to forming hyperboxes during training. Thus, it represents a more general form of the morphological perceptron than previous architectures.

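For context on the lattice-algebra computation the abstract refers to, one standard dendrite rule from the earlier SLMP literature (the notation below is that prior formulation, not the SMP model introduced in this paper) computes at dendrite k

\[
\tau_k(\mathbf{x}) \;=\; p_k \bigwedge_{i \in I_k} \bigwedge_{\ell \in L_{ik}} (-1)^{1-\ell}\,\bigl(x_i + w_{ik}^{\ell}\bigr),
\]

where \(\bigwedge\) denotes the lattice minimum, \(p_k \in \{-1, +1\}\) marks the dendrite response as excitatory or inhibitory, \(\ell \in \{0, 1\}\) distinguishes excitatory from inhibitory synapses, and \(w_{ik}^{\ell}\) are the synaptic weights; the soma then aggregates the dendrite values with a further lattice (min/max) operation and a hard-limiter activation. Because every term has the axis-parallel form \(\pm(x_i + w)\), the resulting decision regions are the axis-orthogonal hyperboxes mentioned above, which is the restriction the SMP's per-synapse connections are designed to lift.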