Publications

Identifying generalities in data sets using periodic Hopfield networks: initial status report

Link, Hamilton E.; Backer, Alejandro B.

We present a novel class of dynamic neural networks that is capable of learning, in an unsupervised manner, attractors that correspond to generalities in a data set. Upon presentation of a test stimulus, the networks follow a sequence of attractors that correspond to subsets of increasing size or generality in the original data set. The networks, inspired by those of the insect antennal lobe, build upon a modified Hopfield network in which nodes are periodically suppressed, global inhibition is gradually strengthened, and the weight of input neurons is gradually decreased relative to recurrent connections. This allows the networks to converge on a Hopfield network's equilibrium within each suppression cycle and to switch between attractors between cycles. The fast, mutually reinforcing excitatory connections that dominate the dynamics within cycles ensure the robust, error-tolerant behavior that characterizes Hopfield networks. The cyclic inhibition releases the network from what would otherwise be stable equilibria or attractors. Increasing global inhibition and decreasing dependence on the input lead successive attractors to differ and to display increasing generality. As the network faces stronger inhibition, only neurons connected by stronger mutual excitatory connections remain on; successive attractors therefore consist of sets of neurons that are more strongly correlated and tend to select increasingly generic characteristics of the data. Using artificial data, we were able to identify configurations of the network that appeared to produce a sequence of increasingly general results. The next logical step is to apply these networks to suitable real-world data that can be characterized by a hierarchy of increasing generality and to observe the network's performance. This report describes the work, data, and results, the current understanding of the results, and how the work could be continued. The code, data, and preliminary results are included and are available as an archive.
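
The abstract outlines the update scheme but not its exact equations. The following is a minimal illustrative sketch, assuming binary (+1/-1) units, Hebbian weights, asynchronous updates, random per-cycle suppression, and simple schedules for the global inhibition and input gain; the function names, parameter values, and suppression rule are assumptions made for illustration, not the authors' implementation.

import numpy as np

def train_hebbian(patterns):
    # Hebbian weight matrix from binary (+1/-1) patterns, zero diagonal.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def periodic_hopfield(W, stimulus, n_cycles=5, steps_per_cycle=200,
                      inhibition_step=0.2, input_decay=0.5,
                      suppress_frac=0.3, seed=0):
    # Within each cycle the state relaxes toward a Hopfield attractor;
    # between cycles a random subset of nodes is suppressed, global
    # inhibition is strengthened, and the stimulus weight is reduced,
    # so successive attractors should correspond to increasingly
    # general subsets of the stored data.
    rng = np.random.default_rng(seed)
    n = len(stimulus)
    state = np.array(stimulus, dtype=float)
    inhibition, input_gain = 0.0, 1.0
    attractors = []
    for _ in range(n_cycles):
        # Periodic suppression: transiently silence a random subset of nodes.
        state[rng.random(n) < suppress_frac] = -1.0
        for _ in range(steps_per_cycle):
            i = rng.integers(n)  # asynchronous single-unit update
            field = W[i] @ state + input_gain * stimulus[i] - inhibition
            state[i] = 1.0 if field > 0 else -1.0
        attractors.append(state.copy())   # state reached this cycle
        inhibition += inhibition_step     # strengthen global inhibition
        input_gain *= input_decay         # de-emphasize the external input
    return attractors

Under these assumptions, storing a few overlapping binary patterns and presenting a noisy version of one of them should yield a first attractor close to that pattern, with later cycles retaining only the more strongly correlated neurons shared across larger subsets of the stored patterns.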


Optimal neuronal tuning for finite stimulus spaces

Proposed for publication in Neural Computation.

Brown, William M.; Backer, Alejandro B.

The efficiency of neuronal encoding in sensory and motor systems has been proposed as a first principle governing response properties within the central nervous system. We extend a theoretical study by Zhang and Sejnowski, in which the influence of neuronal tuning properties on encoding accuracy was analyzed using information theory. When a finite stimulus space is considered, we show that encoding accuracy improves with narrower tuning for one- and two-dimensional stimuli. For three dimensions and higher, there is an optimal tuning width.
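
The abstract states the result but not the underlying computation. As a purely illustrative numerical sketch, assuming a one-dimensional stimulus confined to a finite interval, Gaussian tuning curves with independent Poisson noise, and evenly spaced tuning-curve centers (none of which are specified in the abstract), one can evaluate the population Fisher information, a standard information-theoretic measure of encoding accuracy, as a function of tuning width:

import numpy as np

def fisher_information(stim, centers, width, peak_rate=10.0):
    # Total Fisher information at each stimulus value for a Poisson
    # population with Gaussian tuning curves
    # f_i(s) = peak_rate * exp(-(s - c_i)^2 / (2 * width^2)).
    d = stim[:, None] - centers[None, :]
    f = peak_rate * np.exp(-d**2 / (2 * width**2))
    df = -d / width**2 * f                       # derivative of each tuning curve
    return (df**2 / np.maximum(f, 1e-12)).sum(axis=1)

stim = np.linspace(0.0, 1.0, 201)                # finite one-dimensional stimulus space
centers = np.linspace(0.0, 1.0, 50)              # evenly spaced tuning-curve centers
for w in (0.02, 0.05, 0.1, 0.2):
    J = fisher_information(stim, centers, w).mean()
    print(f"width={w:.2f}  mean Fisher information={J:.1f}")

In this one-dimensional setting the mean Fisher information grows as the width shrinks, consistent with the abstract's claim that narrow tuning improves accuracy for low-dimensional stimuli; repeating the same kind of calculation with multidimensional Gaussian tuning curves would be the analogous way to visualize the intermediate optimal width reported for three or more dimensions.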
