Phase diagrams of self-organizing maps

H.-U. Bauer, M. Riesenhuber, T. Geisel
1996, Physical Review E: Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics
We present a method which allows the analytic determination of phase diagrams in the self-organizing map, a model for the formation of topographic projection patterns in the brain and in signal processing applications. The method only requires an ansatz for the tesselation of the data space induced by the map, not for the explicit state of the map. We analytically obtain phase diagrams for various examples, including models for the development of orientation and ocular-dominance maps. The latter phase diagram exhibits transitions to broadening ocular-dominance patterns as observed in a recent experiment. [S1063-651X(96)00109-2] PACS number(s): 87.10.+e, 89.70.+c, 05.90.+m

Topographic maps occur in many areas of the brain, where sensory and other information is represented topographically, as well as in signal processing applications, where data points are projected from one space to another in a neighborhood-preserving fashion. An archetypical example is the projection of oriented edge elements to the visual cortex, where neighboring neurons respond to edges of similar orientation at neighboring positions in the visual field [1]. Topographic maps are most often generated or refined by externally driven self-organization processes [2]. Among the numerous models [3-6] for these pattern formation phenomena, Kohonen's self-organizing map (SOM) [7-9] has found particularly wide distribution.

In the domain of technical signal processing, the low-dimensional "feature map" variant of the SOM is utilized for neighborhood-preserving vector quantization, motor control [10,11], data visualization [12], or speech data preprocessing (for many further examples see [8,9]). Studies of self-organization in the brain are often based on the slightly different high-dimensional SOM version. Here stimuli and receptive fields are described not in terms of prespecified "features," but in terms of (high-dimensional) activity and weight distributions, respectively [13-15]. This allows for a simultaneous self-organization not only of the map topography, but also of the shapes of individual receptive fields.

The popularity of the SOM is based on its simple formulation, its numerical robustness, and the empirical success of its applications. However, due to a strong nonlinearity in this model, a general analytical treatment of the corresponding pattern formation process has been lacking.
In particular, the conditions on map and data set parameters under which patterns occur are most often found only empirically, an unsatisfactory and numerically costly procedure. In this paper we present a method to analytically relate map and data set parameters to specific states of SOMs, i.e., to calculate phase diagrams of SOMs. The method is based on a comparison of the distortions of different data space tesselations, i.e., of different ways to distribute the data points among the map elements. Even though the method is applicable to the low-dimensional as well as to the high-dimensional variant of the SOM, it achieves its full potential in the latter case, where an ansatz for the tesselation is comparatively easy, but an ansatz for the map itself is unfeasible. We first apply our method to a tutorial mapping example and then solve two models for map formation in the visual cortex which previously could be investigated only numerically.

A self-organizing map consists of nodes (neurons) characterized by a position r in the map output space lattice and a weight vector (receptive field) w_r in the map input space, the data space. A data point v is mapped onto that node s whose weight vector w_s matches v best. This amounts to a winner-take-all rule, a strong nonlinearity which in a biological context is explained as a consequence of lateral inhibition [8]. In the context of technical applications this projection rule is identical to that of a regular vector quantizer [16]. The map results as a stationary state of a self-organization process which successively changes all vectors w_r,

    Δw_r = ε h_rs (v − w_r),   h_rs = exp(−‖r − s‖² / 2σ²),   (1)

following the presentation of stimuli v. Here ε controls the size of the learning steps. The neighborhood function h_rs forces neighboring neurons to align their receptive fields, imposing the property of topography on the SOM.
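The winner-take-all rule and the update of Eq. (1) can be sketched in a few lines of NumPy. This is an illustrative sketch only; the lattice size, learning rate, neighborhood width, and the use of Euclidean distance for the best match are assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8x8 output lattice (node positions r), 2-dimensional data space
grid = np.stack(np.meshgrid(np.arange(8), np.arange(8), indexing="ij"), axis=-1)
positions = grid.reshape(-1, 2).astype(float)
weights = rng.random((64, 2))  # receptive fields w_r

def som_step(weights, v, eps=0.1, sigma=1.5):
    """One learning step Delta w_r = eps * h_rs * (v - w_r), Eq. (1)."""
    # winner-take-all: node s whose weight vector w_s matches v best
    s = np.argmin(np.linalg.norm(weights - v, axis=1))
    # Gaussian neighborhood h_rs = exp(-||r - s||^2 / 2 sigma^2) on the lattice
    d2 = np.sum((positions - positions[s]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma**2))
    # move all weight vectors toward the stimulus, weighted by h_rs
    return weights + eps * h[:, None] * (v - weights)

v = rng.random(2)          # one stimulus presentation
weights = som_step(weights, v)
```

Iterating this step over many stimulus presentations (typically with ε and σ annealed toward zero) drives the map toward the stationary, topography-preserving states analyzed in the paper.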
In the general case, data points v and receptive field vectors w_r are activity and weight distributions across M input channels, respectively (normalized to a constant total activity, Σ_{i=1}^{M} v_i = S). The winner s is again determined by a best-match criterion between v and the w_r. The input channels correspond, e.g., to the sensors in a sensory layer, like the retinal ganglion cells as input channels to a visual map. The typically large number of such channels warrants the notion of a "high-dimensional" map. If the distributions v and w_r are replaced by (a small number of) features ṽ, w̃_r, like the centers of gravity of v and w_r, one arrives at the low-dimensional SOM variant. This replacement precludes self-organization of the internal shape of the w_r, but has the advantage of a drastically reduced numerical expense.
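The contrast between the two descriptions can be made concrete: a high-dimensional stimulus is a normalized activity distribution across M channels, and its center of gravity is the kind of low-dimensional feature ṽ that replaces it in the feature-map variant. The Gaussian activity profile and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

M, S = 100, 1.0                       # number of input channels, total activity
channels = np.arange(M, dtype=float)  # channel positions (e.g., retinal sensors)

def activity(center, width=5.0):
    """High-dimensional stimulus v: activity distribution across M channels,
    normalized so that sum_i v_i = S."""
    v = np.exp(-(channels - center) ** 2 / (2.0 * width**2))
    return S * v / v.sum()

def center_of_gravity(v):
    """Low-dimensional feature: reduce the distribution to its mean position."""
    return np.sum(channels * v) / np.sum(v)

v = activity(center=37.0)             # 100-dimensional description
feature = center_of_gravity(v)        # single-number description, approx. 37.0
```

Training on v preserves the possibility of receptive-field shapes self-organizing; training on the scalar feature discards that shape information but is far cheaper numerically, which is the trade-off described above.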
doi:10.1103/physreve.54.2807 pmid:9965396