Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Tatyana O. Sharpee
2007 Neural Information Processing Systems  
This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of a family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, the Rényi divergence of order 2, is equivalent to least-squares fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with the Rényi divergence of order 1, also known as the Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small numbers of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least-squares fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least-squares fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data-limited than those derived from least squares.
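As a point of reference (the notation below is an assumption for illustration, not quoted from the paper): for a candidate relevant dimension v, let P_v(x) be the probability distribution of the stimulus projection x = v · s, and P_v(x | spike) the corresponding spike-conditional distribution. The order-α Rényi divergence between these two distributions,

D_\alpha(v) = \frac{1}{\alpha - 1} \log \int dx \, P_v(x) \left[ \frac{P_v(x \mid \mathrm{spike})}{P_v(x)} \right]^{\alpha},

reduces to the Kullback-Leibler divergence (the mutual information between the projection and the spike) as α → 1, and for α = 2 yields the objective that the abstract relates to least-squares fitting of the linear-nonlinear model.

A minimal plug-in estimator of this family of objectives, sketched in Python with histogram density estimates; the function name, binning scheme, and interface are illustrative assumptions, not the paper's actual estimator:

import numpy as np

def renyi_objective(v, stimuli, spikes, alpha=1.0, n_bins=20):
    """Plug-in estimate of the order-alpha Renyi divergence between the
    spike-conditional and prior distributions of the projection x = v . s.
    alpha=1 gives the Kullback-Leibler divergence (information maximization);
    alpha=2 gives the objective related above to least-squares fitting.
    Illustrative sketch only."""
    x = stimuli @ v                                  # project stimuli onto the candidate dimension
    edges = np.histogram_bin_edges(x, bins=n_bins)
    p_prior, _ = np.histogram(x, bins=edges)         # P_v(x), up to normalization
    p_spike, _ = np.histogram(x, bins=edges, weights=spikes)  # P_v(x | spike), spike-weighted
    p_prior = p_prior / p_prior.sum()
    p_spike = p_spike / p_spike.sum()
    mask = (p_prior > 0) & (p_spike > 0)             # skip empty bins
    ratio = p_spike[mask] / p_prior[mask]
    if np.isclose(alpha, 1.0):
        return np.sum(p_spike[mask] * np.log2(ratio))                    # KL divergence, in bits
    return np.log2(np.sum(p_prior[mask] * ratio**alpha)) / (alpha - 1.0)

A candidate dimension v would then be found by ascending this objective, for example by gradient ascent over v with the histograms recomputed at each step, which is the general strategy the abstract compares across orders α.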