Energy-Based Models in Document Recognition and Computer Vision

Y. LeCun, S. Chopra, M. Ranzato, F.-J. Huang
<span title="">2007</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/xtwegnbfind35bohrbjjlwcv7i" style="color: black;">Proceedings of the International Conference on Document Analysis and Recognition</a> </i> &nbsp;
The Machine Learning and Pattern Recognition communities face two challenges: the normalization problem and the deep learning problem. The normalization problem concerns the difficulty of training probabilistic models over large spaces while keeping them properly normalized. In recent years, the ML and Natural Language communities have devoted considerable effort to circumventing this problem by developing "unnormalized" learning models for tasks in which the output is highly structured (e.g. English sentences). This class of models was in fact originally developed during the 1990s in the handwriting recognition community, and includes Graph Transformer Networks, Conditional Random Fields, Hidden Markov SVMs, and Maximum Margin Markov Networks. We describe these models within the unifying framework of "Energy-Based Models" (EBM). The deep learning problem concerns training all the levels of a recognition system (e.g. segmentation, feature extraction, recognition) in an integrated fashion. We first consider "traditional" methods for deep learning, such as convolutional networks and backpropagation, and show that, although they produce very low error rates for handwriting and object recognition, they require many training samples. We then show that using unsupervised learning to initialize the layers of a deep network dramatically reduces the required number of training samples, particularly for tasks such as category-level recognition of everyday objects.
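To make the energy-based view in the abstract concrete, the following is a minimal toy sketch (an illustrative assumption, not the paper's actual architectures): the energy of an (input, label) pair is a simple linear score E(x, y) = -w_y · x, inference picks the label with the lowest energy, and a perceptron-style update pushes down the energy of the correct label while pushing up the energy of the most offending incorrect label.

```python
import numpy as np

def energy(W, x):
    """Energy of each label for input x (toy linear EBM): lower is better."""
    return -W @ x

def predict(W, x):
    """Inference: pick the label whose energy is lowest."""
    return int(np.argmin(energy(W, x)))

def train(X, y, n_labels, lr=0.1, epochs=20):
    """Perceptron-style EBM training: lower the correct label's energy,
    raise the energy of the most offending incorrect answer."""
    W = np.zeros((n_labels, X.shape[1]))
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            y_hat = predict(W, xi)
            if y_hat != yi:
                W[yi] += lr * xi     # push down energy of correct label
                W[y_hat] -= lr * xi  # push up energy of the offender
    return W

# Toy linearly separable data (hypothetical, for illustration only)
X = np.array([[1.0, 0.0], [0.9, 0.2], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
W = train(X, y, n_labels=2)
preds = [predict(W, xi) for xi in X]
```

The key point the paper's EBM framing makes is that no normalized probability distribution over labels is ever computed: only relative energies matter, which is what lets the same machinery scale to structured outputs where normalization is intractable.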
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icdar.2007.4378728">doi:10.1109/icdar.2007.4378728</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/icdar/LeCunCRH07.html">dblp:conf/icdar/LeCunCRH07</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yoejktkyifc3je2bt7n6sbypme">fatcat:yoejktkyifc3je2bt7n6sbypme</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170810085417/http://yann.lecun.com/exdb//publis/pdf/lecun-icdar-keynote-07.pdf" title="fulltext PDF download"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icdar.2007.4378728"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>