Cyclic Autoencoder for Multimodal Data Alignment Using Custom Datasets

Zhenyu Tang, Jin Liu, Chao Yu, Y. Ken Wang
<span title="">2021</span> <i title="Computer Systems Science and Engineering (Tech Science Press)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/pu36lprddffgdbdb5n2qwfvaxa" style="color: black;">Computer Systems Science and Engineering</a> </i>
The subtitle recognition task under multimodal data fusion in this paper aims to recognize text lines from image and audio data. Most existing multimodal fusion methods rely on pre-fusion or post-fusion, which is neither well motivated nor easy to interpret. We believe that fusing images and audio before the decision layer, i.e., intermediate fusion, exploits the complementarity of the multimodal data and benefits text line recognition. To this end, we propose: (i) a cyclic autoencoder based on a convolutional neural network. The feature dimensions of the two modalities are aligned while keeping the compressed image features stable, so the high-dimensional features of the different modalities are fused at a shallow level of the model. (ii) A residual attention mechanism that improves recognition performance: regions of interest in the image are enhanced and irrelevant regions are suppressed, so we can extract features of the text regions without further increasing the depth of the model. (iii) A fully convolutional network for video subtitle recognition. We choose DenseNet-121 as the backbone for feature extraction, which enables the recognition of video subtitles against complex backgrounds. Experiments on our custom datasets, with both automatic and manual evaluation, show that our method achieves state-of-the-art results.
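The residual attention idea in (ii) — enhance regions of interest while keeping the original features intact — is commonly realized by scaling features with (1 + mask), where the mask comes from a learned attention branch. The sketch below is a minimal illustration of that formula only, using NumPy; the function names and the assumption that the paper's module follows the standard (1 + M(x)) · T(x) form are ours, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    """Elementwise logistic function, squashing logits into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def residual_attention(features, attention_logits):
    """Residual attention (hypothetical sketch of the paper's mechanism).

    The mask M = sigmoid(attention_logits) lies in (0, 1); the output is
    (1 + M) * features, so attended regions are amplified (up to 2x)
    while, thanks to the residual "1 +" term, unattended regions still
    pass through roughly unchanged rather than being zeroed out.
    """
    mask = sigmoid(attention_logits)
    return (1.0 + mask) * features
```

With strongly negative logits the mask is near 0 and the features pass through almost unchanged; with strongly positive logits they are roughly doubled — the residual term is what prevents the attention branch from destroying features it is unsure about.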
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32604/csse.2021.017230">doi:10.32604/csse.2021.017230</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/nxj75oyf4jaavohw5aprjnm53q">fatcat:nxj75oyf4jaavohw5aprjnm53q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220225045158/https://www.techscience.com/ueditor/files/csse/TSP_CSSE-39-1/TSP_CSSE_17230/TSP_CSSE_17230.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.32604/csse.2021.017230">Publisher / doi.org</a>