A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit <a rel="external noopener" href="https://www.ijitee.org/wp-content/uploads/papers/v8i10/J91850881019.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
<i title="Blue Eyes Intelligence Engineering and Sciences Publication - BEIESP">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/cj3bm7tgcffurfop7xzswxuks4" style="color: black;">VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE</a>
Dynamic hand gesture recognition is an essential and important research topic in human-computer interaction. Recently, deep convolutional neural networks have shown excellent performance in this area and produced promising results, but researchers have paid less attention to the feature-extraction process, frame unification, the various fusion schemes, and sequence-to-sequence prediction over frames. Therefore, in this paper we present an effective 2D CNN architecture with three stream networks and a weighted feature-fusion scheme, combined with a gated recurrent network, for dynamic hand gesture recognition. To obtain enough useful information, we convert each RGB-D video into 30-frame and 45-frame inputs. We compute frame-to-frame optical flow from the given RGB video and extract dense motion features. After finding a proper motion path, we assign more weight to the optical-flow features, fuse this information into the next stage, and obtain a comparable result. We also add a gated recurrent network for temporal recognition of frames, which reduces training time while improving accuracy. Our proposed architecture achieves 85% accuracy on the standard VIVA dataset.<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.35940/ijitee.j9185.0881019">doi:10.35940/ijitee.j9185.0881019</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kqx3cymemjb3lkt2ix7lmndogm">fatcat:kqx3cymemjb3lkt2ix7lmndogm</a> </span>
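The weighted fusion step described in the abstract, where motion (optical-flow) features receive a larger weight than the other streams before being passed to the next stage, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the stream names, feature shapes (30 frames, 128-dim features), and the specific weights are all assumptions.

```python
import numpy as np

def fuse_streams(rgb_feat, depth_feat, flow_feat, weights=(0.25, 0.25, 0.5)):
    """Weighted sum of per-frame feature vectors from three streams.

    The optical-flow stream gets the largest weight, mirroring the
    abstract's idea of assigning more weight to motion features.
    The weight values themselves are illustrative assumptions.
    """
    w_rgb, w_depth, w_flow = weights
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return w_rgb * rgb_feat + w_depth * depth_feat + w_flow * flow_feat

# Example with assumed sizes: 30 frames, 128-dim features per stream.
rgb = np.ones((30, 128))
depth = np.ones((30, 128))
flow = np.ones((30, 128)) * 2.0  # stand-in for dense motion features

fused = fuse_streams(rgb, depth, flow)
print(fused.shape)  # (30, 128)
```

In the paper's pipeline, a fused per-frame sequence like this would then be fed to the gated recurrent network for temporal recognition; a learned (rather than fixed) weighting could be substituted without changing the interface.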
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220121004709/https://www.ijitee.org/wp-content/uploads/papers/v8i10/J91850881019.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fe/b0/feb027605c59b350a5cfdfb6321e33e651ce2949.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.35940/ijitee.j9185.0881019"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>