24,716 Hits in 4.9 sec

On the Behavior of Convolutional Nets for Feature Extraction [article]

Dario Garcia-Gasulla, Ferran Parés, Armand Vilalta, Jonatan Moreno, Eduard Ayguadé, Jesús Labarta, Ulises Cortés, Toyotaro Suzumura
2018-01-29 · arXiv · pre-print
We seek to provide new insights into the behavior of CNN features, particularly the ones from convolutional layers, as this can be relevant for their application to knowledge representation and reasoning … Extracting the descriptive language coded within a trained CNN model (in the case of image data), and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors … -P project and by the Generalitat de Catalunya (contracts 2014-SGR-1051), and by the Core Research for Evolutional Science and Technology (CREST) program of Japan Science and Technology Agency (JST). …
arXiv:1703.01127v4 (https://arxiv.org/abs/1703.01127v4) · fatcat:5ildynkpojcf7iwltgiqvhedqu
Fulltext PDF (Web Archive): https://web.archive.org/web/20191014001122/https://arxiv.org/pdf/1703.01127v4.pdf

On the Behavior of Convolutional Nets for Feature Extraction

Dario Garcia-Gasulla, Ferran Parés, Armand Vilalta, Jonatan Moreno, Eduard Ayguadé, Jesús Labarta, Ulises Cortés, Toyotaro Suzumura
2018-03-20 · The Journal of Artificial Intelligence Research (AI Access Foundation)
We seek to provide new insights into the behavior of CNN features, particularly the ones from convolutional layers, as this can be relevant for their application to knowledge representation and reasoning … Extracting the descriptive language coded within a trained CNN model (in the case of image data), and reusing it for other purposes is a field of interest, as it provides access to the visual descriptors … -P project and by the Generalitat de Catalunya (contracts 2014-SGR-1051), and by the Core Research for Evolutional Science and Technology (CREST) program of Japan Science and Technology Agency (JST). …
doi:10.1613/jair.5756 (https://doi.org/10.1613/jair.5756) · fatcat:7nbtlwdwzjartgd72cvyguxmqe
Fulltext PDF (Web Archive): https://web.archive.org/web/20190503064406/https://upcommons.upc.edu/bitstream/handle/2117/115533/On%20the%20Behavior%20of%20Convolutional%20Nets%20for%20Feature%20Extraction.pdf;jsessionid=34AD8E73652FD5BAE26EF68551F290A7?sequence=1

Action Recognition Based on Two-Stream Convolutional Networks with Long-Short-Term Spatiotemporal Features

Yanqin Wan, Zujun Yu, Yao Wang, Xingxin Li
2020 · IEEE Access (Institute of Electrical and Electronics Engineers)
The network is mainly composed of two subnetworks. One is a long-term spatiotemporal feature extraction network (LT-Net) that takes the stacked RGB images as inputs. … The other is a short-term spatiotemporal feature extraction network (ST-Net) that takes as input the optical flow, which is estimated from two adjacent frames. …
doi:10.1109/access.2020.2993227 (https://doi.org/10.1109/access.2020.2993227) · fatcat:epmzsynrcrgezkxrtdgb3foqoy
Fulltext PDF (Web Archive): https://web.archive.org/web/20201108143429/https://ieeexplore.ieee.org/ielx7/6287639/8948470/09089852.pdf

Spatiotemporal representation learning for video anomaly detection

Zhaoyan Li, Yaoshun Li, Zhisheng Gao
2020 · IEEE Access (Institute of Electrical and Electronics Engineers)
Firstly, the spatial-temporal features of the video are extracted by the constructed multi-scale 3D convolutional neural network. … Finally, the accurate position of anomalous behavior in the video data is achieved by calculating the position of the last output feature, that is, the position of the receptive field. … Constructing a no-anomalous behavior model: the spatiotemporal features of the video clips can be extracted by the STF-Net model as f d l. …
doi:10.1109/access.2020.2970497 (https://doi.org/10.1109/access.2020.2970497) · fatcat:tlaxd6pxxngczkvzm3sz3aqcvu
Fulltext PDF (Web Archive): https://web.archive.org/web/20201107194448/https://ieeexplore.ieee.org/ielx7/6287639/8948470/08976183.pdf

HAR-Net: Fusing Deep Representation and Hand-crafted Features for Human Activity Recognition [article]

Mingtao Dong, Jindong Han
2018-10-25 · arXiv · pre-print
HAR-Net fuses hand-crafted features with high-level features extracted from a convolutional network to make predictions. … The experimental results on the UCI dataset demonstrate that fusing the two kinds of features can make up for the shortage of traditional feature engineering and deep learning techniques. … Approach / Prediction Accuracy: HAR-Net based on Conventional Convolution, 95.2%; HAR-Net based on Separable Convolution, 96.9%. Figure 4: The HAR-Net without separable convolution. Table 3: Comparison among …
arXiv:1810.10929v1 (https://arxiv.org/abs/1810.10929v1) · fatcat:gt2erzxob5hizdjeldnbxxf5cq
Fulltext PDF (Web Archive): https://web.archive.org/web/20191019042239/https://arxiv.org/pdf/1810.10929v1.pdf

TrouSPI-Net: Spatio-temporal attention on parallel atrous convolutions and U-GRUs for skeletal pedestrian crossing prediction [article]

Joseph Gesnouin, Steve Pechberti, Bogdan Stanciulescu, Fabien Moutarde
2021-09-07 · arXiv · pre-print
Understanding the behaviors and intentions of pedestrians is still one of the main challenges for vehicle autonomy, as accurate predictions of their intentions can guarantee their safety and driving comfort … TrouSPI-Net extracts spatio-temporal features for different time resolutions by encoding pseudo-image sequences of skeletal joints' positions and processes them with parallel attention modules and atrous … Finally, we explore the impact of adding a second modality to TrouSPI-Net by using 3D convolutions [44] on the local box feature available in the data-set. …
arXiv:2109.00953v2 (https://arxiv.org/abs/2109.00953v2) · fatcat:x2hwvok6urbbjpzkf7gqpl55fu
Fulltext PDF (Web Archive, earlier version v1): https://web.archive.org/web/20210904192910/https://arxiv.org/pdf/2109.00953v1.pdf

Driving Fatigue Detection Based on the Combination of Multi-Branch 3D-CNN and Attention Mechanism

Wenbin Xiang, Xuncheng Wu, Chuanchang Li, Weiwei Zhang, Feiyang Li
2022-05-06 · Applied Sciences (MDPI AG)
The temporal and spatial information contained in the feature map is extracted by three-dimensional convolution, after which the feature map is fed to the attention mechanism module to optimize the feature … Fatigue driving is one of the main causes of traffic accidents today. … Acknowledgments: The authors would like to express their appreciation to the developers of Pytorch and OpenCV, the authors of Grad-CAM and dataset provider School of Information Science, Beijing Language …
doi:10.3390/app12094689 (https://doi.org/10.3390/app12094689) · fatcat:j4eweh2ucjghdnoonm4t3ieowy
Fulltext PDF (Web Archive): https://web.archive.org/web/20220508021645/https://mdpi-res.com/d_attachment/applsci/applsci-12-04689/article_deploy/applsci-12-04689.pdf?version=1651847491

Action Unit Detection with Joint Adaptive Attention and Graph Relation [article]

Chenggong Zhang and Juan Song and Qingyang Zhang and Weilong Dong and Ruomeng Ding and Zhilei Liu
2021-07-09 · arXiv · pre-print
The proposed method uses the pre-trained JAA model as the feature extractor, and extracts global features, face alignment features and AU local features on the basis of multi-scale features. … We take the AU local features as the input of the graph convolution to further consider the correlation between AUs, and finally use the fused features to classify AUs. … It consists of an ordinary convolutional layer and another three hierarchically partitioned convolutional layers, which are more suitable for the extraction of facial features than ordinary convolutional …
arXiv:2107.04389v1 (https://arxiv.org/abs/2107.04389v1) · fatcat:e3wa5aa5enc5hmwbn45pb5hd3y
Fulltext PDF (Web Archive): https://web.archive.org/web/20210714120424/https://arxiv.org/pdf/2107.04389v1.pdf

Battlefield Target Aggregation Behavior Recognition Model Based on Multi-Scale Feature Fusion

Haiyang Jiang, Yaozong Pan, Jian Zhang, Haitao Yang
2019-06-05 · Symmetry (MDPI AG)
To this end, we propose a novel 3D-CNN (3D Convolutional Neural Network) model, which extends the idea of multi-scale feature fusion to the spatio-temporal domain, and enhances the feature extraction ability of the network by combining feature maps of different convolutional layers. … In recent years, researchers have proposed a number of methods for video behavior recognition, which are mainly divided into traditional feature extraction methods [1] [2] [3] and methods based on …
doi:10.3390/sym11060761 (https://doi.org/10.3390/sym11060761) · fatcat:4ro577fcsrhp5kot5z7g2upsd4
Fulltext PDF (Web Archive): https://web.archive.org/web/20200214182652/https://res.mdpi.com/d_attachment/symmetry/symmetry-11-00761/article_deploy/symmetry-11-00761.pdf

Driver Drowsiness Detection Using Ensemble Convolutional Neural Networks on YawDD [article]

Rais Mohammad Salman, Mahbubur Rashid, Rupal Roy, Md Manjurul Ahsan, Zahed Siddique
2021-12-20 · arXiv · pre-print
Driver drowsiness detection using videos/images is one of the most essential areas in today's time for driver safety. … In this work, we have applied four different Convolutional Neural Network (CNN) techniques on the YawDD dataset to detect and examine the extent of drowsiness depending on the yawning frequency with specific … The extracted features would then be used to look for similar matching features of other images. …
arXiv:2112.10298v1 (https://arxiv.org/abs/2112.10298v1) · fatcat:oby4edm33baxbmbc5ytm4evj5m
Fulltext PDF (Web Archive): https://web.archive.org/web/20211230150228/https://arxiv.org/pdf/2112.10298v1.pdf

Design of Intelligent Mosquito Nets Based on Deep Learning Algorithms

Yuzhen Liu, Xiaoliang Wang, Xinghui She, Ming Yi, Yuelong Li, Frank Jiang
2021 · Computers, Materials & Continua (Tech Science Press)
We used optical flow to extract pressure map features, and they were fed to a 3-dimensional convolutional neural network (3D-CNN) classification model subsequently. … An intelligent mosquito net employing deep learning has been one of the hotspots in the field of Internet of Things as it can reduce significantly the spread of pathogens carried by mosquitoes, and help … Acknowledgement: We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript. …
doi:10.32604/cmc.2021.015501 (https://doi.org/10.32604/cmc.2021.015501) · fatcat:xq7lijhhbvctvn3rnj66vd5va4
Fulltext PDF (Web Archive): https://web.archive.org/web/20220224230150/https://www.techscience.com/ueditor/files/cmc/TSP_CMC_69-2/TSP_CMC_15501/TSP_CMC_15501.pdf

Face Recognition based on Convoluted Neural Networks: Technical Review

Basil Ismail Mirghani Shakkak, Sara Ali K. M. Al Mazruii
2022-04-16 · Applied Computing Journal (The International Applied Computing & Applications Publisher)
One of the most sought-after methods in the field of image processing for face recognition is the CNN (Convoluted Neural Network). … As such, in this paper the architecture of CNN is presented. Then different techniques for face detection and face recognition based on CNNs are reviewed. … Acknowledgment: The research leading to these results has received no research project grant funding. …
doi:10.52098/acj.202247 (https://doi.org/10.52098/acj.202247) · fatcat:chqwz3sovveu3pqdi7tminjxxe
Fulltext PDF (Web Archive): https://web.archive.org/web/20220429212028/https://acaa-p.com/index.php/acj/article/download/47/36

Forged Copy-Move Recognition Using Convolutional Neural Network

Ayat Fadhel Homady Sewan and Mohammed Sahib Mahdi Altaei (Department of Computer Science, College of Science, Al-Nahrain University, Baghdad, Iraq)
2021-03-01 · Al-Nahrain Journal of Science
The proposed module uses the BusterNet of three neural networks, which basically adopts the principle of training with a Convolutional Neural Network (CNN) to extract the most important features in the … The present work deals with one important research module: the recognition of the forged part, applied on copy-move forgery images. … Similarity detection behavior: like the manipulation detection branch, the similarity detection section begins on features represented through CNN feature extraction. …
doi:10.22401/anjs.24.1.08 (https://doi.org/10.22401/anjs.24.1.08) · fatcat:677gh2qgx5c6xoeuc7kijhwzym
Fulltext PDF (Web Archive): https://web.archive.org/web/20210716145302/https://anjs.edu.iq/index.php/anjs/article/download/2335/1837/

Classification of Action Based Video using Heterogeneous Feature Extraction and SVM

2019-09-10 · VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE (Blue Eyes Intelligence Engineering and Sciences Publication - BEIESP)
In the proposed system, RGB frames and optical flow frames are used for AR with the help of the Convolutional Neural Network (CNN) pre-trained model Alex-Net to extract features from the fc7 layer. … Using an SVM classifier, the extracted features are used for classification, achieving a best result of 95.6% accuracy compared to other state-of-the-art techniques. …
doi:10.35940/ijitee.k2089.0981119 (https://doi.org/10.35940/ijitee.k2089.0981119) · fatcat:xmtlxj4nxzhd7g3hoyq5td4pre
Fulltext PDF (Web Archive): https://web.archive.org/web/20220303213911/https://www.ijitee.org/wp-content/uploads/papers/v8i11/K20890981119.pdf

Convolutional Dictionary Pair Learning Network for Image Representation Learning [article]

Zhao Zhang, Yulin Sun, Yang Wang, Zhengjun Zha, Shuicheng Yan, Meng Wang
2020-01-15 · arXiv · pre-print
Extensive simulations on real databases show that our CDPL-Net can deliver enhanced performance over other state-of-the-art methods. … Generally, the architecture of CDPL-Net includes two convolutional/pooling layers and two dictionary pair learning (DPL) layers in the representation learning module. … for Central Universities of China (JZ2019HGPA0102). …
arXiv:1912.12138v3 (https://arxiv.org/abs/1912.12138v3) · fatcat:xqueqykknjfq7kemsk5eaweiiy
Fulltext PDF (Web Archive): https://web.archive.org/web/20200321015916/https://arxiv.org/ftp/arxiv/papers/1912/1912.12138.pdf