430 Hits in 1.5 sec

Discrete Optimal Graph Clustering [article]

Yudong Han, Lei Zhu, Zhiyong Cheng, Jingjing Li, Xiaobai Liu
2019-04-25 · arXiv (pre-print)
Graph-based clustering is one of the major clustering methods. Most methods work in three separate steps: similarity graph construction, cluster label relaxation, and label discretization with k-means. This common practice has three disadvantages: 1) the predefined similarity graph is often fixed and may not be optimal for the subsequent clustering; 2) the relaxation of cluster labels may cause significant information loss; 3) label discretization may deviate from the real clustering result, since k-means is sensitive to the initialization of cluster centroids. To tackle these problems, we propose an effective discrete optimal graph clustering (DOGC) framework. A structured similarity graph that is theoretically optimal for clustering is adaptively learned under the guidance of a reasonable rank constraint. Besides, to avoid information loss, we explicitly enforce a discrete transformation on the intermediate continuous labels, which yields a tractable optimization problem with a discrete solution. Further, to compensate for the unreliability of the learned labels and enhance clustering accuracy, we design an adaptive robust module that learns a prediction function for unseen data based on the learned discrete cluster labels. Finally, an iterative optimization strategy with guaranteed convergence is developed to solve directly for the clustering results. Extensive experiments on both real and synthetic datasets demonstrate the superiority of the proposed methods over several state-of-the-art clustering approaches.
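The three-step pipeline this abstract criticizes (fixed similarity graph, continuous spectral relaxation, then k-means discretization) can be sketched as follows. This is a generic spectral-clustering baseline, not the DOGC method itself; the Gaussian kernel width `sigma` and the deterministic farthest-point k-means initialization are illustrative choices.

```python
import numpy as np

def three_step_graph_clustering(X, k, sigma=1.0):
    """Classic pipeline: fixed Gaussian graph -> spectral relaxation -> k-means."""
    n = len(X)
    # Step 1: predefined similarity graph (fixed, not adapted to the clustering)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Step 2: continuous relaxation -- eigenvectors of the normalized Laplacian
    d = W.sum(1)
    L = np.eye(n) - W / np.sqrt(np.outer(d, d))
    _, vecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    F = vecs[:, :k]                       # relaxed (continuous) cluster labels
    # Step 3: discretize with k-means (farthest-point init to avoid the
    # random-centroid sensitivity the abstract mentions)
    C = np.empty((k, k))
    C[0] = F[0]
    for j in range(1, k):
        dist = ((F[:, None, :] - C[None, :j]) ** 2).sum(-1).min(1)
        C[j] = F[dist.argmax()]
    for _ in range(50):
        labels = ((F[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                C[j] = F[labels == j].mean(0)
    return labels
```

On two well-separated blobs the relaxed eigenvectors are near-indicator vectors, so the discretization step recovers the partition exactly; DOGC's point is that on harder data each of the three stages can lose information.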
arXiv:1904.11266v1 · fatcat:q4cyfzh3yvefhaguxmatxtuhdy

Review of Face Presentation Attack Detection Competitions [article]

Zitong Yu, Jukka Komulainen, Xiaobai Li, Guoying Zhao
2021-12-21 · arXiv (pre-print)
München 16 ReadFace Zhijun Tong ReadFace 17 LsyL6 Dongxiao Li Zhejiang University 18 HighC Minzhe Huang Akuvox (Xiamen) Networks Co., Ltd. .1.  ...  Li-Ren Hou, Chunghwa Telecom 9 Wgqtmac Guoqing Wang, ICT 10 Hulking Yang, Qing, Intel 11 Dqiu Qiudi Ranking Team Name Affiliation 1 BOBO Zitong Yu, University of Oulu 2 Super Zhihua Huang, USTC 3 Hulking  ... 
arXiv:2112.11290v1 · fatcat:xx3d64b3cre7zkgy3e2vvzmmsm

Treatment Technologies for Organic Wastewater [chapter]

Chunli Zheng, Ling Zhao, Xiaobai Zhou, Zhimin Fu, An Li
2013-01-16 · Water Treatment (InTech)
Zhou The Environmental Monitoring Center of Jiangsu Province, Nanjing, China An Li School of Petrochemical Engineering, Lanzhou University of Technology, China Figure 1 . 1 The scheme of the activated  ...  Author details Chunli Zheng School of Energy and Power Engineering, Xi'an Jiaotong University, China Ling Zhao and Zhimin Fu College of Environment & Resources of Inner Mongolia University, China Xiaobai  ... 
doi:10.5772/52665 · fatcat:a6rnp3i3lzfkbnzalb2mpjgaqu

Face Anti-Spoofing with Human Material Perception [article]

Zitong Yu, Xiaobai Li, Xuesong Niu, Jingang Shi, Guoying Zhao
2020-07-04 · arXiv (pre-print)
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems against presentation attacks. Most existing FAS methods capture various cues (e.g., texture, depth and reflection) to distinguish live faces from spoofing faces. All these cues are based on the discrepancy among physical materials (e.g., skin, glass, paper and silicone). In this paper, we rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception, intending to extract discriminative and robust features for FAS. To this end, we propose the Bilateral Convolutional Networks (BCN), which capture intrinsic material-based patterns by aggregating multi-level bilateral macro- and micro-information. Furthermore, a Multi-level Feature Refinement Module (MFRM) and multi-head supervision are utilized to learn more robust features. Comprehensive experiments are performed on six benchmark datasets, and the proposed method achieves superior performance on both intra- and cross-dataset testing. One highlight is an overall 11.3±9.5% EER for cross-type testing on the SiW-M dataset, which significantly outperforms previous results. We hope this work will facilitate future cooperation between the FAS and material communities.
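The "bilateral" ingredient in this abstract traces back to the classical bilateral filter, which weights each neighbor by both its spatial distance and its intensity difference from the center pixel. A minimal gray-scale sketch of that classical filter (not the paper's BCN; `radius`, `sigma_s`, `sigma_r` are illustrative parameter names):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: spatial Gaussian x range (intensity) Gaussian."""
    img = np.asarray(img, dtype=float)
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='edge')
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight: neighbors with very different intensity contribute little
            rangew = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = spatial * rangew
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

Because the range weight suppresses neighbors across an intensity edge, the filter smooths within regions while keeping edges sharp, which is the intuition behind separating "macro" (smoothed base) from "micro" (residual detail) information.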
arXiv:2007.02157v1 · fatcat:wawoio7p4jfvbnhiu252feiuzy

Automatic Micro-Expression Analysis: Open Challenges

Guoying Zhao, Xiaobai Li
2019-08-07 · Frontiers in Psychology (Frontiers Media SA)
A number of works have been contributing to automatic micro-expression analysis from the aspects of new dataset collection (from emotion-level annotation to action-unit-level annotation; Li et al  ...  ., 2013; Davison et al., 2018) , micro-expression recognition (from single apex frame recognition to whole video recognition; Wang et al., 2015; Liu et al., 2016; Li Y. et al., 2018; Huang et al., 2019  ... 
doi:10.3389/fpsyg.2019.01833 · pmid:31447752 · pmcid:PMC6692451 · fatcat:7xsv5s7nq5bsrlpzw27kkbltg4

A Refined Nonlocal Strain Gradient Theory For Assessing Scaling-Dependent Vibration Behavior Of Microbeams

Xiaobai Li, Li Li, Yujin Hu, Weiming Deng, Zhe Ding
2017-02-04 · Zenodo
Li et al.  ...  Xiaobai Li is with State Key Laboratory of Digital Manufacturing Equipment and Technology, School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China  ... 
doi:10.5281/zenodo.1129208 · fatcat:weguszvjfvg6jcnndytubzchrq

Radar Signal Modulation Recognition Based on Deep Joint Learning

Dongjin Li, Ruijuan Yang, Xiaobai Li, Shengkun Zhu
2020 · IEEE Access (IEEE)
XIAOBAI LI was born in Longxi, China, in 1983. He received the B.S., M.S., and Ph.D. degrees from Radar Academy, Wuhan, China, in 2006, 2009, and 2013, respectively.  ... 
doi:10.1109/access.2020.2978875 · fatcat:5o3b7p5apfeopjl7e7sbmailba

Revisiting Pixel-Wise Supervision for Face Anti-Spoofing [article]

Zitong Yu, Xiaobai Li, Jingang Shi, Zhaoqiang Xia, Guoying Zhao
2020-11-24 · arXiv (pre-print)
In order to reduce the redundancy from the dense depth map, Li et al. [30] used a sparse 3D point cloud map to efficiently supervise the lightweight models.  ... 
arXiv:2011.12032v1 · fatcat:cvuvpuoy3nbbzdun6twjxipze4

Dual-Cross Central Difference Network for Face Anti-Spoofing [article]

Zitong Yu, Yunxiao Qin, Hengshuang Zhao, Xiaobai Li, Guoying Zhao
2021-05-04 · arXiv (pre-print)
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems. Recently, central difference convolution (CDC) has shown excellent representation capacity for the FAS task by leveraging local gradient features. However, aggregating central difference clues from all neighbors/directions simultaneously makes CDC redundant and sub-optimized during training. In this paper, we propose two Cross Central Difference Convolutions (C-CDC), which exploit the difference between the center and the surrounding sparse local features from the horizontal/vertical and diagonal directions, respectively. Interestingly, with only five-ninths of the parameters and lower computational cost, C-CDC even outperforms the full directional CDC. Based on these two decoupled C-CDCs, a powerful Dual-Cross Central Difference Network (DC-CDN) is established with Cross Feature Interaction Modules (CFIM) for mutual relation mining and local detailed representation enhancement. Furthermore, a novel Patch Exchange (PE) augmentation strategy for FAS is proposed, which simply exchanges face patches as well as their dense labels between random samples. The augmented samples thus contain richer live/spoof patterns and more diverse domain distributions, which benefits intrinsic and robust feature learning. Comprehensive experiments are performed on four benchmark datasets with three testing protocols to demonstrate our state-of-the-art performance.
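The central difference convolution that C-CDC decouples blends a vanilla convolution with a central-difference term weighted by a factor θ. A single-channel NumPy sketch of the full CDC (the 3×3 window, zero padding, and θ default are illustrative; real implementations operate on multi-channel feature maps):

```python
import numpy as np

def central_difference_conv2d(x, w, theta=0.7):
    """y = theta * sum(w * (patch - center)) + (1 - theta) * sum(w * patch)."""
    H, W = x.shape
    pad = np.pad(x, 1)                     # zero padding for a 3x3 kernel
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3]
            vanilla = (w * patch).sum()                  # intensity response
            central = (w * (patch - x[i, j])).sum()      # local gradient response
            out[i, j] = theta * central + (1 - theta) * vanilla
    return out
```

With θ = 0 this reduces to plain convolution; with θ = 1 the response vanishes on flat regions, which is why the gradient term highlights the fine texture differences useful for spoof detection. C-CDC restricts the summation to the horizontal/vertical or diagonal neighbors only, hence the parameter savings.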
arXiv:2105.01290v1 · fatcat:6j3ddmfbdzedbolj5zyl2w777u

A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [article]

Hongjun Wang, Guanbin Li, Xiaobai Liu, Liang Lin
2020-10-15 · arXiv (pre-print)
(Corresponding author: Guanbin Li) H. Wang, G. Li and L.  ... 
arXiv:2010.07849v1 · fatcat:yeuv5nd4sreonmhehpxyent3n4

Micro-expression spotting: A new benchmark [article]

Thuong-Khanh Tran, Quang-Nhat Vo, Xiaopeng Hong, Xiaobai Li, Guoying Zhao
2020-12-28 · arXiv (pre-print)
In 2013, Li et al.  ...  For example, in the research of Li et al. [2] , the performance of ME recognition is more than 65% when evaluated on the manually processed ME samples.  ... 
arXiv:2007.12421v2 · fatcat:yucdutshp5atnoxjboowhxorwa

Comparative analysis of the complete chloroplast genome sequences in psammophytic Haloxylon species (Amaranthaceae)

Wenpan Dong, Chao Xu, Delu Li, Xiaobai Jin, Ruili Li, Qi Lu, Zhili Suo
2016-11-10 · PeerJ
There are two published complete cp genome sequences (Spinacia oleracea and Beta vulgaris subsp. vulgaris) from members of the Amaranthaceae family (Li et al., 2014; Schmitz-Linneweber et al., 2001) .  ...  (Schmitz-Linneweber et al., 2001) and B. vulgaris subsp. vulgaris (GenBank accession number KJ081864.1, Beta vulgaris subsp. vulgaris) (Li et al., 2014) , two closely related species in the Amaranthaceae  ...  . • Delu Li contributed reagents/materials/analysis tools, reviewed drafts of the paper. • Xiaobai Jin wrote the paper, reviewed drafts of the paper. • Ruili Li prepared figures and/or tables, reviewed  ... 
doi:10.7717/peerj.2699 · pmid:27867769 · pmcid:PMC5111891 · fatcat:j7bgi5c2zzac7hsj4hwpcfmfgm

Recognising spontaneous facial micro-expressions

Tomas Pfister, Xiaobai Li, Guoying Zhao, Matti Pietikainen
2011 · 2011 International Conference on Computer Vision (IEEE)
Such a pipelined system could be used to detect lies by requiring MKL-PHASE1(K) = micro ∧ MKL-PHASE2(K) = lie.  ... 
doi:10.1109/iccv.2011.6126401 · dblp:conf/iccv/PfisterLZP11 · fatcat:zlxhp5o4lzbrdoxt3x6ulmysae

Video-based Remote Physiological Measurement via Cross-verified Feature Disentangling [article]

Xuesong Niu, Zitong Yu, Hu Han, Xiaobai Li, Shiguang Shan, Guoying Zhao
2020-07-16 · arXiv (pre-print)
Remote physiological measurements, e.g., remote photoplethysmography (rPPG) based heart rate (HR), heart rate variability (HRV) and respiration frequency (RF) measurement, are playing increasingly important roles in application scenarios where contact measurement is inconvenient or impossible. Since the amplitude of the physiological signals is very small, they can easily be affected by head movements, lighting conditions, and sensor diversity. To address these challenges, we propose a cross-verified feature disentangling strategy to disentangle the physiological features from non-physiological representations, and then use the distilled physiological features for robust multi-task physiological measurement. We first transform the input face videos into a multi-scale spatial-temporal map (MSTmap), which suppresses irrelevant background and noise features while retaining most of the temporal characteristics of the periodic physiological signals. We then take pairwise MSTmaps as inputs to an autoencoder architecture with two encoders (one for physiological signals and the other for non-physiological information) and use a cross-verified scheme to obtain physiological features disentangled from the non-physiological features. The disentangled features are finally used for the joint prediction of multiple physiological signals, such as average HR values and rPPG signals. Comprehensive experiments on large-scale public datasets for multiple physiological measurement tasks, as well as cross-database testing, demonstrate the robustness of our approach.
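The measurement task itself can be illustrated with the simplest classical rPPG baseline: spatially average a color channel over the face region per frame, then read the dominant frequency in the heart-rate band. This sketch is a generic baseline under stated assumptions (green channel, 0.7-4 Hz band), not the paper's disentangling network:

```python
import numpy as np

def estimate_hr(frames, fps=30.0):
    """frames: (T, H, W, 3) face video; returns estimated heart rate in bpm."""
    # spatially average the green channel (strongest pulse signal in classical rPPG)
    sig = frames[..., 1].mean(axis=(1, 2))
    sig = sig - sig.mean()                          # remove the DC component
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)          # plausible HR: 42-240 bpm
    return 60.0 * freqs[band][power[band].argmax()]
```

Head motion and lighting changes corrupt `sig` directly in this baseline, which is exactly the noise the cross-verified disentangling strategy is designed to separate out.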
arXiv:2007.08213v1 · fatcat:jfcpcti2ubdw7ist5yxr7r44zu

Multi-Modal Face Anti-Spoofing Based on Central Difference Networks [article]

Zitong Yu, Yunxiao Qin, Xiaobai Li, Zezheng Wang, Chenxu Zhao, Zhen Lei, Guoying Zhao
2020-04-17 · arXiv (pre-print)
Face anti-spoofing (FAS) plays a vital role in securing face recognition systems from presentation attacks. Existing multi-modal FAS methods rely on stacked vanilla convolutions, which are weak at describing detailed intrinsic information from the modalities and easily become ineffective when the domain shifts (e.g., cross attack and cross ethnicity). In this paper, we extend the central difference convolutional networks (CDCN) to a multi-modal version, intending to capture intrinsic spoofing patterns among three modalities (RGB, depth and infrared). Meanwhile, we also give an elaborate study of single-modal based CDCN. Our approach won first place in "Track Multi-Modal" and second place in "Track Single-Modal (RGB)" of the ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020. Our final submission obtains 1.02±0.59% and 4.84±1.79% ACER in "Track Multi-Modal" and "Track Single-Modal (RGB)", respectively. The code is available at https://github.com/ZitongYu/CDCN.
arXiv:2004.08388v1 · fatcat:5ej47ypxefd5rc4darlvcyx64i