Semantic-Aware Domain Generalized Segmentation [article]

Duo Peng, Yinjie Lei, Munawar Hayat, Yulan Guo, Wen Li
<span title="2022-04-02">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep models trained on a source domain lack generalization when evaluated on unseen target domains with different data distributions. The problem becomes even more pronounced when we have no access to target domain samples for adaptation. In this paper, we address domain generalized semantic segmentation, where a segmentation model is trained to be domain-invariant without using any target domain data. Existing approaches to tackle this problem standardize data into a unified distribution. We argue that while such a standardization promotes global normalization, the resulting features are not discriminative enough to get clear segmentation boundaries. To enhance separation between categories while simultaneously promoting domain invariance, we propose a framework including two novel modules: Semantic-Aware Normalization (SAN) and Semantic-Aware Whitening (SAW). Specifically, SAN focuses on category-level center alignment between features from different image styles, while SAW enforces distributed alignment for the already center-aligned features. With the help of SAN and SAW, we encourage both intra-category compactness and inter-category separability. We validate our approach through extensive experiments on widely-used datasets (i.e., GTAV, SYNTHIA, Cityscapes, Mapillary and BDDS). Our approach shows significant improvements over the existing state-of-the-art on various backbone networks. Code is available at https://github.com/leolyj/SAN-SAW
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.00822v1">arXiv:2204.00822v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/r6exzrn3dzf5tjpdfw2bddxpbi">fatcat:r6exzrn3dzf5tjpdfw2bddxpbi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220406203922/https://arxiv.org/pdf/2204.00822v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e8/b2/e8b2d52d703dd66a460a614348679307692b6147.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.00822v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Overcome Anterograde Forgetting with Cycled Memory Networks [article]

Jian Peng, Dingqi Ye, Bo Tang, Yinjie Lei, Yu Liu, Haifeng Li
<span title="2021-12-04">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.02342v1">arXiv:2112.02342v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/l56n5dkdqzdrzdz4niyoqzl7oi">fatcat:l56n5dkdqzdrzdz4niyoqzl7oi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211208130622/https://arxiv.org/pdf/2112.02342v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/af/84/af8412107aee0f1ff80421229d73b69804c7cae1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2112.02342v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Towards Using Count-level Weak Supervision for Crowd Counting [article]

Yinjie Lei, Yan Liu, Pingping Zhang, Lingqiao Liu
<span title="2020-02-29">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Most existing crowd counting methods require object location-level annotation, i.e., placing a dot at the center of an object. While being simpler than bounding-box or pixel-level annotation, obtaining this annotation is still labor-intensive and time-consuming, especially for images with highly crowded scenes. On the other hand, weaker annotations that only give the total count of objects can be almost effortless to obtain in many practical scenarios. Thus, it is desirable to develop a learning method that can effectively train models from count-level annotations. To this end, this paper studies the problem of weakly-supervised crowd counting, which learns a model from only a small amount of location-level annotations (fully-supervised) but a large amount of count-level annotations (weakly-supervised). To perform effective training in this scenario, we observe that the direct solution of regressing the integral of the density map to the object count is not sufficient, and it is beneficial to introduce stronger regularization on the predicted density maps of weakly-annotated images. We devise a simple-yet-effective training strategy, namely Multiple Auxiliary Tasks Training (MATT), to construct regularizers for restricting the freedom of the generated density maps. Through extensive experiments on existing datasets and a newly proposed dataset, we validate the effectiveness of the proposed weakly-supervised method and demonstrate its superior performance over existing solutions.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.00164v1">arXiv:2003.00164v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xds74t5x75faxaa4c3rqro2l2i">fatcat:xds74t5x75faxaa4c3rqro2l2i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200322142957/https://arxiv.org/pdf/2003.00164v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2003.00164v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Mask-aware networks for crowd counting [article]

Shengqin Jiang, Xiaobo Lu, Yinjie Lei, Lingqiao Liu
<span title="2019-06-20">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Y. Lei is with the College of Electronics and Information Engineering, Sichuan University, Chengdu 610064, China (e-mail: yinjie@scu.edu.cn).
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.00039v2">arXiv:1901.00039v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/q35r56ojwbfwje6x2teo6jky5e">fatcat:q35r56ojwbfwje6x2teo6jky5e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200930044005/https://arxiv.org/pdf/1901.00039v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/38/9b/389b217408db3c6d013207ede37590513cad28c1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.00039v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Hierarchical Paired Channel Fusion Network for Street Scene Change Detection [article]

Yinjie Lei, Duo Peng, Pingping Zhang, Qiuhong Ke, Haifeng Li
<span title="2020-10-19">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Street Scene Change Detection (SSCD) aims to locate the changed regions between a given street-view image pair captured at different times, which is an important yet challenging task in the computer vision community. The intuitive way to solve the SSCD task is to fuse the extracted image feature pairs and then directly measure the dissimilar parts to produce a change map. The key to the SSCD task is therefore to design an effective feature fusion method that can improve the accuracy of the corresponding change maps. To this end, we present a novel Hierarchical Paired Channel Fusion Network (HPCFNet), which utilizes the adaptive fusion of paired feature channels. Specifically, the features of a given image pair are jointly extracted by a Siamese Convolutional Neural Network (SCNN) and hierarchically combined by exploring the fusion of channel pairs at multiple feature levels. In addition, based on the observation that the distribution of scene changes is diverse, we further propose a Multi-Part Feature Learning (MPFL) strategy to detect diverse changes. Based on the MPFL strategy, our framework adapts to the scale and location diversities of the scene change regions. Extensive experiments on three public datasets (i.e., PCD, VL-CMU-CD and CDnet2014) demonstrate that the proposed framework achieves superior performance, outperforming other state-of-the-art methods by a considerable margin.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.09925v1">arXiv:2010.09925v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/il6izqgpejdfbn7ny3dc7vglyi">fatcat:il6izqgpejdfbn7ny3dc7vglyi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201024235817/https://arxiv.org/pdf/2010.09925v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/94/89/9489cac7ebc853d230195936682bf681bc1eab76.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2010.09925v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Box2Seg: Learning Semantics of 3D Point Clouds with Box-Level Supervision [article]

Yan Liu, Qingyong Hu, Yinjie Lei, Kai Xu, Jonathan Li, Yulan Guo
<span title="2022-01-09">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Learning dense point-wise semantics from unstructured 3D point clouds with fewer labels, although a realistic problem, has been under-explored in the literature. While existing weakly supervised methods can effectively learn semantics with only a small fraction of point-level annotations, we find that the vanilla bounding-box-level annotation is also informative for semantic segmentation of large-scale 3D point clouds. In this paper, we introduce a neural architecture, termed Box2Seg, to learn point-level semantics of 3D point clouds with bounding-box-level supervision. The key to our approach is to generate accurate pseudo-labels by exploring the geometric and topological structure inside and outside each bounding box. Specifically, an attention-based self-training (AST) technique and Point Class Activation Mapping (PCAM) are utilized to estimate pseudo-labels. The network is further trained and refined with these pseudo-labels. Experiments on two large-scale benchmarks, S3DIS and ScanNet, demonstrate the competitive performance of the proposed method. In particular, the proposed network can be trained with cheap, or even off-the-shelf, bounding-box-level annotations and subcloud-level tags.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2201.02963v1">arXiv:2201.02963v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yfn3uqzqqfhgtkmj4m7jo6bs5e">fatcat:yfn3uqzqqfhgtkmj4m7jo6bs5e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220112024911/https://arxiv.org/pdf/2201.02963v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/be/93/be9326b258a6b1cf10cfc30e3cb2646ad3e88634.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2201.02963v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

An HMM-SVM-Based Automatic Image Annotation Approach [chapter]

Yinjie Lei, Wilson Wong, Wei Liu, Mohammed Bennamoun
<span title="">2011</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
This paper presents a novel approach to Automatic Image Annotation (AIA) which combines a Hidden Markov Model (HMM) with a Support Vector Machine (SVM). Typical image annotation methods directly map low-level features to high-level concepts and overlook the importance of mining the contextual information among the annotated keywords. The proposed HMM-SVM based approach comprises two different kinds of HMMs, based on image color and texture features respectively, as the first-stage mapping scheme, and an SVM that takes the prediction results from the two HMMs as input and serves as a high-level classifier for final keywording. Our approach assigns 1-5 keywords to each test image. Using the Corel image dataset, our experiments have shown that the combination of a discriminative classifier and a generative model is beneficial in image annotation.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-19282-1_10">doi:10.1007/978-3-642-19282-1_10</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/a3unxcyk3fatdh3a5oeuedzpye">fatcat:a3unxcyk3fatdh3a5oeuedzpye</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170810164851/http://goanna.cs.rmit.edu.au/~e87368/paper/233282226.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/87/c1/87c1bb53ee711caa2aaa1b729a0ac4df44a24680.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-19282-1_10"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Deep Multiphase Level Set for Scene Parsing [article]

Pingping Zhang, Wei Liu, Yinjie Lei, Hongyu Wang, Huchuan Lu
<span title="2019-10-13">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Recently, the Fully Convolutional Network (FCN) has become the go-to architecture for image segmentation, including semantic scene parsing. However, it is difficult for a generic FCN to discriminate pixels around object boundaries, so FCN-based methods may output parsing results with inaccurate boundaries. Meanwhile, level-set-based active contours are superior for boundary estimation due to the sub-pixel accuracy they achieve. However, they are quite sensitive to initial settings. To address these limitations, in this paper we propose a novel Deep Multiphase Level Set (DMLS) method for semantic scene parsing, which efficiently incorporates multiphase level sets into deep neural networks. The proposed method consists of three modules, i.e., recurrent FCNs, an adaptive multiphase level set, and deeply supervised learning. More specifically, the recurrent FCNs learn multi-level representations of input images with different contexts. The adaptive multiphase level set drives a discriminative contour for each semantic class, making use of the advantages of both global and local information. In each time-step of the recurrent FCNs, deeply supervised learning is incorporated for model training. Extensive experiments on three public benchmarks show that our proposed method achieves new state-of-the-art performance.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.03166v2">arXiv:1910.03166v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7viff6ta75hy7fgg5rnxcya2wa">fatcat:7viff6ta75hy7fgg5rnxcya2wa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200905164803/https://arxiv.org/pdf/1910.03166v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/84/08/8408a1f1da62f0ba15edcb03b2d2b9910b22f92d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1910.03166v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Deformation and Correspondence Aware Unsupervised Synthetic-to-Real Scene Flow Estimation for Point Clouds [article]

Zhao Jin, Yinjie Lei, Naveed Akhtar, Haifeng Li, Munawar Hayat
<span title="2022-03-31">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
With the recent advances in 3D sensing and data-driven technologies, learning scene flow directly from point clouds has ... (Corresponding author: Yinjie Lei, yinjie@scu.edu.cn)
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.16895v1">arXiv:2203.16895v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ted2fygjj5ayxghumrblcarrci">fatcat:ted2fygjj5ayxghumrblcarrci</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220519200022/https://arxiv.org/pdf/2203.16895v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/25/7c/257c1be4cd8990a2727c7341b47de176d5545eef.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.16895v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Improving Distant Supervised Relation Extraction by Dynamic Neural Network [article]

Yanjie Gou, Yinjie Lei, Lingqiao Liu, Pingping Zhang, Xi Peng
<span title="2019-12-13">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Distant Supervised Relation Extraction (DSRE) is usually formulated as the problem of classifying a bag of sentences that contain two query entities into predefined relation classes. Most existing methods treat those relation classes as distinct semantic categories while ignoring their potential connection to the query entities. In this paper, we propose to leverage this connection to improve relation extraction accuracy. Our key ideas are twofold: (1) For sentences belonging to the same relation class, the expression style, i.e., word choice, can vary according to the query entities. To account for this style shift, the model should adjust its parameters in accordance with entity types. (2) Some relation classes are semantically similar, and the entity types that appear in one relation may also appear in others. Therefore, the model can be trained across different relation classes, which further helps classes with few samples, i.e., long-tail classes. To unify these two arguments, we develop a novel Dynamic Neural Network for Relation Extraction (DNNRE). The network adopts a novel dynamic parameter generator that generates the network parameters according to the query entity types and relation classes. Through this mechanism, the network can simultaneously handle the style-shift problem and enhance prediction accuracy for long-tail classes. Through our experimental study, we demonstrate the effectiveness of the proposed method and show that it achieves superior performance over state-of-the-art methods.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.06489v2">arXiv:1911.06489v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hl4ovrxnhja55dzwk3ehjifgyi">fatcat:hl4ovrxnhja55dzwk3ehjifgyi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200915024428/https://arxiv.org/pdf/1911.06489v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/26/de/26de41cd2c8edf278753a2bc07805ca3f770a46c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.06489v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Reviewing continual learning from the perspective of human-level intelligence [article]

Yifan Chang, Wenbo Li, Jian Peng, Bo Tang, Yu Kang, Yinjie Lei, Yuanmiao Gui, Qing Zhu, Yu Liu, Haifeng Li
<span title="2021-11-23">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Humans' continual learning (CL) ability is closely related to the Stability Versus Plasticity Dilemma, which describes how humans achieve both ongoing learning capacity and preservation of learned information. The notion of CL has been present in artificial intelligence (AI) since its birth. This paper presents a comprehensive review of CL. Different from previous reviews that mainly focus on the catastrophic forgetting phenomenon, this paper surveys CL from a more macroscopic perspective based on the Stability Versus Plasticity mechanism. Analogous to their biological counterparts, "smart" AI agents are supposed to (i) remember previously learned information (information retrospection); (ii) infer on new information continuously (information prospection); and (iii) transfer useful information (information transfer), to achieve high-level CL. According to this taxonomy, evaluation metrics, algorithms, applications, as well as some open issues are then introduced. Our main contributions concern (i) rechecking CL from the level of artificial general intelligence; (ii) providing a detailed and extensive overview of CL topics; and (iii) presenting some novel ideas on the potential development of CL.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.11964v1">arXiv:2111.11964v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/je5lyidbongfxj4v67zxs2a3bi">fatcat:je5lyidbongfxj4v67zxs2a3bi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211130070745/https://arxiv.org/pdf/2111.11964v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c4/34/c434e46a598d2d8faeee5b863b99d598a245dd8d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2111.11964v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Recognizing Objects in 3D Point Clouds with Multi-Scale Local Features

Min Lu, Yulan Guo, Jun Zhang, Yanxin Ma, Yinjie Lei
<span title="2014-12-15">2014</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/taedaf6aozg7vitz5dpgkojane" style="color: black;">Sensors</a> </i> &nbsp;
The draft of this article was initially written by Min Lu and Yulan Guo, and further revised by Jun Zhang and Yinjie Lei. Yinjie Lei also contributed to the experimental setup.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s141224156">doi:10.3390/s141224156</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/25517694">pmid:25517694</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC4299104/">pmcid:PMC4299104</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2baxtude6japxpbmn7jo2gzzay">fatcat:2baxtude6japxpbmn7jo2gzzay</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20150324004703/http://www.mdpi.com:80/1424-8220/14/12/24156/pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/45/8d/458d4c5b8d16abfbbb2c60a423b334d111249cb7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s141224156"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4299104" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

Semi-Supervised Crowd Counting via Self-Training on Surrogate Tasks [article]

Yan Liu, Lingqiao Liu, Peng Wang, Pingping Zhang, Yinjie Lei
<span title="2020-07-18">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Corresponding author: Yinjie Lei (Email: yinjie@scu.edu.cn).
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.03207v2">arXiv:2007.03207v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fgah5xk37fdk3auj62gkpo4zqq">fatcat:fgah5xk37fdk3auj62gkpo4zqq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200829063828/https://arxiv.org/pdf/2007.03207v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/16/d2/16d20b54d806b41efced56a5343ae572bc2d481b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2007.03207v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Designing Parallel Adaptive Laplacian Smoothing for Improving Tetrahedral Mesh Quality on the GPU

Ning Xi, Yinjie Sun, Lei Xiao, Gang Mei
<span title="2021-06-15">2021</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/smrngspzhzce7dy6ofycrfxbim" style="color: black;">Applied Sciences</a> </i> &nbsp;
Mesh quality is a critical issue in numerical computing because it directly impacts both computational efficiency and accuracy. Tetrahedral meshes are widely used in various engineering and science applications. However, in large-scale and complicated application scenarios there are a large number of tetrahedra, and in this case the improvement of mesh quality is computationally expensive. Laplacian mesh smoothing is a simple mesh optimization method that improves mesh quality by changing the locations of nodes. In this paper, by exploiting the parallelism features of the modern graphics processing unit (GPU), we specifically designed a parallel adaptive Laplacian smoothing algorithm for improving the quality of large-scale tetrahedral meshes. In the proposed adaptive algorithm, we define the aspect ratio as a metric to judge mesh quality after each iteration, ensuring that every smoothing step improves the mesh. The adaptive algorithm avoids the shortcoming of the ordinary Laplacian algorithm, which can create invalid elements in concave areas. We conducted five groups of comparative experiments to evaluate the performance of the proposed parallel algorithm. The results demonstrate that the proposed adaptive algorithm is up to 23 times faster than the serial algorithms, and that the quality of the tetrahedral mesh is satisfactorily improved after adaptive Laplacian mesh smoothing. Compared with the ordinary Laplacian algorithm, the proposed adaptive Laplacian algorithm is more widely applicable and can effectively deal with tetrahedra of extremely poor quality. This indicates that the proposed parallel algorithm can be applied to improve mesh quality in large-scale and complicated application scenarios.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/app11125543">doi:10.3390/app11125543</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rg64et6fyndkncjcm65xdz5rzq">fatcat:rg64et6fyndkncjcm65xdz5rzq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210616104857/https://res.mdpi.com/d_attachment/applsci/applsci-11-05543/article_deploy/applsci-11-05543-v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/aa/77/aa771968887f870a0238f6dda0f9a7ac308c16d1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/app11125543"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>

Deep point-to-subspace metric learning for sketch-based 3D shape retrieval

Yinjie Lei, Ziqin Zhou, Pingping Zhang, Yulan Guo, Zijun Ma, Lingqiao Liu
<span title="">2019</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/jm6w2xclfzguxnhmnmq5omebpi" style="color: black;">Pattern Recognition</a> </i> &nbsp;
One key issue in managing a large-scale 3D shape dataset is to identify an effective way to retrieve a shape of interest. The sketch-based query, which enjoys flexibility in representing the user's intention, has received growing interest in recent years due to the popularization of touchscreen technology. Essentially, the sketch depicts an abstraction of a shape from a certain view, while the shape contains the full 3D information. Matching between them is a cross-modality retrieval problem, and the state-of-the-art solution is to project the sketch and the 3D shape into a common space in which the cross-modality similarity can be calculated from feature similarity/distance. However, for a given query, only some viewpoints of the 3D shape are representative. Thus, blindly projecting a 3D shape into a feature vector without considering the query will inevitably bring in query-unrepresentative information. To handle this issue, in this work we propose a Deep Point-to-Subspace Metric Learning (DPSML) framework to project a sketch into a feature vector and a 3D shape into a subspace spanned by a few selected basis feature vectors. The similarity between them is defined as the distance between the query feature vector and its closest point in the subspace, obtained by solving an optimization problem on the fly. Note that the closest point is query-adaptive and can reflect the viewpoint information that is representative of the given query. To learn such a deep model efficiently, we formulate it as a classification problem with a special classifier design. To reduce the redundancy of 3D shapes, we also introduce a Representative-View Selection (RVS) module to select the most representative views of a 3D shape. Extensive experiments on various datasets show that the proposed method outperforms competitive baselines and attains state-of-the-art performance.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patcog.2019.106981">doi:10.1016/j.patcog.2019.106981</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kp4rqfodmbbtxcomm3rykhii6y">fatcat:kp4rqfodmbbtxcomm3rykhii6y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210715193837/https://digital.library.adelaide.edu.au/dspace/bitstream/2440/127442/2/hdl_127442.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ee/a6/eea61df5b9257c0d5df27c20ce8f22c9fadb35ab.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patcog.2019.106981"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a>
Showing results 1–15 of 84