650 Hits in 1.5 sec

Pyramid Mask Text Detector [article]

Jingchao Liu, Xuebo Liu, Jie Sheng, Ding Liang, Xin Li, Qingjie Liu
2019 arXiv   pre-print
Scene text detection, an essential step in scene text recognition systems, aims to locate text instances in natural scene images automatically. Some recent attempts, benefiting from Mask R-CNN, formulate the scene text detection task as an instance segmentation problem and achieve remarkable performance. In this paper, we present a new Mask R-CNN based framework named Pyramid Mask Text Detector (PMTD) to handle scene text detection. Instead of the binary text mask generated by existing Mask R-CNN based methods, our PMTD performs pixel-level regression under the guidance of location-aware supervision, yielding a more informative soft text mask for each text instance. For the generation of text boxes, PMTD reinterprets the obtained 2D soft mask in 3D space and introduces a novel plane clustering algorithm to derive the optimal text box on the basis of the 3D shape. Experiments on standard datasets demonstrate that the proposed PMTD brings consistent and noticeable gains and clearly outperforms state-of-the-art methods. Specifically, it achieves an F-measure of 80.13% on the ICDAR 2017 MLT dataset.
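The plane clustering step above rests on an ordinary least-squares plane fit: each sloped face of the pyramid-shaped soft mask can be modelled as a plane whose intersection with the image plane marks a text-box border. A minimal sketch of that basic operation (the paper's actual clustering procedure is not reproduced here; the synthetic data and function name are illustrative):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to 3D points (x, y, z),
    where z is the soft-mask value at pixel (x, y)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

# Synthetic face of a soft mask: values rise linearly away from one border.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
zs = 0.1 * xs + 0.0 * ys + 0.2
pts = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])
a, b, c = fit_plane(pts)
print(np.allclose([a, b, c], [0.1, 0.0, 0.2]))  # True
```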
arXiv:1903.11800v1 fatcat:zleabbqd3fcznhsok5dzx5tosi

Counting dense objects in remote sensing images [article]

Guangshuai Gao, Qingjie Liu, Yunhong Wang
2020 arXiv   pre-print
Estimating the accurate number of objects of interest in a given image is a challenging yet important task. Significant efforts have been made to address this problem and great progress has been achieved, yet counting ground objects in remote sensing images is barely studied. In this paper, we are interested in counting dense objects in remote sensing images. Compared with object counting in natural scenes, this task is challenging due to the following factors: large scale variation, complex cluttered background, and orientation arbitrariness. More importantly, the scarcity of data severely limits research in this field. To address these issues, we first construct a large-scale object counting dataset based on remote sensing images, which contains four kinds of objects: buildings, crowded ships in harbors, large vehicles, and small vehicles in parking lots. We then benchmark the dataset by designing a novel neural network that generates a density map of an input image. The proposed network consists of three parts, namely a convolution block attention module (CBAM), a scale pyramid module (SPM), and a deformable convolution module (DCM). Experiments on the proposed dataset and comparisons with state-of-the-art methods demonstrate the challenge posed by the proposed dataset and the superiority and effectiveness of our method.
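The density-map formulation mentioned above is the standard device in object counting: the ground truth is built by placing a unit-mass Gaussian at each annotated object centre, so the map's integral equals the object count. A minimal sketch of that construction, independent of the paper's CBAM/SPM/DCM modules (kernel size and sigma are illustrative choices):

```python
import numpy as np

def gaussian_kernel(size=15, sigma=4.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()  # normalize so each object contributes exactly 1

def density_map(points, h, w, size=15, sigma=4.0):
    """Place a unit-mass Gaussian at each annotated object centre;
    summing the map then recovers the object count."""
    dmap = np.zeros((h + size, w + size))  # pad so kernels fit near borders
    k = gaussian_kernel(size, sigma)
    for (y, x) in points:
        dmap[y:y + size, x:x + size] += k
    return dmap

pts = [(10, 20), (40, 40), (55, 12)]
dm = density_map(pts, 64, 64)
print(round(dm.sum(), 3))  # 3.0 — the integral recovers the count
```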
arXiv:2002.05928v1 fatcat:vezcj6ersfgw3bft3wipdqsbiy

SparseTT: Visual Tracking with Sparse Transformers [article]

Zhihong Fu, Zehua Fu, Qingjie Liu, Wenrui Cai, Yunhong Wang
2022 arXiv   pre-print
Transformers have been successfully applied to the visual tracking task and significantly promote tracking performance. The self-attention mechanism, designed to model long-range dependencies, is the key to the success of Transformers. However, self-attention lacks focus on the most relevant information in the search regions, making it easily distracted by the background. In this paper, we relieve this issue with a sparse attention mechanism that focuses on the most relevant information in the search regions, which enables much more accurate tracking. Furthermore, we introduce a double-head predictor to boost the accuracy of foreground-background classification and regression of target bounding boxes, which further improves tracking performance. Extensive experiments show that, without bells and whistles, our method significantly outperforms the state-of-the-art approaches on LaSOT, GOT-10k, TrackingNet, and UAV123, while running at 40 FPS. Notably, the training time of our method is reduced by 75% compared to that of TransT. The source code and models are available at
arXiv:2205.03776v1 fatcat:djchdede4vbrxgq5gdp32yisza

EDTER: Edge Detection with Transformer [article]

Mengyang Pu and Yaping Huang and Yuming Liu and Qingji Guan and Haibin Ling
2022 arXiv   pre-print
Convolutional neural networks have made significant progress in edge detection by progressively exploring context and semantic features. However, local details are gradually suppressed as receptive fields enlarge. Recently, vision transformers have shown excellent capability in capturing long-range dependencies. Inspired by this, we propose a novel transformer-based edge detector, Edge Detection TransformER (EDTER), to extract clear and crisp object boundaries and meaningful edges by exploiting the full image context information and detailed local cues simultaneously. EDTER works in two stages. In Stage I, a global transformer encoder is used to capture long-range global context on coarse-grained image patches. Then, in Stage II, a local transformer encoder works on fine-grained patches to excavate the short-range local cues. Each transformer encoder is followed by an elaborately designed Bi-directional Multi-Level Aggregation decoder to achieve high-resolution features. Finally, the global context and local cues are combined by a Feature Fusion Module and fed into a decision head for edge prediction. Extensive experiments on BSDS500, NYUDv2, and Multicue demonstrate the superiority of EDTER in comparison with state-of-the-art methods.
arXiv:2203.08566v1 fatcat:vb2gjughizglxnnx3itvpi6muq

Dynamical switching of lasing emission by exceptional point modulation in coupled microcavities [article]

Yicong Zhang, Weiwei Liu, Qingjie Liu, Bing Wang, Peixiang Lu
2019 arXiv   pre-print
In a non-Hermitian optical system with loss and gain, an exceptional point (EP) arises under specific parameters where the eigenvalues and eigenstates exhibit simultaneous coalescence. Here we report dynamical switching of lasing behavior in a non-Hermitian system composed of coupled microcavities by modulating the EPs. Utilizing the effect of gain, loss, and coupling on the eigenstates of coupled microcavities, the evolution path of the eigenvalues related to the laser emission characteristics can be modulated. As a result, the lasing emission property of the coupled cavities exhibits a dynamical switching behavior, which can also be effectively controlled by tuning the gain and loss of the cavities. Moreover, the evolution behavior in a more complicated system composed of three coupled microcavities is investigated, showing better tunability compared with the two-microcavity system. Our results correlate the EPs in non-Hermitian systems with lasing emission in complex microcavity systems, showing great potential for realizing dynamical, ultrafast, and multifunctional optoelectronic devices for on-chip integration.
arXiv:1912.11765v1 fatcat:vbyprhkfofgerfw5vafhbf2iri

Unsupervised Change Detection for Multispectral Remote Sensing Images Using Random Walks

Qingjie Liu, Lining Liu, Yunhong Wang
2017 Remote Sensing  
Author Contributions: Qingjie Liu and Lining Liu conceived and designed the experiments; Lining Liu and Qingjie Liu performed the experiments; Qingjie Liu analyzed the data; Yunhong Wang contributed analysis tools; Qingjie Liu and Lining Liu wrote the paper. Conflicts of Interest: The authors  ...
doi:10.3390/rs9050438 fatcat:g66xktqcrragbc4vbxstvxny7e

Feature Map Pooling for Cross-View Gait Recognition Based on Silhouette Sequence Images [article]

Qiang Chen, Yunhong Wang, Zheng Liu, Qingjie Liu, Di Huang
2017 arXiv   pre-print
In this paper, we develop a novel convolutional neural network based approach to extract and aggregate useful information from gait silhouette sequence images, instead of simply representing the gait process by averaging silhouette images. The network takes a pair of arbitrary-length image sequences as input and extracts features for each silhouette independently. Then a feature map pooling strategy is adopted to aggregate sequence features. Subsequently, a network similar to a Siamese network is designed to perform recognition. The proposed network is simple, easy to implement, and can be trained in an end-to-end manner. Cross-view gait recognition experiments are conducted on the OU-ISIR large population dataset. The results demonstrate that our network can extract and aggregate features from silhouette sequences effectively. It also achieves competitive equal error rates and comparable identification rates compared with the state of the art.
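The feature map pooling strategy above can be sketched as an element-wise maximum over the temporal axis of per-frame feature maps, one common way to aggregate an arbitrary-length sequence into a fixed-size descriptor (the exact pooling operator and the feature dimensions here are assumptions, not the paper's reported configuration):

```python
import numpy as np

def pool_sequence_features(frame_features):
    """Aggregate per-frame CNN feature maps of an arbitrary-length
    silhouette sequence into one fixed-size descriptor by taking the
    element-wise maximum over the temporal axis."""
    stacked = np.stack(frame_features, axis=0)  # (T, C, H, W)
    return stacked.max(axis=0)                  # (C, H, W)

# Sequences of different lengths yield descriptors of identical shape,
# which is what lets the network accept arbitrary-length inputs.
seq_a = [np.random.rand(64, 16, 11) for _ in range(20)]
seq_b = [np.random.rand(64, 16, 11) for _ in range(35)]
print(pool_sequence_features(seq_a).shape)  # (64, 16, 11)
print(pool_sequence_features(seq_b).shape)  # (64, 16, 11)
```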
arXiv:1711.09358v1 fatcat:6jljlh6w6zes7fklkpmrwjpewm

Ultrasound Video Summarization using Deep Reinforcement Learning [article]

Tianrui Liu, Qingjie Meng, Athanasios Vlontzos, Jeremy Tan, Daniel Rueckert, Bernhard Kainz
2020 arXiv   pre-print
Video is an essential imaging modality for diagnostics, e.g. in ultrasound imaging, endoscopy, or movement assessment. However, video has not received much attention in the medical image analysis community. In clinical practice, it is challenging to utilise raw diagnostic video data efficiently, as video data take a long time to process, annotate, or audit. In this paper we introduce a novel, fully automatic video summarization method that is tailored to the needs of medical video analysis. Our approach is framed as a reinforcement learning problem and produces agents focusing on the preservation of important diagnostic information. We evaluate our method on videos from fetal ultrasound screening, where commonly only a small amount of the recorded data is used diagnostically. We show that our method is superior to alternative video summarization methods and that it preserves essential information required by clinical diagnostic standards.
arXiv:2005.09531v1 fatcat:7smho4n6lber3ccxzwvvfac6yi

Co-Saliency Detection with Co-Attention Fully Convolutional Network [article]

Guangshuai Gao, Wenting Zhao, Qingjie Liu, Yunhong Wang
2020 arXiv   pre-print
(Corresponding author: Qingjie Liu) Guangshuai Gao, Qingjie Liu and Yunhong Wang are with the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Xueyuan Road, Haidian District  ...
arXiv:2008.08909v1 fatcat:ykgayihxrvcp5hac3gv545h7li

Tunnel surrounding rock stability prediction using improved KNN algorithm

Qingjie Qi, Shuai Huang, Jianzhong Liu, Wengang Liu
2020 Journal of Vibroengineering  
doi:10.21595/jve.2020.21427 fatcat:oh4ljcxu5jem3dla3aqgtafx54

Isolation and identification of a halophilic and alkaliphilic microalgal strain

Chenxi Liu, Jiali Liu, Songmiao Hu, Xin Wang, Xuhui Wang, Qingjie Guan
2019 PeerJ  
Qingjie Guan conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved  ...
doi:10.7717/peerj.7189 pmid:31275763 pmcid:PMC6596407 fatcat:ykxrhodmuzbo7cbnuyaqkxzaxm

Study on Flow in Fractured Porous Media Using Pore-Fracture Network Modeling

Haijiao Liu, Xuhui Zhang, Xiaobing Lu, Qingjie Liu
2017 Energies  
Liu et al. [36] developed a fluid-solid coupling model for low-permeability fractured reservoirs. Author Contributions: Xuhui Zhang, Xiaobing Lu and Qingjie Liu conceived of the presented idea. Haijiao Liu performed the computations and verified the methods. Haijiao Liu and Xuhui Zhang wrote and revised the article. All authors discussed the results and contributed to the final manuscript.
doi:10.3390/en10121984 fatcat:jmlnjpevxfayfdtwbs7fod6yhm

Visual and Textual Sentiment Analysis Using Deep Fusion Convolutional Neural Networks [article]

Xingyue Chen, Yunhong Wang, Qingjie Liu
2017 arXiv   pre-print
Sentiment analysis is attracting more and more attention and has become a very hot research topic due to its potential applications in personalized recommendation, opinion mining, etc. Most existing methods are based on either textual or visual data alone and cannot achieve satisfactory results, as it is very hard to extract sufficient information from a single modality. Inspired by the observation that there exists a strong semantic correlation between visual and textual data in social media, we propose an end-to-end deep fusion convolutional neural network to jointly learn textual and visual sentiment representations from training examples. The two modalities are fused together in a pooling layer and fed into fully-connected layers to predict the sentiment polarity. We evaluate the proposed approach on two widely used data sets. Results show that our method achieves promising results compared with state-of-the-art methods, clearly demonstrating its competency.
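A minimal sketch of the fusion head described above, assuming pooled per-modality feature vectors and a single fully-connected classification layer (all dimensions, the random weights, and the function name are illustrative; the actual model is a deep CNN trained end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(text_feat, img_feat, W, b):
    """Late fusion: concatenate the pooled textual and visual
    representations, then one fully-connected layer produces
    sentiment-polarity logits (0 = negative, 1 = positive)."""
    fused = np.concatenate([text_feat, img_feat])  # pooling-layer output
    logits = W @ fused + b                         # fully-connected layer
    return int(np.argmax(logits))

text_feat = rng.standard_normal(128)   # pooled text-branch output (assumed dim)
img_feat = rng.standard_normal(256)    # pooled image-branch output (assumed dim)
W = rng.standard_normal((2, 384)) * 0.01
b = np.zeros(2)
print(fuse_and_classify(text_feat, img_feat, W, b))
```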
arXiv:1711.07798v1 fatcat:wwg7euw24fcldbzozdseokerc4

Qingjie Fuzheng granules inhibit colorectal cancer cell growth by the PI3K/AKT and ERK pathways

Hong Yang, Jian-Xin Liu, Hai-Xia Shang, Shan Lin, Jin-Yan Zhao, Jiu-Mao Lin
2019 World Journal of Gastrointestinal Oncology  
Qingjie Fuzheng granules (QFGs) are part of a traditional Chinese medicine formula, which has been widely used and found to be clinically effective with few side effects in various cancer treatments, including  ...  Abbreviations: QFGs: Qingjie Fuzheng granules; FACS: Fluorescence activated cell sorting; PI: Propidium iodide; FITC: Fluorescein isothiocyanate.
doi:10.4251/wjgo.v11.i5.377 pmid:31139308 pmcid:PMC6522764 fatcat:26eplsija5efrdd3jmlb4ubd7m

CNN-based Density Estimation and Crowd Counting: A Survey [article]

Guangshuai Gao, Junyu Gao, Qingjie Liu, Qi Wang, Yunhong Wang
2020 arXiv   pre-print
Furthermore, Liu et al.  ...  Affiliations include Northwestern Polytechnical University, Xi'an 710072, Shaanxi, China, and Hangzhou, 310051, China. Corresponding author: Qingjie Liu.
arXiv:2003.12783v1 fatcat:uqsoismxkzft7audwvdpr3dt7q
Showing results 1 — 15 out of 650 results