504 Hits in 1.9 sec

Differentiable Linearized ADMM [article]

Xingyu Xie, Jianlong Wu, Zhisheng Zhong, Guangcan Liu, Zhouchen Lin
2019 arXiv   pre-print
In Proceedings of the AAAI Conference on Artificial Intelligence, 2018. Xingyu Xie*, Jianlong Wu*, Zhisheng Zhong, Guangcan Liu, Zhouchen Lin. Differentiable Linearized ADMM (Supplementary Material)  ...
arXiv:1905.06179v1 fatcat:upuwu2zsijam3ic3lti6qdhfmq

W-core inverses in rings with involution [article]

Huihui Zhu, Liyun Wu, Jianlong Chen
2022 arXiv   pre-print
Let R be a unital *-ring and let a,w,v∈ R. The initial goal of this work is to introduce two new classes of generalized inverses, called the w-core inverse and the dual v-core inverse. It is shown that the core inverse and the pseudo core inverse are special cases of the w-core inverse, and that the dual core inverse and the dual pseudo core inverse are instances of the dual v-core inverse. An element a∈ R is w-core invertible if there exists some x∈ R such that awx^2=x, xawa=a and (awx)^*=awx. Such an x is called a w-core inverse of a. The dual v-core inverse of a is defined by the existence of y∈ R satisfying y^2va=y, avay=a and (yva)^*=yva. Several characterizations of them are given, and the expression of the w-core inverse (resp. the dual v-core inverse) of a is given by the inverse of w along a and {1,3}-inverses of a (resp. the inverse of v along a and {1,4}-inverses of a). Also, w-core invertible and dual v-core invertible elements are characterized in terms of units. Finally, the relationships among the w-core inverse, the dual v-core inverse and other generalized inverses are given.
arXiv:2205.00181v1 fatcat:vl3gnkk4hfcm7jkfyyq6t746vi
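
For readability, the defining identities quoted in the abstract can be restated in display form (this adds nothing beyond the abstract itself):

```latex
% x is a w-core inverse of a:
awx^{2} = x, \qquad xawa = a, \qquad (awx)^{*} = awx.
% y is a dual v-core inverse of a:
y^{2}va = y, \qquad avay = a, \qquad (yva)^{*} = yva.
```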

Maximum-and-Concatenation Networks [article]

Xingyu Xie, Hao Kong, Jianlong Wu, Wayne Zhang, Guangcan Liu, Zhouchen Lin
2020 arXiv   pre-print
..., batch normalization (Ioffe & Szegedy, 2015), group normalization (Wu & He, 2018), dropout (Srivastava et al., 2014), etc.  ...
arXiv:2007.04630v1 fatcat:3yitm2jlenddxmwvuaquvexrle

Matrix Recovery with Implicitly Low-Rank Data [article]

Xingyu Xie, Jianlong Wu, Guangcan Liu, Jun Wang
2018 arXiv   pre-print
In this paper, we study the problem of matrix recovery, which aims to restore a target matrix of authentic samples from grossly corrupted observations. Most existing methods, such as the well-known Robust Principal Component Analysis (RPCA), assume that the target matrix we wish to recover is low-rank. However, the underlying data structure is often non-linear in practice, so the low-rankness assumption can be violated. To tackle this issue, we propose a novel method for matrix recovery that handles the case where the target matrix is low-rank in an implicit feature space but high-rank or even full-rank in its original form. Namely, our method pursues the low-rank structure of the target matrix in an implicit feature space. By exploiting the specifics of an accelerated proximal gradient based optimization algorithm, the proposed method can recover a target matrix with non-linear structure from its corrupted version. Comprehensive experiments on both synthetic and real datasets demonstrate the superiority of our method.
arXiv:1811.03945v1 fatcat:unxcilrdfna4ng6o6oz2dr3ssy
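
The abstract credits an accelerated proximal gradient algorithm for pursuing low-rankness. The paper's exact updates are not reproduced here, but the proximal step behind essentially all nuclear-norm-based low-rank recovery is singular value thresholding; a minimal numpy sketch, with the function name and step size chosen for illustration:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * nuclear norm, used inside (accelerated) proximal
    gradient methods for low-rank matrix recovery."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink singular values by tau
    return (U * s_shrunk) @ Vt

# Toy usage: shrink a noisy observation of a rank-5 matrix.
rng = np.random.default_rng(0)
L = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 50))  # rank 5
X = L + 0.1 * rng.standard_normal((50, 50))                      # corrupted
L_hat = svt(X, tau=1.0)
print((np.linalg.svd(L_hat, compute_uv=False) > 1e-8).sum())  # retained rank
```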

Mechanical Creep Instability of Nanocrystalline Methane Hydrates [article]

Pinqiang Cao, Jianlong Sheng, Jianyang Wu, Fulong Ning
2020 arXiv   pre-print
Mechanical creep behaviors of natural gas hydrates (NGHs) are important for understanding the mechanical instability of gas hydrate-bearing sediments on Earth. Limited by experimental challenges, the intrinsic creep mechanisms of nanocrystalline methane hydrates remain largely unknown at the molecular scale. Herein, using large-scale molecular dynamics (MD) simulations, the mechanical creep behaviors of nanocrystalline methane hydrates are investigated. It is revealed that mechanical creep behaviors are greatly dictated by the internal microstructure of crystalline grain size and the external conditions of temperature and static stress. Interestingly, a long steady-state creep is observed in nanocrystalline methane hydrates, which can be described by a modified constitutive Bird-Dorn-Mukherjee model. Microstructural analysis shows that deformation of crystalline grains, grain boundary (GB) diffusion and GB sliding collectively govern the mechanical creep behaviors of nanocrystalline methane hydrates. Furthermore, structural transformation also appears important in the creep mechanisms. This study sheds new light on the mechanical creep scenarios of gas hydrates.
arXiv:2011.07786v1 fatcat:4cwbygabonaslc42wfvigndacm
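
The abstract invokes a modified Bird-Dorn-Mukherjee model without stating it; the modification used in the paper is not given here, but for orientation the classical constitutive form for the steady-state creep rate is

```latex
\dot{\varepsilon} \;=\; A\,\frac{D_{0}\,G\,b}{k_{B} T}
  \left(\frac{b}{d}\right)^{p}
  \left(\frac{\sigma}{G}\right)^{n}
  \exp\!\left(-\frac{Q}{R T}\right),
```

where \(\sigma\) is the applied stress, \(d\) the grain size, \(G\) the shear modulus, \(b\) the Burgers vector, \(Q\) the activation energy, and \(A\), \(p\), \(n\) material constants; the exponents \(p\) and \(n\) distinguish grain-boundary-mediated mechanisms (diffusion, sliding) from dislocation creep.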

Reduction Disintegration Behavior of Lump Ore in COREX Shaft Furnace

Shengli Wu, Xinliang Liu, Jianlong Wu
2015 ISIJ International  
The heavy disintegration of lump ores produces plenty of small particles in the COREX shaft furnace, which decreases the gas permeability and productivity of the shaft furnace; the proportion of lump ores in the burden of the COREX shaft furnace is therefore limited to a low level. In this work, the reduction disintegration behavior of lump ore samples was studied by simulating the reduction process of the COREX shaft furnace. The influence of temperature, reduction time and gas composition on the reduction disintegration index (RDI−6.3) of lump ore samples was also evaluated. The results showed that the disintegration behavior of lump ores in the COREX shaft furnace can generally be divided into three steps, and that the disintegration mainly occurred in the second step, in the temperature zone from 450°C to 650°C at low reduction degree. Meanwhile, the RDI−6.3 of lump ore samples presented an "inverted V-shape" tendency in the temperature range from 450°C to 650°C under different reduction times. The mutual promotion of the reduction reaction and the carbon deposition reaction (CDR) was identified as the main reason for the heavy disintegration of lump ores in the COREX shaft furnace. In addition, increasing the H2 concentration in the reducing gas and reducing rapidly at higher temperature would decrease the disintegration degree of lump ores in the COREX shaft furnace.
doi:10.2355/isijinternational.isijint-2014-417 fatcat:526c2nw3gbazxcb2dg2cingacy
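
The abstract reports RDI−6.3 without defining it; by the usual convention in reduction-disintegration testing (cf. the ISO 4696 family of tests), it is the mass fraction of the reduced, tumbled sample passing a 6.3 mm sieve:

```latex
\mathrm{RDI}_{-6.3} \;=\; \frac{m_{<6.3\,\mathrm{mm}}}{m_{\mathrm{total}}}\times 100\%,
```

so a higher RDI−6.3 indicates heavier disintegration.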

Nano watermill driven by the revolving charge [article]

Xiaoyan Zhou, Jianlong Kou, Xuechuan Nie, Fengmin Wu, Yang Liu, Hangjun Lu
2015 arXiv   pre-print
Using molecular dynamics simulations, we propose a novel nanoscale watermill for unidirectional transport of water molecules through a curved single-walled carbon nanotube (SWNT). In this nanoscale system, a revolving charge is introduced to drive the water chain confined inside the SWNT, which serves as both nano waterwheel and nano engine. A resonance-like phenomenon is found in which the revolving frequency of the charge plays a key role in pumping the water chain. The water flux across the SWNT increases with the revolving frequency of the external charge and reaches its maximum when the frequency is 4 THz. Correspondingly, the number of hydrogen bonds of the water chain inside the SWNT decreases dramatically as the frequency ranges from 4 THz to 25 THz. The mechanism behind the resonant phenomenon has been investigated systematically. Our findings are helpful for designing nanoscale fluidic devices and energy converters.
arXiv:1502.03832v1 fatcat:pnt26qflb5gm5apsf7vqg2fgfm

Image Inspired Poetry Generation in XiaoIce [article]

Wen-Feng Cheng, Chao-Chung Wu, Ruihua Song, Jianlong Fu, Xing Xie, Jian-Yun Nie
2018 arXiv   pre-print
(Tosa, Obara, and Minoh 2008) and (Wu, Tosa, and Nakatsu 2009) developed an interactive system for traditional Japanese poetry.  ...
arXiv:1808.03090v1 fatcat:utiescavojecza6oehcvuxbia4

Semantic-aware Modular Capsule Routing for Visual Question Answering [article]

Yudong Han, Jianhua Yin, Jianlong Wu, Yinwei Wei, Liqiang Nie
2022 arXiv   pre-print
... ZR2019JQ23; Yudong Han, Jianhua Yin, and Jianlong Wu are with the School of Computer Science and Technology, Shandong University, Qingdao, 266237, China.  ...
arXiv:2207.10404v1 fatcat:725u4d4jgfahzjarbg4ggu6cbq

SOGNet: Scene Overlap Graph Network for Panoptic Segmentation [article]

Yibo Yang, Hongyang Li, Xia Li, Qijie Zhao, Jianlong Wu, Zhouchen Lin
2019 arXiv   pre-print
The panoptic segmentation task requires a unified result from semantic and instance segmentation outputs that may contain overlaps. However, current studies widely ignore modeling overlaps. In this study, we aim to model overlap relations among instances and resolve them for panoptic segmentation. Inspired by scene graph representation, we formulate the overlapping problem as a simplified case, named scene overlap graph. We leverage each object's category, geometry and appearance features to perform relational embedding, and output a relation matrix that encodes overlap relations. In order to overcome the lack of supervision, we introduce a differentiable module to resolve the overlap between any pair of instances. The mask logits after removing overlaps are fed into per-pixel instance id classification, which leverages the panoptic supervision to assist in the modeling of overlap relations. Besides, we generate an approximate ground truth of overlap relations as weak supervision, to quantify the accuracy of the overlap relations predicted by our method. Experiments on COCO and Cityscapes demonstrate that our method accurately predicts overlap relations and outperforms the state of the art for panoptic segmentation. Our method also won the Innovation Award in the COCO 2019 challenge.
arXiv:1911.07527v1 fatcat:hzuqk53k6jcizoaaspkvy3iz5q
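
A toy illustration of the resolution idea, not SOGNet's actual module: given a relation matrix R with R[i, j] = 1 meaning instance i lies under instance j, the occluded instance's mask logits can be suppressed wherever the occluder is confident, so each pixel ends up assigned to at most one instance. All names and the suppression rule below are illustrative:

```python
import numpy as np

def resolve_overlaps(mask_logits, R):
    """Toy overlap resolution.

    mask_logits: (N, H, W) per-instance mask logits.
    R: (N, N) binary relation matrix, R[i, j] = 1 if instance i
       is under (occluded by) instance j.
    For each pair with R[i, j] = 1, subtract j's predicted mask
    probability from i's positive logits, removing the overlap.
    """
    probs = 1.0 / (1.0 + np.exp(-mask_logits))  # sigmoid per instance
    out = mask_logits.copy()
    N = mask_logits.shape[0]
    for i in range(N):
        for j in range(N):
            if R[i, j]:
                out[i] -= probs[j] * np.maximum(mask_logits[i], 0.0)
    return out

# Two overlapping 4x4 instance masks; instance 0 is under instance 1.
logits = np.random.randn(2, 4, 4)
R = np.array([[0, 1], [0, 0]])
print(resolve_overlaps(logits, R).shape)  # (2, 4, 4)
```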

MiniViT: Compressing Vision Transformers with Weight Multiplexing [article]

Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, Lu Yuan
2022 arXiv   pre-print
Vision Transformer (ViT) models have recently drawn much attention in computer vision due to their high model capability. However, ViT models suffer from a huge number of parameters, restricting their applicability on devices with limited memory. To alleviate this problem, we propose MiniViT, a new compression framework that achieves parameter reduction in vision transformers while retaining the same performance. The central idea of MiniViT is to multiplex the weights of consecutive transformer blocks. More specifically, we make the weights shared across layers, while imposing a transformation on the weights to increase diversity. Weight distillation over self-attention is also applied to transfer knowledge from large-scale ViT models to weight-multiplexed compact models. Comprehensive experiments demonstrate the efficacy of MiniViT, showing that it can reduce the size of the pre-trained Swin-B transformer by 48%, while achieving an increase of 1.0% in Top-1 accuracy on ImageNet. Moreover, using a single layer of parameters, MiniViT is able to compress DeiT-B by 9.7 times, from 86M to 9M parameters, without seriously compromising the performance. Finally, we verify the transferability of MiniViT by reporting its performance on downstream benchmarks. Code and models are available here.
arXiv:2204.07154v1 fatcat:euxga32ah5aercyff6hfu3zhxq
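
A minimal sketch of the weight-multiplexing idea described above (PyTorch): one set of block weights is shared across depth, while a cheap per-layer transformation restores diversity. The module name and the choice of a per-layer affine transformation are illustrative, not MiniViT's exact design:

```python
import torch
import torch.nn as nn

class MultiplexedBlocks(nn.Module):
    """Illustrative weight multiplexing: one shared MLP block reused
    across `depth` layers, each layer adding its own lightweight
    affine transformation to increase diversity."""
    def __init__(self, dim, depth):
        super().__init__()
        self.shared = nn.Sequential(  # weights shared by all layers
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Per-layer parameters are tiny compared to the shared block.
        self.scales = nn.Parameter(torch.ones(depth, dim))
        self.shifts = nn.Parameter(torch.zeros(depth, dim))

    def forward(self, x):
        for scale, shift in zip(self.scales, self.shifts):
            x = x + scale * self.shared(x) + shift  # residual, transformed
        return x

blocks = MultiplexedBlocks(dim=192, depth=12)
y = blocks(torch.randn(2, 196, 192))
print(y.shape)  # torch.Size([2, 196, 192])
```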

Near-term performance of quantum repeaters with imperfect ensemble-based quantum memories [article]

Yufeng Wu, Jianlong Liu, Christoph Simon
2019 arXiv   pre-print
Jianlong Liu was supported by the National Key R&D Program of China (2017YFA0303902). Appendix A: Detailed derivation of Eq. (14) and Eq. (15). In this section, we give a detailed derivation of Eq.  ...
arXiv:1912.01702v1 fatcat:7m76as5obfcu7gsjradjb5agha

TinyViT: Fast Pretraining Distillation for Small Vision Transformers [article]

Kan Wu, Jinnian Zhang, Houwen Peng, Mengchen Liu, Bin Xiao, Jianlong Fu, Lu Yuan
2022 arXiv   pre-print
Vision transformers (ViTs) have recently drawn great attention in computer vision due to their remarkable model capability. However, most prevailing ViT models suffer from a huge number of parameters, restricting their applicability on devices with limited resources. To alleviate this issue, we propose TinyViT, a new family of tiny and efficient small vision transformers pretrained on large-scale datasets with our proposed fast distillation framework. The central idea is to transfer knowledge from large pretrained models to small ones, while enabling the small models to get the dividends of massive pretraining data. More specifically, we apply distillation during pretraining for knowledge transfer. The logits of large teacher models are sparsified and stored on disk in advance to save memory cost and computational overhead. The tiny student transformers are automatically scaled down from a large pretrained model under computation and parameter constraints. Comprehensive experiments demonstrate the efficacy of TinyViT. It achieves a top-1 accuracy of 84.8% on ImageNet-1k with only 21M parameters, comparable to Swin-B pretrained on ImageNet-21k while using 4.2 times fewer parameters. Moreover, with increased image resolution, TinyViT can reach 86.5% accuracy, slightly better than Swin-L while using only 11% of the parameters. Last but not least, we demonstrate the good transfer ability of TinyViT on various downstream tasks. Code and models are available at https://github.com/microsoft/Cream/tree/main/TinyViT.
arXiv:2207.10666v1 fatcat:3xkyjmockvhmbltkj3jhsur2sa
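
A sketch of the distillation bookkeeping described above (PyTorch; names are illustrative). Teacher logits are sparsified to top-k probabilities and saved in advance; at training time they are densified, here by spreading the leftover mass uniformly over the non-stored classes (one simple choice, not necessarily the paper's), and used in a KL distillation loss:

```python
import torch
import torch.nn.functional as F

def sparsify(teacher_logits, k=10):
    """Keep only the top-k teacher probabilities for cheap storage."""
    probs = teacher_logits.softmax(dim=-1)
    topv, topi = probs.topk(k, dim=-1)
    return topv, topi  # these two tensors are what gets saved to disk

def kd_loss(student_logits, topv, topi, T=1.0):
    """Distill from densified sparse teacher probabilities."""
    num_classes = student_logits.size(-1)
    k = topv.size(-1)
    # Spread the remaining probability mass uniformly, then restore top-k.
    rest = (1.0 - topv.sum(-1, keepdim=True)) / (num_classes - k)
    dense = rest.expand(-1, num_classes).clone()
    dense.scatter_(-1, topi, topv)
    log_p = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p, dense, reduction="batchmean") * T * T

teacher = torch.randn(4, 1000)
student = torch.randn(4, 1000, requires_grad=True)
v, i = sparsify(teacher, k=10)
print(kd_loss(student, v, i).item())
```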

Rethinking and Improving Relative Position Encoding for Vision Transformer [article]

Kan Wu and Houwen Peng and Minghao Chen and Jianlong Fu and Hongyang Chao
2021 arXiv   pre-print
Relative position encoding (RPE) is important for transformers to capture the sequence ordering of input tokens. Its general efficacy has been proven in natural language processing. However, in computer vision its efficacy is not well studied and even remains controversial, e.g., can relative position encoding work as well as absolute position encoding? To clarify this, we first review existing relative position encoding methods and analyze their pros and cons when applied in vision transformers. We then propose new relative position encoding methods dedicated to 2D images, called image RPE (iRPE). Our methods consider directional relative distance modeling as well as the interactions between queries and relative position embeddings in the self-attention mechanism. The proposed iRPE methods are simple and lightweight and can be easily plugged into transformer blocks. Experiments demonstrate that, solely due to the proposed encoding methods, DeiT and DETR obtain up to 1.5% (top-1 Acc) and 1.3% (mAP) stable improvements over their original versions on ImageNet and COCO respectively, without tuning any extra hyperparameters such as learning rate and weight decay. Our ablation and analysis also yield interesting findings, some of which run counter to previous understanding. Code and models are open-sourced at https://github.com/microsoft/Cream/tree/main/iRPE.
arXiv:2107.14222v1 fatcat:p4jexj7nrzbjjmxk7eb2hkyxwq
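
A minimal sketch of a learnable 2D relative position bias added to attention scores (PyTorch). This is illustrative only; iRPE's actual distance bucketing and its query-conditioned interactions are more elaborate:

```python
import torch
import torch.nn as nn

class RelPosBias2D(nn.Module):
    """Learnable bias indexed by clipped 2D relative offsets between
    token positions on an H x W grid, added to attention scores."""
    def __init__(self, H, W, num_heads, max_dist=7):
        super().__init__()
        num_rel = 2 * max_dist + 1
        self.table = nn.Parameter(torch.zeros(num_heads, num_rel, num_rel))
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        pos = torch.stack([ys.flatten(), xs.flatten()], dim=1)   # (HW, 2)
        rel = pos[:, None, :] - pos[None, :, :]                  # (HW, HW, 2)
        rel = rel.clamp(-max_dist, max_dist) + max_dist          # to indices
        self.register_buffer("idx", rel)

    def forward(self, attn_scores):
        # attn_scores: (B, heads, HW, HW); add the per-head relative bias.
        bias = self.table[:, self.idx[..., 0], self.idx[..., 1]]
        return attn_scores + bias

rpb = RelPosBias2D(H=14, W=14, num_heads=3)
scores = torch.randn(2, 3, 196, 196)
print(rpb(scores).shape)  # torch.Size([2, 3, 196, 196])
```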

HEAD: HEtero-Assists Distillation for Heterogeneous Object Detectors [article]

Luting Wang, Xiaojie Li, Yue Liao, Zeren Jiang, Jianlong Wu, Fei Wang, Chen Qian, Si Liu
2022 arXiv   pre-print
Conventional knowledge distillation (KD) methods for object detection mainly concentrate on homogeneous teacher-student detectors. However, the design of a lightweight detector for deployment is often significantly different from that of a high-capacity detector. We therefore investigate KD among heterogeneous teacher-student pairs for wider applicability. We observe that the core difficulty for heterogeneous KD (hetero-KD) is the significant semantic gap between the backbone features of heterogeneous detectors due to their different optimization manners. Conventional homogeneous KD (homo-KD) methods suffer from such a gap and struggle to obtain satisfactory performance directly in hetero-KD. In this paper, we propose the HEtero-Assists Distillation (HEAD) framework, leveraging heterogeneous detection heads as assistants to guide the optimization of the student detector and reduce this gap. In HEAD, the assistant is an additional detection head, with an architecture homogeneous to the teacher head, attached to the student backbone. A hetero-KD is thus transformed into a homo-KD, allowing efficient knowledge transfer from the teacher to the student. Moreover, we extend HEAD into a Teacher-Free HEAD (TF-HEAD) framework for when a well-trained teacher detector is unavailable. Our method achieves significant improvement over current detection KD methods. For example, on the MS-COCO dataset, TF-HEAD helps R18 RetinaNet achieve 33.9 mAP (+2.2), while HEAD further pushes the limit to 36.2 mAP (+4.5).
arXiv:2207.05345v1 fatcat:mwdzoimmxrfkjjv4utegtuu3au
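
A schematic of the assistant-head wiring described above (PyTorch, with stand-in modules; all shapes and layer choices are hypothetical, not the paper's architecture). The assistant head mimics the teacher's head architecture and sits on the student backbone, so the teacher-to-assistant distillation loss is homogeneous while its gradients still train the student backbone:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

student_backbone = nn.Conv2d(3, 64, 3, padding=1)  # stand-in student backbone
assistant_head   = nn.Conv2d(64, 80, 1)            # teacher-style head on student
teacher_head_out = torch.randn(2, 80, 32, 32)      # precomputed teacher outputs

imgs = torch.randn(2, 3, 32, 32)
feats = student_backbone(imgs)
assist_out = assistant_head(feats)
# Homogeneous distillation loss; gradients flow through the assistant
# into the student backbone, narrowing the hetero-KD semantic gap.
loss = F.mse_loss(assist_out, teacher_head_out)
loss.backward()
print(loss.item())
```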
Showing results 1 — 15 out of 504 results