3,498 Hits in 2.2 sec

Aesthetic and Musicological Interpretation of the Fifth Erhu Rhapsody by Wang Jianmin

2021 Art and Performance Letters  
Wang Jianmin not only makes measured use of national music elements but also strives to demonstrate a particular characteristic of national music.  ...  Wang Jianmin's series of five erhu rhapsodies undoubtedly pushes erhu art to a new realm and a new height.  ...  Wang Jianmin are to raise one-quarter and three-quarter notes, and to lower one-quarter and three-quarter notes.  ...
doi:10.23977/artpl.2021.020715 fatcat:6fasud7ypfcgtaq2tegjiomsbe

Deep Triplet Quantization [article]

Bin Liu, Yue Cao, Mingsheng Long, Jianmin Wang, Jingdong Wang
2019 arXiv   pre-print
Deep hashing establishes efficient and effective image retrieval by end-to-end learning of deep representations and hash codes from similarity data. We present a compact coding solution, focusing on the deep learning to quantization approach, which has shown superior performance over hashing solutions for similarity retrieval. We propose Deep Triplet Quantization (DTQ), a novel approach to learning deep quantization models from similarity triplets. To enable more effective triplet training, we design a new triplet selection approach, Group Hard, that randomly selects hard triplets in each image group. To generate compact binary codes, we further apply a triplet quantization with weak orthogonality during triplet training. The quantization loss reduces codebook redundancy and enhances the quantizability of deep representations through back-propagation. Extensive experiments demonstrate that DTQ can generate high-quality and compact binary codes, which yield state-of-the-art image retrieval performance on three benchmark datasets: NUS-WIDE, CIFAR-10, and MS-COCO.
arXiv:1902.00153v1 fatcat:s4vopgkgdfhsjpdvwjkuqijfuu
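As an aside on how a triplet loss can be combined with a quantization penalty, here is a minimal PyTorch sketch. The function name, the single flat codebook, and the margin/weight values are illustrative assumptions; the paper's actual objective (triplet quantization with weak orthogonality, plus the Group Hard selector) is not reproduced here.

```python
import torch
import torch.nn.functional as F

def triplet_quantization_loss(anchor, positive, negative, codebook,
                              margin=1.0, lam=0.1):
    # Standard triplet margin loss on the deep representations.
    trip = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    # Quantization loss: pull each feature toward its nearest codeword,
    # encouraging representations that quantize with low error.
    feats = torch.cat([anchor, positive, negative], dim=0)  # (3B, D)
    dists = torch.cdist(feats, codebook)                    # (3B, K)
    nearest = codebook[dists.argmin(dim=1)]                 # (3B, D)
    quant = F.mse_loss(feats, nearest)
    return trip + lam * quant
```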

Answer to November 2016 Photo Quiz

Xiaoxia Hu, Yi Sun, Xinxin Xia, Lili Xu, Bihe Min, Daoyin Zhou, Libing Wang, Hui Wang, Ziguang Niu, Miaoxia He, Jianmin Wang, Anmei Deng (+2 others)
2016 Journal of Clinical Microbiology  
doi:10.1128/jcm.03295-14 pmid:27935833 pmcid:PMC5078568 fatcat:rkdqslw6dnavtgmcsegio53uhe

Matching Consecutive Subpatterns Over Streaming Time Series [article]

Rong Kang, Chen Wang, Peng Wang, Yuting Ding, Jianmin Wang
2018 arXiv   pre-print
Wang et al. [20] propose DSTree, which is a data-adaptive and dynamic segmentation index on time series.  ... 
arXiv:1805.06757v1 fatcat:o7o4pxxytrfkpdotm3pu2qjocy

Event Data Quality: A Survey [article]

Ruihong Huang, Jianmin Wang
2020 arXiv   pre-print
Wang et al. propose to recover missing events with process model constraints [45, 46].  ...  Wang et al. [44] propose an event data repairing method with process model constraints.  ... 
arXiv:2012.07309v1 fatcat:qz5256pkcbe5lilvieq42ua3k4

Photo Quiz: A Man with Lymphadenopathy and Lethargy

Xiaoxia Hu, Yi Sun, Xinxin Xia, Lili Xu, Bihe Min, Daoyin Zhou, Libing Wang, Hui Wang, Ziguang Niu, Miaoxia He, Jianmin Wang, Anmei Deng (+2 others)
2016 Journal of Clinical Microbiology  
A 45-year-old Chinese male presented with lymphadenopathy, irregular fever, and lethargy. The patient had traveled to the Gabonese Republic in West Africa as a member of a labor export service and had stayed there for 6 months every year for the past 4 years. On examination, multiple papules were observed on the skin of the front chest (Fig. 1A). Lymphadenopathy was observed in the right neck and the bilateral axillae. Mild splenomegaly was present. The serum immunoglobulin IgM level was elevated to 27.5 g/liter. An axillary node biopsy demonstrated reactive follicular hyperplasia, and no malignancy was noted (Fig. 1B). Numerous structures were found in the peripheral blood smear (Fig. 1C).
doi:10.1128/jcm.03285-14 pmid:27935830 pmcid:PMC5078535 fatcat:ry5zjasc3fc5ppibaugarwkrui

Bi-tuning of Pre-trained Representations [article]

Jincheng Zhong, Ximei Wang, Zhi Kou, Jianmin Wang, Mingsheng Long
2020 arXiv   pre-print
It is common within the deep learning community to first pre-train a deep neural network on a large-scale dataset and then fine-tune the pre-trained model to a specific downstream task. Recently, both supervised and unsupervised pre-training approaches to learning representations have achieved remarkable advances, exploiting the discriminative knowledge of labels and the intrinsic structure of data, respectively. It follows natural intuition that both the discriminative knowledge and the intrinsic structure of the downstream task can be useful for fine-tuning; however, existing fine-tuning methods mainly leverage the former and discard the latter. A question arises: how can we fully explore the intrinsic structure of data to boost fine-tuning? In this paper, we propose Bi-tuning, a general learning framework for fine-tuning both supervised and unsupervised pre-trained representations to downstream tasks. Bi-tuning generalizes vanilla fine-tuning by integrating two heads upon the backbone of pre-trained representations: a classifier head with an improved contrastive cross-entropy loss to better leverage the label information in an instance-contrast way, and a projector head with a newly designed categorical contrastive learning loss to fully exploit the intrinsic structure of data in a category-consistent way. Comprehensive experiments confirm that Bi-tuning achieves state-of-the-art results for fine-tuning tasks of both supervised and unsupervised pre-trained models by large margins (e.g., a 10.7% absolute rise in accuracy on CUB in the low-data regime).
arXiv:2011.06182v1 fatcat:aebujnhitrapdfhcbhvaav5sea
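To make the two-head structure described above concrete, here is a minimal PyTorch sketch: a backbone with a classifier head and a projector head, plus a generic supervised contrastive loss as a stand-in for the projector objective. The class and function names, dimensions, and temperature are assumptions; the paper's improved contrastive cross-entropy and categorical contrastive losses are not reproduced exactly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiHeadModel(nn.Module):
    """Backbone with two heads, mirroring the structure in the abstract."""
    def __init__(self, backbone, feat_dim, num_classes, proj_dim=128):
        super().__init__()
        self.backbone = backbone
        self.classifier = nn.Linear(feat_dim, num_classes)  # label-driven head
        self.projector = nn.Linear(feat_dim, proj_dim)      # structure-driven head

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), F.normalize(self.projector(h), dim=1)

def category_contrastive(z, labels, tau=0.07):
    # Pull together projections that share a label, push apart the rest.
    sim = z @ z.t() / tau
    sim.fill_diagonal_(-1e9)                       # exclude self-pairs
    mask = (labels[:, None] == labels[None, :]).float()
    mask.fill_diagonal_(0)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(mask * log_prob).sum(1).div(mask.sum(1).clamp(min=1)).mean()
```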

Association Schemes of Quadratic Forms and Symmetric Bilinear Forms

Yangxian Wang, Chunsen Wang, Changli Ma, Jianmin Ma
2012 Journal of Algebraic Combinatorics  
Let X_n and Y_n be the sets of quadratic forms and symmetric bilinear forms on an n-dimensional vector space V over F_q, respectively. The orbits of GL_n(F_q) on X_n × X_n define an association scheme Qua(n, q). The orbits of GL_n(F_q) on Y_n × Y_n also define an association scheme Sym(n, q). Our main results are: Qua(n, q) and Sym(n, q) are formally dual; when q is odd, Qua(n, q) and Sym(n, q) are isomorphic, and both are primitive and self-dual. Next we assume that q is even. Qua(n, q) is imprimitive; when (n, q) ≠ (2, 2), all subschemes of Qua(n, q) are trivial, i.e., of class one, and the quotient scheme is isomorphic to Alt(n, q), the association scheme of alternating forms on V. The dual statements hold for Sym(n, q).
doi:10.1023/a:1022978613368 fatcat:y34gv4yrjfdthhjokl2uvn3bjq
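For readers unfamiliar with orbit schemes, the construction behind Qua(n, q) can be written out in a few lines. The action shown is the natural change-of-basis action on forms; treat the precise convention as an assumption about the paper's notation.

```latex
% Orbit construction of Qua(n,q), in the notation of the entry above.
% GL_n(F_q) acts on quadratic forms by change of basis:
\[
  (g \cdot Q)(x) = Q(xg), \qquad g \in GL_n(\mathbb{F}_q),\ Q \in X_n .
\]
% The relations of the scheme are the orbits of the diagonal action on pairs:
\[
  R_i = \mathcal{O}_i \subseteq X_n \times X_n, \qquad
  \mathcal{O}_i \ \text{an orbit of}\ GL_n(\mathbb{F}_q)\
  \text{on}\ X_n \times X_n ,
\]
% and (X_n, {R_i}) is the association scheme Qua(n,q);
% Sym(n,q) is built the same way from Y_n.
```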

Spatiotemporal Pyramid Network for Video Action Recognition [article]

Yunbo Wang, Mingsheng Long, Jianmin Wang, Philip S. Yu
2019 arXiv   pre-print
Wang et al. [32] model long-term temporal structures by proposing a segmental network architecture with sparse sampling. Feichtenhofer et al.  ... 
arXiv:1903.01038v1 fatcat:7jwlmqjwavhehm6s2yz3reuyde

Editorial

Barbara Weber, Florian Daniel, Jianmin Wang
2014 Information Systems  
doi:10.1016/j.is.2014.05.001 fatcat:kznljhjszfcrzleux3p6w3h2uu

Partial Adversarial Domain Adaptation [article]

Zhangjie Cao, Lijia Ma, Mingsheng Long, Jianmin Wang
2018 arXiv   pre-print
Domain adversarial learning aligns feature distributions across the source and target domains in a two-player minimax game. Existing domain adversarial networks generally assume an identical label space across different domains. In the presence of big data, there is strong motivation for transferring deep models from existing big domains to unknown small domains. This paper introduces partial domain adaptation as a new domain adaptation scenario, which relaxes the fully shared label space assumption to one where the source label space subsumes the target label space. Previous methods typically match the whole source domain to the target domain, which makes them vulnerable to negative transfer in the partial domain adaptation problem due to the large mismatch between label spaces. We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes for training both the source classifier and the domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space. Experiments show that PADA exceeds state-of-the-art results for partial domain adaptation tasks on several datasets.
arXiv:1808.04205v1 fatcat:yxjehnhjjzhx5p26b62vibdc2q
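The down-weighting idea lends itself to a short sketch: estimate how strongly each source class appears in the target domain by averaging the classifier's predictions over target data, then scale per-example losses by those weights. The normalization and the way the weights enter the loss are assumptions here, not the paper's exact formulation.

```python
import torch

def estimate_class_weights(target_logits):
    # Average predicted class probabilities over the target domain;
    # source classes absent from the target get weights near zero.
    probs = torch.softmax(target_logits, dim=1)   # (n_target, num_classes)
    gamma = probs.mean(dim=0)
    return gamma / gamma.max()                    # normalize to [0, 1]

# Hypothetical usage: scale each source example's cross-entropy by the
# weight of its class before averaging.
# w = estimate_class_weights(target_logits)
# loss = (w[source_labels] * ce_per_example).mean()
```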

Flexible Attributed Network Embedding [article]

Enya Shen, Zhidong Cao, Changqing Zou, Jianmin Wang
2018 arXiv   pre-print
Network embedding aims to encode a network by learning an embedding vector for each node. Networks often carry property information that is highly informative about a node's position and role, yet most network embedding methods fail to utilize this information during representation learning. In this paper, we propose a novel framework, FANE, to integrate structure and property information in the network embedding process. In FANE, we design a network to unify the heterogeneity of the two information sources, and define a new random walking strategy to leverage property information and make the two information sources complement each other. FANE is conceptually simple and empirically powerful. It improves over state-of-the-art methods by over 5% on the Cora classification task and by more than 10% on the WebKB classification task. Experiments also show that our results improve over state-of-the-art methods by a growing margin as the training size increases. Moreover, qualitative visualization shows that our framework is helpful for exploring network property information. In all, we present a new way to efficiently learn state-of-the-art task-independent representations in complex attributed networks. The source code and datasets of this paper can be obtained from https://github.com/GraphWorld/FANE.
arXiv:1811.10789v1 fatcat:nshfq7t3qjbbth74rtrsareawq
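One way such a unified walk over structure and properties can look is sketched below: with some probability the walk hops through an attribute onto another node sharing that attribute, otherwise it follows a structural edge. The data layout, hop probability, and function name are illustrative assumptions rather than FANE's actual definition.

```python
import random

def attributed_random_walk(adj, node_attrs, attr_nodes, start,
                           length=10, p_attr=0.3):
    # adj: node -> list of neighbor nodes (structural edges)
    # node_attrs: node -> list of attribute ids
    # attr_nodes: attribute id -> list of nodes carrying it
    walk = [start]
    cur = start
    for _ in range(length - 1):
        if node_attrs.get(cur) and random.random() < p_attr:
            attr = random.choice(node_attrs[cur])   # hop onto an attribute
            cur = random.choice(attr_nodes[attr])   # land on a node sharing it
        else:
            neighbors = adj.get(cur, [])
            if not neighbors:
                break                               # dead end: stop the walk
            cur = random.choice(neighbors)
        walk.append(cur)
    return walk
```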

Multi-Adversarial Domain Adaptation [article]

Zhongyi Pei, Zhangjie Cao, Mingsheng Long, Jianmin Wang
2018 arXiv   pre-print
Transfer learning (Pan and Yang 2010) bridges different domains or tasks to mitigate the burden of manual labeling for machine learning (Pan et al. 2011; Duan, Tsang, and Xu 2012; Zhang et al. 2013; Wang  ... 
arXiv:1809.02176v1 fatcat:mhigbmw4mncebkm7owndsxtjiq

Deep Priority Hashing [article]

Zhangjie Cao, Ziping Sun, Mingsheng Long, Jianmin Wang, Philip S. Yu
2018 arXiv   pre-print
Wang et al. [32] have provided a comprehensive literature survey that covers most of the important methods and latest advances.  ... 
arXiv:1809.01238v1 fatcat:qc5buumbg5dsdds33dgg3rgn6e

Adversarial Learning for Zero-shot Domain Adaptation [article]

Jinghua Wang, Jianmin Jiang
2020 arXiv   pre-print
Zero-shot domain adaptation (ZSDA) is a category of domain adaptation problems in which neither data samples nor labels are available for parameter learning in the target domain. Under the hypothesis that the shift between a given pair of domains is shared across tasks, we propose a new method for ZSDA that transfers the domain shift from an irrelevant task (IrT) to the task of interest (ToI). Specifically, we first identify an IrT, where dual-domain samples are available, and capture the domain shift with a coupled generative adversarial network (CoGAN) in this task. Then, we train a CoGAN for the ToI and restrict it to carry the same domain shift as the CoGAN for the IrT does. In addition, we introduce a pair of co-training classifiers to regularize the training procedure of the CoGAN in the ToI. The proposed method not only derives machine learning models for the unavailable target-domain data, but also synthesizes the data themselves. We evaluate the proposed method on benchmark datasets and achieve state-of-the-art performance.
arXiv:2009.05214v1 fatcat:mgp36neavjadjp5m6tndxyjx5i
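The constraint that the ToI CoGAN carry the same domain shift as the IrT CoGAN can be approximated by a regularizer that penalizes divergence between the domain-specific parameters of the two models; a minimal sketch, with the parameter pairing and penalty form as assumptions:

```python
import torch

def shift_consistency_penalty(irt_domain_params, toi_domain_params):
    # Sum of squared differences between corresponding domain-specific
    # layers of the two CoGANs, encouraging them to encode the same shift.
    return sum(torch.sum((p - q) ** 2)
               for p, q in zip(irt_domain_params, toi_domain_params))
```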
Showing results 1 — 15 out of 3,498 results