
Auto-Context R-CNN [article]

Bo Li, Tianfu Wu, Lun Zhang, Rufeng Chu
2018-07-08 · arXiv · pre-print
Wu is the corresponding author. … be classified since the sliding window technique is practically prohibitive. …
arXiv:1807.02842v1 · fatcat:3fd6qzv3mncibmt2begdi7eiqq

Image Synthesis From Reconfigurable Layout and Style [article]

Wei Sun, Tianfu Wu
2019-08-20 · arXiv · pre-print
Despite remarkable recent progress on both unconditional and conditional image synthesis, it remains a long-standing problem to learn generative models capable of synthesizing realistic and sharp images from a reconfigurable spatial layout (i.e., bounding boxes + class labels in an image lattice) and style (i.e., structural and appearance variations encoded by latent vectors), especially at high resolution. By reconfigurable, we mean that a model can preserve the intrinsic one-to-many mapping from a given layout to multiple plausible images with different styles, and is adaptive with respect to perturbations of the layout and the style latent code. In this paper, we present a layout- and style-based architecture for generative adversarial networks (termed LostGANs) that can be trained end-to-end to generate images from reconfigurable layout and style. Inspired by the vanilla StyleGAN, the proposed LostGAN consists of two new components: (i) learning fine-grained mask maps in a weakly-supervised manner to bridge the gap between layouts and images, and (ii) learning object instance-specific layout-aware feature normalization (ISLA-Norm) in the generator to realize multi-object style generation. In experiments, the proposed method is tested on the COCO-Stuff and Visual Genome datasets, obtaining state-of-the-art performance. The code and pretrained models are available at .
arXiv:1908.07500v1 · fatcat:hzvw33cy4vck5ahwnvmjg6e6ru

Attentive Normalization [article]

Xilai Li, Wei Sun, Tianfu Wu
2021-03-25 · arXiv · pre-print
Wu is the corresponding author. Classification on ImageNet: https://github.com/iVMCL/AOGNets-v2; detection on MS-COCO: https://github.com/iVMCL/AttentiveNorm_Detection … [19] across three neural …
arXiv:1908.01259v3 · fatcat:wktwntatb5bx7i6jo47wgbf2qm

Recognizing Car Fluents from Video [article]

Bo Li, Tianfu Wu, Caiming Xiong, Song-Chun Zhu
2016-03-26 · arXiv · pre-print
Physical fluents, a term originally used by Newton [40], refer to time-varying object states in dynamic scenes. In this paper, we are interested in inferring the fluents of vehicles from video. For example, a door (hood, trunk) is opened or closed through various actions, or a light blinks to signal a turn. Recognizing these fluents has broad applications, yet it has received scant attention in the computer vision literature. Car fluent recognition entails a unified framework for car detection, car part localization, and part status recognition, which is made difficult by large structural and appearance variations, low resolutions, and occlusions. This paper learns a spatial-temporal And-Or hierarchical model to represent car fluents. The learning of this model is formulated under the latent structural SVM framework. Since there is no publicly available related dataset, we collect and annotate a car fluent dataset consisting of car videos with diverse fluents. In experiments, the proposed method outperforms several highly related baseline methods in terms of car fluent recognition and car part localization.
arXiv:1603.08067v1 · fatcat:zskp3jzkdvgefgz27ekb7otaoi

Learning Auxiliary Monocular Contexts Helps Monocular 3D Object Detection [article]

Xianpeng Liu, Nan Xue, Tianfu Wu
2021-12-09 · arXiv · pre-print
Wu is the corresponding author. …
arXiv:2112.04628v1 · fatcat:e5ev2xesvjgmpe5cfm57cscu6i

Learning Patch-to-Cluster Attention in Vision Transformer [article]

Ryan Grainger, Thomas Paniagua, Xi Song, Tianfu Wu
2022-03-22 · arXiv · pre-print
Wu is the corresponding author. … (ViT) model [12, 40] has witnessed remarkable progress in computer vision. …
arXiv:2203.11987v1 · fatcat:c6tzn5pwf5bbfeiuqyzdjkkxhe

Event Driven Fusion [article]

Siddharth Roheda, Hamid Krim, Zhi-Quan Luo, Tianfu Wu
2021-03-05 · arXiv · pre-print
This paper presents a technique that exploits the occurrence of certain events, as observed by different sensors, to detect and classify objects. The technique explores the extent of dependence between features observed by the sensors and generates more informed probability distributions over the events. Provided some additional information about the features of the object, this fusion technique can outperform existing decision-level fusion approaches that may not take into account the relationship between different features. Furthermore, this paper addresses the issue of coping with damaged sensors when using the model, by learning a hidden space between sensor modalities that can be exploited to safeguard detection performance.
arXiv:1904.11520v3 · fatcat:5tj3ofsqqzggddtb66b5rnuxiu

Refining Self-Supervised Learning in Imaging: Beyond Linear Metric [article]

Bo Jiang, Hamid Krim, Tianfu Wu, Derya Cansever
2022-02-25 · arXiv · pre-print
We introduce in this paper a new statistical perspective, exploiting the Jaccard similarity metric as a measure-based metric to effectively invoke non-linear features in the loss of self-supervised contrastive learning. Specifically, our proposed metric may be interpreted as a dependence measure between two adapted projections learned from the so-called latent representations. This is in contrast to the cosine similarity measure in the conventional contrastive learning model, which accounts for correlation information. To the best of our knowledge, this effectively non-linearly fused information embedded in the Jaccard similarity is novel to self-supervised learning, with promising results. The proposed approach is compared to two state-of-the-art self-supervised contrastive learning methods on three image datasets. We not only demonstrate its amenable applicability in current ML problems, but also its improved performance and training efficiency.
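To illustrate why a Jaccard-style similarity captures more than correlation, a minimal sketch follows. The weighted (Ruzicka) Jaccard form on non-negative vectors is an assumption chosen for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def weighted_jaccard(u, v):
    """Weighted (Ruzicka) Jaccard similarity for non-negative vectors:
    sum of element-wise minima over sum of element-wise maxima.
    Illustrative stand-in for the measure-based metric described above."""
    u = np.maximum(np.asarray(u, float), 0.0)  # assumed non-negative projections
    v = np.maximum(np.asarray(v, float), 0.0)
    denom = np.maximum(u, v).sum()
    return float(np.minimum(u, v).sum() / denom) if denom > 0 else 1.0

def cosine(u, v):
    """Conventional contrastive similarity: accounts for correlation only."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([1.0, 2.0, 0.0, 4.0])
b = np.array([1.0, 1.0, 3.0, 4.0])
sim_jac = weighted_jaccard(a, b)  # reacts to element-wise magnitude overlap
sim_cos = cosine(a, b)            # reacts to angular alignment only
```

Unlike the cosine, the weighted Jaccard is not invariant to rescaling one input, which is one way non-linear (magnitude) information enters the loss.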
arXiv:2202.12921v1 · fatcat:fpemkuygyjedfb7ct4y3xkxo44

Zero-Shot Learning posed as a Missing Data Problem [article]

Bo Zhao, Botong Wu, Tianfu Wu, Yizhou Wang
2017-02-21 · arXiv · pre-print
This paper presents a method of zero-shot learning (ZSL) that poses ZSL as a missing data problem, rather than a missing label problem. Specifically, most existing ZSL methods focus on learning mapping functions from the image feature space to the label embedding space, whereas the proposed method explores a simple yet effective transductive framework in the reverse direction: our method estimates the data distribution of unseen classes in the image feature space by transferring knowledge from the label embedding space. In experiments, our method outperforms the state of the art on two popular datasets.
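The reverse-direction transfer can be sketched in a few lines. The convex-combination form below (estimating an unseen class's feature-space mean from its nearest seen classes in attribute space) is a hypothetical simplification for illustration, not the paper's exact estimator:

```python
import numpy as np

def estimate_unseen_mean(unseen_attr, seen_attrs, seen_means, k=3):
    """Estimate an unseen class's mean in image-feature space by transferring
    structure from the label-embedding (attribute) space.
    Simplified illustration: softmax-weighted mix of the k most similar
    seen-class feature means."""
    sims = seen_attrs @ unseen_attr            # attribute-space similarities
    top = np.argsort(sims)[-k:]                # k most similar seen classes
    w = np.exp(sims[top])
    w /= w.sum()                               # convex combination weights
    return w @ seen_means[top]                 # estimated mean in feature space

# toy demo: the unseen class matches seen class 1 exactly in attribute space
seen_attrs = np.eye(3)
seen_means = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
est = estimate_unseen_mean(np.array([0.0, 1.0, 0.0]), seen_attrs, seen_means, k=1)
```

With such estimated distributions in hand, unseen test images can be assigned to the nearest estimated class distribution, which is what makes the "missing data" (rather than "missing label") view operational.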
arXiv:1612.00560v2 · fatcat:p3vildklijdxfei6xnzljrafau

Inducing Hierarchical Compositional Model by Sparsifying Generator Network [article]

Xianglei Xing, Tianfu Wu, Song-Chun Zhu, Ying Nian Wu
2020-06-20 · arXiv · pre-print
This paper proposes to learn a hierarchical compositional AND-OR model for interpretable image synthesis by sparsifying the generator network. The proposed method adopts the scene-objects-parts-subparts-primitives hierarchy in image representation. A scene has different types (i.e., OR), each of which consists of a number of objects (i.e., AND). This can be recursively formulated across the scene-objects-parts-subparts hierarchy and is terminated at the primitive level (e.g., wavelet-like basis). To realize this AND-OR hierarchy in image synthesis, we learn a generator network that consists of the following two components: (i) each layer of the hierarchy is represented by an over-complete set of convolutional basis functions, with off-the-shelf convolutional neural architectures exploited to implement the hierarchy; and (ii) sparsity-inducing constraints are introduced in end-to-end training, which induce a sparsely activated and sparsely connected AND-OR model from the initially densely connected generator network. A straightforward sparsity-inducing constraint is utilized: only the top-k basis functions are allowed to be activated at each layer (where k is a hyper-parameter). The learned basis functions are also capable of image reconstruction to explain the input images. In experiments, the proposed method is tested on four benchmark datasets. The results show that meaningful and interpretable hierarchical representations are learned, with better quality of image synthesis and reconstruction than baselines.
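The top-k constraint described above is simple enough to sketch directly. The function below keeps only the k largest-magnitude activations per sample and zeroes the rest; treating it as a per-sample operation over a flat activation vector is an assumption made for brevity:

```python
import numpy as np

def topk_sparsify(activations, k):
    """Keep only the k largest-magnitude activations along the last axis,
    zeroing the rest (the straightforward top-k sparsity constraint)."""
    a = np.asarray(activations, dtype=float)
    out = np.zeros_like(a)
    idx = np.argsort(np.abs(a), axis=-1)[..., -k:]   # indices of top-k entries
    np.put_along_axis(out, idx, np.take_along_axis(a, idx, axis=-1), axis=-1)
    return out

sparse = topk_sparsify([0.1, -3.0, 2.0, 0.5], k=2)  # keeps -3.0 and 2.0, zeros the rest
```

Applied at every layer during training, a constraint of this shape is what turns a densely connected generator into a sparsely activated, sparsely connected AND-OR structure.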
arXiv:1909.04324v2 · fatcat:xpzlcqk4ffgldff3aghztkhkhq

Zero-Shot Learning Posed as a Missing Data Problem

Bo Zhao, Botong Wu, Tianfu Wu, Yizhou Wang
2017 · 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), IEEE
This paper presents a method of zero-shot learning (ZSL) that poses ZSL as a missing data problem, rather than a missing label problem. Specifically, most existing ZSL methods focus on learning mapping functions from the image feature space to the label embedding space, whereas the proposed method explores a simple yet effective transductive framework in the reverse direction: our method estimates the data distribution of unseen classes in the image feature space by transferring knowledge from the label embedding space. Following the transductive setting, we leverage unlabeled data to refine the initial estimation. In experiments, our method achieves the highest classification accuracies on two popular datasets, namely, 96.00% on AwA and 60.24% on CUB.
doi:10.1109/iccvw.2017.310 · dblp:conf/iccvw/ZhaoWWW17 · fatcat:wjgybk6wpngkldghrvqbih4h2m

Lupus nephritis - alarmins may sound the alarm?

Tianfu Wu, Chandra Mohan
2012 · Arthritis Research &amp; Therapy, Springer Nature
The recent report by Abdulahad and colleagues [1] that urinary HMGB1 may be a marker of lupus nephritis is the latest addition to a growing body of literature implicating a central role for this molecule in systemic lupus erythematosus (SLE). HMGB1 is a prototype of the alarmin family of molecules, implicated as autoadjuvants that serve to amplify the immune response. Perhaps the first link to SLE emerged in 2005, when it was first reported to be expressed at high levels in the skin of … lupus [2]. Thereafter, it was noted to be elevated in the serum of SLE patients as well, using an immunoblot approach [3]. Following that, there has been a steady trickle of reports validating the elevated levels of serum HMGB1 in SLE, as in the latest two studies [1, 4]. Indeed, elevated HMGB1 has been noted not only in the serum, but also in the kidneys of patients with lupus nephritis, as well as in other chronic renal diatheses [4, 5]. How HMGB1 impacts the pathogenesis of SLE has been extensively studied. We now know that cell death, as well as cell activation by inflammatory triggers, can promote the translocation of nuclear HMGB1 to the cytoplasm and its release into the extracellular milieu. Binding of released HMGB1 to a variety of receptors, including receptor for advanced glycation end products (RAGE), Toll-like receptor (TLR)2, TLR4, TLR9, Mac-1, syndecan-1, phosphacan protein-tyrosine phosphatase-ζ/β, and CD24, evokes the transcription and elaboration of several pro-inflammatory cytokines and type I interferons. Hence, the elevated levels of HMGB1 may, in part, explain the prominent type I interferon signature that characterizes SLE, as well as the documented increases in multiple pro-inflammatory cytokines. Based on the findings from multiple studies, elevated HMGB1 activity may also explain, at least in part, the phenotypic changes noted in dendritic cells and T cells in SLE. Two additional studies further incriminated HMGB1 in the pathogenesis of lupus. Tian and co-workers [6] demonstrated that the ability of DNA-containing immune complexes from SLE sera to stimulate plasmacytoid DCs and autoreactive B cells was contingent upon HMGB1 binding to immune complexes. Thus, HMGB1-DNA-anti-DNA complexes played a critical role in amplifying the auto-inflammatory cascade by engaging the corresponding receptors for DNA (that is, the B-cell receptor and TLR9) and HMGB1 (that is, RAGE, TLRs, or other receptors).
A year later, Voll's group [7] demonstrated that the injection of nucleosomes from apoptotic cells could elicit lupus in mice, and implicated a role for HMGB1-bound nucleosomes in this process. Collectively, these studies substantiate the autoadjuvant role of HMGB1 in driving systemic lupus. The recent reports on lupus nephritis [1, 4] shift our focus to the kidneys, in the context of SLE. It is evident from these studies that HMGB1 can be expressed in the kidneys and urine of patients with lupus nephritis, correlating with disease activity. How HMGB1 expressed within the kidneys might amplify local inflammation is an open question. We know that the receptors for HMGB1 are expressed on multiple intrinsic renal cells, including proximal tubular cells, podocytes, mesangial cells, and endothelial cells, in addition to their expression on the infiltrating macrophages. Indeed, increased expression of RAGE may play a role in several chronic kidney diseases, including lupus nephritis [8]. Moreover, …
Abstract: A growing body of literature has documented the elevated levels of the alarmin HMGB1 in lupus skin and serum. Two recent reports highlight the increased expression of HMGB1 in lupus nephritis, within the diseased kidneys or in the urine. Taken together with previous reports, these findings suggest that the interaction of HMGB1 with a variety of receptors, including receptor for advanced glycation end products (RAGE) and Toll-like receptors, might play a role in the pathogenesis of lupus nephritis. These studies introduce urinary HMGB1 as a novel biomarker candidate in lupus nephritis. Whether alarmins would be effective in sounding the alarm at the incipience of renal damage remains to be ascertained.
doi:10.1186/ar4109 · pmid:23270666 · pmcid:PMC3674625 · fatcat:q4sfulkupnfhtekomzytqhw7my

Autoantibodies as Potential Biomarkers in Breast Cancer

Jingyi Qiu, Bailey Keyser, Zuan-Tao Lin, Tianfu Wu
2018-07-13 · Biosensors, MDPI AG
Breast cancer is a major cause of mortality in women; however, technologies for early-stage screening and diagnosis (e.g., mammography and other imaging technologies) are not optimal for the accurate detection of cancer. This creates demand for a more effective diagnostic means to replace, or be complementary to, existing technologies for early discovery of breast cancer. Cancer neoantigens could reflect tumorigenesis, but they are hardly detectable at the early stage. Autoantibodies, however, are biologically amplified and hence may be measurable early on, making them promising biomarkers to discriminate breast cancer from healthy tissue accurately. In this review, we summarize the recent findings of breast-cancer-specific antigens and autoantibodies, which may be useful in early detection, disease stratification, and monitoring of treatment responses of breast cancer.
doi:10.3390/bios8030067 · pmid:30011807 · fatcat:fuytmtduc5bhpop4fxxerbw7ge

Learning Spatial Pyramid Attentive Pooling in Image Synthesis and Image-to-Image Translation [article]

Wei Sun, Tianfu Wu
<span title="2019-01-18">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Image synthesis and image-to-image translation are two important generative learning tasks. Remarkable progress has been made by learning Generative Adversarial Networks (GANs) goodfellow2014generative and cycle-consistent GANs (CycleGANs) zhu2017unpaired, respectively. This paper presents a method of learning Spatial Pyramid Attentive Pooling (SPAP), a novel architectural unit that can be easily integrated into both generators and discriminators in GANs and CycleGANs. The proposed SPAP integrates the Atrous spatial pyramid chen2018deeplab, a proposed cascade attention mechanism, and residual connections he2016deep. It leverages the advantages of the three components to facilitate effective end-to-end generative learning: (i) the capability of fusing multi-scale information by ASPP; (ii) the capability of capturing the relative importance between spatial locations (especially multi-scale context) or feature channels by attention; (iii) the capability of preserving information and improving optimization feasibility by residual connections. Coarse-to-fine and fine-to-coarse SPAP variants are studied, and intriguing attention maps are observed in both tasks. In experiments, the proposed SPAP is tested in GANs on the CelebA-HQ-128 dataset karras2017progressive, and in CycleGANs on image-to-image translation datasets including the Cityscapes dataset cordts2016cityscapes and the Facades and Aerial Maps datasets zhu2017unpaired, obtaining better performance in both settings.
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.06322v1">arXiv:1901.06322v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/3fcbl2bw2vbs7bolralqt7dnia">fatcat:3fcbl2bw2vbs7bolralqt7dnia</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200825031714/https://arxiv.org/pdf/1901.06322v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.06322v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Multiple ectopic hepatocellular carcinomas in the pancreas

Zhigui Li, Xiaoting Wu, Tianfu Wen, Chuan Li, Wen Peng
<span title="">2017</span> <i title="Ovid Technologies (Wolters Kluwer Health)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2fgvdq2bsrdtbigpwmptmgucfm" style="color: black;">Medicine</a> </i> &nbsp;
RATIONALE: Ectopic liver tissue can develop at various sites near the liver. Ectopic hepatocellular carcinomas (HCCs) arising from ectopic liver tissue are clinically rare, and metastasis after operation has been observed in only a very rare case. PATIENT CONCERNS: We report an extremely rare case in which multiple masses were identified in the head and body of the pancreas. DIAGNOSES: Ectopic hepatocellular carcinomas. INTERVENTIONS: The masses were removed by surgical resection. ... pathological analysis showed that both masses were ectopic HCC. OUTCOMES: The patient was still alive without metastasis or relapse. LESSONS: A literature review for this rare condition is also presented, highlighting the risk of ectopic HCC and the good prognosis after operation for ectopic HCC. Abbreviations: AFP = alpha-fetoprotein, CEA = carcinoembryonic antigen, HCC = hepatocellular carcinoma, HCG = human chorionic gonadotropin.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1097/md.0000000000006747">doi:10.1097/md.0000000000006747</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/28746170">pmid:28746170</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC5627796/">pmcid:PMC5627796</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/noiizmy6mzbnfpxxs45bmudqey">fatcat:noiizmy6mzbnfpxxs45bmudqey</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190304121237/http://pdfs.semanticscholar.org/c67a/4a47ec3b3625cfb6f388638e07ecc5d2fddb.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1097/md.0000000000006747"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5627796" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>
Showing results 1 &mdash; 15 out of 554 results