Sequentially Generated Instance-Dependent Image Representations for Classification
[article]
2014
arXiv
pre-print
In this paper, we investigate a new framework for image classification that adaptively generates spatial representations. ...
its adaptive region selection allows the system to perform well in budgeted classification tasks by exploiting a dynamically generated representation of each image. ...
Conclusion: In this paper, we introduced an adaptive representation process for image classification. ...
arXiv:1312.6594v3
fatcat:d5kn6k45izeyrektpu2rogchn4
Sequence-to-Sequence Contrastive Learning for Text Recognition
[article]
2020
arXiv
pre-print
To account for the sequence-to-sequence structure, each feature map is divided into different instances over which the contrastive loss is computed. ...
Experiments on handwritten text and on scene text show that when a text decoder is trained on the learned representations, our method outperforms non-sequential contrastive methods. ...
is dependent on the image width (Fig. 3(b)). ...
arXiv:2012.10873v1
fatcat:2i4yud7ayzfavmangejytjm3iq
Learning Permutation Invariant Representations using Memory Networks
[article]
2020
arXiv
pre-print
Many real-world tasks such as classification of digital histopathology images and 3D object detection involve learning from a set of instances. ...
We evaluated the learning ability of MEM on various toy datasets, point cloud classification, and classification of lung whole slide images (WSIs) into two subtypes of lung cancer---Lung Adenocarcinoma ...
Memory networks enable learning of dependencies among instances of a set by providing an explicit memory representation for each instance in the sequence. ...
arXiv:1911.07984v2
fatcat:i2aqsn27fbf4vg4zhdkxjnobbm
Attention-driven Tree-structured Convolutional LSTM for High Dimensional Data Understanding
[article]
2019
arXiv
pre-print
Thus, ConvLSTM is not suitable for tree-structured image data analysis. ...
In order to address these limitations, we present tree-structured ConvLSTM models for tree-structured image analysis tasks which can be trained end-to-end. ...
This generation process is repeated 15000 times, resulting in a dataset with 10000 training instances, 2000 validation instances, and 3000 testing instances. ...
arXiv:1902.10053v1
fatcat:z5s3rl5gdfciboa7lvajodbp2m
Sketch-a-Net that Beats Humans
2015
Procedings of the British Machine Vision Conference 2015
Prior work on sketch recognition generally follows the conventional image classification paradigm, that is, extracting hand-crafted features from sketch images followed by feeding them to a classifier. ...
Most handcrafted features traditionally used for photos (such as HOG, SIFT and shape context) have been employed, which are often coupled with Bag-of-Words (BoW) to yield a final feature representation ...
doi:10.5244/c.29.7
dblp:conf/bmvc/YuYSXH15
fatcat:ocn4m62sqbhcda6ywob7oqmyje
Short-term Traffic Prediction with Deep Neural Networks: A Survey
[article]
2020
arXiv
pre-print
In this study, we survey recent STTP studies applying deep networks from four perspectives. 1) We summarize input data representation methods according to the number and type of spatial and temporal dependencies ...
In modern transportation systems, an enormous amount of traffic data is generated every day. ...
For instance, in [46], an input image is processed by a vision processing CNN and subsequently passed through a text-generating RNN. ...
arXiv:2009.00712v1
fatcat:rvcz235ugbahhjglkjgvwgxks4
Multimodal Sequential Fashion Attribute Prediction
2019
Information
Compared to other models, the sequential model is also better able to generate sequences of attribute chains not seen during training. ...
We propose to address this task with a sequential prediction model that can learn to capture the dependencies between the different attribute values in the chain. ...
We also thank the anonymous reviewers for their helpful comments.
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/info10100308
fatcat:435kwdcbpbdepnwqt2ewyzpqay
Learning to Continually Learn
[article]
2020
arXiv
pre-print
It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. ...
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. ...
We thank Hava Siegelmann for her vision in creating that program and for including us in it. ...
arXiv:2002.09571v2
fatcat:hdboateo6bdfvmi7fske6tq7le
Discovery of Shifting Patterns in Sequence Classification
[article]
2017
arXiv
pre-print
For instance, we can identify a cropland during its growing season, but it looks similar to a barren land after harvest or before planting. ...
In this paper, we investigate the multi-variate sequence classification problem from a multi-instance learning perspective. ...
[23] adopt a dictionary learning method to detect frequent patterns and then transform sequential data into a pattern-based representation for classification. Wang et al. ...
arXiv:1712.07203v1
fatcat:hki5a5xh3bgwlkmyfaypjybkmy
Sequential Explanations with Mental Model-Based Policies
[article]
2020
arXiv
pre-print
Our results suggest that mental model-based policies (anchored in our proposed state representation) may increase interpretability over multiple sequential explanations, when compared to a random selection ...
This work provides insight into how to select explanations which increase relevant information for users, and into conducting human-grounded experimentation to understand interpretability. ...
Acknowledgments We thank Sam Maldonado for setting up the MOOClet engine back-end server for data collection. ...
arXiv:2007.09028v1
fatcat:vnb6xobu7zfibhym67tqrp2sba
Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review
2018
JAMIA Journal of the American Medical Informatics Association
Results: We surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, ...
Review has grown for two reasons. ...
For instance, in [27], 610,076 patient records from Vanderbilt's Electronic Medical Record were used to perform sequential prediction of medications. ...
doi:10.1093/jamia/ocy068
pmid:29893864
fatcat:ne7weiw7xvc2lp7hfgkzltdnri
HOTR: End-to-End Human-Object Interaction Detection with Transformers
[article]
2021
arXiv
pre-print
, and ii) the classification of the interaction labels. ...
Most existing methods have indirectly addressed this task by detecting human and object instances and individually inferring every pair of the detected instances. ...
The instance decoder transforms the instance queries to instance representations for object detection while the interaction decoder transforms the interaction queries to interaction representations for ...
arXiv:2104.13682v1
fatcat:egbxkcw6lra5fcjim5xqpjl3mi
An Attentive Survey of Attention Models
[article]
2021
arXiv
pre-print
We hope this survey will provide a succinct introduction to attention models and guide practitioners while developing approaches for their applications. ...
embeddings for image classification task. ...
Finally, Transformers have also been used for the image generation task with Image Transformer by [Parmar et al. 2018] and Image GPT, designed to sequentially predict each pixel of an output image given ...
arXiv:1904.02874v3
fatcat:fyqgqn7sxzdy3efib3rrqexs74
A Survey of Deep Learning for Scientific Discovery
[article]
2020
arXiv
pre-print
In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential and graph structured data, associated tasks and different ...
Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge to this is simply knowing where to start. ...
The authors would like to thank Jon Kleinberg, Samy Bengio, Yann LeCun, Chiyuan Zhang, Quoc Le, Arun Chaganty, Simon Kornblith, Aniruddh Raghu, John Platt, Richard Murray, Stu Feldman and Guy Gur-Ari for ...
arXiv:2003.11755v1
fatcat:igy35ko5hfcj5ctp5nck7y2z44
Deeply Exploiting Long-Term View Dependency for 3D Shape Recognition
2019
IEEE Access
Incorporating the aggregation module into a standard convolutional network architecture, we develop an effective method for 3D shape classification and retrieval. ...
Most existing view-based methods treat the views of an object as an unordered set, which ignores the dynamic relations among the views, e.g. sequential semantic dependencies. ...
In many scenarios, the views are generated by a sequential process, e.g. moving a camera around the object. Therefore, the views may contain many sequential or temporal semantic dependencies. ...
doi:10.1109/access.2019.2934650
fatcat:vo7jdyq7qnbnvok6uwhblgzkvm
Showing results 1 — 15 out of 56,439 results