Knowledge as Priors: Cross-Modal Knowledge Generalization for Datasets without Superior Knowledge
[article]
2020
arXiv
pre-print
Our key idea is to generalize the distilled cross-modal knowledge learned from a Source dataset, which contains paired examples from both modalities, to the Target dataset by modeling knowledge as priors ...
We name our method "Cross-Modal Knowledge Generalization" and demonstrate that our scheme results in competitive performance for 3D hand pose estimation on standard benchmark datasets. ...
Conclusion We introduce an end-to-end scheme for Cross-Modal Knowledge Generalization to transfer cross-modal knowledge between source and target datasets where superior modalities are missing. ...
arXiv:2004.00176v1
fatcat:rxcmw2dkivb4rktyasfrc4vnqu
Knowledge As Priors: Cross-Modal Knowledge Generalization for Datasets Without Superior Knowledge
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Our key idea is to generalize the distilled cross-modal knowledge learned from a Source dataset, which contains paired examples from both modalities, to the Target dataset by modeling knowledge as priors ...
We name our method "Cross-Modal Knowledge Generalization" and demonstrate that our scheme results in competitive performance for 3D hand pose estimation on standard benchmark datasets. ...
Conclusion We introduce an end-to-end scheme for Cross-Modal Knowledge Generalization to transfer cross-modal knowledge between source and target datasets where superior modalities are missing. ...
doi:10.1109/cvpr42600.2020.00656
dblp:conf/cvpr/00030CKM20
fatcat:yxfyr2ra6ng2pn6nxqpnmkeq44
From Labels to Priors in Capsule Endoscopy: A Prior Guided Approach for Improving Generalization with Few Labels
[article]
2022
arXiv
pre-print
We propose using freely available domain knowledge as priors to learn more robust and generalizable representations. ...
generalization, as well as scaling to unseen pathology categories. ...
Through this we test cross-dataset and cross-capsule modality transfer, including generalization to new, unseen pathology categories like aphthae, chylous, inflammatory lesion, etc. ...
arXiv:2206.05288v1
fatcat:cp4tigturzd47id5uu5c366gre
Predictive Top-Down Integration of Prior Knowledge during Speech Perception
2012
Journal of Neuroscience
Although sensory detail and prior knowledge both enhanced speech clarity, they had an opposite influence on the evoked response in the superior temporal gyrus. ...
When speech conformed to prior knowledge, subjective perceptual clarity was enhanced. ...
trials without matching prior knowledge. ...
doi:10.1523/jneurosci.5069-11.2012
pmid:22723684
pmcid:PMC6620994
fatcat:vr4j7tdmpzawjmso5dqlujb75q
Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets
[article]
2019
arXiv
pre-print
Cross-modality prior encoding the transformation of CT to pseudo MR images resembling T2w MRI was learned as a generative adversarial deep learning model. ...
A novel deep learning MR segmentation was developed that overcomes the limitation of learning robust models from small datasets by leveraging learned cross-modality priors to augment training. ...
Mageras for his insightful suggestions for improving the clarity of the manuscript. ...
arXiv:1901.11369v2
fatcat:kfo4wsx6nvaxdhudwc3gwnnjl4
Semi-Supervised Speech Recognition via Local Prior Matching
[article]
2020
arXiv
pre-print
We demonstrate that LPM is theoretically well-motivated, simple to implement, and superior to existing knowledge distillation techniques under comparable settings. ...
In this work, we propose local prior matching (LPM), a semi-supervised objective that distills knowledge from a strong prior (e.g. a language model) to provide learning signal to a discriminative model ...
Acknowledgements The authors thank Jacob Kahn, Qiantong Xu, Tatiana Likhomanenko, Anuroop Sriram, Vineel Pratap, Vitaliy Liptchinsky, Ronan Collobert for their help and feedback. ...
arXiv:2002.10336v1
fatcat:usrgthqxfzd6zhwuoyagsq4z5i
Visual Question Answering with Prior Class Semantics
[article]
2020
arXiv
pre-print
We present a novel mechanism to embed prior knowledge in a model for visual question answering. ...
We extend the answer prediction process with a regression objective in a semantic space, in which we project candidate answers using prior knowledge derived from word embeddings. ...
These representations can be learned or initialized using prior knowledge, as described below. ...
arXiv:2005.01239v1
fatcat:ikfhkeohr5cprj7m42z556v2xa
NeRP: Implicit Neural Representation Learning with Prior Embedding for Sparsely Sampled Image Reconstruction
[article]
2021
arXiv
pre-print
In addition, we demonstrate that NeRP is a general methodology that generalizes to different imaging modalities such as CT and MRI. ...
No large-scale data is required to train the NeRP except for a prior image and sparsely sampled measurements. ...
For comparison, the second row shows the reconstruction results without using the prior embedding. ... first two rows, where each column shows the cross-sectional image of the entire 3D volume. ...
arXiv:2108.10991v1
fatcat:k3p2nzbfujhh7jndxnrlg5nxla
Modeling of Intensity Priors for Knowledge-Based Level Set Algorithm in Calvarial Tumors Segmentation
[chapter]
2006
Lecture Notes in Computer Science
In this paper, an automatic knowledge-based framework for level set segmentation of 3D calvarial tumors from Computed Tomography images is presented. ...
The objective of this study is to analyze and validate different approaches in intensity priors modeling with an attention to multiclass problems. ...
In accordance with previously modeled intensity distributions, belief maps for each patient dataset have been generated (Fig. 2). ...
doi:10.1007/11866763_106
fatcat:oitma6exyfaazn3h6m66efsi2i
COVID-19 Screening in Chest X-Ray Images Using Lung Region Priors
2021
IEEE journal of biomedical and health informatics
Firstly, we propose a multi-scale adversarial domain adaptation network (MS-AdaNet) to boost the cross-domain lung segmentation task as the prior knowledge to the classification network. ...
We extend the proposed MS-AdaNet for lung segmentation task on three different public CXR datasets. ...
as prior knowledge to generate multi-appearance images for improving the performance on COVID-19 screening. ...
doi:10.1109/jbhi.2021.3104629
pmid:34388102
pmcid:PMC8843059
fatcat:zzmtmjf6enb5hnw56d53oduj6i
Improving Uncertainty Calibration via Prior Augmented Data
[article]
2021
arXiv
pre-print
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators. ...
propose a solution to this problem by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior ...
An important design choice that we make is for our OOD generator network g φ (·) to take a dataset X as input to produce distributions of OOD data. ...
arXiv:2102.10803v1
fatcat:6vp7okim3rhpjlgt5xhqdylgny
Unsupervised Image-to-Image Translation with Generative Prior
[article]
2022
arXiv
pre-print
Extensive experiments demonstrate the superiority of our versatile framework over state-of-the-art methods in robust, high-quality and diversified translations, even for challenging and distant domains ...
Our key insight is to leverage the generative prior from pre-trained class-conditional GANs (e.g., BigGAN) to learn rich content correspondences across various domains. ...
This study is supported under the RIE2020 Industry Alignment Fund -Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). ...
arXiv:2204.03641v1
fatcat:qrwswsllh5cbreq4cr5bgb4gum
Incorporating Expert Prior Knowledge into Experimental Design via Posterior Sampling
[article]
2020
arXiv
pre-print
In this paper, we adopt the technique of Bayesian optimization for experimental design since Bayesian optimization has established itself as an efficient tool for optimizing expensive black-box functions ...
Again, it is unknown how to incorporate the expert prior knowledge about the global optimum into Bayesian optimization process. ...
Without loss of generality, a zero-mean GP is often employed in BO, i.e., f ∼ GP(0, K) (see the sketch after this entry). ...
arXiv:2002.11256v1
fatcat:lcp7quchn5g6hb2jb5jva3kwea
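The snippet above notes that Bayesian optimization commonly places a zero-mean Gaussian process prior on the objective, f ∼ GP(0, K). As a minimal illustrative sketch only (not this paper's implementation), assuming a squared-exponential covariance, drawing functions from such a prior can be written as:

import numpy as np

def rbf_kernel(x1, x2, lengthscale=0.2, variance=1.0):
    # Squared-exponential covariance: K(x, x') = s^2 * exp(-(x - x')^2 / (2 * l^2))
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Zero-mean GP prior f ~ GP(0, K): draw a few prior functions on a 1-D grid.
xs = np.linspace(0.0, 1.0, 100)
K = rbf_kernel(xs, xs) + 1e-8 * np.eye(len(xs))  # jitter for numerical stability
prior_samples = np.random.multivariate_normal(mean=np.zeros(len(xs)), cov=K, size=3)

Each row of prior_samples is one function drawn from the prior; in BO these draws are conditioned on observed evaluations to form the posterior used by the acquisition function.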
Adaptive Diffusion Priors for Accelerated MRI Reconstruction
[article]
2022
arXiv
pre-print
Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff achieves superior performance to competing models in cross-domain tasks, and superior or on par performance in within-domain tasks ...
Conditional models perform de-aliasing under knowledge of the accelerated imaging operator, so they poorly generalize under domain shifts in the operator. ...
prior during inference for generalization performance (Fig. 2). ...
arXiv:2207.05876v1
fatcat:uewwk4ficnd4bmwrscx3eajnle
Scene shape priors for superpixel segmentation
2009
2009 IEEE 12th International Conference on Computer Vision
Superpixels are used as both regions of support for feature vectors and as a starting point for the final segmentation. ...
Lastly, we introduce a new metric for evaluating vision labeling problems. We measure performance on a challenging real-world dataset and illustrate the limitations of conventional evaluation metrics. ...
Our results are achieved without any explicit object knowledge, object location priors or even class specific edges. ...
doi:10.1109/iccv.2009.5459246
dblp:conf/iccv/MoorePWMJ09
fatcat:j6oxnxgssbbvfe4nhb3l737imu
Showing results 1 — 15 out of 19,105 results