Efficient Learning with Partially Observed Attributes
[article]
2010
arXiv
pre-print
We describe and analyze efficient algorithms for learning a linear predictor from examples when the learner can only view a few attributes of each training example. ...
We demonstrate the efficiency of our algorithms by showing that when running on digit recognition data, they obtain a high prediction accuracy even when the learner gets to see only four pixels of each ...
We can learn an arbitrarily accurate predictor w from partially observed examples. ...
arXiv:1004.4421v2
fatcat:lwioc3pgeffu3a4yjor4m6fmte
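The entry above concerns learning a linear predictor when only a few attributes of each training example may be inspected. As a hedged illustration of one building block such methods rely on, and not the paper's actual algorithm, the sketch below draws two independent importance-weighted samples of the attributes and combines them into an unbiased estimate of the squared-loss gradient; the dimensions, attribute budget, and loss are illustrative assumptions.

```python
import numpy as np

def sample_estimate(x, k, rng):
    """Unbiased estimate of x that looks at only k attributes: sample k
    coordinates uniformly without replacement and reweight them by d/k."""
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)
    x_hat = np.zeros(d)
    x_hat[idx] = x[idx] * (d / k)
    return x_hat

def partial_gradient(w, x, y, k, rng):
    """Unbiased estimate of the squared-loss gradient (w.x - y) * x using two
    independent attribute samples (2*k attribute peeks per example in total).
    Independence of the two samples is what makes the product unbiased."""
    x1 = sample_estimate(x, k, rng)
    x2 = sample_estimate(x, k, rng)
    return (w @ x1 - y) * x2

# sanity check: averaging many such estimates approaches the full-information gradient
rng = np.random.default_rng(0)
d, k = 16, 4
x, w = rng.normal(size=d), rng.normal(size=d)
y = 1.0
true_grad = (w @ x - y) * x
avg = np.mean([partial_gradient(w, x, y, k, rng) for _ in range(20000)], axis=0)
print(np.linalg.norm(avg - true_grad) / np.linalg.norm(true_grad))  # shrinks as more estimates are averaged
```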
Variational Selective Autoencoder: Learning from Partially-Observed Heterogeneous Data
[article]
2021
arXiv
pre-print
In this work, we propose the variational selective autoencoder (VSAE), a general framework to learn representations from partially-observed heterogeneous data. ...
VSAE learns the latent dependencies in heterogeneous data by modeling the joint distribution of observed data, unobserved data, and the imputation mask which represents how the data are missing. ...
In this work, we focus on improving deep latent variable models to efficiently learn from partially-observed heterogeneous data. ...
arXiv:2102.12679v1
fatcat:qmwywecuwnbbtm3b3hjldgcsca
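The VSAE entry models observed data, unobserved data, and the imputation mask jointly. As a minimal, hedged illustration of one ingredient of that kind of setup, and not the authors' architecture, the sketch below shows a common way to prepare a partially observed example for a mask-aware encoder: unobserved entries are zero-filled and the binary mask is appended, so the model can distinguish an observed zero from a missing value. All names here are hypothetical.

```python
import numpy as np

def encode_input(x, mask):
    """Build a mask-aware encoder input from a partially observed example.

    x    : (d,) raw values; entries where mask == 0 are unknown
    mask : (d,) binary, 1 where the attribute is observed
    Returns a (2d,) vector: zero-filled observed values followed by the mask,
    so downstream layers can tell a true zero apart from a missing value.
    """
    x_obs = np.where(mask == 1, x, 0.0)
    return np.concatenate([x_obs, mask.astype(float)])

x = np.array([0.7, 2.3, -1.1, 0.0])
mask = np.array([1, 0, 1, 1])
print(encode_input(x, mask))   # [ 0.7  0.  -1.1  0.   1.   0.   1.   1. ]
```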
Incomplete Graph Representation and Learning via Partial Graph Neural Networks
[article]
2021
arXiv
pre-print
To address this problem, we develop novel partial aggregation based GNNs, named Partial Graph Neural Networks (PaGNNs), for attribute-incomplete graph representation and learning. ...
Existing GNNs are generally designed for complete graphs and cannot deal with attribute-incomplete graph data directly. ...
This clearly demonstrates the efficiency of the proposed PaGNNs.
Conclusion: In this paper, we develop novel PaGNNs for attribute-incomplete graph data learning and representation. ...
arXiv:2003.10130v2
fatcat:2whdcqkyvrg7dprqlchn5rhmnq
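The PaGNN entry describes aggregation over attribute-incomplete neighborhoods. As a hedged sketch of partial aggregation in general, and not the exact PaGNN operator, the code below averages each feature dimension only over the neighbors that actually observe it, falling back to zero when none do; the adjacency, features, and masks are toy assumptions.

```python
import numpy as np

def partial_mean_aggregate(A, X, M):
    """Per-dimension neighborhood mean that ignores missing attribute values.

    A : (n, n) binary adjacency matrix (no self-loops assumed)
    X : (n, d) node features; entries where M == 0 are missing
    M : (n, d) binary observation mask
    Returns (n, d) aggregated features, where each entry averages only the
    observed values among a node's neighbors (0 if none are observed).
    """
    Xm = X * M                     # zero out missing entries
    num = A @ Xm                   # sum of observed neighbor values per dimension
    den = A @ M                    # number of neighbors observing each dimension
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[1.0, 2.0],
              [3.0, 0.0],
              [5.0, 6.0]])
M = np.array([[1, 1],
              [1, 0],
              [1, 1]], dtype=float)
print(partial_mean_aggregate(A, X, M))
```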
Understanding Local Structure in Ranked Datasets
2013
Conference on Innovative Data Systems Research
We argue for the use of fundamental data management principles such as declarativeness and incremental evaluation, in combination with state-of-the-art machine learning and data mining techniques, for ... and efficiently. ...
In [11] the authors work with partial observations that can be decomposed (factored). ...
dblp:conf/cidr/StoyanovichADJM13
fatcat:2g3fn3u7wnetxbimd4fu2rgtgq
Relational networks of conditional preferences
2012
Machine Learning
(with constant clause length and domain size) are efficiently learnable from ranking tasks using rank learning. ...
Learning to Optimize: Let X be a space of partial interpretations, Y the corresponding space ...
Tree CPR-nets (with constant clause length and domain size) are efficiently learnable from optimization tasks using opt ... Learning to rank: Let X be a space of outcome sets of size m, Y be the space ...
doi:10.1007/s10994-012-5309-4
fatcat:syqjiksl4rc33boszbgvbo5jbe
Learning from Partial Observations
2007
International Joint Conference on Artificial Intelligence
We extend the Probably Approximately Correct semantics to the case of learning from partial observations with arbitrarily hidden attributes. ...
We establish that simply requiring learned hypotheses to be consistent with observed values suffices to guarantee that hidden values are recoverable to a certain accuracy; we also show that, in some sense ...
An agent faced with partial observations needs to produce hypotheses that do not contradict what is actually observed; the values of masked attributes need not be predicted correctly. ...
dblp:conf/ijcai/Michael07
fatcat:rhqcsn2ufrgabhimggm3ousuya
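The entry above frames learning from partial observations as requiring hypotheses that do not contradict the observed values, with masked attributes left unconstrained. As a hedged toy illustration of that consistency notion, and not the paper's formal semantics, the sketch below evaluates a monotone conjunction on a partial observation with three-valued logic: it returns True or False when the observed attributes already determine the outcome, and None when a masked attribute could still change it.

```python
def eval_conjunction(conjunct_vars, obs):
    """Evaluate the monotone conjunction AND_{i in conjunct_vars} x_i on a
    partial observation.

    obs maps attribute index -> 0, 1, or None (masked).
    Returns False if some relevant attribute is observed to be 0,
    True if all relevant attributes are observed to be 1,
    and None if the observed values do not determine the outcome.
    """
    undetermined = False
    for i in conjunct_vars:
        v = obs.get(i)
        if v == 0:
            return False          # an observed value already falsifies the conjunction
        if v is None:
            undetermined = True   # a masked attribute could still flip the result
    return None if undetermined else True

# x1 AND x3 on an observation where x3 is masked
print(eval_conjunction([1, 3], {1: 1, 2: 0, 3: None}))   # None (consistent with both outcomes)
print(eval_conjunction([1, 3], {1: 0, 3: None}))          # False
```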
Missing Information Impediments to Learnability
2011
Journal of machine learning research
To what extent is learnability impeded when information is missing in learning instances? ...
We present relevant known results and concrete open problems, in the context of a natural extension of the PAC learning model that accounts for arbitrarily missing information. ...
Learning from Partial Observations: In the PAC learning model (Valiant, 1984), examples are drawn from some unknown fixed probability distribution D over {0, 1}^n. ...
dblp:journals/jmlr/Michael11
fatcat:bgpf6yazmjca3f6bsynmdxwhzq
Non-linear Attributed Graph Clustering by Symmetric NMF with PU Learning
[article]
2018
arXiv
pre-print
relationship between the topology and the attributes in real-world graphs, 2) it leverages positive-unlabeled learning to take the effect of partially observed positive edges into the cluster assignment ...
We propose Non-linear Attributed Graph Clustering by Symmetric Non-negative Matrix Factorization with Positive Unlabeled Learning. ...
learning to take the effect of partially observed positive edges, and 3) it achieves efficient computational complexity, O((n^2 + mn)kt), for learning the cluster assignment. ...
arXiv:1810.00946v1
fatcat:nyjgv2tl4nbofjsqg4cprye2dy
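The entry above combines symmetric NMF with positive-unlabeled (PU) supervision over partially observed positive edges. The sketch below is a hedged illustration of the general idea only, not the authors' algorithm: a standard multiplicative-update heuristic for weighted symmetric NMF, where observed positive edges receive a larger weight than unlabeled pairs. The weights, iteration count, and toy graph are assumptions.

```python
import numpy as np

def pu_weighted_symnmf(A, k, pos_weight=1.0, unl_weight=0.1, iters=200, seed=0):
    """Weighted symmetric NMF for cluster assignment under PU supervision.

    A : (n, n) symmetric adjacency; 1 marks an observed positive edge,
        0 marks an unlabeled pair (missing or truly absent).
    pos_weight / unl_weight put more trust in observed positive edges than
    in unlabeled pairs (the PU idea in its simplest form).
    Uses a multiplicative-update heuristic for min ||W * (A - H H^T)||_F^2.
    Returns H of shape (n, k); argmax over rows gives cluster labels.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W = np.where(A > 0, pos_weight, unl_weight)
    H = rng.random((n, k)) + 0.1
    eps = 1e-9
    for _ in range(iters):
        num = (W * A) @ H
        den = (W * (H @ H.T)) @ H + eps
        H *= num / den
    return H

# toy graph: two small communities, with the intra-community edge (1, 2) unobserved
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [1, 0, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
H = pu_weighted_symnmf(A, k=2)
print(H.argmax(axis=1))   # nodes 0-2 and 3-5 should typically land in the two groups
```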
Bayesian Network Classifier for Medical Data Analysis
2009
International Journal of Computers Communications & Control
In this paper we analyse a tree-like Bayesian network learning algorithm optimised for classification of data and we give solutions to the interpretation and analysis of predictions. ...
Surgery survival prediction was examined with the algorithm. Bypass surgery survival chance must be computed for a given patient, having a data-set of 66 medical examinations for 313 patients. ...
This work was partially supported by the Romanian Ministry of Education and Research through grant 11-039/2007. ...
doi:10.15837/ijccc.2009.1.2414
fatcat:be2ywzsbgncobagxnryqkihgua
Active learning with partially featured data
2014
Proceedings of the 23rd International Conference on World Wide Web - WWW '14 Companion
In this paper, we propose a new active learning algorithm in which the learner chooses the samples to be queried from the unlabeled data points whose attributes are only partially observed. ...
We discuss that our approach is flexible and can work with graph mining tasks as well as conventional semi-supervised learning problems. ...
In this paper, however, we present a new approach to learn from the partially observed dataset by first building an imputation model for missing features, and then by performing active learning in the ...
doi:10.1145/2567948.2580062
dblp:conf/www/MoonMK14
fatcat:cms2hvofpnfbll5ib5g5st2mo4
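The entry above describes building an imputation model for the missing features and then running active learning on the imputed data. The sketch below is a hedged, minimal version of that impute-then-query loop, not the paper's algorithm: mean imputation, a logistic regression fit on the labeled rows, and uncertainty sampling over the unlabeled rows. The estimator choices and toy data are assumptions.

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression

def pick_query(X_partial, y_labeled, labeled_idx):
    """Impute-then-query: fill missing attribute values, fit a classifier on
    the labeled rows, and return the index of the most uncertain unlabeled row.

    X_partial : (n, d) array with np.nan marking unobserved attribute values
    y_labeled : labels for the rows listed in labeled_idx
    """
    X_full = SimpleImputer(strategy="mean").fit_transform(X_partial)  # imputation model
    clf = LogisticRegression().fit(X_full[labeled_idx], y_labeled)
    unlabeled = [i for i in range(len(X_full)) if i not in set(labeled_idx)]
    proba = clf.predict_proba(X_full[unlabeled])
    margins = np.abs(proba[:, 1] - 0.5)           # closest to 0.5 = most uncertain
    return unlabeled[int(np.argmin(margins))]

# toy data: 20 examples, roughly 30% of attribute values hidden
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
X[rng.random(X.shape) < 0.3] = np.nan
y = (np.nansum(X, axis=1) > 0).astype(int)
labeled_idx = list(np.where(y == 0)[0][:3]) + list(np.where(y == 1)[0][:3])
print("next example to label:", pick_query(X, y[labeled_idx], labeled_idx))
```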
Interactive Multi-Label CNN Learning With Partial Labels
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We address the problem of efficient end-to-end learning of a multi-label Convolutional Neural Network (CNN) on training images with partial labels. ...
Training a CNN with partial labels, hence a small number of images for every label, using the standard cross-entropy loss is prone to overfitting and performance drop. ...
Acknowledgements This work is partially supported by DARPA Young Faculty Award (D18AP00050), NSF (IIS-1657197), ONR (N000141812132) and ARO (W911NF1810300). ...
doi:10.1109/cvpr42600.2020.00944
dblp:conf/cvpr/HuynhE20b
fatcat:ji733zx3uzdjjplk4vazuocjgu
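The entry above notes that training a multi-label CNN on partially labeled images with the standard cross-entropy loss is problematic. A common baseline in this setting, shown below as a hedged sketch rather than the paper's method, is to compute the binary cross-entropy only over the labels that are actually annotated and ignore the unknown ones. Names and shapes are illustrative.

```python
import numpy as np

def partial_bce(logits, targets, known_mask):
    """Binary cross-entropy averaged only over the labels that are annotated.

    logits     : (batch, n_labels) raw scores
    targets    : (batch, n_labels) 0/1 labels; entries where known_mask == 0 are ignored
    known_mask : (batch, n_labels) 1 where the label is actually annotated
    """
    probs = 1.0 / (1.0 + np.exp(-logits))
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    per_label = -(targets * np.log(probs) + (1 - targets) * np.log(1 - probs))
    return (per_label * known_mask).sum() / known_mask.sum()

logits = np.array([[2.0, -1.0, 0.5]])
targets = np.array([[1.0, 0.0, 0.0]])   # the value at an unknown position is a placeholder
known = np.array([[1.0, 1.0, 0.0]])     # third label is unannotated, so it never contributes
print(partial_bce(logits, targets, known))
```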
Introducing Partial Matching Approach in Association Rules for Better Treatment of Missing Values
[article]
2009
arXiv
pre-print
In recent years, many techniques have been proposed for imputing missing values by considering attribute relationships between the missing-value observation and the other observations in the training dataset. ...
Our imputation technique combines the partial matching concept in association rules with k-nearest neighbor approach. ...
efficient method as compared to k-nearest neighbor approach. ...
arXiv:0904.3321v1
fatcat:ilii5cxzgfblzcxe6jzomhmi7e
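The entry above combines a partial-matching notion from association rules with a k-nearest-neighbor approach to impute missing values. As a hedged sketch of the nearest-neighbor half of that idea only, the code below imputes one missing cell by averaging the k most similar rows, where similarity is computed over just the attributes both rows observe; the distance, k, and data are assumptions.

```python
import numpy as np

def knn_impute(X, row, col, k=3):
    """Impute X[row, col] by averaging the k nearest rows that observe col.

    Distances are computed only over the attributes both rows observe
    (a simple partial-matching distance); np.nan marks missing values.
    """
    target = X[row]
    candidates = []
    for j, other in enumerate(X):
        if j == row or np.isnan(other[col]):
            continue
        shared = ~np.isnan(target) & ~np.isnan(other)   # attributes both rows observe
        if not shared.any():
            continue
        dist = np.sqrt(np.mean((target[shared] - other[shared]) ** 2))
        candidates.append((dist, other[col]))
    candidates.sort(key=lambda t: t[0])
    return float(np.mean([v for _, v in candidates[:k]]))

X = np.array([[1.0, 2.0, np.nan],
              [1.1, 2.1, 3.0],
              [0.9, 1.9, 2.8],
              [5.0, 5.0, 9.0]])
print(knn_impute(X, row=0, col=2, k=2))   # 2.9, taken from the two most similar rows
```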
Learning to Reason: The Non-Monotonic Case
1995
International Joint Conference on Artificial Intelligence
learning attribute functions over a generalized domain. We consider examples that illustrate various aspects of the non-monotonic reasoning phenomena which have been used over the years as benchmarks for ...
reasoning with incomplete information and at the same time matches our expectations of plausible patterns of reasoning in cases where other theories do not. This work continues previous works in the Learning ...
functions over ... If we have efficient learning (to classify) algorithms for T that can tolerate classification noise, we can Learn to Reason with F. It turns out that many of the existing learning algorithms ...
dblp:conf/ijcai/Roth95
fatcat:x5jtp7kvxbdazkorjvt4ofzvoq
Optimizing the parameters of functioning of the system of management of data center it infrastructure
2016
Eastern-European Journal of Enterprise Technologies
positioning in space of the attributes of observations of any pair of classes. ...
Thus, the development of new schemes for encoding the attributes of SLA breaches in information-extreme machine learning algorithms, and of ways to combine the partial efficiency criteria ...
doi:10.15587/1729-4061.2016.79231
fatcat:slb4ombzvvevbeg4eys46wdghi
Partial observability and learnability
2010
Artificial Intelligence
previous learning models that deal with missing information. ...
We finally consider a special case of learning from partial learning examples, where some prior bias exists on the manner in which information is hidden, and show how this provides a unified view of many ...
issue, and depends on whether the formula can be evaluated efficiently on partial observations. ...
doi:10.1016/j.artint.2010.03.004
fatcat:lkqw47or3jfvzpdswf44tu6grm
Showing results 1 — 15 out of 312,502 results