
A General Dimension for Exact Learning [chapter]

José L. Balcázar, Jorge Castro, David Guijarro
2001 Lecture Notes in Computer Science  
We introduce a new combinatorial dimension that gives a good approximation of the number of queries needed to learn in the exact learning model, no matter what set of queries is used.  ...  This new dimension generalizes previous dimensions, providing upper and lower bounds for all sorts of queries, and not just for example-based queries as in previous works.  ...  Exact learning and the general dimension: We use a generalization of the exact learning model via queries of Angluin [1].  ...
doi:10.1007/3-540-44581-1_23 fatcat:p7xayuip5zfzvgqkyct736e2sy

Feature Selection through Minimization of the VC dimension [article]

Jayadeva, Sanjit S. Batra, Siddharth Sabharwal
2014 arXiv   pre-print
The recently proposed Minimal Complexity Machine (MCM) provides a way to learn a hyperplane classifier by minimizing an exact (Θ) bound on its VC dimension.  ...  It is well known that a lower VC dimension contributes to good generalization.  ...  Suresh Chandra of the Department of Mathematics at IIT Delhi for his valuable advice and critical appraisal of the manuscript.  ... 
arXiv:1410.7372v1 fatcat:7zlbcuc77zbxvbtfqyzcaofvza

Learning a hyperplane regressor through a tight bound on the VC dimension

Jayadeva, Suresh Chandra, Sanjit S. Batra, Siddarth Sabharwal
2016 Neurocomputing  
The capacity of a learning machine is measured by its Vapnik-Chervonenkis dimension, and learning machines with a low VC dimension generalize better.  ...  In this paper, we show how to learn a hyperplane regressor by minimizing an exact, or Θ bound on its VC dimension.  ...  The Vapnik-Chervonenkis (VC) dimension measures the capacity of a learning machine, and computational learning theory [7] [8] [9] [10] shows that a small VC dimension leads to good generalization.  ... 
doi:10.1016/j.neucom.2015.06.065 fatcat:bsffn2ng2zgtleguhzsbr3mtcy

Language Model Metrics and Procrustes Analysis for Improved Vector Transformation of NLP Embeddings [article]

Thomas Conley, Jugal Kalita
2021 arXiv   pre-print
We show the efficacy of this metric by applying it to a simple neural network learning the Procrustes algorithm for bilingual word mapping.  ...  We introduce Language Model Distance (LMD) for measuring accuracy of vector transformations based on the Distributional Hypothesis (LMD Accuracy).  ...  We suggest this basic enhancement would improve the Generalized Procrustes Algorithm and other NLP processing in general. This is left for future work.  ...
arXiv:2106.02490v1 fatcat:oeqsumxxtzd6rj3mo4fwnkk3va
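
For reference, the closed-form solution that the entry's neural network learns to approximate is classical orthogonal Procrustes: the rotation W minimizing ||XW - Y||_F over aligned embedding pairs comes from a single SVD (Schönemann, 1966). A minimal numpy sketch; the array names and toy data are illustrative, not the paper's:

```python
# Orthogonal Procrustes for bilingual word mapping: find the orthogonal
# matrix W minimizing ||X @ W - Y||_F over aligned source/target embeddings.
import numpy as np

def procrustes_map(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """X, Y: (n_pairs, dim) aligned embeddings. Returns orthogonal W."""
    U, _, Vt = np.linalg.svd(X.T @ Y)   # SVD of the cross-covariance
    return U @ Vt

rng = np.random.default_rng(0)
true_W, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # hidden rotation
X = rng.normal(size=(1000, 50))                      # toy "source" embeddings
Y = X @ true_W                                       # toy "target" embeddings
print(np.allclose(procrustes_map(X, Y), true_W))     # True: W recovered
```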

Learning a Fuzzy Hyperplane Fat Margin Classifier with Minimum VC dimension [article]

Jayadeva, Sanjit Singh Batra, Siddarth Sabharwal
2015 arXiv   pre-print
The Vapnik-Chervonenkis (VC) dimension measures the complexity of a learning machine, and a low VC dimension leads to good generalization.  ...  The recently proposed Minimal Complexity Machine (MCM) learns a hyperplane classifier by minimizing an exact bound on the VC dimension. This paper extends the MCM classifier to the fuzzy domain.  ...  The capacity of a learning machine may be measured by its VC dimension, and a small VC dimension leads to good generalization and low error rates on test data.  ... 
arXiv:1501.02432v1 fatcat:onju7xoslver3no6vt4wcnos54

Learning a hyperplane classifier by minimizing an exact bound on the VC dimension

Jayadeva
2015 Neurocomputing  
The VC dimension measures the capacity of a learning machine, and a low VC dimension leads to good generalization.  ...  In this paper, we show how to learn a hyperplane classifier by minimizing an exact, or Θ bound on its VC dimension.  ...  This paper shows how to learn a classifier with large margin, by minimizing an exact (Θ) bound on the VC dimension.  ... 
doi:10.1016/j.neucom.2014.07.062 fatcat:vhj2j3z4ircfzfhcumwqn7s2ba
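
In its hard-margin linear form, the MCM referenced in this and the related Jayadeva entries above reduces to a small linear program: minimize h subject to 1 <= y_i(u . x_i + v) <= h over all training points. A sketch using scipy on toy separable data, assuming that hard-margin form; the papers' soft-margin and kernel extensions are omitted and all names are illustrative:

```python
# Hard-margin MCM as a linear program: minimize h subject to
#   1 <= y_i * (u . x_i + v) <= h   for every training point.
import numpy as np
from scipy.optimize import linprog

def mcm_fit(X, y):
    """X: (n, d) inputs; y: (n,) labels in {-1, +1}. Returns (u, v, h)."""
    n, d = X.shape
    c = np.zeros(d + 2)          # decision vector z = [u (d entries), v, h]
    c[-1] = 1.0                  # objective: minimize h
    yX = y[:, None] * X
    A_ub = np.vstack([
        np.hstack([yX, y[:, None], -np.ones((n, 1))]),    # y_i(u.x_i+v) <= h
        np.hstack([-yX, -y[:, None], np.zeros((n, 1))]),  # y_i(u.x_i+v) >= 1
    ])
    b_ub = np.concatenate([np.zeros(n), -np.ones(n)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1) + [(1, None)],
                  method="highs")
    return res.x[:d], res.x[d], res.x[d + 1]

# Toy separable data; sign(u . x + v) classifies it, h is the minimized bound.
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
u, v, h = mcm_fit(X, y)
print(np.sign(X @ u + v), h)
```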

Exact Sampling from Determinantal Point Processes [article]

Philipp Hennig, Roman Garnett
2018 arXiv   pre-print
on regular grids, applicable to a general set of spaces.  ...  We point out that, for many settings of relevance to machine learning, it is also possible to draw exact samples from DPPs on continuous domains.  ...  Acknowledgements The authors are grateful to Lucy Kuncheva and Joseph Courtney for (separately) pointing out a nontrivial typo in Eq. (12) in an earlier version of this manuscript.  ... 
arXiv:1609.06840v2 fatcat:xjnhn2occ5eexhlraovv5r7nru
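
For a finite ground set, exact DPP sampling is already standard via the spectral algorithm of Hough et al., which the entry's continuous-domain results extend. A minimal numpy sketch of that finite-case sampler, with an illustrative toy kernel; the continuous-domain case is not attempted here:

```python
# Exact DPP sampling on a finite ground set (spectral algorithm):
# phase 1 selects eigenvectors of the L-ensemble kernel; phase 2 picks
# items one by one while projecting away the chosen coordinates.
import numpy as np

def sample_dpp(L, rng):
    lam, V = np.linalg.eigh(L)               # L: PSD L-ensemble kernel
    V = V[:, rng.random(lam.size) < lam / (1.0 + lam)]   # phase 1
    sample = []
    while V.shape[1] > 0:
        p = (V ** 2).sum(axis=1)             # P(i) ~ squared row norm
        i = int(rng.choice(L.shape[0], p=p / p.sum()))
        sample.append(i)
        j = np.argmax(np.abs(V[i]))          # column with nonzero entry at i
        Vj = V[:, j].copy()
        V = np.delete(V, j, axis=1)
        V -= np.outer(Vj, V[i] / Vj[i])      # zero out row i everywhere
        if V.shape[1] > 0:
            V, _ = np.linalg.qr(V)           # re-orthonormalize
    return sorted(sample)

# Toy RBF kernel on a 1-D grid: nearby (similar) points repel each other,
# so samples tend to be spread out.
X = np.linspace(0, 1, 8)[:, None]
L = np.exp(-((X - X.T) ** 2) / 0.02)
print(sample_dpp(L, np.random.default_rng(0)))
```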

Generalized Information

Jonathan Bartlett
2019 Communications of the Blyth Institute  
Generalized Information (GI) is a measurement of the degree to which a program can be said to generalize a dataset.  ...  Active Information allows GI to be usable with both exact and inexact models.  ...  Therefore, in most machine learning systems, the model is only able to output a single dimension, which we can consider the "output" dimension.  ... 
doi:10.33014/issn.2640-5652.1.2.bartlett.1 fatcat:ll7a245yebajbjk4cpgxcdh72u

Contracting Arbitrary Tensor Networks: general approximate algorithm and applications in graphical models and quantum circuit simulations [article]

Feng Pan, Pengfei Zhou, Sujie Li, Pan Zhang
2019 arXiv   pre-print
We present a general method for approximately contracting tensor networks with arbitrary connectivity.  ...  This enables us to release the computational power of tensor networks to wide use in optimization, inference, and learning problems defined on general graphs.  ...  Learning of graphical models using tensor networks: Generative learning in the unsupervised setting models the joint distribution of random variables in the given data and generates new samples from the  ...
arXiv:1912.03014v1 fatcat:3le2ypcqvjbjjd52fowemt3lt4
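
As background to the entry above: contracting a tensor network exactly means summing over every shared index, which np.einsum expresses directly but which scales exponentially for general graphs; that cost is what motivates the approximate contraction scheme. A toy exact contraction of a four-tensor cycle, with illustrative shapes:

```python
# Exact contraction of a small tensor network (a four-node cycle) via einsum.
import numpy as np

rng = np.random.default_rng(1)
D = 3                                        # bond dimension
# Four tensors on a cycle A-B-C-Dt, one shared index per edge.
A = rng.random((D, D)); B = rng.random((D, D))
C = rng.random((D, D)); Dt = rng.random((D, D))
# Contract all shared indices: Z = sum_{i,j,k,l} A_ij B_jk C_kl Dt_li
Z = np.einsum("ij,jk,kl,li->", A, B, C, Dt)
# For a cycle this equals the trace of the matrix product around it.
print(np.isclose(Z, np.trace(A @ B @ C @ Dt)))  # True
```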

A New Abstract Combinatorial Dimension for Exact Learning via Queries

José L. Balcázar, Jorge Castro, David Guijarro
2002 Journal of computer and system sciences (Print)  
We introduce an abstract model of exact learning via queries that can be instantiated to all the query learning models currently in use, while being closer to them than previous unificatory attempts.  ...  We present a characterization of those Boolean function classes learnable in this abstract model, in terms of a new combinatorial notion that we introduce, the abstract identification dimension.  ...  Exact learning: We use a generalization of the exact learning model via queries of Angluin [1].  ...
doi:10.1006/jcss.2001.1794 fatcat:3maos6rua5gfnaolgxylnfth7i

C3: A Command-line Catalogue Cross-matching tool for modern astrophysical survey data

Giuseppe Riccio, Massimo Brescia, Stefano Cavuoti, Amata Mercurio, Anna Maria Di Giorgio, Sergio Molinari
2016 Proceedings of the International Astronomical Union  
Conceived as a stand-alone command-line process or a module within a generic data reduction/analysis pipeline, it provides the maximum flexibility, in terms of portability, configuration, coordinates and  ...  In the current data-driven science era, data analysis techniques must evolve quickly to cope with data whose dimensions have increased up to the Petabyte scale.  ...  the sky is partitioned into square cells whose size is defined by the maximum dimension that the matching regions can assume, with a minimum value to avoid cell generation redundancy.  ...
doi:10.1017/s1743921316013120 fatcat:mhetxl3txvea7ex5vtz4w6q5nu
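
A flat-sky toy version of the cell partitioning the last snippet describes: with cells no smaller than the maximum matching region, every counterpart of a source must lie in its own or one of the eight adjacent cells. The real tool works in spherical sky coordinates; this simplification and all names are illustrative:

```python
# Grid-binned cross-match: bin catalogue B into square cells of side equal
# to the match radius, then compare each source in catalogue A only against
# B-sources in its own and the 8 neighbouring cells.
from collections import defaultdict
import math

def cross_match(cat_a, cat_b, radius):
    """cat_a, cat_b: lists of (x, y) positions; returns matching index pairs."""
    grid = defaultdict(list)
    for j, (x, y) in enumerate(cat_b):
        grid[(int(x // radius), int(y // radius))].append(j)
    matches = []
    for i, (x, y) in enumerate(cat_a):
        cx, cy = int(x // radius), int(y // radius)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy), ()):
                    bx, by = cat_b[j]
                    if math.hypot(x - bx, y - by) <= radius:
                        matches.append((i, j))
    return matches

print(cross_match([(0.0, 0.0), (5.0, 5.0)], [(0.3, 0.4), (9.0, 9.0)], 1.0))
# -> [(0, 0)]: only the nearby pair is within the match radius
```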

LNEMLC: Label Network Embeddings for Multi-Label Classification [article]

Piotr Szymański, Tomasz Kajdanowicz, Nitesh Chawla
2019 arXiv   pre-print
or do not preserve generalization abilities for unseen label combinations.  ...  To address these issues we propose a new multi-label classification scheme, LNEMLC - Label Network Embedding for Multi-Label Classification, that embeds the label network and uses it to extend input space  ...  We also use just one multi-dimensional regressor instead of learning a regressor per dimension, which caused CLEMS not to finish on some of the evaluated data sets.  ... 
arXiv:1812.02956v2 fatcat:d2zlvcm7k5b2pbp5ikkj64bkpi

Neural Networks with Cheap Differential Operators [article]

Ricky T. Q. Chen, David Duvenaud
2019 arXiv   pre-print
We demonstrate these cheap differential operators for solving root-finding subproblems in implicit ODE solvers, exact density evaluation for continuous normalizing flows, and evaluating the Fokker-Planck  ...  We describe a family of restricted neural network architectures that allow efficient computation of a family of differential operators involving dimension-wise derivatives, used in cases such as computing  ...  Learning SDE models is a difficult task as exact maximum likelihood is infeasible.  ...
arXiv:1912.03579v1 fatcat:gbwmog7bizcnvgu7dbx5h6v56q
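
To make "dimension-wise derivatives" concrete: the target quantity is the Jacobian diagonal df_i/dx_i, whose sum is the divergence needed for continuous normalizing flows and Fokker-Planck evaluation. A naive finite-difference sketch of that quantity on a toy function, costing one pass per dimension; the entry's restricted architectures exist precisely to avoid that d-fold cost, which this sketch does not reproduce:

```python
# Naive Jacobian-diagonal estimate: one central difference per dimension.
import numpy as np

def jacobian_diagonal(f, x, eps=1e-6):
    """Approximate diag(df_i/dx_i) of f: R^d -> R^d at point x."""
    d = x.size
    diag = np.empty(d)
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        diag[i] = (f(x + e)[i] - f(x - e)[i]) / (2 * eps)
    return diag

f = lambda x: np.array([x[0] ** 2, np.sin(x[1]), x[0] * x[2]])
x = np.array([1.0, 0.5, 2.0])
print(jacobian_diagonal(f, x))        # ~ [2*x0, cos(x1), x0] = [2, 0.878, 1]
print(jacobian_diagonal(f, x).sum())  # divergence estimate
```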

Page 5274 of Mathematical Reviews Vol. , Issue 98H [page]

1998 Mathematical Reviews  
We also show that n^Ω(·) membership queries are necessary for exact learning.  ...  In contrast, for feedforward networks, VC dimension bounds can be expressed as a function of w only.  ...

SHEARer: Highly-Efficient Hyperdimensional Computing by Software-Hardware Enabled Multifold Approximation [article]

Behnam Khaleghi, Sahand Salamat, Anthony Thomas, Fatemeh Asgarinejad, Yeseong Kim, Tajana Rosing
2020 arXiv   pre-print
Hyperdimensional computing (HD) is an emerging paradigm for machine learning based on the evidence that the brain computes on high-dimensional, distributed, representations of data.  ...  In contrast to previous works that generate the encoding hypervectors in full precision and then quantize them ex post, we compute the encoding hypervectors in an approximate manner that saves a significant  ...  Hyperdimensional computing (HD for short) is an emerging paradigm for machine learning based on evidence from the neuroscience community that the brain "computes" on high-dimensional, distributed, representations  ...
arXiv:2007.10330v1 fatcat:ghcc3za6yzaqnapczsa756cwxe
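
A minimal sketch of the HD encoding-and-similarity pipeline the entry builds on, assuming the common random bipolar projection encoding; the hypervector dimensionality, names, and toy data are illustrative, and SHEARer's approximate hardware encoding itself is not modeled:

```python
# Hyperdimensional encoding via random bipolar projection: features are
# projected onto random {-1,+1} hypervectors; queries classify against a
# bundled prototype by cosine similarity.
import numpy as np

rng = np.random.default_rng(7)
DIM = 10_000                                  # hypervector dimensionality

def encode(features, basis):
    """Project a feature vector onto the random bipolar basis."""
    return features @ basis                   # (n_feat,) @ (n_feat, DIM)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

n_feat = 64
basis = rng.choice([-1.0, 1.0], size=(n_feat, DIM))  # random bipolar basis
x_train = rng.random(n_feat)
prototype = encode(x_train, basis)                   # one-shot class vector
x_query = x_train + 0.05 * rng.normal(size=n_feat)   # noisy same-class input
x_other = rng.random(n_feat)                         # unrelated input
print(cosine(encode(x_query, basis), prototype))  # near 1: matches prototype
print(cosine(encode(x_other, basis), prototype))  # lower: ~ cosine of the
                                                  # raw feature vectors
```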