
On the learnability of vector spaces

Valentina S. Harizanov, Frank Stephan
2007 Journal of Computer and System Sciences (Print)  
On the other hand, learnability from an informant does not correspond to similar algebraic properties of a given space.  ...  The central topic of the paper is the learnability of the recursively enumerable subspaces of V_∞/V, where V_∞ is the standard recursive vector space over the rationals with (countably) infinite dimension  ...  So one can verify inductively that a_0, a_1, ..., a_n ≤ f(n): a_0 ≤ f(0); between f(n) + 1 and f(n + 1) there is at least another element of A.  ... 
doi:10.1016/j.jcss.2006.09.001 fatcat:vz7nvw45ozbppaqkvxah623u5u
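
The inductive bound quoted in the snippet above lost its inequality signs in extraction. A minimal reconstruction, under the assumption (suggested by the snippet) that a_0 < a_1 < ... enumerates the set A in increasing order:

```latex
% Reconstruction of the induction sketched in the abstract snippet.
% Assumption: a_0 < a_1 < \dots enumerates A in increasing order.
a_0 \le f(0), \qquad
a_0,\dots,a_n \le f(n) \;\Longrightarrow\; a_{n+1} \le f(n+1)
```

The step holds because at least one element of A lies in {f(n)+1, ..., f(n+1)} and is distinct from a_0, ..., a_n, so the (n+2)-th smallest element a_{n+1} is at most f(n+1).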

On the Learnability of Vector Spaces [chapter]

Valentina S. Harizanov, Frank Stephan
2002 Lecture Notes in Computer Science  
On the other hand, learnability from an informant does not correspond to similar algebraic properties of a given space.  ...  The central topic of the paper is the learnability of the recursively enumerable subspaces of V_∞/V, where V_∞ is the standard recursive vector space over the rationals with (countably) infinite dimension  ...  So one can verify inductively that a_0, a_1, ..., a_n ≤ f(n): a_0 ≤ f(0); between f(n) + 1 and f(n + 1) there is at least another element of A.  ... 
doi:10.1007/3-540-36169-3_20 fatcat:3pae26h5pzdbxjvqjz4vpes544

Inductive Inference Systems for Learning Classes of Algorithmically Generated Sets and Structures [chapter]

Valentina S. Harizanov
2007 Induction, Algorithmic Learning Theory, and Philosophy  
of Gödel codes (one by one) that at a certain point stabilize at codes correct for A.  ...  Thus, knowing a computably enumerable set means knowing one of its infinitely many Gödel codes. In the approach to learning theory stemming from E.M.  ...  I wish to thank John Case for helpful comments on this paper and many valuable discussions about algorithmic learning theory.  ... 
doi:10.1007/978-1-4020-6127-1_2 dblp:series/leus/Harizanov07 fatcat:33mdgilndjfftdkc2oo2r52dby
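
The snippet above describes Gold-style identification in the limit: the learner outputs a sequence of hypotheses (Gödel codes) that eventually stabilizes on a correct code for the target set. A purely illustrative sketch of that convergence behaviour, with a toy hypothesis class ("multiples of k") standing in for Gödel codes; the class and the learner are assumptions for illustration, not constructions from the paper:

```python
from math import gcd
from functools import reduce

def learner(sample):
    """Conjecture: the largest k such that every example seen so far is a multiple of k."""
    return reduce(gcd, sample) if sample else None   # None = "no conjecture yet"

def present(target_k, order):
    """Feed a growing enumeration of {target_k * i : i in order} and record the conjectures."""
    sample, conjectures = [], []
    for i in order:
        sample.append(target_k * i)          # reveal one more element of the target set
        conjectures.append(learner(sample))  # the learner may change its mind, but only finitely often
    return conjectures

# The conjecture sequence stabilizes on a correct "code" (here 6) for the set of multiples of 6.
print(present(target_k=6, order=[2, 4, 6, 3, 1, 5]))   # [12, 12, 12, 6, 6, 6]
```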

A Fast Text-Driven Approach for Generating Artistic Content

Marian Lupașcu, Ryan Murdock, Ionuţ Mironică, Yijun Li
2022 ACM SIGGRAPH 2022 Posters  
Figure 1: Results of our proposed method, where (a) represents the content image, (b) is the content image stylized with the text "A bold color landscape depicting the road everyone lives on in the style  ...  of romanticism paintings | Low poly" and (c) represents the content image stylized with the text "City of the future | Geometric art".  ...  The vector f represents the projection in the CLIP space of the image from the current iteration and the vector s represents the projection in the CLIP space of a style parameter (image or text).  ... 
doi:10.1145/3532719.3543208 fatcat:tkpyvzuwkre57b4j2ox6sy7kv4
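
The last snippet above describes comparing the CLIP-space projection f of the current image with the projection s of a style target (image or text). A minimal sketch of such a comparison as a cosine-similarity style loss; the embeddings are assumed to be precomputed by a CLIP encoder, and this loss form is an illustration rather than the authors' exact objective:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(f: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    """Style loss between the CLIP projection f of the current image and the
    CLIP projection s of the style target (image or text).

    f, s: tensors of shape (batch, d) holding CLIP embeddings (assumed precomputed).
    Returns 1 - cosine similarity, so identical directions give zero loss.
    """
    f = F.normalize(f, dim=-1)   # project onto the unit sphere
    s = F.normalize(s, dim=-1)
    return (1.0 - (f * s).sum(dim=-1)).mean()

# Usage with dummy embeddings (real ones would come from a CLIP image/text encoder):
f = torch.randn(1, 512, requires_grad=True)
s = torch.randn(1, 512)
loss = clip_style_loss(f, s)
loss.backward()   # in the real pipeline the gradient would reach the image being stylized
```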

Learning from queries for maximum information gain in imperfectly learnable problems

Peter Sollich, David Saad
1994 Neural Information Processing Systems  
We find that for MSSE queries, the structure of the student space determines the efficacy of query learning, whereas MTSE queries lead to a higher generalization error than random examples, due to a lack  ...  teacher space entropy (MTSE) queries, which can be used if the teacher space is assumed to be known, but a student of a simpler form has deliberately been chosen.  ...  For finite N, the value of s_p is dependent on the p previous training examples that define the existing version space and on the teacher vector w_p sampled randomly from this version space.  ... 
dblp:conf/nips/SollichS94 fatcat:c22bajh46rdyhgbrtllhbezgj4
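
The snippet sketches query learning in a perceptron-style student space: the next query is chosen so that the current version space is split as evenly as possible. A small illustrative sketch (not the authors' derivation, which is done analytically in the thermodynamic limit) that approximates a maximum student-space-entropy query by sampling version-space students and picking the candidate input they disagree on most; for simplicity the teacher lives in the same space, i.e. the perfectly learnable case:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                   # input dimension
teacher = rng.standard_normal(N)

def label(w, x):
    return np.sign(w @ x)

def sample_version_space(X, y, n_samples=200):
    """Rejection-sample student weight vectors consistent with the examples seen so far."""
    students = []
    while len(students) < n_samples:
        w = rng.standard_normal(N)
        if all(label(w, x) == t for x, t in zip(X, y)):
            students.append(w)
    return np.array(students)

def max_entropy_query(students, n_candidates=50):
    """Pick the candidate input whose label the sampled students disagree on most."""
    candidates = rng.standard_normal((n_candidates, N))
    disagreement = [min(np.mean(label(students, c) > 0), np.mean(label(students, c) < 0))
                    for c in candidates]
    return candidates[int(np.argmax(disagreement))]

X, y = [], []
for _ in range(5):                       # a short active-learning loop
    students = sample_version_space(X, y)
    x_query = max_entropy_query(students)
    X.append(x_query)
    y.append(label(teacher, x_query))    # the teacher answers the query
```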

On learning a union of half spaces

Eric B Baum
1990 Journal of Complexity  
We study therefore the learnability of a simple, nontrivial class of concepts: unions of half spaces. We give a new, fast algorithm for learning unions of half spaces in fixed dimension.  ...  We also present, in an appendix, new lower bounds on the number of examples necessary for learning from one type of example.  ...  So for instance, the class H of half spaces is properly learnable. H is parametrized by a vector w ∈ ℝ^n and a threshold θ ∈ ℝ. The half space (w, θ): x ∈ ℝ  ... 
doi:10.1016/0885-064x(90)90012-3 fatcat:yw3attw3sfaazmwjndjrpt3zzq
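
The snippet parametrizes a half space by a weight vector w ∈ ℝ^n and a threshold θ; a union-of-half-spaces concept then labels a point positive if it falls in any member half space. A minimal sketch of that concept class (the ≥ convention and the toy data are assumptions for illustration):

```python
import numpy as np

def in_half_space(x, w, theta):
    """Membership in the half space (w, theta) = { x : w . x >= theta }."""
    return float(np.dot(w, x)) >= theta

def in_union(x, half_spaces):
    """A union of half spaces labels x positive if any member half space contains it."""
    return any(in_half_space(x, w, theta) for w, theta in half_spaces)

# Two half spaces in the plane; their union misses exactly the points with x1 < 0 and x2 < 1.
H = [(np.array([1.0, 0.0]), 0.0),    # x1 >= 0
     (np.array([0.0, 1.0]), 1.0)]    # x2 >= 1
print(in_union(np.array([-2.0, 3.0]), H))   # True  (second half space)
print(in_union(np.array([-2.0, -3.0]), H))  # False (in neither)
```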

The learnability of Pauli noise [article]

Senrui Chen, Yunchao Liu, Matthew Otten, Alireza Seif, Bill Fefferman, Liang Jiang
2022 arXiv   pre-print
Here we give a precise characterization of the learnability of Pauli noise channels attached to Clifford gates, showing that learnable information corresponds to the cycle space of the pattern transfer  ...  graph of the gate set, while unlearnable information corresponds to the cut space.  ...  Therefore, F_L forms a vector space in ℝ^|Λ|. Our goal is to give a precise characterization of the learnable space F_L.  ... 
arXiv:2206.06362v1 fatcat:pbutrcja45bstcudllwhkgv5aa
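
The snippet identifies learnable degrees of freedom with the cycle space of the pattern transfer graph and unlearnable ones with the cut space. As general graph-theory background, for a graph with |V| vertices, |E| edges and c connected components those two orthogonal spaces have dimensions |E| - |V| + c and |V| - c. A small sketch of that dimension count on a toy graph (an arbitrary example, not a pattern transfer graph from the paper):

```python
import networkx as nx

# Toy undirected graph standing in for a pattern transfer graph (illustration only).
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])

V, E = G.number_of_nodes(), G.number_of_edges()
c = nx.number_connected_components(G)

cycle_dim = E - V + c   # dimension of the cycle space ("learnable" directions in the paper's setting)
cut_dim = V - c         # dimension of the cut space   ("unlearnable" directions)

# The two dimensions always add up to |E|; a cycle basis realizes the first count.
print(cycle_dim, cut_dim, len(nx.cycle_basis(G)))   # 2 4 2
```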

On Position Embeddings in BERT

Benyou Wang, Lifeng Shang, Christina Lioma, Xin Jiang, Hao Yang, Qun Liu, Jakob Grue Simonsen
2021 International Conference on Learning Representations  
To address this, we present three properties of PEs that capture word distance in vector space: translation invariance, monotonicity, and symmetry.  ...  We contribute the first formal and quantitative analysis of desiderata for PEs, and a principled discussion about their correlation to the performance of typical downstream tasks.  ...  ACKNOWLEDGMENTS The work is supported by the Quantum Access and Retrieval Theory (QUARTZ) project, which has received funding from the European Union's Horizon 2020 research and innovation programme under  ... 
dblp:conf/iclr/WangSLJYLS21 fatcat:jsya7sa4mfa5tkygy3zs4nyn6i
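
The three properties listed in the snippet (translation invariance, monotonicity, and symmetry of the similarity between position embeddings) can be checked numerically on the classic fixed sinusoidal encoding, which satisfies them by construction; the check below illustrates the properties themselves, not the paper's analysis of learned PEs:

```python
import numpy as np

def sinusoidal_pe(n_pos, d_model):
    """Standard fixed sinusoidal position embeddings (Transformer style)."""
    pos = np.arange(n_pos)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((n_pos, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_pe(64, 128)
sim = pe @ pe.T                                       # dot-product similarity between positions

# Translation invariance: similarity depends only on the offset |i - j|.
print(np.allclose(sim[10, 14], sim[30, 34]))          # True
# Symmetry: sim(i, j) == sim(j, i).
print(np.allclose(sim, sim.T))                        # True
# Monotonicity (locally): similarity shrinks as the offset grows.
print(sim[10, 11] > sim[10, 14] > sim[10, 20])        # True for these offsets
```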

Adaptive duplicate detection using learnable string similarity measures

Mikhail Bilenko, Raymond J. Mooney
2003 Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '03  
We present two learnable text similarity measures suitable for this task: an extended variant of learnable string edit distance, and a novel vector-space based measure that employs a Support Vector Machine  ...  Most existing approaches have relied on generic or manually tuned distance metrics for estimating the similarity of potential duplicates.  ...  Using the SVM-based vector-space learnable similarity did not lead to improvements over the original vector space cosine similarity; performance has in fact decreased.  ... 
doi:10.1145/956750.956759 dblp:conf/kdd/BilenkoM03 fatcat:ijhfcujqqramriziclnqceveoi
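
The snippet contrasts a fixed vector-space cosine similarity with a learnable, SVM-based similarity trained on labeled duplicate/non-duplicate pairs. A compact sketch of both ideas with scikit-learn; the pair features and toy labels here are made up for illustration, and the original system's features are richer:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

records = ["John Smith, 12 Oak St.", "J. Smith, 12 Oak Street",
           "Jane Doe, 5 Pine Ave.", "John Smyth, 12 Oak St"]

# Fixed baseline: TF-IDF vector space + cosine similarity over character n-grams.
tfidf = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit(records)

def cosine_sim(a, b):
    va, vb = tfidf.transform([a]), tfidf.transform([b])
    return float(cosine_similarity(va, vb)[0, 0])

# Learnable similarity: an SVM trained on pair features (here, the element-wise
# product of the two TF-IDF vectors); labels say whether a pair is a duplicate.
def pair_features(a, b):
    va, vb = tfidf.transform([a]).toarray(), tfidf.transform([b]).toarray()
    return (va * vb).ravel()

pairs = [(0, 1, 1), (0, 3, 1), (0, 2, 0), (1, 2, 0)]          # (i, j, is_duplicate), toy labels
X = np.array([pair_features(records[i], records[j]) for i, j, _ in pairs])
y = np.array([lab for _, _, lab in pairs])
svm = SVC().fit(X, y)

print(cosine_sim(records[0], records[1]))                                     # fixed similarity
print(svm.decision_function([pair_features(records[0], records[1])])[0])      # learned score (larger = more duplicate-like)
```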

Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding [article]

Yang Li, Si Si, Gang Li, Cho-Jui Hsieh, Samy Bengio
2021 arXiv   pre-print
Instead of hard-coding each position as a token or a vector, we represent each position, which can be multi-dimensional, as a trainable encoding based on learnable Fourier feature mapping, modulated with  ...  Our experiments based on several public benchmark tasks show that our learnable Fourier feature representation for multi-dimensional positional encoding outperforms existing methods by both improving the  ...  Acknowledgments We would like to thank anonymous reviewers for their insightful comments and constructive feedback that have significantly improved the work.  ... 
arXiv:2106.02795v3 fatcat:r64bsk7hmjdr5fibnqpvhckdua
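
The snippet describes representing a multi-dimensional position as a trainable encoding built from a learnable Fourier feature mapping followed by a small MLP. A minimal PyTorch sketch in that spirit; the normalization and the MLP shape follow the general learnable-Fourier-feature recipe rather than the paper's exact configuration:

```python
import math
import torch
import torch.nn as nn

class LearnableFourierPE(nn.Module):
    """Positional encoding for M-dimensional positions via a learnable Fourier feature map + MLP."""

    def __init__(self, pos_dim: int, fourier_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        # Learnable frequency matrix: positions are projected onto fourier_dim/2 frequencies.
        self.W_r = nn.Linear(pos_dim, fourier_dim // 2, bias=False)
        self.mlp = nn.Sequential(
            nn.Linear(fourier_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),
        )
        self.scale = 1.0 / math.sqrt(fourier_dim)

    def forward(self, pos: torch.Tensor) -> torch.Tensor:
        # pos: (..., pos_dim) real-valued positions, e.g. 2-D pixel coordinates.
        proj = self.W_r(pos)                                                   # (..., fourier_dim/2)
        feats = self.scale * torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)
        return self.mlp(feats)                                                 # (..., out_dim)

# Usage: encode 2-D positions on an 8x8 grid into 64-dimensional embeddings.
pe = LearnableFourierPE(pos_dim=2, fourier_dim=32, hidden_dim=64, out_dim=64)
coords = torch.stack(torch.meshgrid(torch.arange(8.), torch.arange(8.), indexing="ij"), dim=-1)
print(pe(coords.reshape(-1, 2)).shape)    # torch.Size([64, 64])
```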

Disentangling Successor Features for Coordination in Multi-agent Reinforcement Learning [article]

Seung Hyun Kim, Neale Van Stralen, Girish Chowdhary, Huy T. Tran
2022 arXiv   pre-print
We show that successor features can help address this challenge by disentangling an individual agent's impact on the global value function from that of all other agents.  ...  We use this disentanglement to compactly represent private utilities that support stable training of decentralized agents in unstructured tasks.  ...  Acknowledgments and Disclosure of Funding This work was supported by ONR Grant N00014-20-1-2249 and ARL Contract W911NF2020184.  ... 
arXiv:2202.07741v1 fatcat:eikxzanrfzg65jm5v5ruvt54wi
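
The snippet builds on successor features, where a value decomposes into expected discounted feature occupancies ψ and a reward weight vector w; disentangling an agent's own impact then amounts to working with its own slice of ψ. A small illustration of the basic identity, with made-up numbers and a made-up per-agent split (not the paper's construction):

```python
import numpy as np

# Successor features: Q(s, a) = psi(s, a) . w, where psi are expected discounted
# feature occupancies and w maps features to reward.
psi = np.array([0.8, 1.5, 0.2, 2.0])     # psi(s, a) for one state-action pair (toy numbers)
w   = np.array([1.0, 0.0, -0.5, 0.3])    # reward weights on the same features

q_value = psi @ w
print(q_value)                            # 1.3

# If the first two features are produced by agent i's own behaviour and the rest by the
# other agents, the agent's "own" contribution to the value separates additively.
own, others = psi[:2] @ w[:2], psi[2:] @ w[2:]
print(own, others, np.isclose(own + others, q_value))   # 0.8 0.5 True
```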

A sufficient condition for polynomial distribution-dependent learnability

Martin Anthony, John Shawe-Taylor
1997 Discrete Applied Mathematics  
We investigate upper bounds on the sample-size sufficient for 'solid' learnability with respect to a probability distribution.  ...  Extending the analysis of Ben-David et al. (1989) and Benedek and Itai (1991), we obtain a sufficient condition for feasible (polynomially bounded) sample-size bounds for distribution-specific (solid) learnability  ...  It follows also that the notion of dd-learnability is not a vacuous one, since these same hypothesis spaces are dd-learnable but, being of infinite VC dimension, are not learnable.  ... 
doi:10.1016/s0166-218x(96)00129-1 fatcat:ef3ej5lx6vc4dmtuttnbotmnay
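
For context on the last sentence of the snippet: the classical distribution-free PAC sample-complexity bound is finite only when the VC dimension d is finite, which is why classes of infinite VC dimension can still be dd-learnable without being learnable in the distribution-free sense. A standard textbook statement of that bound (background, not the bound derived in this paper):

```latex
% Classical distribution-free PAC sample-complexity bound; finite only for finite VC dimension d.
m(\varepsilon,\delta) \;=\; O\!\left(\frac{1}{\varepsilon}\left(d \ln\frac{1}{\varepsilon} + \ln\frac{1}{\delta}\right)\right)
```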

A new initialization method based on normed statistical spaces in deep networks

Hongfei Yang, Xiaofeng Ding, Raymond Chan, Hui Hu, Yaxin Peng, Tieyong Zeng
2020 Inverse Problems and Imaging  
Based on these two methods, we will propose a new initialization by studying the parameter space of a network.  ...  In order to do so, we introduce a norm to the parameter space and use this norm to measure the growth of parameters.  ...  This construction turns our vector space H_N into a normed space (H_N, ‖·‖), where the norm is defined (Eq. 19) by ‖[x]‖ = √⟨[x], [x]⟩.  ... 
doi:10.3934/ipi.2020045 fatcat:w2efosz5k5cvtmoimojrxwyzky
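
The snippet measures parameter growth with a norm induced by an inner product. A tiny numeric sketch of such an induced norm applied to layer weights; the plain Euclidean inner product is an assumption for illustration, whereas the paper works with a specific normed statistical space:

```python
import numpy as np

def induced_norm(x: np.ndarray) -> float:
    """Norm induced by an inner product: ||x|| = sqrt(<x, x>), here with the Euclidean inner product."""
    x = x.ravel()
    return float(np.sqrt(x @ x))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((256, 256)) * np.sqrt(1.0 / 256)   # a variance-scaled initialization
W2 = rng.standard_normal((256, 256))                        # an unscaled initialization

# Comparing the norms shows how an initialization controls the "size" of parameters layer by layer.
print(induced_norm(W1), induced_norm(W2))                    # roughly 16 vs roughly 256
```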

Rethinking Semantic Segmentation: A Prototype View [article]

Tianfei Zhou, Wenguan Wang, Ender Konukoglu, Luc Van Gool
2022 arXiv   pre-print
Instead of prior methods learning a single weight/query vector for each class in a fully parametric manner, our model represents each class as a set of non-learnable prototypes, relying solely on the mean  ...  category, by considering the softmax weights or query vectors as learnable class prototypes.  ...  Both types of methods are based on learnable prototypes; they are parametric models in the sense that they learn one prototype g_c, i.e., a linear weight w_c or query vector e_c, for each class  ... 
arXiv:2203.15102v2 fatcat:hlbuwxv5mnejzaqwg6bncbxski
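
The snippet contrasts learnable class prototypes (a weight w_c or query e_c per class) with non-learnable prototypes taken as mean feature vectors. A minimal sketch of the non-parametric side: classify an embedding by its nearest class-mean prototype under cosine similarity; the names, shapes, and toy labels are illustrative, not the paper's architecture:

```python
import torch
import torch.nn.functional as F

def build_prototypes(feats: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Non-learnable prototypes: the (normalized) mean embedding of each class."""
    protos = torch.stack([feats[labels == c].mean(dim=0) for c in range(n_classes)])
    return F.normalize(protos, dim=-1)                       # (n_classes, d)

def classify(feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each embedding to the class whose prototype is most cosine-similar."""
    sims = F.normalize(feats, dim=-1) @ protos.T             # (n_points, n_classes)
    return sims.argmax(dim=-1)

# Toy example: 2 classes of 16-dimensional "pixel" embeddings.
feats = torch.randn(100, 16)
labels = (feats[:, 0] > 0).long()          # a fake labeling so the classes are roughly separable
protos = build_prototypes(feats, labels, n_classes=2)
print((classify(feats, protos) == labels).float().mean())    # accuracy of the nearest-prototype rule
```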
Showing results 1 — 15 out of 26,359 results