
Uniform Convergence Rates for Lipschitz Learning on Graphs [article]

Leon Bungert, Jeff Calder, Tim Roith
2021 arXiv   pre-print
Their continuum limits are absolutely minimizing Lipschitz extensions with respect to the geodesic metric of the domain where the graph vertices are sampled from.  ...  Lipschitz learning is a graph-based semi-supervised learning method where one extends labels from a labeled to an unlabeled data set by solving the infinity Laplace equation on a weighted graph.  ...  Acknowledgments The authors thank the Institute for Mathematics and its Applications (IMA) where this collaboration started at a workshop on "Theory and Applications in Graph-Based Learning" in Fall 2020  ... 
arXiv:2111.12370v1 fatcat:syz72j252bfvxlftl2lqqwobha
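
The infinity Laplace equation mentioned in this entry has a simple fixed-point characterization on unweighted graphs: an infinity-harmonic vertex value is the midpoint of its largest and smallest neighboring values. A minimal sketch of the resulting Jacobi-style iteration follows; the unweighted setting, the adjacency-list format, and the tolerance are illustrative assumptions, not the authors' implementation.

```python
def lipschitz_learning(neighbors, labels, tol=1e-8, max_iter=100_000):
    """Jacobi iteration for the graph infinity-Laplace equation.

    neighbors: dict mapping each vertex to a list of adjacent vertices.
    labels:    dict mapping labeled vertices to their fixed values.
    Unlabeled vertices relax to the midpoint of the max and min of their
    neighbors' values, which is the unweighted infinity-harmonic condition.
    """
    u = {v: labels.get(v, 0.0) for v in neighbors}
    unlabeled = [v for v in neighbors if v not in labels]
    for _ in range(max_iter):
        err, new = 0.0, dict(u)
        for v in unlabeled:
            vals = [u[w] for w in neighbors[v]]
            new[v] = 0.5 * (max(vals) + min(vals))
            err = max(err, abs(new[v] - u[v]))
        u = new
        if err < tol:
            break
    return u

# Path graph 0-1-2-3 labeled at the endpoints: values interpolate linearly.
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(lipschitz_learning(nbrs, {0: 0.0, 3: 1.0}))  # u(1) ≈ 1/3, u(2) ≈ 2/3
```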

Consistency of Lipschitz learning with infinite unlabeled data and finite labeled data [article]

Jeff Calder
2019 arXiv   pre-print
We study the consistency of Lipschitz learning on graphs in the limit of infinite unlabeled data and finite labeled data.  ...  Then we go on to show that on a random geometric graph with self-tuning weights, Lipschitz learning is in fact highly sensitive to the distribution of the unlabeled data, and we show how the degree of  ...  For α ∈ R, let u_α denote the solution of the continuum ∞-Laplace equation (25), which represents the continuum limit of Lipschitz learning with self-tuning weights.  ... 
arXiv:1710.10364v3 fatcat:erlnxamfdfhgrihqypxsg7wnui
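
For orientation, the graph and continuum ∞-Laplace operators that appear in this line of work are usually written as follows; normalizations vary from paper to paper, and the equation (25) cited in the snippet is the paper's own specific form.

```latex
% Graph infinity-Laplacian at a vertex x (one common normalization):
\Delta^\infty_w u(x) \;=\; \max_{y}\, w_{xy}\bigl(u(y)-u(x)\bigr)
                     \;+\; \min_{y}\, w_{xy}\bigl(u(y)-u(x)\bigr),
% and the continuum infinity-Laplace operator arising in the limit:
\Delta_\infty u \;=\; \langle \nabla u,\, D^2 u\, \nabla u\rangle
               \;=\; \sum_{i,j=1}^{d} u_{x_i}\, u_{x_j}\, u_{x_i x_j}.
```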

A continuum limit for the PageRank algorithm [article]

Amber Yuan, Jeff Calder, Braxton Osting
2021 arXiv   pre-print
In this paper, we propose a new framework for rigorously studying continuum limits of learning algorithms on directed graphs.  ...  Semi-supervised and unsupervised machine learning methods often rely on graphs to model data, prompting research on how theoretical properties of operators on graphs are leveraged in learning problems.  ...  Calder were partially supported by NSF-DMS grants 1713691 and 1944925, and a University of Minnesota Grant in Aid award. J. Calder was also partially supported by the Alfred P. Sloan Foundation. B.  ... 
arXiv:2001.08973v3 fatcat:g5rlh2pxkfgohoaeie7nz77e7e
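
The discrete object whose continuum limit is analyzed here is the PageRank vector, i.e., the stationary distribution of a random walk with teleportation. A minimal power-iteration sketch; the damping factor 0.85 and the dense-matrix setup are illustrative choices, not the paper's construction.

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-10):
    """Power iteration for PageRank on a directed graph.

    A[i, j] = 1 if there is an edge i -> j. Rows are normalized to give the
    transition matrix of the random walk; alpha is the damping factor.
    """
    n = A.shape[0]
    out_deg = A.sum(axis=1, keepdims=True)
    P = A / np.where(out_deg > 0, out_deg, 1)      # row-stochastic transitions
    x = np.full(n, 1.0 / n)                        # uniform start
    while True:
        x_new = alpha * (x @ P) + (1 - alpha) / n  # walk step + teleportation
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(pagerank(A))
```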

Hamilton-Jacobi equations on graphs with applications to semi-supervised learning and data depth [article]

Jeff Calder, Mahmood Ettehad
2022 arXiv   pre-print
While the p-eikonal equation does not correspond to a shortest-path graph distance, we nonetheless show that the continuum limit of the p-eikonal equation on a random geometric graph recovers a geodesic  ...  We consider applications of the p-eikonal equation to data depth and semi-supervised learning, and use the continuum limit to prove asymptotic consistency results for both applications.  ...  Acknowledgments The authors thank the Institute for Mathematics and its Applications (IMA), where part of this work was conducted. JC acknowledges funding from NSF grant DMS:1944925, the Alfred P.  ... 
arXiv:2202.08789v2 fatcat:5llfa4m2hvai5ngq45pvupdou4
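
As a rough guide to the equation class, a p-eikonal equation on a weighted graph has the shape below, with w_xy the edge weights, f a given right-hand side, and (t)_+ = max(t, 0); the precise normalization and sign conventions should be taken from the paper itself.

```latex
\sum_{y} w_{xy}\,\bigl(u(x) - u(y)\bigr)_{+}^{\,p} \;=\; f(x)
\quad \text{for } x \notin \Gamma,
\qquad u = 0 \ \text{on the source set } \Gamma.
```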

A continuum limit for the PageRank algorithm

A. YUAN, J. CALDER, B. OSTING
2021 European Journal of Applied Mathematics  
In this paper, we propose a new framework for rigorously studying continuum limits of learning algorithms on directed graphs.  ...  Semi-supervised and unsupervised machine learning methods often rely on graphs to model data, prompting research on how theoretical properties of operators on graphs are leveraged in learning problems.  ...  Calder were partially supported by NSF-DMS grants 1713691 and 1944925, and a University of Minnesota Grant in Aid award. J. Calder was also partially supported by the Alfred P. Sloan Foundation. B.  ... 
doi:10.1017/s0956792521000097 fatcat:seat64meszcwlmgokkl3yqur7a

Continuum Limits for Adaptive Network Dynamics [article]

Marios Antonios Gkogkas, Christian Kuehn, Chuang Xu
2021 arXiv   pre-print
For example, continuum limits for static or temporal networks are already established in the literature for certain models, yet the continuum limit of fully adaptive networks has been open so far.  ...  In this paper we introduce and rigorously justify continuum limits for sequences of adaptive Kuramoto-type network models.  ...  Acknowledgements: MAG and CK gratefully thank the TUM International Graduate School of Science and Engineering (IGSSE) for support via the project "Synchronization in Co-Evolutionary Network Dynamics (  ... 
arXiv:2109.05898v1 fatcat:4s5cj4znv5fvbc5yqx7ai5gzm4
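
An adaptive (co-evolutionary) Kuramoto-type network couples phase dynamics to evolving edge weights. The sketch below pairs the standard Kuramoto phase equation with a simple relaxation-type plasticity rule; the specific rule, the time-scale parameter eps, and the Euler stepping are illustrative assumptions, not the model class the paper analyzes.

```python
import numpy as np

def adaptive_kuramoto(omega, steps=10_000, dt=1e-3, eps=0.1, seed=0):
    """Euler simulation of phases theta_i coupled through adaptive weights.

    d(theta_i)/dt = omega_i + (1/N) * sum_j A_ij * sin(theta_j - theta_i)
    d(A_ij)/dt    = eps * (cos(theta_j - theta_i) - A_ij)   # illustrative rule
    """
    rng = np.random.default_rng(seed)
    n = len(omega)
    theta = rng.uniform(0, 2 * np.pi, n)
    A = np.ones((n, n))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]   # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + (A * np.sin(diff)).mean(axis=1))
        A = A + dt * eps * (np.cos(diff) - A)
    return theta, A

theta, A = adaptive_kuramoto(omega=np.array([1.0, 1.1, 0.9, 1.05]))
```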

On the Consistency of Graph-based Bayesian Learning and the Scalability of Sampling Algorithms [article]

Nicolas Garcia Trillos, Zachary Kaplan, Thabo Samakhoana, Daniel Sanz-Alonso
2020 arXiv   pre-print
We introduce new theory that gives appropriate scalings of graph parameters that provably lead to a well-defined limiting posterior as the size of the unlabeled data set grows.  ...  A popular approach to semi-supervised learning proceeds by endowing the input data with a graph structure in order to extract geometric information and incorporate it into a Bayesian framework.  ...  The learning problem in the discrete space M n is defined by means of a graph-based discretization of a continuum learning problem defined over functions on M.  ... 
arXiv:1710.07702v2 fatcat:oyucdllnrbdzbcdxzewdad7lge
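
Concretely, the Bayesian construction in this line of work places a Gaussian prior with graph-Laplacian precision on the latent function and conditions on the few observed labels. A toy dense-matrix sketch follows; the regularization tau, the noise level, and the pointwise observation operator are illustrative choices rather than the paper's scalings.

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def posterior_mean(W, labeled_idx, labels, tau=0.1, noise=0.01):
    """Posterior mean for a Gaussian graph prior u ~ N(0, (L + tau^2 I)^{-1})
    with Gaussian label noise at the observed vertices (toy model)."""
    n = W.shape[0]
    precision_prior = graph_laplacian(W) + tau**2 * np.eye(n)
    H = np.zeros((len(labeled_idx), n))            # observation operator
    H[np.arange(len(labeled_idx)), labeled_idx] = 1.0
    precision_post = precision_prior + H.T @ H / noise**2
    return np.linalg.solve(precision_post, H.T @ np.asarray(labels) / noise**2)
```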

Analysis and algorithms for ℓ_p-based semi-supervised learning on graphs [article]

Mauricio Flores, Jeff Calder, Gilad Lerman
2022 arXiv   pre-print
In particular, we find that Lipschitz learning (p=∞) performs well with very few labels on k-NN graphs, which experimentally validates our theoretical findings that Lipschitz learning retains information  ...  In the first part of the paper we prove new discrete to continuum convergence results for p-Laplace problems on k-nearest neighbor (k-NN) graphs, which are more commonly used in practice than random geometric  ...  It was proven in [10] that Lipschitz learning is well-posed with arbitrarily few labels, and the continuum limit on random geometric graphs is the continuum ∞-Laplace equation (see (1.11) ).  ... 
arXiv:1901.05031v4 fatcat:f7oko2pqlrbnnhwahsjhudoldy
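
Since k-NN graphs are the graph model these results target, a minimal construction may help fix ideas; the union symmetrization and the Gaussian weight profile are common choices rather than the paper's specific setup.

```python
import numpy as np

def knn_graph(X, k, h=1.0):
    """Symmetric k-NN weight matrix on a point cloud X (n x d).

    Connects each point to its k nearest neighbors, symmetrizes by taking
    the union of edges, and weights edges with a Gaussian profile.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]         # skip the point itself
        W[i, nbrs] = np.exp(-(D[i, nbrs] / h) ** 2)
    return np.maximum(W, W.T)                    # union symmetrization

W = knn_graph(np.random.rand(100, 2), k=8)
```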

Continuum Limit of Total Variation on Point Clouds

Nicolás García Trillos, Dejan Slepčev
2015 Archive for Rational Mechanics and Analysis  
In particular, we study when the cut capacity, and more generally the total variation, on these graphs is a good approximation of the perimeter (total variation) in the continuum setting.  ...  Our goal is to develop mathematical tools needed to study the consistency, as the number of available data points increases, of graph-based machine learning algorithms for tasks such as clustering.  ...  The authors are grateful to Michel Talagrand for letting them know of the elegant proofs of matching results in [51] and generously sharing the relevant chapters of his upcoming book [52].  ... 
doi:10.1007/s00205-015-0929-z fatcat:245qymrxk5co5m4f2tl5vnf5km
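
The discrete functional in question is the graph total variation; on a 0/1 indicator vector it reduces to the cut capacity of the corresponding partition, the discrete analogue of perimeter. A one-function sketch, assuming a symmetric weight matrix W as in the k-NN construction above:

```python
import numpy as np

def graph_total_variation(W, u):
    """Graph TV: (1/2) * sum_{i,j} W_ij * |u_i - u_j|.

    For a 0/1 indicator vector u, this equals the cut capacity of the
    partition that u defines (the factor 1/2 counts each edge once).
    """
    return 0.5 * (W * np.abs(u[:, None] - u[None, :])).sum()
```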

Properly-weighted graph Laplacian for semi-supervised learning [article]

Jeff Calder, Dejan Slepcev
2019 arXiv   pre-print
The performance of traditional graph Laplacian methods for semi-supervised learning degrades substantially as the ratio of labeled to unlabeled data decreases, due to a degeneracy in the graph Laplacian  ...  We prove that our semi-supervised learning algorithm converges, in the infinite sample size limit, to the smooth solution of a continuum variational problem that attains the labeled values continuously  ...  We prove in Theorem 3.1 that the solutions of the graph-based learning problem (14), for the properly-weighted Laplacian, converge in the large sample size limit to the solution of a continuum variational  ... 
arXiv:1810.04351v2 fatcat:vjppffhzbfeelhzzluylfiyh5m
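
Schematically, the proposed fix re-weights the graph Dirichlet energy so that variation is penalized much more heavily near the labeled points, which prevents the solution from collapsing to spikes at the labels. The discrete problem has the shape below; the precise growth of the weight γ near the labels is specified in the paper and only indicated here.

```latex
\min_{u}\; \sum_{i,j} \gamma_{ij}\, w_{ij}\, (u_i - u_j)^2
\qquad \text{subject to } u_i = g_i \ \text{for labeled } i,
```

with γ_ij taken large when x_i or x_j lies close to a labeled point, e.g., growing like a negative power of the distance to the label set.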

The game theoretic p-Laplacian and semi-supervised learning with few labels [article]

Jeff Calder
2018 arXiv   pre-print
In particular, we show that the continuum limit of graph-based semi-supervised learning with the game theoretic p-Laplacian is a weighted version of the continuous p-Laplace equation.  ...  We study the game theoretic p-Laplacian for semi-supervised learning on graphs, and show that it is well-posed in the limit of finite labeled data and infinite unlabeled data.  ...  Acknowledgments The author gratefully acknowledges the support of NSF-DMS grant 1713691. The author is also grateful to the anonymous referees, whose suggestions have greatly improved the paper.  ... 
arXiv:1711.10144v4 fatcat:on25wu3qgnfgtniliwz7lcztle
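
The continuum operator behind this entry is the normalized ("game theoretic") p-Laplacian, which interpolates between the ordinary Laplacian and the ∞-Laplacian; up to normalization conventions, which vary across papers, it can be written:

```latex
\Delta_p^{N} u \;=\; \tfrac{1}{p}\,|\nabla u|^{2-p}\,
  \operatorname{div}\!\bigl(|\nabla u|^{p-2}\nabla u\bigr)
\;=\; \tfrac{1}{p}\,\Delta u \;+\; \tfrac{p-2}{p}\,\Delta_\infty^{N} u,
\qquad
\Delta_\infty^{N} u \;=\; \frac{\langle \nabla u,\, D^2 u\, \nabla u\rangle}{|\nabla u|^{2}}.
```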

Consistency of Dirichlet Partitions [article]

Braxton Osting, Todd Harry Reeb
2017 arXiv   pre-print
A discrete version of Dirichlet partitions has been posed on graphs with applications in data analysis.  ...  In this paper, we extend results of N. García Trillos and D. Slepčev to show that there exist solutions of the continuum problem arising as limits of solutions of a sequence of discrete problems.  ...  continuum functions on Ω to the graphs G_n.  ... 
arXiv:1708.05472v1 fatcat:64vfoqslvjb2zbnrod63vfnwie
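
For reference, the continuum Dirichlet partition problem minimizes the sum of first Dirichlet eigenvalues over partitions of the domain (admissible classes and regularity assumptions as in the paper):

```latex
\min_{\Omega_1,\dots,\Omega_k}\; \sum_{\ell=1}^{k} \lambda_1(\Omega_\ell),
\qquad \Omega_\ell \subset \Omega \ \text{pairwise disjoint},
```

where λ_1(Ω_ℓ) denotes the first eigenvalue of the Laplacian on Ω_ℓ with zero Dirichlet boundary conditions.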

Rates of Convergence for Laplacian Semi-Supervised Learning with Low Labeling Rates [article]

Jeff Calder, Dejan Slepčev, Matthew Thorpe
2020 arXiv   pre-print
We study graph-based Laplacian semi-supervised learning at low labeling rates. Laplacian learning uses harmonic extension on a graph to propagate labels.  ...  and consistent with a continuum Laplace equation.  ...  Laplace learning can also be formulated in terms of random walks on graphs.  ... 
arXiv:2006.02765v1 fatcat:gwu2vdbdmzdytmsprrc363krce
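
The harmonic extension mentioned in this entry amounts to a single linear solve: fix u at the labeled vertices and require the graph Laplacian of u to vanish everywhere else. A dense-matrix sketch for clarity; a real implementation would use sparse solvers.

```python
import numpy as np

def laplace_learning(W, labeled_idx, labels):
    """Harmonic extension (Laplace learning): solve L u = 0 at unlabeled
    vertices with u fixed to the given labels on labeled_idx."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
    labeled_idx = np.asarray(labeled_idx)
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    u = np.zeros(n)
    u[labeled_idx] = labels
    # Block solve: L_uu u_u = -L_ul u_l
    u[unlabeled] = np.linalg.solve(
        L[np.ix_(unlabeled, unlabeled)],
        -L[np.ix_(unlabeled, labeled_idx)] @ np.asarray(labels),
    )
    return u
```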

Nonlocal p-Laplacian Variational problems on graphs [article]

Yosra Hafiene, Jalal Fadili, Abderrahim Elmoataz
2019 arXiv   pre-print
In particular, we study convergence of the numerical solution of a discrete version of this nonlocal variational problem to the unique solution of the continuum one.  ...  When applied to variational problems on graphs, this error bound allows us to show the consistency of the discretized variational problem as the number of vertices goes to infinity.  ...  Continuing along the lines of [19], the work of [14] studies the consistency of Lipschitz semi-supervised learning (i.e., p → ∞) on graphs in the same asymptotic limit.  ... 
arXiv:1810.12817v3 fatcat:3irsqlscvbemrpirnbaecuphtm
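
The continuum problem in this entry is driven by a nonlocal p-Dirichlet energy of roughly the following shape, together with its natural graph discretization on n vertices (kernel K and normalizations as in the paper):

```latex
E_p(u) \;=\; \frac{1}{p} \int_{\Omega}\!\int_{\Omega}
  K(x,y)\,\bigl|u(y)-u(x)\bigr|^{p}\,\mathrm{d}y\,\mathrm{d}x,
\qquad
E_{p}^{n}(u) \;=\; \frac{1}{p\,n^{2}} \sum_{i,j=1}^{n}
  K(x_i,x_j)\,\bigl|u_j-u_i\bigr|^{p}.
```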

Variational limits of k-NN graph based functionals on data clouds [article]

Nicolas Garcia Trillos
2018 arXiv   pre-print
This paper studies the large sample asymptotics of data analysis procedures based on the optimization of functionals defined on k-NN graphs on point clouds.  ...  We rigorously show that provided the number of neighbors in the graph k := k_n scales with the number of points in the cloud as n ≫ k_n ≫ log(n), then with probability one, the solution to the graph cut optimization  ...  towards continuum ones.  ... 
arXiv:1607.00696v3 fatcat:jlpluziv7rgyhhk2jc42dii23y
Showing results 1 — 15 out of 721 results