223 Hits in 3.8 sec

Computing Bi-Lipschitz Outlier Embeddings into the Line [article]

Karine Chubarian, Anastasios Sidiropoulos
2020 arXiv   pre-print
The problem of computing a bi-Lipschitz embedding of a graphical metric into the line with minimum distortion has received a lot of attention.  ...  This is the first algorithmic result for outlier bi-Lipschitz embeddings. Prior to our work, comparable outlier embeddings were known only for the case of additive distortion.  ...  Computes an (O(c^6 k log^{5/2} n), O(c^{13}))-embedding of G into the line.  ...
arXiv:2002.10039v1 fatcat:rwtogvcj3jb2negtjqtu2hetbq
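The distortion objective this line of work minimizes can be stated concretely: for a map of a finite metric into the line, the bi-Lipschitz distortion is the worst-case expansion times the worst-case contraction over all pairs. A minimal sketch on a toy three-point path metric (illustrative only, not the paper's algorithm):

```python
import itertools

def distortion(dist, f):
    """Bi-Lipschitz distortion of the map u -> f[u] into the real line.

    `dist[u][v]` is the source metric; distortion is the product of the
    maximum expansion ratio and the maximum contraction ratio over all
    pairs of points.
    """
    pairs = list(itertools.combinations(f, 2))
    expansion = max(abs(f[u] - f[v]) / dist[u][v] for u, v in pairs)
    contraction = max(dist[u][v] / abs(f[u] - f[v]) for u, v in pairs)
    return expansion * contraction

# Path graph 0-1-2 with its shortest-path metric, embedded isometrically.
dist = {0: {1: 1, 2: 2}, 1: {0: 1, 2: 1}, 2: {0: 2, 1: 1}}
coords = {0: 0.0, 1: 1.0, 2: 2.0}
print(distortion(dist, coords))  # 1.0 (an isometry has distortion 1)
```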

Computing Bi-Lipschitz Outlier Embeddings into the Line

Karine Chubarian, Anastasios Sidiropoulos
2020 International Workshop on Approximation Algorithms for Combinatorial Optimization  
The problem of computing a bi-Lipschitz embedding of a graphical metric into the line with minimum distortion has received a lot of attention.  ...  This is the first algorithmic result for outlier bi-Lipschitz embeddings. Prior to our work, comparable outlier embeddings were known only for the case of additive distortion.  ...  The algorithm proceeds in the following steps. Step 1: Density reduction.  ...
doi:10.4230/lipics.approx/random.2020.36 dblp:conf/approx/ChubarianS20 fatcat:q4ylp465djfjle7mxdv25osb2a

Deep Manifold Transformation for Nonlinear Dimensionality Reduction [article]

Stan Z. Li, Zelin Zang, Lirong Wu
2021 arXiv   pre-print
The LGP constraints constitute the loss for deep manifold learning and serve as geometric regularizers for NLDR network training.  ...  In this paper, we propose a deep manifold learning framework, called deep manifold transformation (DMT) for unsupervised NLDR and embedding learning.  ...  Any such K is referred to as a locally bi-Lipschitz constant for the function Φ. The smallest constant K is called the (optimal) locally bi-Lipschitz constant.  ... 
arXiv:2010.14831v3 fatcat:4j5dvqq5gfhajf4ndopgwhwk7i
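The locally bi-Lipschitz constant this entry defines can be estimated empirically: for all sampled pairs within a neighborhood radius, K is the smallest constant with 1/K ≤ ‖Φ(x)−Φ(y)‖/‖x−y‖ ≤ K. A rough sketch under that definition (the radius and test map are illustrative assumptions):

```python
import numpy as np

def local_bilipschitz_estimate(phi, X, radius):
    """Empirical locally bi-Lipschitz constant of `phi` on sample X.

    For every pair of points closer than `radius`, compare the ratio
    ||phi(x) - phi(y)|| / ||x - y||; the estimate is the smallest K
    bounding all such local ratios between 1/K and K.
    """
    K = 1.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            dx = np.linalg.norm(X[i] - X[j])
            if 0 < dx <= radius:
                r = np.linalg.norm(phi(X[i]) - phi(X[j])) / dx
                K = max(K, r, 1.0 / r)
    return K

# A uniform scaling by 2 has locally bi-Lipschitz constant 2.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(local_bilipschitz_estimate(lambda x: 2.0 * x, X, radius=1.2))  # 2.0
```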

Diffusion Nets [article]

Gal Mishne, Uri Shaham, Alexander Cloninger, Israel Cohen
2015 arXiv   pre-print
the embedded data back to the high-dimensional space.  ...  Also, our approach is efficient in both computational complexity and memory requirements, as opposed to previous methods that require storage of all training points in both the high-dimensional and the  ...  The authors would like to thank Ronald Coifman, Ronen Talmon and Roy Lederman for helpful discussions and suggestions.  ... 
arXiv:1506.07840v1 fatcat:lugeo4k4fzgp3edmxvbvcy7tuu

Vehicle identification between non-overlapping cameras without direct feature matching

Ying Shan, H.S. Sawhney, R. Kumar
2005 Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1  
A pair of the embeddings representing two vehicles across two cameras are then used to compute the same-different probability.  ...  The embedding is computed as a vector each of whose components is a non-metric distance for a vehicle to an exemplar.  ...  We exploit the content in the edge maps by computing a robust distance measure that takes into account information in both the inlier and outlier pixels.  ... 
doi:10.1109/iccv.2005.247 dblp:conf/iccv/ShanSK05 fatcat:4e2uxmq7wfagfkf4k56dybmtam

MM for Penalized Estimation [article]

Zhu Wang
2020 arXiv   pre-print
The majorization-minimization (MM) algorithm is a computational scheme valued for its stability and simplicity, and it has been widely applied in penalized estimation.  ...  When data are contaminated with outliers, robust loss functions can generate more reliable estimates.  ...  B = BI, where I is the identity matrix.  ...
arXiv:1912.11119v2 fatcat:kmvki5dxizgddchhfizmmmmygm
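The MM idea the entry describes is easy to sketch for a robust loss: majorize the absolute loss Σ|r_i| at the current residuals by the quadratic Σ r_i²/(2|r_i^t|) + const, so each MM update is a weighted least-squares solve. A minimal IRLS-style sketch (a generic illustration of MM for a robust loss, not the paper's specific penalized estimators):

```python
import numpy as np

def lad_mm(X, y, iters=50, eps=1e-8):
    """Least-absolute-deviations regression via majorization-minimization.

    At each step |r| is majorized at the current residual r_t by the
    quadratic r^2 / (2|r_t|) + |r_t|/2, so the MM update is a weighted
    least-squares solve with weights 1/|r_t| (clamped by eps).
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta

# Line y = 2x with one gross outlier; the LAD fit is barely affected.
x = np.arange(10.0)
X = np.column_stack([x, np.ones(10)])
y = 2.0 * x
y[9] += 100.0  # outlier
beta = lad_mm(X, y)  # slope ~ 2, intercept ~ 0
```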

Robust Voronoi-based curvature and feature estimation

Quentin Mérigot, Maks Ovsjanikov, Leonidas Guibas
2009 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling on - SPM '09  
Features (in red) computed by our algorithm from a point cloud sampling of the surface in yellow.  ...  edges of a piecewise smooth surface, with the error bounded by the Hausdorff distance between the point cloud and the underlying surface.  ...  Part of this work was done during a visit of the first author at Stanford University, and of the second author at INRIA.  ... 
doi:10.1145/1629255.1629257 dblp:conf/sma/MerigotOG09 fatcat:ve2xe4e2wrcqhc2sd4ygzsn4ye

A Nested Bi-level Optimization Framework for Robust Few Shot Learning [article]

Krishnateja Killamsetty, Changbin Li, Chen Zhao, Rishabh Iyer, Feng Chen
2021 arXiv   pre-print
We consider weights as hyper-parameters and iteratively optimize them using a small set of validation tasks set in a nested bi-level optimization approach (in contrast to the standard bi-level optimization  ...  Extensive experiments on synthetic and real-world datasets demonstrate that NestedMAML efficiently mitigates the effects of "unwanted" tasks or instances, leading to significant improvement over the state-of-the-art  ...  In addition to in-distribution data (i.e. data points sampled from sine waves), outliers or data points out of sine distributions (i.e. OOD) are added into meta-training stage.  ... 
arXiv:2011.06782v2 fatcat:7gxsfxgaf5blfnmbpwfve7j6cq

On Incremental Structure-from-Motion using Lines [article]

André Mateus, Omar Tahri, A. Pedro Aguiar, Pedro U. Lima, Pedro Miraldo
2021 arXiv   pre-print
From the intersection of planar surfaces arise straight lines. Lines have more degrees-of-freedom than points.  ...  Thus, line-based Structure-from-Motion (SfM) provides more information about the environment. In this paper, we present solutions for SfM using lines, namely, incremental SfM.  ...  ACKNOWLEDGEMENTS We would like to thank the editor and the reviewers for the time devoted to reviewing our paper and the comments provided.  ... 
arXiv:2105.11196v1 fatcat:hxwkeginzfekpd2ryehwbcuria

Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [article]

Zhiyong Yang, Qianqian Xu, Xiaochun Cao, Qingming Huang
2020 arXiv   pre-print
Though a substantial number of studies have been carried out against negative transfer, most of the existing methods only model the transfer relationship as task correlations, with the transfer across  ...  As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.  ...  To separate all the tasks and features into k groups, we expect to cut G BI into k connected components.  ...
arXiv:2004.13930v1 fatcat:govvehitkbh6ddlovy36dlpvoy

Minimum Spectral Connectivity Projection Pursuit [article]

David P. Hofmeyr and Nicos G. Pavlidis and Idris A. Eckley
2017 arXiv   pre-print
The computational cost associated with each eigen-problem is quadratic in the number of data points.  ...  We show that the optimal univariate projection based on spectral connectivity converges to the vector normal to the maximum margin hyperplane through the data, as the scaling parameter is reduced to zero.  ...  We perform a rigorous investigation into the continuity and differentiability properties of eigenvalues of graph Laplacians as functions of the projection, and find that they are Lipschitz continuous.  ...
arXiv:1509.01546v3 fatcat:7gj67sqn55hbzpztcfrcfgwq4a

Spectral clustering based on local linear approximations

Ery Arias-Castro, Guangliang Chen, Gilad Lerman
2011 Electronic Journal of Statistics  
In the context of clustering, we assume a generative model where each cluster is the result of sampling points in the neighborhood of an embedded smooth surface; the sample may be contaminated with outliers  ...  We obtain theoretical guarantees for this algorithm and show that, in terms of both separation and robustness to outliers, it outperforms the standard spectral clustering algorithm (based on pairwise distances  ...  Acknowledgements GC was at the University of Minnesota, Twin Cities, for part of the project.  ... 
doi:10.1214/11-ejs651 fatcat:ndvx3opyejhobkpvfvv56qcbsq
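The baseline this entry compares against, standard spectral clustering based on pairwise distances, can be sketched in a few lines: build a Gaussian affinity, form the normalized Laplacian, and split by the sign of the second eigenvector. A minimal two-cluster sketch (bandwidth and toy data are illustrative assumptions):

```python
import numpy as np

def spectral_clustering_2way(X, sigma=1.0):
    """Standard 2-way spectral clustering from pairwise distances.

    Builds a Gaussian affinity W, forms the normalized Laplacian
    L = I - D^{-1/2} W D^{-1/2}, and splits points by the sign of the
    eigenvector for the second-smallest eigenvalue (Fiedler vector).
    """
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2.0 * sigma**2))
    np.fill_diagonal(W, 0.0)
    dinv = 1.0 / np.sqrt(W.sum(1))
    L = np.eye(len(X)) - dinv[:, None] * W * dinv[None, :]
    _, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)

# Two well-separated blobs are recovered exactly.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = spectral_clustering_2way(X, sigma=1.0)
```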

Comparison of Maximum Likelihood and GAN-based training of Real NVPs [article]

Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra, Peter Dayan
2017 arXiv   pre-print
We show that an independent critic trained to approximate Wasserstein distance between the validation set and the generator distribution helps detect overfitting.  ...  Finally, we use ideas from the one-shot learning literature to develop a novel fast learning critic.  ...  Log-probability Density Ratio Evaluation Real NVPs are not limited to the computation of negative log-probability densities of visible variables.  ... 
arXiv:1705.05263v1 fatcat:tjxqumbrybenhgwzknqqqw3f3a
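The snippet's point, that Real NVPs can compute exact (negative) log-probability densities, follows from the change-of-variables formula applied to a triangular-Jacobian coupling layer. A minimal NumPy sketch, with fixed linear maps standing in for the learned scale and shift networks (an illustrative assumption, not the paper's model):

```python
import numpy as np

def coupling_forward(x, s, t):
    """One affine coupling layer: the first half x1 passes through
    unchanged, the second half is scaled and shifted by functions of
    x1.  The Jacobian is triangular, so log|det J| = sum(s(x1))."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    z2 = x2 * np.exp(s(x1)) + t(x1)
    return np.concatenate([x1, z2], axis=-1), s(x1).sum(axis=-1)

def log_prob(x, s, t):
    """Exact log-density via change of variables, standard-normal base."""
    z, logdet = coupling_forward(x, s, t)
    base = -0.5 * (z ** 2).sum(axis=-1) - 0.5 * x.shape[-1] * np.log(2 * np.pi)
    return base + logdet

s = lambda x1: 0.5 * x1   # stand-in for the learned scale network
t = lambda x1: -x1        # stand-in for the learned shift network
x = np.array([0.3, -1.2, 0.7, 0.1])
nll = -log_prob(x, s, t)  # negative log-probability density of x
```

Because the layer is invertible in closed form (x2 = (z2 − t(z1))·exp(−s(z1))), the same construction supports both sampling and exact density evaluation.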

Explicit Gradient Learning [article]

Mor Sinay, Elad Sarafian, Yoram Louzoun, Noa Agmon, Sarit Kraus
2020 arXiv   pre-print
Instead of fitting the function, EGL trains a NN to estimate the objective gradient directly.  ...  We derive EGL by finding weak-spots in methods that fit the objective function with a parametric Neural Network (NN) model and obtain the gradient signal by calculating the parametric gradient.  ...  Nevertheless, motivated by the ability to expand the input dimension into an arbitrary large number, we designed an ordinal variable embedding that is Lipschitz continuous s.t. for two relatively close  ... 
arXiv:2006.08711v1 fatcat:jeshon57rbdxzb4rdiacyzruue

Robust Regression via Model Based Methods [article]

Armin Moharrer, Khashayar Kamran, Edmund Yeh, Stratis Ioannidis
2021 arXiv   pre-print
Despite computational advantages due to its differentiability, it is not robust to outliers.  ...  Finally, we demonstrate experimentally (a) the robustness of l_p norms to outliers and (b) the efficiency of our proposed model-based algorithms in comparison with gradient methods on autoencoders and  ...  The authors gratefully acknowledge support from the National Science Foundation (Grants CCF-1750539, IIS-1741197, and CNS-1717213), DARPA (Grant HR0011-17-C-0050), and a research grant from American Tower  ... 
arXiv:2106.10759v4 fatcat:x5axggcevjbonbb46es26zjrni