
Variance reduction in large graph sampling

Jianguo Lu, Hao Wang
2014 Information Processing & Management  
Many graphs are large and scale-free, inducing large degree variance and estimator variance.  ...  The norm of practice in estimating graph properties is to use uniform random node (RN) samples whenever possible.  ...  Variance Reduction Using RE Sampling: The performance of estimators can be evaluated in terms of bias and variance.  ... 
doi:10.1016/j.ipm.2014.02.003 fatcat:4jvupneuinekld2u2wr67rerzq
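
To make the snippet's RN-versus-RE contrast concrete, here is a minimal sketch (not the paper's code) that estimates the average degree of a synthetic scale-free graph from uniform random node samples and from random edge samples; the harmonic-mean correction used for the degree-biased RE samples follows from the identity E[1/d] = n/(2m) under degree-proportional sampling.

```python
# Sketch: uniform random node (RN) vs random edge (RE) sampling for
# estimating the average degree of a scale-free graph. An illustration
# of how degree variance feeds into estimator variance, not the paper's code.
import random
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(10_000, 2, seed=0)   # synthetic scale-free graph
nodes = list(G.nodes)
edges = list(G.edges)
true_avg = 2 * G.number_of_edges() / G.number_of_nodes()

def rn_estimate(k):
    """Mean degree of k uniformly sampled nodes (unbiased)."""
    return np.mean([G.degree(v) for v in random.choices(nodes, k=k)])

def re_estimate(k):
    """RE sampling picks nodes proportionally to degree; the harmonic
    mean corrects the bias, since E[1/d] = n/(2m) = 1/avg_degree."""
    degs = [G.degree(random.choice(random.choice(edges))) for _ in range(k)]
    return 1.0 / np.mean([1.0 / d for d in degs])

rn = [rn_estimate(500) for _ in range(200)]
re = [re_estimate(500) for _ in range(200)]
print(f"true {true_avg:.3f}  RN var {np.var(rn):.4f}  RE var {np.var(re):.4f}")
```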

Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks [article]

Weilin Cong, Rana Forsati, Mahmut Kandemir, Mehrdad Mahdavi
2021 arXiv   pre-print
The high variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization.  ...  Sampling methods (e.g., node-wise, layer-wise, or subgraph) have become an indispensable strategy to speed up the training of large-scale Graph Neural Networks (GNNs).  ...  Despite the potential of GNNs, training GNNs on large-scale graphs remains a big challenge, mainly due to the inter-dependency of nodes in a graph.  ... 
arXiv:2006.13866v2 fatcat:em7gsyj23vcrbfgkt4dzwo45fa
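
The high-variance issue the abstract refers to can be illustrated with a toy stand-in (not the paper's minimal-variance sampler): estimating a node's full mean aggregation over heavy-tailed neighbor features from k sampled neighbors is unbiased but noisy, and the noise shrinks only slowly with k.

```python
# Sketch of the variance introduced by node-wise neighbor sampling in GNNs.
# Hypothetical setup, not the paper's algorithm: a node with many neighbors
# whose features are heavy-tailed; we estimate the full mean aggregation
# from k sampled neighbors.
import numpy as np

rng = np.random.default_rng(0)
neighbor_feats = rng.pareto(2.0, size=1000)      # heavy-tailed 1-D "features"
full_agg = neighbor_feats.mean()                 # exact mean aggregation

for k in (5, 20, 100):
    est = [rng.choice(neighbor_feats, size=k, replace=False).mean()
           for _ in range(2000)]
    print(f"k={k:3d}  bias={np.mean(est)-full_agg:+.4f}  var={np.var(est):.4f}")
```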

Graph reductions to speed up importance sampling-based static reliability estimation

Pierre L'Ecuyer, Samira Saggadi, Bruno Tuffin
2011 Proceedings of the 2011 Winter Simulation Conference (WSC)  
We speed up the Monte Carlo simulation of static graph reliability models by adding graph reductions to zero-variance importance sampling (ZVIS) approximation techniques.  ...  We illustrate theoretically on small examples and numerically on large ones the gains that can be obtained, both in terms of variance and computational time.  ...  Those reductions, applied at each step, lead in most cases to variance reductions, but also to reductions in computational time, even though applying them takes some work, because fewer links need to be sampled.  ... 
doi:10.1109/wsc.2011.6147770 dblp:conf/wsc/LEcuyerST11 fatcat:t4prevryvnaijfvjka46ojc4fq
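
For intuition, a bare-bones importance-sampling sketch for static s-t unreliability on a toy five-link network: the failure probability is inflated during simulation and corrected by a likelihood ratio. The paper's zero-variance approximation and graph reductions are refinements layered on top of this basic mechanism; the network and parameters below are illustrative assumptions.

```python
# Sketch: importance sampling for static s-t unreliability, i.e. the
# probability that s and t are disconnected when links fail independently.
import random

EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # a small bridge network
S, T, Q = 0, 3, 0.01                               # each link fails w.p. Q

def disconnected(failed):
    """Depth-first search on surviving links: is T unreachable from S?"""
    adj = {v: [] for e in EDGES for v in e}
    for i, (u, v) in enumerate(EDGES):
        if i not in failed:
            adj[u].append(v); adj[v].append(u)
    seen, stack = {S}, [S]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return T not in seen

def estimate(n, q_sim):
    """Sample failures at rate q_sim, reweight to the true rate Q."""
    total = 0.0
    for _ in range(n):
        failed = {i for i in range(len(EDGES)) if random.random() < q_sim}
        w = ((Q / q_sim) ** len(failed)
             * ((1 - Q) / (1 - q_sim)) ** (len(EDGES) - len(failed)))
        total += w * disconnected(failed)
    return total / n

random.seed(0)
print("crude MC:", estimate(100_000, Q))     # q_sim = Q gives weight 1
print("IS      :", estimate(100_000, 0.3))   # inflated failures, reweighted
```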

Variance reduction in stochastic methods for large-scale regularised least-squares problems [article]

Yusuf Pilavcı
2021 arXiv   pre-print
We apply this technique to Tikhonov regularization on graphs, where the reduction in variance is found to be substantial at very small extra cost.  ...  In particular, they can be used to perform Tikhonov regularization on graphs using random spanning forests, a specific DPP.  ...  We show below that setting the step size α correctly guarantees a reduction in variance, and give practical solutions for finding an appropriate step size in a graph signal processing setting.  ... 
arXiv:2110.07894v1 fatcat:bfclsyi4dvfhfkchyz2ouk2bly
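
The snippet's point about the step size α matches the standard control-variate picture: for Z = X − α(Y − E[Y]), variance is minimized at α* = Cov(X, Y)/Var(Y), and any α in (0, 2α*) reduces it. A generic numpy sketch, not the paper's spanning-forest estimator:

```python
# Generic control-variate sketch for the snippet's step-size idea: estimate
# E[X] using a correlated Y with known mean, via Z = X - alpha * (Y - E[Y]).
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=100_000)
x = np.exp(y)                       # target: E[X] = e^{1/2}
ey = 0.0                            # E[Y] is known

alpha_star = np.cov(x, y)[0, 1] / np.var(y)   # variance-optimal step size
z = x - alpha_star * (y - ey)
print(f"Var(X)={x.var():.3f}  Var(Z)={z.var():.3f}  alpha*={alpha_star:.3f}")
```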

The effect of sub-sampling in scale space texture classification using combined classifiers

M. J. Gangeh, B. M. ter Haar Romeny, C. Eswaran
2007 2007 International Conference on Intelligent and Advanced Systems  
The top graph shows the cumulative fraction of variance (eigenvalues) with respect to the number of components without sub-sampling, and the bottom graph shows the same with sub-sampling.  ...  We have discussed in [4] that Principal Component Analysis (PCA) performs an adaptive feature reduction in different scales, in the sense that as we go to higher scales more reduction is performed.  ...  The cumulative fraction of variance (eigenvalues) with respect to the number of components in scale 2 of the zeroth-order derivative before sub-sampling (top graph) and after sub-sampling (bottom graph  ... 
doi:10.1109/icias.2007.4658498 fatcat:luzcir6tzfa6biv3tah4i77p3a
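
The cumulative-fraction-of-variance curve described here is straightforward to reproduce; a short sketch using scikit-learn's PCA on placeholder data (the paper's scale-space texture features are not reproduced):

```python
# Sketch of the cumulative-fraction-of-variance curve: fit PCA, accumulate
# the explained variance ratios, and read off how many components are needed.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64)) @ rng.normal(size=(64, 64))  # correlated features

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
k95 = int(np.searchsorted(cumvar, 0.95)) + 1   # first index reaching 95%
print(f"components for 95% of variance: {k95}")
```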

Variance reduction for quantile estimates in simulations via nonlinear controls

Richard L. Ressler, Peter A. W. Lewis
1990 Communications in statistics. Simulation and computation  
This graph also points out the high variability of the variance reduction for large n as the number of quantile estimates becomes small.  ...  The bottom graph in Figure 7 is a summary of the percent variance reduction achieved by the various estimators.  ... 
doi:10.1080/03610919008812905 fatcat:vrnh4mbvbrb3fhi6gfimlmfohm
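
The paper studies nonlinear controls; as a simplified stand-in, here is a linear control variate applied at the replication level: each replication yields a quantile estimate q_i and a control C_i with known mean, and the adjustment q_i − β·C_i shrinks the spread across replications. The lognormal target and the batch-mean control are illustrative assumptions.

```python
# Simplified linear-control sketch for quantile estimation (the paper's
# nonlinear controls are more elaborate; this shows the basic mechanism).
import numpy as np

rng = np.random.default_rng(0)
P, REPS, N = 0.9, 200, 500

q, c = np.empty(REPS), np.empty(REPS)
for i in range(REPS):
    z = rng.normal(size=N)
    q[i] = np.quantile(np.exp(z), P)   # estimate the 0.9-quantile of lognormal
    c[i] = z.mean()                    # control variate with known mean 0

beta = np.cov(q, c)[0, 1] / np.var(c)  # estimated control coefficient
q_adj = q - beta * c                   # per-replication adjusted estimates
print(f"raw var {q.var():.5f}  controlled var {q_adj.var():.5f}")
```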

Adaptive Sampling Towards Fast Graph Representation Learning [article]

Wenbing Huang and Tong Zhang and Yu Rong and Junzhou Huang
2018 arXiv   pre-print
More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method.  ...  The main challenge of adapting GCNs to large-scale graphs is scalability: they incur heavy costs in both computation and memory due to the uncontrollable neighborhood expansion across layers.  ...  The sampler for the layer-wise sampling is adaptive and determined by explicit variance reduction in the training phase.  ... 
arXiv:1809.05343v3 fatcat:bf7ygbcfazdxbakaxmcjphg7zy
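
A static stand-in for the layer-wise sampler (the paper learns the sampling distribution adaptively during training): neighbors are drawn with probability proportional to an importance score, and the aggregation is reweighted by 1/(n·p_i) to stay unbiased. The scores below are arbitrary placeholders.

```python
# Sketch of layer-wise importance sampling with unbiased reweighting.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
feats = rng.normal(size=n) * rng.pareto(2.0, size=n)   # neighbor "features"
scores = np.abs(feats) + 1e-3                          # stand-in importance
p = scores / scores.sum()                              # sampling distribution

full, k, est = feats.mean(), 50, []
for _ in range(2000):
    idx = rng.choice(n, size=k, p=p)
    est.append(np.mean(feats[idx] / (n * p[idx])))     # importance weights
print(f"full {full:.4f}  IS mean {np.mean(est):.4f}  IS var {np.var(est):.5f}")
```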

GCN meets GPU: Decoupling "When to Sample" from "How to Sample"

Morteza Ramezani, Weilin Cong, Mehrdad Mahdavi, Anand Sivasubramaniam, Mahmut T. Kandemir
2020 Neural Information Processing Systems  
Sampling-based methods promise scalability improvements when paired with stochastic gradient descent in training Graph Convolutional Networks (GCNs).  ...  We also give corroborating empirical evidence on large real-world graphs, demonstrating that the proposed schema can significantly reduce the number of sampling steps and yield superior speedup without  ...  In addition, the analysis of theoretical aspects of lazy sampling provides powerful tools for future studies of sampling-based GCNs.  ... 
dblp:conf/nips/RamezaniCMSK20 fatcat:hcmsiwgorbbsfn5x73jra7pb74
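
The decoupling can be pictured as re-drawing the sampled subgraph only every τ steps and reusing it in between; a skeleton loop with hypothetical sample_subgraph and train_step placeholders (the paper derives an adaptive schedule rather than the fixed τ shown here):

```python
# Skeleton of the "when to sample" / "how to sample" decoupling: the sampled
# subgraph is reused for tau steps instead of being redrawn every iteration.
import random

def sample_subgraph(graph, batch_size):
    return random.sample(graph, batch_size)      # stand-in "how to sample"

def train_step(subgraph):
    return sum(subgraph) / len(subgraph)         # stand-in gradient step

graph, tau, subgraph = list(range(10_000)), 10, None
for step in range(100):
    if step % tau == 0:                          # "when to sample"
        subgraph = sample_subgraph(graph, 256)   # periodic, GPU-friendly
    loss = train_step(subgraph)                  # cheap reuse in between
```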

Variance Reduction for Inverse Trace Estimation via Random Spanning Forests [article]

Yusuf Yigit Pilavci, Nicolas Tremblay
2022 arXiv   pre-print
In this work, we show two ways of improving the forest-based estimator via well-known variance reduction techniques, namely control variates and stratified sampling.  ...  Implementing these techniques is easy, and provides substantial variance reduction, yielding comparable or better performance relative to state-of-the-art algorithms.  ...  However, it provides more variance reduction than the previous option (see Prop. 1 and 2 in [14]). How to choose α.  ... 
arXiv:2206.07421v1 fatcat:mlzdsojo4zf4loyjwbio4hc7oa
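
Of the two techniques named, stratified sampling is the easier to show generically; a sketch (not the forest-based trace estimator) comparing plain Monte Carlo with proportional allocation over equal-width strata of [0, 1], which removes the between-strata component of the variance:

```python
# Generic stratified-sampling sketch: estimate E[f(U)] for U ~ Uniform(0,1)
# with the same total sample budget, drawing one sub-batch per stratum.
import numpy as np

rng = np.random.default_rng(0)
f = lambda u: np.exp(u)                 # true mean: e - 1
N, K = 1000, 20                         # total samples, number of strata

plain = [f(rng.uniform(size=N)).mean() for _ in range(2000)]
strat = []
for _ in range(2000):
    # stratum index for each sample, then a uniform draw within the stratum
    u = (np.repeat(np.arange(K), N // K) + rng.uniform(size=N)) / K
    strat.append(f(u).mean())
print(f"plain var {np.var(plain):.2e}  stratified var {np.var(strat):.2e}")
```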

Using stochastic computation graphs formalism for optimization of sequence-to-sequence model [article]

Eugene Golikov, Vlad Zhukov, Maksim Kretov
2017 arXiv   pre-print
Calculation of the loss function can be viewed in terms of stochastic computation graphs (SCG).  ...  Our work provides a unified view on different optimization approaches for sequence-to-sequence models and could help researchers in developing new network architectures with embedded stochastic nodes.  ...  If computing the gradient involves evaluating integrals numerically, the samples often exhibit large variance.  ... 
arXiv:1711.07724v2 fatcat:mpx2illu75hnrhzkoowth35d6a
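
The large sample variance mentioned here typically arises from score-function gradient estimators at stochastic nodes of the SCG; a toy sketch with a constant baseline as the standard variance-reduction fix (a Gaussian toy objective, not a sequence-to-sequence model):

```python
# Score-function (REINFORCE) gradient at a stochastic node, plus a baseline:
#   d/dmu E_{x~N(mu,1)}[f(x)] = E[f(x) * (x - mu)] = E[(f(x) - b) * (x - mu)]
# since the baseline b is uncorrelated with the score in expectation.
import numpy as np

rng = np.random.default_rng(0)
mu, f = 1.0, lambda x: (x - 3.0) ** 2
x = rng.normal(mu, 1.0, size=(2000, 200))    # 2000 replications of 200 draws

score = x - mu
g_naive = (f(x) * score).mean(axis=1)        # high-variance estimator
b = f(x).mean()                              # constant baseline
g_base = ((f(x) - b) * score).mean(axis=1)   # baselined estimator
print(f"naive var {g_naive.var():.3f}  baselined var {g_base.var():.3f}")
```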

Spectral Methods for Dimensionality Reduction [chapter]

Saul Lawrence K., Weinberger Kilian Q., Sha Fei, Ham Jihun, Lee Daniel D.
2006 Semi-Supervised Learning  
How can we search for low dimensional structure in high dimensional data?  ...  Spectral methods have recently emerged as a powerful tool for nonlinear dimensionality reduction and manifold learning.  ...  Most theoretical work has focused on the behavior of these methods in the limit n → ∞ of large sample size.  ... 
doi:10.7551/mitpress/9780262033589.003.0016 fatcat:7jiquy4jjjekdbriruxgzfe6lu
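
A compact sketch of one member of this family, Laplacian eigenmaps: build a k-nearest-neighbor graph, form the unnormalized graph Laplacian, and embed with its smallest nontrivial eigenvectors. The swiss-roll-style data below is an illustrative assumption.

```python
# Minimal Laplacian-eigenmaps sketch: kNN graph -> graph Laplacian -> embed
# with the eigenvectors of the smallest nontrivial eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 3 * np.pi, 400)
X = np.c_[t * np.cos(t), rng.uniform(0, 5, 400), t * np.sin(t)]  # swiss-roll-ish

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
k = 10
nn = np.argsort(D2, axis=1)[:, 1:k + 1]               # k nearest, skip self
W = np.zeros_like(D2)
W[np.repeat(np.arange(len(X)), k), nn.ravel()] = 1.0
W = np.maximum(W, W.T)                                # symmetrize adjacency

L = np.diag(W.sum(1)) - W                             # unnormalized Laplacian
vals, vecs = np.linalg.eigh(L)                        # ascending eigenvalues
Y = vecs[:, 1:3]                                      # 2-D embedding
print("embedding shape:", Y.shape)
```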

On the Importance of Sampling in Training GCNs: Tighter Analysis and Variance Reduction [article]

Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi
2021 arXiv   pre-print
In this paper, we describe and analyze a general doubly variance reduction schema that can accelerate any sampling method under the memory budget.  ...  We complement our theoretical results by integrating the proposed schema in different sampling methods and applying them to different large real-world graphs.  ...  In addition, we observe that the effect of variance reduction depends on its base sampling algorithms.  ... 
arXiv:2103.02696v2 fatcat:lhgrtisjxvbbhf7dkzt2ytkmwu
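
A one-layer sketch of the embedding-side half of such schemas, in the spirit of historical-activation control variates (a simplification; the paper's doubly variance reduction also treats the gradient side): correct a sampled aggregation of fresh values with the exact, cached aggregation of stale values.

```python
# Stale (historical) activations as a control variate for sampled aggregation:
#   est = mean_sampled(fresh - stale) + mean_all(stale)
# The correction term is exact and cheap because stale values are cached, and
# fresh - stale has much smaller variance than fresh itself.
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 25
fresh = rng.pareto(2.0, size=n)
stale = fresh + rng.normal(scale=0.1, size=n)   # cached, slightly outdated
target = fresh.mean()

plain, vr = [], []
for _ in range(3000):
    idx = rng.choice(n, size=k, replace=False)
    plain.append(fresh[idx].mean())
    vr.append((fresh[idx] - stale[idx]).mean() + stale.mean())
print(f"plain var {np.var(plain):.5f}  controlled var {np.var(vr):.5f}")
```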

Variance reduction with neighborhood smoothing for local intrinsic dimension estimation

Kevin M. Carter, Alfred O. Hero
2008 Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing  
We propose adding adaptive 'neighborhood smoothing', filtering over the generated dimension estimates to obtain the most probable estimate for each sample, as a method to reduce variance and increase algorithm  ...  Finally, we illustrate the benefits of neighborhood smoothing on synthetic data sets as well as towards diagnosing anomalies in router networks.  ...  ACKNOWLEDGEMENTS We would like to thank Bobby Li from the University of Michigan for isolating the source of the anomalies we discovered in the Abilene data.  ... 
doi:10.1109/icassp.2008.4518510 dblp:conf/icassp/CarterH08 fatcat:ypferreidrbo5lhkbmv6kyb5ru
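
The smoothing step itself is simple to sketch: replace each point's dimension estimate with the most frequent estimate among its k nearest neighbors. The per-point estimates below are synthetic stand-ins; the paper obtains them from a separate local dimension estimator.

```python
# Sketch of 'neighborhood smoothing': a mode filter over per-point intrinsic-
# dimension estimates, taken over each point's k nearest neighbors.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(size=(300, 2))
dim_est = np.where(pts[:, 0] < 0.5, 2, 3)            # ground truth: two regions
noisy = np.where(rng.uniform(size=300) < 0.25,
                 rng.integers(1, 6, 300), dim_est)   # 25% corrupted estimates

D2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
nn = np.argsort(D2, axis=1)[:, :10]                  # 10-NN, incl. the point
smoothed = np.array([np.bincount(row).argmax() for row in noisy[nn]])
print("errors before:", int((noisy != dim_est).sum()),
      " after:", int((smoothed != dim_est).sum()))
```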

Walk, not wait

Azade Nazi, Zhuojie Zhou, Saravanan Thirumuruganathan, Nan Zhang, Gautam Das
2015 Proceedings of the VLDB Endowment  
In this paper, we introduce a novel, general-purpose technique for faster sampling of nodes over an online social network.  ...  WALK-ESTIMATE starts with a much shorter random walk, then proactively estimates the sampling probability for the node taken, before using acceptance-rejection sampling to adjust the sampling  ...  Variance Reduction: Weighted Sampling. Our second idea for variance reduction is weighted sampling, i.e., instead of picking u uniformly at random from N(u) as stated above (for estimating pt(u) from pt−  ... 
doi:10.14778/2735703.2735707 fatcat:l4qgzngbdrbnbboj5zzmmanh7a
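
For context, the classical baseline that WALK-ESTIMATE aims to beat is the Metropolis-Hastings random walk, whose acceptance-rejection step with ratio deg(u)/deg(v) turns the degree-biased simple walk into a uniform node sampler; a sketch of that baseline, not the paper's own algorithm:

```python
# Metropolis-Hastings random walk over a graph: accept a move u -> v with
# probability min(1, deg(u)/deg(v)), yielding (asymptotically) uniform
# node samples despite only using local neighbor access.
import random
import networkx as nx

G = nx.barabasi_albert_graph(5000, 3, seed=0)
random.seed(0)

def mh_walk(start, steps):
    u = start
    for _ in range(steps):
        v = random.choice(list(G.neighbors(u)))
        if random.random() < G.degree(u) / G.degree(v):  # accept-reject
            u = v                                        # else stay at u
    return u

samples = [mh_walk(0, 200) for _ in range(500)]
avg_deg_sampled = sum(G.degree(v) for v in samples) / len(samples)
print(f"avg degree of MH samples {avg_deg_sampled:.2f} "
      f"vs graph avg {2 * G.number_of_edges() / G.number_of_nodes():.2f}")
```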

Sub graph sampling and case citation network: a case study

Nidha Khanam, Rupali Sunil Wagh
2018 International Journal of Engineering & Technology  
Subgraph sampling can effectively be employed on large network structures to reduce the size of the data while preserving the original properties of the network.  ...  Through this paper, the authors present a case study on the application of a subgraph sampling approach to obtain a reduced case citation network in the legal domain.  ...  The subgraph sampling technique acts as a data reduction technique for large networks, reducing the number of nodes and edges for better understanding.  ... 
doi:10.14419/ijet.v7i3.11820 fatcat:nquvlx3cdras7mwlq25hblhsom
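
A minimal sketch of induced-subgraph sampling as a size-reduction step, with a synthetic graph standing in for the case citation network: sample a fraction of the nodes, keep the edges among them, and compare basic properties.

```python
# Induced-subgraph sampling: keep a random 10% of nodes plus the edges among
# them, then compare size and average degree of the full and sampled graphs.
import random
import networkx as nx

G = nx.barabasi_albert_graph(20_000, 3, seed=0)   # stand-in citation network
random.seed(0)
kept = random.sample(list(G.nodes), k=G.number_of_nodes() // 10)
H = G.subgraph(kept)

for g, name in ((G, "full"), (H, "sampled")):
    degs = [d for _, d in g.degree()]
    print(f"{name:8s} nodes={g.number_of_nodes():6d} "
          f"edges={g.number_of_edges():6d} avg_deg={sum(degs)/len(degs):.2f}")
```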
Showing results 1 — 15 out of 201,062 results