91 Hits in 2.4 sec

Bandit Samplers for Training Graph Neural Networks [article]

Ziqi Liu, Zhengwei Wu, Zhiqiang Zhang, Jun Zhou, Shuang Yang, Le Song, Yuan Qi
<span title="2020-06-11">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs).  ...  However, due to the intractable computation of optimal sampling distribution, these sampling algorithms are suboptimal for GCNs and are not applicable to more general graph neural networks (GNNs) where  ...  GNN-BS: Graph Neural Networks with Bandit Sampler In this setting, we choose 1 arm and repeat k times.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.05806v2">arXiv:2006.05806v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/p2q4ewbgwrffja3jlnmz4y56uu">fatcat:p2q4ewbgwrffja3jlnmz4y56uu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200623155215/https://arxiv.org/pdf/2006.05806v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.05806v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Novel Automated Curriculum Strategy to Solve Hard Sokoban Planning Instances [article]

Dieqiao Feng, Carla P. Gomes, Bart Selman
<span title="2021-10-03">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In addition, we show that we can further boost the RL performance with an intricate coupling of our automated curriculum approach with a curiosity-driven search strategy and a graph neural net representation  ...  Nevertheless, other combinatorial domains, such as AI planning, still pose considerable challenges for RL approaches.  ...  Acknowledgements We thank the reviewers for valuable feedback.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.00898v1">arXiv:2110.00898v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/2737p3wcvfaxvcmti5kgxh4tk4">fatcat:2737p3wcvfaxvcmti5kgxh4tk4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211006045211/https://arxiv.org/pdf/2110.00898v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a7/be/a7be749d54ea91814367bb9853cc961f500b4cdc.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.00898v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Biased Graph Neural Network Sampler with Near-Optimal Regret [article]

Qingru Zhang, David Wipf, Quan Gan, Le Song
<span title="2021-11-14">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Graph neural networks (GNN) have recently emerged as a vehicle for applying deep network architectures to graph and relational data.  ...  And unlike prior bandit-GNN use cases, the resulting policy leads to near-optimal regret while accounting for the GNN training dynamics introduced by SGD.  ...  Acknowledgements We would like to thank Amazon Web Service for supporting the computational resources, Hanjun Dai for the extremely helpful discussion, and the anonymous reviewers for providing constructive  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.01089v3">arXiv:2103.01089v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6qfdpolqp5ag7bt7qvdm3kufty">fatcat:6qfdpolqp5ag7bt7qvdm3kufty</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211120100309/https://arxiv.org/pdf/2103.01089v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f7/4b/f74bbfee13764134f55b0ce44555388844724a7d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.01089v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Gaussian process decentralized data fusion meets transfer learning in large-scale distributed cooperative perception

Ruofei Ouyang, Bryan Kian Hsiang Low
<span title="2019-01-28">2019</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/jmjdfeig5rayzgfuauvumileza" style="color: black;">Autonomous Robots</a> </i> &nbsp;
Resource Limitations in a Noisy Environment (Matvey Soloviev*, Joseph Halpern); Information Directed Sampling for Stochastic Bandits with Graph Feedback (Fang Liu*, Swapna Buccapatnam, Ness Shroff); Information ...  ... Networks (Bo Luo*, Liu Yannan, Lingxiao Wei, Qiang Xu); Towards Perceptual Image Dehazing by Physics-based Disentanglement and Adversarial Training (Xitong Yang*, Zheng Xu, Jiebo Luo); Towards Training Probabilistic ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s10514-018-09826-z">doi:10.1007/s10514-018-09826-z</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/67yqhwmgozccxni56rxmuapjgm">fatcat:67yqhwmgozccxni56rxmuapjgm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200319033824/https://aaai.org/Conferences/AAAI-18/wp-content/uploads/2017/12/AAAI-18-Accepted-Paper-List.Web_.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/a3/9b/a39b2e42983c9863a5e6ebaad48c6bce360da7f3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s10514-018-09826-z"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Sequential Graph Convolutional Network for Active Learning

Razvan Caramalau, Binod Bhattarai, Tae-Kyun Kim
<span title="">2021</span> <i title="IEEE"> 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) </i> &nbsp;
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).  ...  We flip the label of newly queried nodes from unlabelled to labelled, re-train the learner to optimise the downstream task and the graph to minimise its modified objective.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr46437.2021.00946">doi:10.1109/cvpr46437.2021.00946</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/r65cu6zx4bgsphoz3k7knefnl4">fatcat:r65cu6zx4bgsphoz3k7knefnl4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220203135915/https://discovery.ucl.ac.uk/id/eprint/10142632/1/cvpr2021final.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7c/3c/7c3c7e65033ce871cef3b55da334ba08cef3b2b5.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvpr46437.2021.00946"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Learning to Plan in High Dimensions via Neural Exploration-Exploitation Trees [article]

Binghong Chen, Bo Dai, Qinjie Lin, Guo Ye, Han Liu, Le Song
<span title="2020-02-23">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose a meta path planning algorithm named Neural Exploration-Exploitation Trees (NEXT) for learning from prior experience for solving new path planning problems in high dimensional continuous state  ...  More specifically, NEXT exploits a novel neural architecture which can learn promising search directions from problem structures.  ...  Figure 12 and Figure 13 are neural architectures for the attention module, the policy/value network, and the planning module, respectively.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.00070v4">arXiv:1903.00070v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7eqk46oe4rg3blpayotbmzvm4q">fatcat:7eqk46oe4rg3blpayotbmzvm4q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321095544/https://arxiv.org/pdf/1903.00070v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.00070v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Graph Signal Sampling via Reinforcement Learning

Oleksii Abramenko, Alexander Jung
<span title="">2019</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/rc5jnc4ldvhs3dswicq5wk3vsq" style="color: black;">ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</a> </i> &nbsp;
Graph signal sampling is one of the major problems in graph signal processing and arises in a variety of practical applications, such as data compression, image denoising and social network analysis.  ...  Overall, the goal of the agent is to select signal samples which allow for the smallest graph signal recovery error.  ...  During the training phase, instead of using the most recent transitions, we randomly sample a minibatch of transitions from the replay memory and use them to fit the neural network.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icassp.2019.8683181">doi:10.1109/icassp.2019.8683181</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/icassp/AbramenkoJ19.html">dblp:conf/icassp/AbramenkoJ19</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/26mtlhxzfvazdpm4s4j6336cpy">fatcat:26mtlhxzfvazdpm4s4j6336cpy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200309100423/https://aaltodoc.aalto.fi/bitstream/handle/123456789/34750/master_Abramenko_Oleksii_2018.pdf;jsessionid=1BBD092E4E2810409D0CD0599DE27A1C?sequence=2" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/bc/82/bc82b088475eae4a5356c1b3fd8f646196727004.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icassp.2019.8683181"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Quantum Embedding Search for Quantum Machine Learning [article]

Nam Nguyen, Kwang-Cheng Chen
<span title="2021-11-08">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper introduces a novel quantum embedding search algorithm (QES, pronounced as "quest"), enabling search for optimal quantum embedding design for a specific dataset of interest.  ...  First, we establish the connection between the structures of quantum embedding and the representations of directed multi-graphs, enabling a well-defined search space.  ...  Since training a small network may result in varying classification performance, we train each neural network 100 times and report their mean accuracy.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.11853v2">arXiv:2105.11853v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/yx4cig5px5e6hedwwzdpugnmfq">fatcat:yx4cig5px5e6hedwwzdpugnmfq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211120121709/https://arxiv.org/pdf/2105.11853v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8f/66/8f66081d3333f7d09bbb4d3726f539d35d1cf2d9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2105.11853v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Quantum Embedding Search for Quantum Machine Learning

Nam Nguyen, Kwang-Cheng Chen
<span title="">2022</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
First, we establish the connection between the structures of entanglement using CNOT gates and the representations of directed multi-graphs, enabling a well-defined search space.  ...  Second, we instigate the entanglement level to reduce the cardinality of the search space to a feasible size for practical implementations.  ...  Since training a small network may result in varying classification performances, we prepare each neural network 100 times and report their accuracy.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2022.3167398">doi:10.1109/access.2022.3167398</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/fde64j6sl5cdrlhyinbo764tva">fatcat:fde64j6sl5cdrlhyinbo764tva</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220505093040/https://ieeexplore.ieee.org/ielx7/6287639/9668973/09757160.pdf?tp=&amp;arnumber=9757160&amp;isnumber=9668973&amp;ref=" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2b/50/2b500947d557d423a58d1fc43c22cbc1588abf81.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2022.3167398"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks [article]

Weilin Cong, Rana Forsati, Mahmut Kandemir, Mehrdad Mahdavi
<span title="2021-09-05">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Sampling (e.g., node-wise, layer-wise, or subgraph) has become an indispensable strategy to speed up training large-scale Graph Neural Networks (GNNs).  ...  The high variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization.  ...  Graph Neural Networks (GNNs) are powerful models for learning representations of nodes and have achieved great success in dealing with graph-related applications using data that contains rich  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.13866v2">arXiv:2006.13866v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/em7gsyj23vcrbfgkt4dzwo45fa">fatcat:em7gsyj23vcrbfgkt4dzwo45fa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210909000007/https://arxiv.org/pdf/2006.13866v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/57/ec/57ec9a67489e6722f544ab8d7d09ac2094be7b0d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.13866v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Do We Need Anisotropic Graph Neural Networks? [article]

Shyam A. Tailor, Felix L. Opolka, Pietro Liò, Nicholas D. Lane
<span title="2022-05-09">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Common wisdom in the graph neural network (GNN) community dictates that anisotropic models – in which messages sent between nodes are a function of both the source and target node – are required to achieve  ...  In addition to raising important questions for the GNN community, our work has significant real-world implications for efficiency.  ...  A convnet for the 2020s, 2022. Ziqi Liu, Zhengwei Wu, Zhiqiang Zhang, Jun Zhou, Shuang Yang, Le Song, and Yuan Qi. Bandit samplers for training graph neural networks, 2020.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.01481v5">arXiv:2104.01481v5</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hzfr4th5lndxzniw3j6ua3mrlu">fatcat:hzfr4th5lndxzniw3j6ua3mrlu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220512233055/https://arxiv.org/pdf/2104.01481v5.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/bd/76/bd762087df4efc42d6e01687bfec36f7d7c396b7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2104.01481v5" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Sequential Graph Convolutional Network for Active Learning [article]

Razvan Caramalau, Binod Bhattarai, Tae-Kyun Kim
<span title="2021-04-01">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose a novel pool-based Active Learning framework constructed on a sequential Graph Convolution Network (GCN).  ...  We flip the label of newly queried nodes from unlabelled to labelled, re-train the learner to optimise the downstream task and the graph to minimise its modified objective.  ...  We propose to implement the learner and the sampler by a standard Convolutional Neural Network (CNN) and a Graph Convolutional Network (GCN) (Kipf and Welling 2017; Bronstein et al. 2017), respectively  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.10219v3">arXiv:2006.10219v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4zsdntayx5dltgejbo6yrpt3py">fatcat:4zsdntayx5dltgejbo6yrpt3py</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200923001932/https://arxiv.org/pdf/2006.10219v2.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.10219v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

EPE-NAS: Efficient Performance Estimation Without Training for Neural Architecture Search [article]

Vasco Lopes, Saeid Alirezazadeh, Luís A. Alexandre
<span title="2021-02-16">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Neural Architecture Search (NAS) has shown excellent results in designing architectures for computer vision problems.  ...  We show that EPE-NAS can produce a robust correlation and that by incorporating it into a simple random sampling strategy, we are able to search for competitive networks without requiring any training  ...  ENAS [19] used a controller, trained with policy gradient, to discover architectures by searching for an optimal subgraph within a large computational graph.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.08099v1">arXiv:2102.08099v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bcfnjwk2ozauni6m3g6fhihy2u">fatcat:bcfnjwk2ozauni6m3g6fhihy2u</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210218121617/https://arxiv.org/pdf/2102.08099v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e0/d6/e0d65433cd6dc55fec7322e01d243466b68cb97f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.08099v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Cakewalk Sampling [article]

Uri Patish, Shimon Ullman
<span title="2020-01-01">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
As a first use case, we empirically study the efficiency with which sampling methods can recover locally maximal cliques in undirected graphs.  ...  Sampling methods, on the other hand, can access any valid solution, and thus can be used either directly or alongside methods of the former type as a way of finding good local optima.  ...  Adam, on the other hand, has proven effective for training neural networks in a wide variety of problems, and is nowadays probably the most commonly used gradient update.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1802.09030v2">arXiv:1802.09030v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/k3emg5fat5aflgqa6ds23qjsfy">fatcat:k3emg5fat5aflgqa6ds23qjsfy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200724012147/https://arxiv.org/pdf/1802.09030v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/af/39/af397f2122f44570b7b286f6e6633a8eda8582a2.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1802.09030v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Boosting Active Learning to Optimality: A Tractable Monte-Carlo, Billiard-Based Algorithm [chapter]

Philippe Rolet, Michèle Sebag, Olivier Teytaud
<span title="">2009</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
A tractable approximation of the optimal (intractable) policy is presented, the Bandit-based Active Learner (BAAL) algorithm.  ...  This paper focuses on Active Learning with a limited number of queries; in application domains such as Numerical Engineering, the size of the training set might be limited to a few dozen or hundred examples  ...  Acknowledgments This work was supported in part by the Région Ile de France and Digiteo, and the PASCAL network of excellence.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-04174-7_20">doi:10.1007/978-3-642-04174-7_20</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oed7lwrssbf2rcv5oue3hgvdle">fatcat:oed7lwrssbf2rcv5oue3hgvdle</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190505095817/https://link.springer.com/content/pdf/10.1007%2F978-3-642-04174-7_20.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ea/16/ea1620afe7a29539f2ca4c04ec6fb5a31377d24f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-04174-7_20"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>
Showing results 1-15 of 91