
Private and Byzantine-Proof Cooperative Decision-Making [article]

Abhimanyu Dubey, Alex Pentland
2022 arXiv   pre-print
The cooperative bandit problem is a multi-agent decision problem involving a group of agents that interact simultaneously with a multi-armed bandit, while communicating over a network with delays.  ...  We test our algorithms on a competitive benchmark of random graphs and demonstrate their superior performance with respect to existing robust algorithms.  ...  In this setting, a group of M agents that communicate over a (connected, undirected) network G = (V, E) face an identical stochastic K-armed bandit, and must cooperate to collectively minimize their  ... 
arXiv:2205.14174v1 fatcat:fed6mns4wvc7lbz2mf3szuweme
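The setting quoted above (M agents on a connected undirected graph, all facing the same K-armed bandit) can be illustrated with a minimal sketch: each agent runs plain UCB1 and naively averages its statistics with its neighbors once per round. This is a generic illustration of the cooperative-bandit setup only, not the private or Byzantine-proof protocol of the paper; the reward model (Gaussian noise) and the averaging rule are assumptions for the sketch.

```python
import math
import random

def cooperative_ucb(adj, means, horizon, seed=0):
    """Sketch: M UCB1 agents on an undirected graph (adjacency lists in
    `adj`) share a K-armed Gaussian bandit with the given arm `means`.
    After each round, every agent averages its counts/sums with its
    neighbors (a naive gossip step, for illustration only)."""
    rng = random.Random(seed)
    M, K = len(adj), len(means)
    counts = [[1e-9] * K for _ in range(M)]   # tiny init avoids div-by-zero
    sums = [[0.0] * K for _ in range(M)]
    for t in range(1, horizon + 1):
        for i in range(M):
            # UCB1 index: empirical mean + exploration bonus
            ucb = [sums[i][k] / counts[i][k]
                   + math.sqrt(2 * math.log(t + 1) / counts[i][k])
                   for k in range(K)]
            arm = max(range(K), key=lambda k: ucb[k])
            reward = means[arm] + rng.gauss(0, 1)
            counts[i][arm] += 1
            sums[i][arm] += reward
        # naive sharing: average statistics over the closed neighborhood
        counts = [[sum(counts[j][k] for j in [i] + adj[i]) / (1 + len(adj[i]))
                   for k in range(K)] for i in range(M)]
        sums = [[sum(sums[j][k] for j in [i] + adj[i]) / (1 + len(adj[i]))
                 for k in range(K)] for i in range(M)]
    # each agent reports its empirically best arm
    return [max(range(K), key=lambda k: sums[i][k] / counts[i][k])
            for i in range(M)]
```

On a 3-agent line graph, `cooperative_ucb([[1], [0, 2], [1]], [0.1, 0.9, 0.2], 300)` returns each agent's estimated best arm after 300 rounds of play and gossip.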

Cooperative Multi-Agent Bandits with Heavy Tails [article]

Abhimanyu Dubey, Alex Pentland
2020 arXiv   pre-print
We propose MP-UCB, a decentralized multi-agent algorithm for the cooperative stochastic bandit that incorporates robust estimation with a message-passing protocol.  ...  We study the heavy-tailed stochastic bandit problem in the cooperative multi-agent setting, where a group of agents interact with a common bandit problem, while communicating on a network with delays.  ...  We consider M agents communicating via a connected, undirected graph G = (V, E).  ... 
arXiv:2008.06244v1 fatcat:2cjp4v6hrnccbdjmfdrhkc2gru
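For heavy-tailed rewards, the usual empirical mean inside a UCB index is fragile; robust estimators are the standard fix. A common choice is the median-of-means estimator, sketched below as one generic example of robust estimation (not necessarily the exact estimator MP-UCB uses).

```python
from statistics import median

def median_of_means(samples, num_blocks):
    """Split `samples` into `num_blocks` equal blocks and return the median
    of the per-block means.  A single extreme outlier corrupts at most one
    block, so the median is barely moved: robust for heavy tails."""
    n = len(samples)
    block = max(1, n // num_blocks)
    means = [sum(samples[i:i + block]) / block
             for i in range(0, block * num_blocks, block)]
    return median(means)
```

With 99 samples equal to 1.0 and one outlier of 1000.0, the plain mean jumps to about 11, while `median_of_means(samples, 10)` stays at 1.0.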

Table of Contents

2021 IEEE Transactions on Network Science and Engineering  
Dai 3369 Time and Energy Costs for Consensus of Multi-Agent Networks With Undirected and Directed Topologies  ...  Perc 3087 Collective Behaviors of Discrete-Time Multi-Agent Systems Over Signed Digraphs  ... 
doi:10.1109/tnse.2021.3123860 fatcat:7ruw6beu6rablblt5txlvz7oh4

Graph-Based Recommendation System [article]

Kaige Yang, Laura Toni
2018 arXiv   pre-print
In this work, we study recommendation systems modelled as contextual multi-armed bandit (MAB) problems.  ...  We propose a graph-based recommendation system that learns and exploits the geometry of the user space to create meaningful clusters in the user domain.  ...  This learning process can be formalised by the multi-armed bandit (MAB) framework [2] [3] [4] [5].  ... 
arXiv:1808.00004v1 fatcat:mvkl2rumrnglteraa7fxul3knm
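The contextual-MAB formalisation mentioned in the snippet is commonly instantiated with a LinUCB-style index: estimated reward plus a confidence width from the inverse design matrix. The sketch below is a generic disjoint-model LinUCB index in dimension 2 (the 2x2 inverse is written out by hand to stay dependency-free); it is not the paper's graph-clustered algorithm.

```python
import math

def linucb_index(A, b, x, alpha=1.0):
    """Disjoint LinUCB index for one arm, d = 2.
    A: 2x2 design matrix, b: reward-weighted context sum, x: context.
    Returns theta^T x + alpha * sqrt(x^T A^{-1} x),
    where theta = A^{-1} b is the ridge estimate."""
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    inv = ((a22 / det, -a12 / det),
           (-a21 / det, a11 / det))
    theta = (inv[0][0] * b[0] + inv[0][1] * b[1],
             inv[1][0] * b[0] + inv[1][1] * b[1])
    mean = theta[0] * x[0] + theta[1] * x[1]
    # confidence width: sqrt(x^T A^{-1} x)
    v0 = inv[0][0] * x[0] + inv[0][1] * x[1]
    v1 = inv[1][0] * x[0] + inv[1][1] * x[1]
    return mean + alpha * math.sqrt(v0 * x[0] + v1 * x[1])
```

At each round, the recommender computes this index per arm with the user's context `x` and plays the argmax; for a fresh arm (identity `A`, zero `b`) the index is pure exploration bonus.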

GRAPH-BASED RECOMMENDATION SYSTEM

Kaige Yang, Laura Toni
2018 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP)  
In this work, we study recommendation systems modelled as contextual multi-armed bandit (MAB) problems.  ...  We propose a graph-based recommendation system that learns and exploits the geometry of the user space to create meaningful clusters in the user domain.  ...  This learning process can be formalised by the multi-armed bandit (MAB) framework [2] [3] [4] [5].  ... 
doi:10.1109/globalsip.2018.8646359 dblp:conf/globalsip/YangT18 fatcat:wxddgptv3faaxmtclyj6iy3xpm

A Survey of Decentralized Online Learning [article]

Xiuxian Li, Lihua Xie, Na Li
2022 arXiv   pre-print
The cost function of each agent is often time-varying in dynamic and even adversarial environments.  ...  At each time, a decision must be made by each agent based on historical information at hand, without knowing future information on cost functions.  ...  over multi-agent networks.  ... 
arXiv:2205.00473v1 fatcat:irnm46esfvhe5mj2a7v3tqjh4u

Influence Maximization Based Global Structural Properties: A Multi-Armed Bandit Approach

Mohammed Alshahrani, Zhu Fuxi, Ahmed Sameh, Soufiana Mekouar, Shichao Liu
2019 IEEE Access  
We conduct extensive experiments on a large-scale graph in terms of influence spread, efficiency (running time and space complexity), and how the reward parameters impact cumulative  ...  In a multi-armed bandit formalism, each agent has the chance to choose among k arms (actions), and receives a reward according to its choice.  ...  However, it should be improved for undirected graphs that match by 40%.  ... 
doi:10.1109/access.2019.2917123 fatcat:zmall3lmrzhjhbm6bdkyh7d33a

Thompson Sampling for Unimodal Bandits [article]

Long Yang, Zhao Li, Zehong Hu, Shasha Ruan, Shijian Li, Gang Pan, Hongyang Chen
2021 arXiv   pre-print
In this paper, we propose a Thompson Sampling algorithm for unimodal bandits, where the expected reward is unimodal over the partially ordered arms.  ...  We theoretically prove that, for Bernoulli rewards, the regret of our algorithm reaches the lower bound of unimodal bandits, thus it is asymptotically optimal.  ...  Let G = (V, E) denote an undirected graph.  ... 
arXiv:2106.08187v2 fatcat:3dusq6rnkrbp7asd2pxqkhx3h4
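The base algorithm underlying this entry is Thompson Sampling with Beta posteriors for Bernoulli rewards. The sketch below shows only that generic Bernoulli-TS loop; the paper's unimodal variant additionally restricts sampling to the neighborhood of the empirically best arm on the graph, which is omitted here.

```python
import random

def thompson_sampling(probs, horizon, seed=0):
    """Generic Bernoulli Thompson Sampling with Beta(1, 1) priors.
    `probs` are the true arm success probabilities (used only to
    simulate rewards).  Returns the pull count per arm."""
    rng = random.Random(seed)
    K = len(probs)
    alpha, beta = [1] * K, [1] * K
    pulls = [0] * K
    for _ in range(horizon):
        # sample a mean from each arm's posterior, play the argmax
        theta = [rng.betavariate(alpha[k], beta[k]) for k in range(K)]
        arm = max(range(K), key=lambda k: theta[k])
        reward = 1 if rng.random() < probs[arm] else 0
        alpha[arm] += reward          # posterior update on success
        beta[arm] += 1 - reward       # posterior update on failure
        pulls[arm] += 1
    return pulls
```

Over a long horizon the posterior of the best arm concentrates, so almost all pulls go to it, e.g. `thompson_sampling([0.2, 0.8, 0.5], 2000)` allocates the bulk of the 2000 pulls to arm 1.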

Graph Signal Sampling via Reinforcement Learning

Oleksii Abramenko, Alexander Jung
2019 ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
Within our approach the signal sampling is carried out by an agent which crawls over the empirical graph and selects the most relevant graph nodes to sample, i.e., determines the corresponding graph signal  ...  Overall, the goal of the agent is to select signal samples which allow for the smallest graph signal recovery error.  ...  The graph G is an undirected simple graph (with no selfloops or multi-edges).  ... 
doi:10.1109/icassp.2019.8683181 dblp:conf/icassp/AbramenkoJ19 fatcat:26mtlhxzfvazdpm4s4j6336cpy

Bayesian Algorithms for Decentralized Stochastic Bandits [article]

Anusha Lalitha, Andrea Goldsmith
2020 arXiv   pre-print
We study a decentralized cooperative multi-agent multi-armed bandit problem with K arms and N agents connected over a network.  ...  We propose a decentralized Bayesian multi-armed bandit framework that extends single-agent Bayesian bandit algorithms to the decentralized setting.  ...  PROBLEM FORMULATION Consider an MAB problem with N agents connected through an undirected connected graph G.  ... 
arXiv:2010.10569v2 fatcat:qee2rxlwsvbexh24oyke6zpcpy

Differentially-Private Federated Linear Bandits [article]

Abhimanyu Dubey, Alex Pentland
2020 arXiv   pre-print
Our algorithms provide competitive performance both in terms of pseudoregret bounds and empirical benchmark performance in various multi-agent settings.  ...  In this paper, we study this in context of the contextual linear bandit: we consider a collection of agents cooperating to solve a common contextual bandit, while ensuring that their communication remains  ...  on private linear bandits in the multi-agent setting.  ... 
arXiv:2010.11425v1 fatcat:mbuav4eq7vfhbbega56pda3h4m

Effective Large-Scale Online Influence Maximization

Paul Lagree, Olivier Cappe, Bogdan Cautis, Silviu Maniu
2017 2017 IEEE International Conference on Data Mining (ICDM)  
Our solution for it follows the multi-armed bandit idea initially employed in Lei et al.  ...  In summary, we note that GT-UCB performs consistently as long as the method leads to candidates that are well spread over the graph.  ... 
doi:10.1109/icdm.2017.118 dblp:conf/icdm/LagreeCCM17 fatcat:kn2raojfzzd6hng6mltvvkhbxu

A Survey of Graph-Theoretic Approaches for Analyzing the Resilience of Networked Control Systems [article]

Mohammad Pirani, Aritra Mitra, Shreyas Sundaram
2022 arXiv   pre-print
We present graph-theoretic methods to quantify the attack impact, and reinterpret some system-theoretic notions of robustness from a graph-theoretic standpoint to mitigate the impact of the attacks.  ...  This paper presents an overview of graph-theoretic methods for analyzing the resilience of networked control systems.  ...  Informally speaking, an undirected k-circulant graph is a k-nearest neighbor cycle graph. Thus, with the same reasoning, an undirected k-circulant graph is 2k-connected and at least k 2 -robust.  ... 
arXiv:2205.12498v1 fatcat:6s7dk4wsf5gmpbuzsklsca7v4e
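The snippet's claim that an undirected k-circulant graph is a k-nearest-neighbor cycle graph is easy to make concrete: node i is joined to i ± 1, ..., i ± k modulo n, so every node has degree 2k (for n > 2k). A minimal construction sketch:

```python
def k_circulant(n, k):
    """Adjacency lists of the undirected k-circulant graph on n nodes:
    node i is joined to i +/- 1, ..., i +/- k (mod n)."""
    return {i: sorted({(i + d) % n for d in range(-k, k + 1) if d != 0})
            for i in range(n)}
```

For example, `k_circulant(6, 2)` gives node 0 the neighbors [1, 2, 4, 5]; every node has degree 2k = 4, matching the 2k-connectivity discussed in the survey.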

Fast Distributed Bandits for Online Recommendation Systems [article]

Kanak Mahadik, Qingyun Wu, Shuai Li, Amit Sabne
2020 arXiv   pre-print
Evaluation over both real-world benchmarks and synthetic datasets shows that DistCLUB is on average 8.87x faster than DCCB, and achieves 14.5% higher normalized prediction performance.  ...  Contextual bandit algorithms are commonly used in recommender systems, where content popularity can change rapidly.  ...  Let G(V , E) be the undirected graph consisting of vertices V and edges E ⊆ V × V .  ... 
arXiv:2007.08061v1 fatcat:6zlznh2cjbe5xo76ulcycgh5y4

Conformity in Scientific Networks [article]

James Owen Weatherall, Cailin O'Connor
2019 arXiv   pre-print
This preference for conformity interacts with the agents' beliefs about which of two (or more) possible actions yields the better outcome.  ...  Here we analyze a network epistemology model in which agents, all else being equal, prefer to take actions that conform with those of their neighbors.  ...  To expand on this hypothesis, we ran simulations for multi-armed bandit problems, where the agents are confronted with more than two possible actions.  ... 
arXiv:1803.09905v4 fatcat:iko5rzmckffu7ew574vymfaysi
Showing results 1–15 of 159 results