Reinforcement Learning-Based Consensus-Reaching in Large-Scale Social Networks

EasyChair Preprint 11382

14 pages · Date: November 25, 2023

Abstract

Social networks in present-day industrial environments encompass a wide range of personal information with significant research and application potential. One notable challenge in opinion dynamics on social networks is achieving convergence of opinions to a limited number of clusters; in this context, designing the communication topology of the social network in a distributed manner is particularly difficult. To address this problem, this paper proposes a novel perception model for agents. The proposed model, based on bidirectional recurrent neural networks, can adaptively reweight the influence of perceived neighbors during the convergence process of the opinion dynamics. Additionally, effective differential reward functions are designed to optimize three objectives: degree of convergence, connectivity, and cost of convergence. Lastly, a policy-gradient-based multi-agent exploration and exploitation algorithm is designed to optimize the model. Based on the reward values obtained in inter-agent interactions, the agents adaptively learn a neighbor-reweighting strategy that trades off the multiple objectives. Extensive simulations demonstrate that the proposed method can effectively reconcile conflicting opinions among agents and accelerate convergence.
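The kind of neighbor-reweighted opinion update the abstract describes can be illustrated with a minimal sketch. This is a classical bounded-confidence (Hegselmann-Krause-style) update in which a softmax over opinion distance stands in for the learned BiRNN reweighting; all function names and parameters here are illustrative assumptions, not the paper's actual model.

```python
import math

def consensus_step(opinions, radius=0.2, temperature=0.1):
    """One synchronous update of a bounded-confidence opinion model.

    Each agent averages the opinions of neighbors within `radius` of its
    own opinion, weighting closer neighbors more strongly via a softmax
    over negative opinion distance (a hand-crafted stand-in for the
    adaptive reweighting that the paper learns via policy gradient).
    """
    updated = []
    for xi in opinions:
        # perceived neighbors: agents whose opinion lies within the radius
        neighbors = [xj for xj in opinions if abs(xj - xi) <= radius]
        # softmax-style weights: nearer opinions get larger influence
        weights = [math.exp(-abs(xj - xi) / temperature) for xj in neighbors]
        total = sum(weights)
        updated.append(sum(w * xj for w, xj in zip(weights, neighbors)) / total)
    return updated

# Iterating the update drives opinions toward a few clusters.
opinions = [i / 9 for i in range(10)]
for _ in range(50):
    opinions = consensus_step(opinions)
```

In the paper's setting the fixed distance-based weights above are replaced by learned, state-dependent weights, which is what allows the trade-off between convergence degree, connectivity, and convergence cost.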

Keyphrases: reinforcement learning, reweighting perception, opinion dynamics, social network

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:11382,
  author    = {Shijun Guo and Haoran Xu and Guangqiang Xie and Di Wen and Yangru Huang and Peixi Peng},
  title     = {Reinforcement Learning-Based Consensus-Reaching in Large-Scale Social Networks},
  howpublished = {EasyChair Preprint 11382},
  year      = {EasyChair, 2023}}