Introduction

Observing, analyzing and modeling social interactions constitutes a substantial area of research in the social sciences. The concept of a social network provides a robust and convenient, graph-theory-based framework for analyzing how people interact, communicate and form communities.

One special form of interaction that falls within the scope of this framework is social learning, an umbrella term encompassing a variety of phenomena including:

Many mathematical models have been developed to formalize these issues. Usually, every such model comprises a social structure defining how agents interact and an updating rule which determines how each agent forms his own opinion, possibly by taking into account the opinions of others. Sometimes those two elements are closely tied together and difficult to distinguish, but in general we may divide these models, with respect to the latter component, into the following two categories.

  1. Bayesian learning models

The underlying assumption is that agents update their beliefs using Bayes’ rule. Formally, given a parameter \(\Theta\) (e.g. an opinion about a political issue) and a signal \(s\) (e.g. a news item or the opinion of others), the updating procedure is given by \[\begin{equation} P(\Theta|s) = \frac{P(s|\Theta)P(\Theta)}{P(s)} , \end{equation}\] where \(P(\Theta)\) is a prior probability which may be interpreted as the result of the agent’s own observations of the world.
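As a concrete illustration, here is a minimal numeric sketch of a single Bayesian update for a binary parameter; all the numbers below are made up for illustration only.

```python
# One Bayesian update for a binary parameter Theta in {A, B}
# (illustrative numbers, not taken from any model in the text).
prior = {"A": 0.5, "B": 0.5}        # P(Theta): the agent's own prior
likelihood = {"A": 0.8, "B": 0.3}   # P(s | Theta): how likely the observed signal is

# P(s) by the law of total probability
p_s = sum(likelihood[th] * prior[th] for th in prior)

# Bayes' rule: P(Theta | s) = P(s | Theta) P(Theta) / P(s)
posterior = {th: likelihood[th] * prior[th] / p_s for th in prior}

print(posterior)  # the signal shifts belief towards state A
```

The posterior then serves as the prior for the next signal, which is what makes repeated Bayesian learning cognitively demanding in a network: the agent must model where each signal came from.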

  2. Non-Bayesian learning models

This class of models usually incorporates a relatively simple updating rule. The most prominent example is the DeGroot model 1, sometimes known as naive learning, which assumes that each agent updates her beliefs by taking an average of her neighbors’ opinions (possibly, but not necessarily, including her own opinion). Contrary to the Bayesian framework, which may demand cognitive abilities that are unrealistic for real people, the DeGroot model provides a rule that is empirically justifiable in its simplicity but still flexible enough for further extensions.

If all agents are Bayesian, and there is common knowledge of this, under mild assumptions, learning will be asymptotically efficient in large networks (see Gale and Kariv (2003) and Mossel and Tamuz (2010) for a myopic learning environment and Mossel, Sly, and Tamuz (2015) for a strategic learning environment).

However, if all agents update their guess as the majority of their neighbors’ prior guesses—as modeled by a coarse DeGroot model of learning, also known as the majority voting model (Liggett (1985))—then it is possible that a non-trivial set of agents will end up stuck making the wrong guess. In practice, it might be that there is a mix of sophisticated (Bayesian) and naive (DeGroot) learners, and that Bayesians are aware of this and incorporate it in their calculations. 2

In this short survey we will concentrate on DeGroot-like models and provide a brief discussion of their various extensions found in the literature, with particular focus on their possible use in modeling and simulating phenomena combining information bubbles and polarization observed during the pandemic, such as the anti-vaccine movements or the “plandemic” conspiracy theory.

We will rely heavily on two papers 3 and 4 and try to assess each model in the context of simulating pandemic opinion phenomena.

Summary of the experimental paper

Basic model

In this section we will briefly present the basic version of the DeGroot model and introduce the notation that we shall use throughout.

We consider a set of agents \(N = \{1,2,\ldots,n\}\) who interact in discrete time \(T = \{1,2,\ldots\}\). At each time instant \(t\) every (fixed) agent \(i\) has his opinion \(x_i(t) \in [0,1]\) and trust (weight) \(w_{ij}\) towards every other agent \(j \in N\). Let us denote by \(x(t) = \left(x_1(t), x_2(t), \ldots, x_n(t)\right)\) the vector of agents’ opinions, or opinion profile, at time \(t\). With this notation, each agent updates his opinion by averaging the opinions of other agents, that is \[ x_i(t+1) = w_{i1}x_1(t) + w_{i2}x_2(t) + \ldots + w_{in}x_n(t) ,\] where \(\sum_j w_{ij} = 1\).

In the most general form trust may depend both on time and on the opinion profile. However, whenever it does not lead to confusion we will simply write \(w_{ij}\) instead of \(w_{ij}(t,x(t))\) to denote agent \(i\)’s trust towards agent \(j\). The trust between agents gives rise to the structure of their interactions, which can be represented as a row-stochastic matrix \(W = [w_{ij}]\) that may also be viewed as the adjacency matrix of a weighted directed graph. This allows us to apply elements of the sophisticated machinery of graph theory and Markov chains to analyzing social learning problems.

In compact matrix notation our general model (GM) has the following components:

The main issues analyzed within this framework are convergence of beliefs and reaching consensus.

We say that the beliefs converge for a given matrix of weights \(W\) whenever for every initial opinion profile \(x(0) \in [0,1]^n\) the limit \[\lim_{t \to \infty} W^tx(0)\] exists.

The convergence itself does not require very strong assumptions. In fact, as stated in 5, the necessary and sufficient conditions for beliefs to converge are:

Whether or not convergence holds is not the most exciting question on its own, however. We are more interested in the structure of the limit vector, which we may use to interpret the effect of interactions in the pandemic context.

One special form of the limit vector is one with all entries equal, that is a vector for which \(x(\infty)_i = c\) for all \(i\) and for some constant \(c \in [0,1]\). We will call it a consensus, and interpret it as an agreement between agents that they reach after some (relatively long) period of time.

In the following sections we will present different versions and extensions of the general model along with some theoretical results and consider their usefulness in simulating opinion dynamics in pandemic context.

Time-varying weight on one’s own beliefs

DeMarzo et al.

One class of extensions introduces a time-dependent component which accounts for changes in the trust that agents have in themselves. One may think of it as agents gaining (or losing) confidence in their own opinions as time passes.

One of the variants, by 6, assumes the following form of the opinion update: \(x(t+1) = \big((1-\lambda_t)I + \lambda_tW\big)x(t),\) where \(\lambda_t \in [0,1]\) for every \(t\) and \(I\) is the identity matrix. In fact, we may treat the lambda parameter as a function \(\lambda : T \to [0,1]\).

In this setting we see that at a given \(t\), the smaller the parameter \(\lambda_t\), the less willingly the agents update their opinions in the current step.

Further experimentation may focus on different forms of the \(\lambda_t\) function or on introducing heterogeneity in how the lambdas change across agents.

In general the model seems suitable for modeling some aspects of the pandemic situation. For instance, a decreasing function \(\lambda_t\) may be interpreted as the growing stubbornness of agents who decide to stick to their opinions instead of interacting.
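A minimal sketch of this update, assuming an illustrative trust matrix and a decreasing \(\lambda_t\) clipped so that it stays in \([0,1]\):

```python
import numpy as np

# DeMarzo et al. update: x(t+1) = ((1 - lambda_t) I + lambda_t W) x(t)
# with W and x(0) chosen purely for illustration.
W = np.array([
    [0.5, 0.25, 0.25],
    [0.2, 0.6,  0.2 ],
    [0.3, 0.3,  0.4 ],
])
I = np.eye(3)
x = np.array([0.9, 0.1, 0.5])

def lam(t):
    # decreasing confidence in others (the "growing stubbornness" reading),
    # clipped at 0 so lambda_t remains in [0, 1]
    return max(0.0, 1.0 - t / 20.0)

history = [x.copy()]
for t in range(30):
    x = ((1 - lam(t)) * I + lam(t) * W) @ x
    history.append(x.copy())

# once lambda_t reaches 0 the update matrix is the identity and opinions freeze
print(history[-1])
```

Swapping `lam` for an increasing function reproduces the opposite scenario considered in the simulations below.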

Examples of simulations

Graph used

W
##         [,1]   [,2]   [,3]   [,4] [,5] [,6] [,7] [,8] [,9] [,10]
##  [1,] 0.1875 0.1875 0.1875 0.1875 0.00 0.00 0.00 0.00 0.00  0.25
##  [2,] 0.1875 0.1875 0.1875 0.1875 0.00 0.00 0.00 0.00 0.00  0.25
##  [3,] 0.1875 0.1875 0.1875 0.1875 0.00 0.00 0.00 0.00 0.00  0.25
##  [4,] 0.1875 0.1875 0.1875 0.1875 0.00 0.00 0.00 0.00 0.00  0.25
##  [5,] 0.0000 0.0000 0.0000 0.0000 0.15 0.15 0.15 0.15 0.15  0.25
##  [6,] 0.0000 0.0000 0.0000 0.0000 0.15 0.15 0.15 0.15 0.15  0.25
##  [7,] 0.0000 0.0000 0.0000 0.0000 0.15 0.15 0.15 0.15 0.15  0.25
##  [8,] 0.0000 0.0000 0.0000 0.0000 0.15 0.15 0.15 0.15 0.15  0.25
##  [9,] 0.0000 0.0000 0.0000 0.0000 0.15 0.15 0.15 0.15 0.15  0.25
## [10,] 0.0500 0.0500 0.0500 0.0500 0.16 0.16 0.16 0.16 0.16  0.00
show_graph(W)

Decreasing lambda

\[\lambda_t = 1 - \frac{1}{20}t\]

Increasing lambda

\[\lambda_t = \frac{1}{20}t\]

Friedkin & Johnsen 7

In this variant agents’ willingness to update opinions is regulated by a diagonal matrix \(D\) with entries from the interval \([0,1]\), that is

\[x(t+1) = DWx(t) + (I-D)x(0).\] The main difference is that at each step each agent weighs his own initial opinion together with the opinions of others. Therefore, the opinions are formed partly endogenously by the interactions and partly exogenously by the initial opinions. The balance between exogenous and endogenous formation for each agent is encoded by the entries of \(D\). Those entries may be thought of as agents’ susceptibilities to interpersonal influence, with 1 meaning total susceptibility and 0 meaning none.

In terms of simulations this model is more difficult to handle: even a small change in one of the entries of the \(D\) matrix results in different outcomes. On the other hand, it is easy to produce convergence without consensus, which may be desirable when modeling information bubbles.
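The Friedkin-Johnsen iteration can be sketched as follows, with a hypothetical \(W\) and \(D\); note how the limit retains disagreement when \(D \neq I\):

```python
import numpy as np

# Friedkin-Johnsen update x(t+1) = D W x(t) + (I - D) x(0)
# (W, D and x(0) are illustrative, not taken from the paper).
W = np.array([
    [0.0, 0.5, 0.5],
    [0.5, 0.0, 0.5],
    [0.5, 0.5, 0.0],
])
D = np.diag([0.9, 0.8, 0.3])  # the third agent is strongly anchored to her initial view
I = np.eye(3)
x0 = np.array([0.9, 0.1, 0.5])

x = x0.copy()
for _ in range(500):
    x = D @ W @ x + (I - D) @ x0

# with D != I the limit generally keeps a spread of opinions (no consensus)
print(x)
```

Since the spectral radius of \(DW\) is below 1 here, the iteration converges to the fixed point \(x^* = (I - DW)^{-1}(I-D)x(0)\), which is why even 500 plain iterations land on it.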

Examples of simulations

Graph used

W
##      [,1] [,2] [,3] [,4] [,5]
## [1,]  0.0 0.20  0.3 0.40 0.10
## [2,]  0.6 0.00  0.1 0.15 0.15
## [3,]  0.3 0.30  0.0 0.30 0.10
## [4,]  0.4 0.15  0.1 0.00 0.35
## [5,]  0.1 0.25  0.2 0.45 0.00
show_graph(W)

Special case with identity matrix

With \(D = I\) the model reduces to DeGroot’s basic model.

D = diag(5)
X_0<-c(0.4675,0.2667,0.676,0.727,0.1255)
x_0<-X_0
ag_names<-c('A','B','C','D','E')
sym <-simulate(W,X_0,10,update_rule_2, ag_names,D,x_0)
D
##      [,1] [,2] [,3] [,4] [,5]
## [1,]    1    0    0    0    0
## [2,]    0    1    0    0    0
## [3,]    0    0    1    0    0
## [4,]    0    0    0    1    0
## [5,]    0    0    0    0    1
make_opinon_plot(sym)

High but not total susceptibility

#### example from paper

D<- diag(c(0.9,0.9,0.8,0.76,0.83))
D
##      [,1] [,2] [,3] [,4] [,5]
## [1,]  0.9  0.0  0.0 0.00 0.00
## [2,]  0.0  0.9  0.0 0.00 0.00
## [3,]  0.0  0.0  0.8 0.00 0.00
## [4,]  0.0  0.0  0.0 0.76 0.00
## [5,]  0.0  0.0  0.0 0.00 0.83
X_0<-c(0.4675,0.2667,0.676,0.727,0.1255)
x_0<-X_0
ag_names<-c('A','B','C','D','E')
sym <-simulate(W,X_0,10,update_rule_2, ag_names,D,x_0)
make_opinon_plot(sym)

15% decrease in agents’ susceptibility

D<- 0.85*D

D
##       [,1]  [,2] [,3]  [,4]   [,5]
## [1,] 0.765 0.000 0.00 0.000 0.0000
## [2,] 0.000 0.765 0.00 0.000 0.0000
## [3,] 0.000 0.000 0.68 0.000 0.0000
## [4,] 0.000 0.000 0.00 0.646 0.0000
## [5,] 0.000 0.000 0.00 0.000 0.7055
X_0<-c(0.4675,0.2667,0.676,0.727,0.1255)
x_0<-X_0
ag_names<-c('A','B','C','D','E')
sym <-simulate(W,X_0,10,update_rule_2, ag_names,D,x_0)
make_opinon_plot(sym)

More complex extensions implemented in Python

Apart from the linear extensions of the DeGroot model mentioned above, there are more complicated models that in general share the same basic idea as the original one but are more difficult to implement from scratch.

Luckily, there is a Python library covering some of them which makes simulations feasible.

These models rely on the assumption that agents update their opinions only through interactions with agents whose opinions are similar to theirs.

Deffuant model

In this model agents are connected by a complete social network (complete graph), and interact only in pairs at each step. The interacting pair \((i,j)\) is selected randomly from the population at each time point. After the interaction, the two opinions, \(x_i\) and \(x_j\), may change, depending on a so-called bounded confidence parameter \(\epsilon \in [0,1]\). This can be seen as a measure of the open-mindedness of individuals in a population. It defines a boundary beyond which individuals cannot communicate because their views are too different. The updating process itself is described as follows. If the distance between opinions \(d_{ij} = |x_i - x_j|\) is small enough, that is \(d_{ij} < \epsilon\), agents exchange views and the new opinions become \(x_i(t+1) = x_i(t) + \mu(x_j(t) - x_i(t))\) and \(x_j(t+1) = x_j(t) + \mu(x_i(t) - x_j(t))\), where \(\mu\) is a convergence parameter which is usually equal to 0.5. If \(d_{ij} \geq \epsilon\), nothing happens.

Depending on \(\epsilon\), the model produces either a consensus or a fragmented opinion profile.
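A self-contained sketch of these dynamics; the population size, \(\epsilon\), \(\mu\) and the random seeds are illustrative choices, not the library's implementation:

```python
import random

# Deffuant bounded-confidence dynamics on a complete graph.
def deffuant(opinions, epsilon, mu=0.5, steps=20000, seed=42):
    rng = random.Random(seed)
    x = list(opinions)
    n = len(x)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)     # random interacting pair
        if abs(x[i] - x[j]) < epsilon:     # exchange views only if they are close
            xi, xj = x[i], x[j]
            x[i] = xi + mu * (xj - xi)
            x[j] = xj + mu * (xi - xj)
    return x

random.seed(0)
start = [random.random() for _ in range(100)]
final = deffuant(start, epsilon=0.5)
# a large epsilon typically shrinks the spread towards (near-)consensus
print(max(final) - min(final))
```

A useful sanity check: with \(\mu = 0.5\) every accepted interaction conserves the sum \(x_i + x_j\), so the population mean is (numerically almost) invariant over the whole run.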

Examples of simulations

Epsilon = 0.3


Epsilon = 0.15


Epsilon = 0.05


Algorithmic Bias Model

The library also contains an implementation of an extension of the Deffuant model in which the probability of choosing a partner for interaction depends on the distance between opinions as well. Below is a fragment of the documentation:

The Algorithmic Bias model considers a population of individuals, where each individual holds a continuous opinion in the interval [0,1]. Individuals are connected by a social network, and interact pairwise at discrete time steps. The interacting pair is selected from the population at each time point in such a way that individuals that have close opinion values are selected more often, to simulate algorithmic bias. The parameter gamma controls how large this effect is. Specifically, the first individual in the interacting pair is selected randomly, while the second individual is selected based on a probability that decreases with the distance from the opinion of the first individual, i.e. directly proportional with the distance raised to the power gamma. Note: setting gamma=0 reproduce the results for the Deffuant model.
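The biased partner choice described in the quoted fragment can be sketched as follows. This is a hedged reimplementation of the idea, not the library's actual code, and the small constant guarding against division by zero for identical opinions is our own assumption:

```python
import random

# Pick a partner for agent i with probability proportional to
# |x_i - x_j| ** (-gamma): closer opinions are selected more often.
# gamma = 0 makes every weight 1, i.e. uniform (Deffuant-style) selection.
def pick_partner(x, i, gamma, rng, tiny=1e-9):
    weights = [
        0.0 if j == i else (abs(x[i] - x[j]) + tiny) ** (-gamma)
        for j in range(len(x))
    ]
    return rng.choices(range(len(x)), weights=weights, k=1)[0]

rng = random.Random(1)
x = [0.0, 0.1, 0.9]
# with strong bias (large gamma), agent 0 should mostly pick the nearby agent 1
picks = [pick_partner(x, 0, gamma=3.0, rng=rng) for _ in range(1000)]
print(picks.count(1) / len(picks))
```

This selection step would replace the uniform pair draw in the Deffuant sketch above, leaving the opinion-update rule unchanged.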

We see that for the same number of iterations and the same \(\epsilon = 0.3\), the results change considerably when the gamma parameter is set to 1.

(figure: simulation results for \(\gamma = 1\))

Hegselmann-Krause model

In this variant a similar idea of opinion closeness is used, but instead of interacting with only one other agent, individuals average their opinion with the whole group of agents that hold similar opinions.

For a given \(\epsilon\) and a given agent \(i\), let us denote by \(\Gamma_{\epsilon}(i)\) the set of agents whose opinions do not differ from agent \(i\)’s by more than \(\epsilon\). Then agent \(i\)’s opinion in the next step becomes \[x_i(t+1)= \frac{\sum_{j \in \Gamma_{\epsilon}(i)} x_j(t)}{\#\Gamma_{\epsilon}(i)}.\]

The idea behind the formulation is that the opinion of agent \(i\) at time \(t+1\) is given by the average opinion of its selected \(\epsilon\)-neighbors.

One big difference is that opinions in this model come from the interval \([-1,1]\) instead of \([0,1]\).

The model has been described along with excellent simulations in the paper Opinion dynamics and bounded confidence: models, analysis and simulation by Rainer Hegselmann and Ulrich Krause.
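One step of this averaging rule can be sketched directly; the opinion profile and \(\epsilon\) below are illustrative.

```python
import numpy as np

# One Hegselmann-Krause step: each agent averages over all agents
# whose opinion lies within epsilon of her own (including herself).
def hk_step(x, epsilon):
    x = np.asarray(x, dtype=float)
    new = np.empty_like(x)
    for i in range(len(x)):
        neighbors = np.abs(x - x[i]) <= epsilon  # the set Gamma_epsilon(i)
        new[i] = x[neighbors].mean()
    return new

x = np.array([-0.9, -0.8, 0.0, 0.8, 0.9])  # opinions drawn from [-1, 1]
for _ in range(20):
    x = hk_step(x, epsilon=0.3)
print(x)  # three clusters: the extremes merge pairwise, the middle agent stays put
```

Unlike the Deffuant dynamics, each step here is deterministic and synchronous, so clustering emerges without any randomness in partner selection.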

The Python package contains two very recent extensions of the Hegselmann-Krause model:

Both models are described in Opinion Dynamic Modeling of Fake News Perception by Cecilia Toccaceli, Letizia Milli and Giulio Rossetti.

Summary

There is a wide variety of opinion dynamics models, and the choice of which one to use in simulating pandemic information phenomena should depend on the answers to the following questions:

If simplicity is desired, the linear DeGroot extensions described in the first part might be the right choice, as they can be easily manipulated “by hand” and allow for easy interpretation of the parameters.

The models described later, although more complex and less transparent, are the newest, state-of-the-art approaches to modeling opinion dynamics and allow one to generate different scenarios within a single framework just by manipulating the parameters.

If that is the case, the Python package offers ready-made and tested models of bounded confidence along with simulation and visualization functions, while the other extensions need to be implemented from scratch.

Of course, the models also differ in their inner mechanics, and before deciding on a model one should double-check that it can produce the kind of results one expects.


  1. DeGroot, M.H. (1974). Reaching a consensus. Journal of the American Statistical Association, 69, 118–121.↩︎

  2. Chandrasekhar, A.G., Larreguy, H., Xandri, J.P. (2020). Testing models of social learning on networks: evidence from two experiments. Econometrica, 88, 1–32.↩︎

  3. Taalaibekova, A. (2018). Opinion formation in social networks. Operations Research and Decisions, vol. 2, 85–108.↩︎

  4. Hegselmann, R., Krause, U. (2002). Opinion dynamics and bounded confidence: models, analysis, and simulation. Journal of Artificial Societies and Social Simulation.↩︎

  5. Hegselmann, R., Krause, U. (2002). Opinion dynamics and bounded confidence: models, analysis, and simulation. Journal of Artificial Societies and Social Simulation.↩︎

  6. DeMarzo, P.M., Vayanos, D., Zwiebel, J. (2003). Persuasion bias, social influence, and unidimensional opinions. Quarterly Journal of Economics, 118(3), 909–968.↩︎

  7. Friedkin, N.E., Johnsen, E.C. (1999). Social influence networks and opinion change. Advances in Group Processes, 16, 1–29.↩︎