Despite various studies and counter-studies, largely funded by the networks themselves, social media remains an extremely problematic vehicle for divisive messages and harmful movements.
But its influence is often misunderstood, or conflated with other factors, which obscures the facts. The real influence of social media doesn't necessarily stem from algorithms or amplification, the usual points of focus. The biggest harm comes from connection itself – the ability to tune into the thoughts of people you know, in a way that simply wasn't possible in the past.
Here's an example – let's say you're fully vaccinated against COVID, you trust the science completely, and you've done what health officials advised, with no issues or concerns about the process. But then you see a post from an old friend – let's call him "Dave" – in which Dave expresses his concerns about the vaccine and explains why he's reluctant to get it.
You might not have spoken to Dave in years, but you like him, and you respect his opinion. Suddenly, he's not a faceless, nameless activist that you can easily dismiss – he's someone you know, and it makes you wonder whether there's more to it than you thought. Dave never seemed stupid or gullible; maybe you should look into it a little more.
So you do – you read the links Dave posted, you check out related posts and articles, maybe you even browse a few groups to try to get a better understanding. Maybe you start commenting on anti-vax articles as well, and all of this signals to Facebook's algorithms that you're interested in the topic and increasingly likely to engage with similar posts. The recommendations in your feed start to change, you get more involved in the topic, and all of that pushes you toward one side or the other of the argument, fueling the division.
But it didn't start with the algorithm – a key point in refuting Meta's counterarguments. It started with Dave, someone you know, posting an opinion that piqued your interest.
This is why broader campaigns aimed at manipulating public opinion are so worrying. The disruption campaigns orchestrated by Russia's Internet Research Agency in the run-up to the 2016 US election are the most public example, but similar efforts occur all the time. Last week, reports revealed that the Indian government has been using bot-fueled, brute-force social media campaigns to 'flood the zone' and shift public debate on certain topics by making alternative topics trend on Facebook and Twitter. Many NFT and crypto projects are also looking to cash in on the wider hype by using Twitter bots to make their offerings seem more popular and reputable than they really are.
Most people, of course, are increasingly wary of such tactics and are quicker to question what they see online. But just as with the classic Nigerian email scam, it only takes a very small number of people taking the bait to make all the effort worthwhile. Labor costs are low and the process can be largely automated. And a few Daves can end up having a big impact on public discourse.
The motivations behind these campaigns are complex. In the case of the Indian government, it's about controlling public discourse and suppressing potential dissent; for the scammers, it's about money. There are many reasons why such pushes are undertaken, but there's no doubt that social media has provided a valuable, viable connector for these efforts.
But the counterarguments are selective. Meta says political content is only a small part of the overall material shared on Facebook. That may be true, but it counts only shared posts, not personal posts and group chats. Meta also says that divisive content is actually bad for business because, as CEO Mark Zuckerberg explains:
“We make money from ads, and advertisers constantly tell us that they don’t want their ads to be next to harmful or angry content. And I don’t know of any tech company that sets out to create products that make people angry or depressed. Moral, business, and product incentives all go in the opposite direction.“
Yet at the same time, Meta’s own research has also shown Facebook’s power to influence public opinion, especially in the political context.
In 2010, an estimated 340,000 additional voters participated in the U.S. Congressional elections because of a single election-day message promoted by Facebook.
According to the study:
“About 611,000 users (1%) received an ‘informational message’ at the top of their news feed, which encouraged them to vote, provided a link to information about local polling places, and included a clickable ‘I voted’ button and a counter of Facebook users who had clicked it. About 60 million users (98%) received a ‘social message’, which included the same elements but also showed the profile photos of up to six randomly selected Facebook friends who had clicked the ‘I voted’ button. The remaining 1% of users were assigned to a control group that received no message.”
The results showed that those who saw the second version, with images of their connections included, were significantly more likely to vote, which ultimately led to 340,000 more people going to the polls as a result of this peer nudge. And that's only a small-scale test by Facebook's standards – 60 million users, when the platform is now approaching 3 billion monthly actives globally.
It's clear, based on Facebook's own evidence, that the platform does hold significant power to influence, through peer connections and personal sharing.
So it's not Facebook specifically, or the infamous News Feed algorithm, that's the main culprit in this process – it's people, and what people choose to share. This is a point Meta CEO Mark Zuckerberg has repeatedly made:
“Yes, we have some big disagreements, maybe more now than at any time in recent history. But part of that is because we put our issues on the table – issues that weren’t talked about for a long time. More people from more parts of our society have a voice than ever before, and it will take time to hear those voices and bring them together into a cohesive narrative.“
Contrary to the suggestion that this causes more problems, Meta sees Facebook as a vehicle for real social change – that through free speech we can reach a point of better understanding, and that providing a platform for all should, in theory, ensure better representation and connection.
That may be optimistically true, but the capacity for bad actors to influence those shared opinions is just as important, and it's just as often their messaging that gets amplified among your network connections.
So what to do, beyond what Meta’s enforcement and moderation teams are already working on?
Well, probably not much. Detecting repeated text across posts could seemingly help, and platforms are already doing this in various ways. Limiting sharing around certain topics could also have some impact, but realistically, the best way forward is what Meta is already doing: working to detect the originators of such campaigns and removing the networks amplifying questionable content.
Would removing the algorithm work?
Maybe. Whistleblower Frances Haugen highlighted the News Feed algorithm, and its focus on engagement above all else, as a key issue, as the system is effectively designed to amplify content that provokes argument.
This is certainly problematic in some respects, but would it stop Dave from sharing his thoughts on an issue? No, it wouldn't, and at the same time, there's no indication that the Daves of the world are getting their information from questionable sources like the ones highlighted here. But social media platforms and their algorithms facilitate both, streamlining that process and opening whole new avenues of division.
Various measures could be adopted, but the effectiveness of each is highly questionable, because much of this isn't a social media problem – it's a people problem, as Meta itself puts it. The issue is that we now have access to everyone's thoughts, including those we won't agree with.
In the past, we could go on, blissfully unaware of our differences. But in the age of social media, that’s no longer an option.
In the long term, as Zuckerberg says, will this lead us to a more understanding, integrated, and civil society? The results so far suggest we still have a long way to go.