Elon Musk’s bid to buy Twitter has prompted the WHO to make a statement on misinformation. “Misinformation costs lives,” Mike Ryan, executive director of the health emergencies program at the WHO, said about the offer from Musk, a self-described free-speech absolutist. The Recode podcast has pointed out that in the past Musk was known for his trolling behavior.
Twitter has certain features that make spreading disinformation easy. So this week The Inoculation explores the implications of the bid for the disinformation scene with the help of WF Thomas, a disinformation researcher. Eva talked to him while working on a story on audio disinformation, which was part of her Transatlantic Media Fellowship from the Heinrich Boell Foundation in Washington, DC. You can read the BBC's reporting on Russia’s Twitter network here and The New Yorker’s article on regulation here.
Please subscribe to our newsletter, and to this show on Apple Podcasts, Audible, Google Podcasts, Spotify, or another platform of your choice. Follow us on Facebook as @theinoculation, on Twitter as @TInoculation, and on Instagram as @the_inoculation.
Unknown Speaker 0:00
I invested in Twitter as I believe in its potential to be the platform for free speech around the globe. And I believe free speech is a societal imperative for a functioning democracy.
Eva Schaper 0:12
Hello, and welcome to The Inoculation. Today we're going to talk about Elon Musk and Twitter: is this a disinformation nightmare? We just heard one part of Elon Musk's filing with the American Securities and Exchange Commission, where he made his initial bid for Twitter.
Daiva Repeckaite 0:29
Last week, Twitter's board agreed for Twitter to be bought by Elon Musk. And in the past, he was known for a very peculiar stance on misinformation. For example, in September 2020, he said he would not be getting vaccinated for COVID-19 on the grounds that he and his family were not at risk. And when the Canadian truckers' protests happened, he tweeted,
Unknown Speaker 0:53
Canadian truckers rule.
Eva Schaper 0:55
And, you know, basically, these are some of the things that make some people less than thrilled about this planned acquisition. Mike Ryan, the executive director of health emergencies at the WHO, said that misinformation costs lives. And
Daiva Repeckaite 1:11
the WHO actually made a separate statement on that when they found out that Musk is about to buy Twitter.
Eva Schaper 1:17
And Daiva, I think that you also told me something that you heard on the Recode podcast. What did they say about this acquisition?
Daiva Repeckaite 1:25
Exactly. They called Musk a troll, because he was known for saying things that deliberately annoy people. They also said that it will be a struggle for him and the new management to ensure that Twitter respects the laws of all the different countries where it operates, while at the same time following Musk's so-called free-speech absolutist stance.
Eva Schaper 1:53
Okay, so it seems like there is a conflict brewing, and quite a decisive moment for disinformation. So let's just look at some of the background and some of the things that make Twitter such a dangerous conduit for misinformation. And I'd also like to listen back to my chat with WF Thomas, a disinformation researcher who talked to me earlier this year about Twitter Spaces, the audio chat rooms that Twitter set up, and why he thinks they are actually quite a dangerous space for breeding disinformation.
Unknown Speaker 2:32
Twitter loves to be very opaque and unclear about their algorithm, right, and how things get pushed to the top. One of the things we know is, when they have these trending topics that will appear on someone's feed, very extremist and outrageous content tends to do very well, and those get pushed to people, right? So if you're following these kind of basic-level conservatives, you'll get alt-right people popping up on your feed. The way the algorithm works, it's based on engagement, right? And people engage with these outrageous, clickbait-y, almost cartoonish things, you know, even if it's just to respond and say, this is stupid. The algorithm rewards that and pushes it out. It's also very unclear, you know, how things appear in that central Spaces tab and how they get ordered, why something would appear higher up than something else. It's probably based on listenership, probably based on how many people are going in, I would assume. They're listening to see if it's active. So if it's just dead air, it's not going to appear there.
Eva Schaper 3:37
Okay, well, that was quite interesting. So let's take a look at which features are specific to Twitter compared to, say, Facebook or other social media, and what makes these features so explosive when it comes to disinformation. This is some information we found in an interesting article in The Conversation, and we're linking to everything on our website. So if you want to read up after you've listened to the show, please go ahead. So what did we find? What's so dangerous about Twitter, Daiva?
Daiva Repeckaite 4:13
So first of all, Twitter is designed for fast-moving conversations. This piece in The Conversation says that the average half-life of a tweet is about 20 minutes, compared to five hours for Facebook posts and 20 hours for Instagram posts. So this shows that conversations on Twitter move in real time.
Eva Schaper 4:40
Okay. And I think one other problem we have a lot, which I think is not as widespread on Facebook, is that there are a lot of Twitter bots. Do you want to explain to our listeners what a bot is?
Daiva Repeckaite 4:54
It's usually code that generates a lot of tweets or a lot of content automatically, and it can work around the clock and keep tweeting in different time zones.
Eva Schaper 5:07
And I think one thing we need to tell our listeners is, this is a bid; the deal still may not go through, and the chances seem low. And of course, right now we're speculating about the changes we'll see. We don't know what Elon Musk is going to do. One thing he has said he would like to do is allow users to edit their tweets, which means that bots could possibly alter the things they are tweeting, or the things they are retweeting, which would allow them to substantially drive disinformation on the site.
Daiva Repeckaite 5:46
Exactly. And there are some studies that claim that nearly half of the accounts tweeting about COVID-19 are likely to be bots. So they could tweet something outrageous and plant it in people's minds, and then alter it before any kind of fact checking or moderation kicks in, if ever.
Eva Schaper 6:06
Right, exactly. So that seems quite dangerous. And one more thing that we haven't talked about yet: how does Elon Musk actually want to make money from Twitter?
Daiva Repeckaite 6:18
He wants to switch to a subscription model, rather than advertising.
Eva Schaper 6:24
Okay, so right now Twitter makes money by selling ads in its feed. Why are subscriptions worse than advertising? I mean, advertising, subscriptions, why could that make a difference?

Daiva Repeckaite 6:38
Because paying users will be even more difficult to control.
Eva Schaper 6:42
Okay, because advertisers will then no longer be able to put pressure on Twitter, as they sometimes do now. And some say that, compared to other social media sites, Twitter is actually quite aggressive in using content moderation against disinformation. So that could be a huge change. What else do you think could be a problem?
Daiva Repeckaite 7:08
First of all, the Recode podcast also mentioned that to repay his loans, he might want to cut costs, and knowing his views, cost cutting might result in reducing the content moderation team.
Eva Schaper 7:23
Do we know how Twitter regulates disinformation now?
Daiva Repeckaite 7:26
So on COVID-19, according to a statement by the company, tweets are considered to violate its policies if they, quote, advance a claim of fact expressed in definitive terms, are demonstrably false or misleading based on widely available authoritative sources, and are likely to impact public safety or cause serious harm. In those cases, Twitter might require users to delete the offending tweets, or, for repeat offenders, it might simply lock them out of their accounts.
Eva Schaper 8:07
Okay, that's interesting, and they especially point out vaccine disinformation. They're basically saying that false claims suggesting vaccines contain deadly or severely harmful ingredients will lead to users being suspended. What we've also heard is that Elon Musk might be a less stringent moderator than Twitter is now, or, let's say, Twitter under Musk's ownership could be a less stringent moderator.
Daiva Repeckaite 8:38
So it's important to note that there's already a lot of freedom to express all sorts of outrageous views in Twitter Spaces.
Eva Schaper 8:47
Okay. And another thing is, of course, if you have paying subscribers, they might want to put pressure on you to allow somebody like Donald Trump, who was banned, back on Twitter. And as we know, Donald Trump's tweets did extremely well, accruing millions of clicks and millions of retweets, which means this would be profitable under a subscription model versus an advertising-based model. So far, what we've been looking at is Twitter and actual tweets, but there's also Twitter Spaces. These are basically audio rooms that allow real-time chats. I talked to the researcher WF Thomas earlier this year, and he told me why he thinks Twitter Spaces are especially dangerous.
Unknown Speaker 9:42
The first time I realised this was going to be a problem, well, I'm actually a misinformation researcher, I follow a lot of other misinformation researchers on Twitter. But there was a Twitter Space actually about talking with Nazis, or breaking out of a bubble and talking with Nazis. Whether that was tongue in cheek or not, you know, I looked at some of the people talking, and they were self-described white nationalists who were speaking. And that was really alarming for me. That was, I think, when Germans took to Twitter Spaces a bit before us Americans did. You know, there's been this, what I view as a false idea, where, oh, we just need to talk with these people holding these extremist or these very out-there and harmful beliefs, and that will bridge the gap. Actors who hold these extremist beliefs spread disinformation, spread misinformation. They rely on that; they rely on presenting things as a debate when they have no clear intention to debate anyone who disagrees with them. But it gives them this platform, it gives them this false equivalency, this idea that a completely ridiculous belief system, a completely ridiculous opinion, deserves to be treated as equal and deserves to be talked with. Right.
Eva Schaper 11:03
Okay. And one thing that's been talked about a lot: do you think that Twitter will be regulated? Will it be regulated in the United States, for example?
Daiva Repeckaite 11:13
We don't know. I think it's more likely that it will be regulated in the EU first.
Eva Schaper 11:19
Right. And this is something that John Cassidy already said in The New Yorker: that the EU model might be something the US should follow. The 27 member states of the European Union agreed on a new law that requires big online platforms to police hate speech and disinformation more effectively. Of course, we don't know how this law will work, or whether it will, at the end of the day, cut down on disinformation and hate speech, but it is a first step. The EU's Digital Services Act allows European governments to ask web platforms like Twitter, Facebook and YouTube to remove content that promotes terrorism, hate speech, child sexual abuse, or commercial scams, and platforms will also be obliged to prevent the manipulation of services having an impact on democratic processes and public security. And as the law was introduced, Thierry Breton, the EU Commissioner for the Internal Market, said the time of big online platforms behaving as if they are too big to care is coming to an end.
Daiva Repeckaite 12:34
This is very timely, given the BBC's recent reporting that the Russian government has a huge network of official Twitter accounts. The BBC found more than 100 of them, linked to different missions and embassies, and they're not subject to the same regulations as media. They have been retweeting and amplifying a lot of voices that spread misinformation about Russia's war in Ukraine. And being government accounts, they're subject to a different type of regulation, which researchers consider a loophole in Twitter's moderation policies.
Eva Schaper 13:16
Okay, so I think all in all we can say that this will continue to be interesting, and it almost seems as if more regulation could be inevitable,
Daiva Repeckaite 13:28
but it's also a very multilingual market. So it still remains to be seen how platforms can find a way to manage this huge, diverse space.
Eva Schaper 13:39
Okay, so stay tuned. We'll keep you updated on the latest developments. That was it for this week. We hope you enjoyed the episode.
Daiva Repeckaite 13:49
Please subscribe to our newsletter, Inoculated, and to the show on Apple Podcasts, Audible, Google Podcasts, Spotify, or any other platform of your choice.
Eva Schaper 13:59
And if you're looking for a transcript, it will be available at our site at www.theinoculation.com. You can also find us on all major social media channels. Thanks for listening. Bye for now.
Unknown Speaker 14:14
Bye for now.
Transcribed by https://otter.ai