The UK’s Guardian reports on a draft of a Brown University study. Bots are amplifying climate change denialism …
The social media conversation over the climate crisis is being reshaped by an army of automated Twitter bots, with a new analysis finding that a quarter of all tweets about climate on an average day are produced by bots, the Guardian can reveal.
The stunning levels of Twitter bot activity on topics related to global heating and the climate crisis are distorting the online discourse to include far more climate science denialism than it would otherwise.
An analysis of millions of tweets from around the period when Donald Trump announced the US would withdraw from the Paris climate agreement found that bots tended to applaud the president for his actions and spread misinformation about the science.
The actual study is not yet available, but once it is, it will make interesting reading. We can glean a few snippets from the Guardian article.
What exactly did the researchers do?
The researchers examined 6.5m tweets posted in the days leading up to and the month after Trump announced the US exit from the Paris accords on 1 June 2017. The tweets were sorted by topic, and an Indiana University tool called Botometer was used to estimate the probability that the user behind each tweet was a bot.
Is Botometer accurate and where can I find it?
Botometer is an online scoring system for determining the likelihood that Twitter accounts are automated. It was built by researchers at the University of Southern California and the Center for Complex Networks and Systems Research at Indiana University. The scores come from machine learning models trained on the characteristics of thousands of Twitter accounts that humans had already labelled as automated or not.
You can find it here and give it a go yourself against your own Twitter followers.
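If you would rather script the check than paste accounts into the web form one at a time, the same team publishes a botometer Python package. Here is a minimal sketch of how that looks – the keys are placeholders, the 0.8 cut-off is purely illustrative, and the field names follow the package README at the time of writing:

```python
# Minimal sketch using the botometer-python package (pip install botometer).
# Requires Twitter API app credentials plus a RapidAPI key for the Botometer
# endpoint; all keys below are placeholders.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account by screen name.
result = bom.check_account("@sh_irredeemable")
print(result["cap"])  # "complete automation probability" scores

# Score a list of accounts and flag the ones that look most automated.
accounts = ["@sh_irredeemable", "@petefrt"]
for screen_name, result in bom.check_accounts_in(accounts):
    cap = result["cap"]["universal"]
    if cap > 0.8:  # arbitrary cut-off, purely for illustration
        print(f"{screen_name} looks automated (CAP {cap:.2f})")
```

Note that Botometer returns a set of scores rather than a yes/no verdict; where you draw the line between “human” and “bot” is a judgment call.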
Do they have examples of bots in action?
Yes they do …
One that ranks highly on the Botometer score, @sh_irredeemable, wrote “Get lost Greta!” in December, in reference to the Swedish climate activist Greta Thunberg.
This was followed by a tweet doubting that the world will reach a population of 9 billion, due to “#climatechange lunacy stopping progress”. The account has nearly 16,000 followers.
Another suspected bot, @petefrt, has nearly 52,000 followers and has repeatedly rejected climate science. “Get real, CNN: ‘Climate Change’ dogma is religion, not science,” the account posted in August. Another tweet from November called for the Paris agreement to be ditched in order to “reject a future built by globalists and European eco-mandarins”.
Does this really matter?
Yes it does.
John Cook, an Australian cognitive scientist and co-author with Lewandowsky, said that bots are “dangerous and potentially influential”, with evidence showing that when people are exposed to facts and misinformation they are often left misled.
“This is one of the most insidious and dangerous elements of misinformation spread by bots – not just that misinformation is convincing to people but that just the mere existence of misinformation in social networks can cause people to trust accurate information less or disengage from the facts,” Cook said.
How can I tell if a Twitter account is human or a bot?
I’ve posted on this topic before.
Too many tweets: Accounts that tweet more than about 100 times every day should be treated with an appropriate degree of suspicion. Some people might retweet lots of things at times, but not every day. If an account is only a year or two old and has tweeted 200K tweets, that’s a red flag (a rough way to work out an account’s tweet rate is sketched below, after the @sunneversets100 example).
Example – here is a pro-Trump, Kremlin-biased bot named @sunneversets100. Since its creation in November 2016, it has tweeted out 187K tweets. Humans cannot sustain that tweet rate.
No Personal Details – Check the @sunneversets100 example and you will find that there are no personal details. All it links to is a political site.
Content – Bots are designed to amplify and promote a specific agenda. If there is no actual user-created content, just a stream of retweets pushing that agenda or slogans designed to provoke, then you should be suspicious. You can of course fall prey to a malicious chatterbot: that screaming argument you are having on Twitter might actually be you screaming back at a piece of software that simply exists to push your emotional buttons.
Back to @sunneversets100.
Much of its content is Sputnik. Seriously now, who, in a US political context, would actually be retweeting Sputnik, a Russian government-controlled news outlet?
US political activists don’t behave like that, but Russian bots do.
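As promised above, here is a rough sketch of that tweet-rate check. It only needs two numbers that the standard Twitter v1.1 user object exposes (statuses_count and created_at); the 100-per-day threshold and the @sunneversets100-style figures are just the rule of thumb from above, nothing scientific:

```python
# Rough sketch of the "too many tweets" check. The two inputs correspond to
# the statuses_count and created_at fields of a Twitter v1.1 user object;
# the 100-tweets-per-day threshold is just the rule of thumb discussed above.
from datetime import datetime, timezone


def tweets_per_day(statuses_count, created_at, as_of=None):
    """Average tweets per day between account creation and `as_of` (default: now)."""
    as_of = as_of or datetime.now(timezone.utc)
    age_days = max((as_of - created_at).days, 1)
    return statuses_count / age_days


def looks_hyperactive(statuses_count, created_at, as_of=None, threshold=100.0):
    return tweets_per_day(statuses_count, created_at, as_of) > threshold


# Roughly the @sunneversets100 numbers quoted above:
# 187,000 tweets between November 2016 and early 2020.
created = datetime(2016, 11, 1, tzinfo=timezone.utc)
as_of = datetime(2020, 2, 1, tzinfo=timezone.utc)
print(f"{tweets_per_day(187_000, created, as_of):.0f} tweets per day")  # ~158
print(looks_hyperactive(187_000, created, as_of))                       # True
```

Sustaining well over 100 tweets a day, every day, for years is simply not a pattern real people keep up.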
Botnets and Eggs
Botnets: One variation is that instead of one bot tweeting out an excessive quantity of content, an array of mostly passive bots is activated. This is usually done to vigorously promote one specific article. If an account that has been around for some time with very little activity suddenly starts promoting something politically divisive, be suspicious, especially when many apparently dormant accounts are suddenly promoting the same article.
Egg tweeters: If something is being retweeted by lots of people who apparently don’t have an uploaded profile image of any sort, be suspicious. You have probably found a botnet in play. “Egg accounts” are called that because you used to see just an egg shape instead of an actual profile picture.
Fake People: If you find an account that you have suspicions about, grab the URL for its profile picture and check whether the image is being reused across many different accounts. A Google reverse image search can help you make that discovery.
Another quick check is to take note of the Twitter handle. If instead of an actual name such as @john_smith you find a random array of letters like @6gwkxp7nwl, or perhaps something like @vicky_126431536, then the handle was most likely machine generated and the account is probably not a real person.
A variation of this check is a name check: look and see whether the Twitter name matches the Twitter handle, and check whether the gender of the name matches the gender of the image presented. If either test fails, be suspicious.
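The handle and name checks lend themselves to a couple of crude pattern tests. Here is a minimal sketch, with the caveat that the regular expressions are illustrative rules of thumb rather than a real classifier:

```python
# Crude pattern checks matching the handle and name heuristics above.
# The regular expressions are illustrative rules of thumb, not a classifier.
import re


def looks_machine_generated(handle):
    """Flag handles that look auto-generated rather than chosen by a person."""
    h = handle.lstrip("@").lower()
    # A long run of trailing digits, e.g. @vicky_126431536
    if re.search(r"_?\d{6,}$", h):
        return True
    # A vowel-free jumble of letters and digits, e.g. @6gwkxp7nwl
    if re.fullmatch(r"[a-z0-9]{8,15}", h) and not re.search(r"[aeiou]", h):
        return True
    return False


def name_handle_mismatch(display_name, handle):
    """Flag accounts whose display name shares nothing with their handle."""
    h = re.sub(r"[^a-z]", "", handle.lstrip("@").lower())
    tokens = [re.sub(r"[^a-z]", "", t.lower()) for t in display_name.split()]
    return not any(t in h for t in tokens if len(t) >= 3)


for handle in ["@john_smith", "@6gwkxp7nwl", "@vicky_126431536"]:
    print(handle, looks_machine_generated(handle))
# @john_smith False, @6gwkxp7nwl True, @vicky_126431536 True

print(name_handle_mismatch("John Smith", "@john_smith"))   # False
print(name_handle_mismatch("Vicky Jones", "@6gwkxp7nwl"))  # True
```

None of these tests is conclusive on its own; they are quick filters that tell you which accounts deserve a closer look.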
The Information War
The arena is your mind and the prize being fought for is the ability to influence and manipulate you.
If you up your game and learn how to bot-spot, then you are less likely to be manipulated by such bots.
Further Reading about the Climate Bots
- Guardian 21st Feb 2020 – Revealed: quarter of all tweets about climate crisis produced by bots
- Botometer is here.
- Brown University – Botnet 101: Don’t Get Own3d!
- A previous posting of mine from 2018 – Are all your Twitter friends human? #bots and #botnets