BOTs in Social Media - how do you know if a given account is real? Can BOTs have an impact on users’ opinions? How do service administrators fight automated traffic?
The existence of BOTs in Social Media is a fact, but it is worth asking how large the scale of this phenomenon is and how much opinion-forming power these automated solutions have. Results of a study by Carnegie Mellon University suggest that nearly half of all Twitter accounts actively speaking out about COVID-19 are likely BOTs. As you might expect, such headlines gain popularity rapidly - even Hillary Clinton (who has almost 28 million followers) shared this “news”. But should we really believe such statistics? And if so - what are the administrators of popular social media websites doing about it?
What criteria help determine whether a given account belongs to a human or is managed in an automated manner?
There is no single surefire way to check whether a given Social Media account is actually run by a human being. Therefore, researchers create algorithms that check combinations of several criteria. Among the most frequently mentioned signs that a given account is controlled by a BOT are:
- Account names that look random - for example, a combination of a popular name with a string of numbers.
- Accounts using someone else’s photo as a profile picture, or having no profile picture at all.
- The number and quality of followers - if many of an account’s followers are themselves suspicious, including accounts managed by BOTs, the account in question may also not be real. The same is true for accounts with very few followers.
- Activity analysis - above all the frequency of publishing content; a suspiciously high posting rate, as well as hashtags or messages that seem to be copied on a massive scale, may indicate that the account operates automatically.
- Geolocation - it is also worth paying attention to where the account owner is located; if, for example, the account reports a different country every few hours, it may really be a computer program.
- A network of mentions of a given account - it is also worth checking how often a given account has been mentioned by other users.
Of course, even the simultaneous occurrence of several of the above-mentioned signals doesn’t give 100% certainty that an account is fake and belongs to a BOT. For example, if the account is new, we won’t find many mentions of it, nor will it have many followers. Similarly, an activist may publish huge numbers of posts from different places on the planet, which may result in the account being wrongly identified as a BOT. A toy illustration of how such signals could be combined into a single score is sketched below.
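To make the idea concrete, here is a minimal sketch in Python of how the criteria above could be combined into one heuristic score. All field names, weights, and thresholds are illustrative assumptions, not values taken from any published detector or from the studies discussed in this article.

```python
import re
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    """Observable features of an account (all fields hypothetical)."""
    username: str
    has_profile_photo: bool
    follower_count: int
    suspicious_follower_ratio: float  # share of followers already flagged, 0.0-1.0
    posts_per_day: float
    duplicate_post_ratio: float       # share of posts that are near-copies, 0.0-1.0
    countries_last_24h: int           # distinct geolocated countries in the last day

def bot_likelihood(acc: AccountSnapshot) -> float:
    """Combine the criteria listed above into a rough 0-1 score.

    Weights and thresholds are guesses for illustration only.
    """
    score = 0.0
    # Random-looking name: a popular name plus a string of digits, e.g. "anna84652".
    if re.fullmatch(r"[a-z]+\d{4,}", acc.username.lower()):
        score += 0.2
    # Missing profile photo.
    if not acc.has_profile_photo:
        score += 0.1
    # Follower quality: many flagged followers, or almost no followers at all.
    if acc.suspicious_follower_ratio > 0.5 or acc.follower_count < 5:
        score += 0.2
    # Activity: suspiciously frequent posting or mass-copied content.
    if acc.posts_per_day > 50 or acc.duplicate_post_ratio > 0.8:
        score += 0.3
    # Geolocation: "travelling" across several countries within hours.
    if acc.countries_last_24h > 2:
        score += 0.2
    return min(score, 1.0)

# Example: a fresh account tripping several signals at once.
suspect = AccountSnapshot(
    username="anna84652", has_profile_photo=False, follower_count=3,
    suspicious_follower_ratio=0.6, posts_per_day=120.0,
    duplicate_post_ratio=0.9, countries_last_24h=4,
)
print(f"bot likelihood: {bot_likelihood(suspect):.2f}")  # -> 1.00
```

Note that a score like this inherits exactly the weakness described above: a brand-new account of a travelling activist can legitimately trip several of these checks, which is why real detectors weigh many more signals.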
Can BOTs have an impact on user opinions?
A team of researchers from Carnegie Mellon University, under the supervision of Kathleen Carley, has collected more than 200 million tweets about the coronavirus since January 2020. According to their analysis of the collected data, about 45% of the tweets came from accounts that resemble those managed by BOTs more than by people. The researchers also looked at the 50 most influential retweeters of COVID-19 content; according to their findings, 82% of them appear to be BOTs.
A similar hypothesis emerged about BOTs in another Twitter discussion - the one about the protests against the killing of George Floyd. The incident took place in May this year during his arrest, when a policeman pinned his neck to the ground with a knee for 8 minutes, as a result of which the detainee died. This started a wave of protests and a huge stir in social media. One of the articles in Digital Trends discussed research suggesting that BOTs may have been responsible for spreading conspiracy theories and false information about the protests and the Black Lives Matter hashtag (according to researchers, 30 to 49 percent of the accounts tweeting about the protests were BOTs).
While the results of such studies point to the opinion-forming power of BOTs, we must bear in mind that BOTs are ultimately just human-designed software. Additionally, not all studies agree. Sarah Jackson, associate professor at the Annenberg School for Communication at the University of Pennsylvania and co-author of the book “#HashtagActivism: Networks of Race and Gender Justice,” says it is more important to focus on where BOTs are in Social Media and with whom they interact. As the research behind the aforementioned book shows, BOTs active around #BlackLivesMatter interacted with a very small number of real people.
How do social media service administrators fight automated traffic?
Of course, the discussion on the opinion-forming power of BOTs hasn’t gone unnoticed by the administrators of popular websites. According to Twitter, recognizing whether an account is run by a BOT or a human is less important than the activities undertaken by that account. Therefore, instead of defining who owns an account, it defines what actions are prohibited on it (one of these behaviors is illustrated with a short sketch after the list):
- malicious use of automation to undermine and disrupt public conversation,
- artificially amplifying Twitter conversations, including by creating multiple or overlapping accounts,
- generating, soliciting, or buying fake engagement,
- engaging in bulk or aggressive tweeting,
- using hashtags in a spammy manner, including using unrelated hashtags in a tweet.
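One of the listed behaviors - messages copied on a massive scale - is comparatively easy to spot programmatically. The Python sketch below is an illustrative toy, not Twitter’s actual detection pipeline: it normalizes tweet text (dropping links and mentions) so that near-copies collide, then counts repetitions.

```python
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase the text and drop links/mentions so near-copies collide."""
    return " ".join(
        token for token in text.lower().split()
        if not token.startswith(("http", "@"))
    )

def flag_bulk_messages(tweets: list[str], threshold: int = 3) -> list[str]:
    """Return normalized messages that appear at least `threshold` times."""
    counts = Counter(normalize(t) for t in tweets)
    return [msg for msg, n in counts.items() if n >= threshold]

stream = [
    "Reopen NOW!!! #freedom http://example.com/1",
    "Reopen now!!! #freedom http://example.com/2",
    "reopen NOW!!! #freedom",
    "Lovely weather today",
]
print(flag_bulk_messages(stream))  # -> ['reopen now!!! #freedom']
```

A production system would also compare posting times, account networks, and fuzzier similarity measures, but even this crude normalization catches the copy-paste pattern mentioned earlier.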
Instagram, in turn, has introduced regulations under which administrators can ask for confirmation of who is behind an account if it matches a pattern of inauthentic behavior. When assessing whether an account is suspicious, Instagram takes into account a number of elements, including:
- whether most of the followers of the profile are in a country other than the account location,
- whether there are clear signs of automation, such as automated accounts among the profile’s followers,
- whether the account is involved in coordinated inauthentic behavior.
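As with the Twitter criteria, these signals lend themselves to simple heuristics. A minimal sketch of the first one - the share of followers located outside the account’s own country - might look as follows in Python (a hypothetical helper, not Instagram’s actual logic):

```python
def follower_country_mismatch(account_country: str,
                              follower_countries: list[str]) -> float:
    """Share of followers located outside the account's own country."""
    if not follower_countries:
        return 0.0
    outside = sum(1 for c in follower_countries if c != account_country)
    return outside / len(follower_countries)

# An account "located" in the US whose audience is almost entirely elsewhere.
ratio = follower_country_mismatch("US", ["RU", "RU", "IN", "US", "RU"])
print(f"{ratio:.0%} of followers are outside the account's country")  # -> 80%
```

On its own, a high mismatch proves nothing - plenty of genuine accounts have international audiences - which is why Instagram evaluates it together with the other signals on the list.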