A 17-year-old girl from Minnesota is one of the victims of social media bots. In real life, her social media use is limited to musing about being bored or trading jokes with her friends. Occasionally, like many other teenagers, she posts selfies.
But on Twitter, there is another version of her that none of her family members or friends would recognize. Her fake profile promotes pornography, cryptocurrency, and illicit trade. This investigative report was published in The New York Times on Jan. 27.
In the report, The New York Times revealed how bots are being used to create fake social media profiles, disseminate propaganda, and promote illicit trade. These automated accounts are now an inseparable part of the already booming industries of online influence, illicit trade, and cryptocurrency.
The New York Times report also noted that the social media giant Facebook had disclosed to its investors in November 2017 that up to 60 million automated accounts may be present on its platform. Though bots pose such a danger to society and the global economy, social media platforms have yet to figure out how to block these bad bots.
To dig deeper into automated activities on social media platforms, we studied the presence of non-human traffic on a social media giant.
A popular social networking platform with millions of monthly active users was inundated with automated accounts attempting to scrape content, spam comment sections, generate fraudulent impressions on videos, and create fake likes and shares.
Attackers created fake accounts to carry out nefarious automated activities while masquerading as genuine users. On this platform, 1.6% of total signups in a day came from bots. Let’s take a detailed look at how attackers used these automated accounts:
Scraping of Latest Feeds and User-Generated Posts
Automated accounts are deployed on social media platforms to scrape the latest feeds and other user-generated content. Many organizations use this scraped content for trend analysis. During our study, 2.39% of total latest feeds on this social networking platform were scraped in a day, and 12.77% of total user-generated posts were also scraped during the day.
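One common way platforms spot this kind of scraping is to compare how much an account reads against how much it contributes. The sketch below is purely illustrative and not the studied platform's actual detection logic; the thresholds and the `flag_likely_scrapers` helper are hypothetical assumptions.

```python
# Illustrative heuristic, NOT the platform's real detection system:
# flag accounts whose daily feed-read volume far exceeds a human baseline
# while producing almost no content of their own.

def flag_likely_scrapers(activity, read_threshold=1000, max_write_ratio=0.01):
    """activity maps account_id -> {"reads": int, "writes": int} for one day.

    An account is flagged when it reads at least `read_threshold` items
    and its writes are at most `max_write_ratio` of its reads.
    """
    flagged = []
    for account, counts in activity.items():
        reads, writes = counts["reads"], counts["writes"]
        if reads >= read_threshold and writes <= reads * max_write_ratio:
            flagged.append(account)
    return flagged

activity = {
    "alice":  {"reads": 120,   "writes": 15},  # normal browsing pattern
    "bot_42": {"reads": 50000, "writes": 0},   # reads everything, posts nothing
}
print(flag_likely_scrapers(activity))  # ['bot_42']
```

Real scrapers rotate accounts and IP addresses to stay under any single threshold, which is why rate checks like this are only a first line of defense.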
Spam Comments

Automated accounts attempt to divert discussions on social media platforms or spread propaganda through spam comments. During our study, we observed that 5.5% of total comments in a day came from fake accounts.
Fraudulent Video Impressions
Many publishers, agencies, and content creators buy traffic to lure brands, advertisers, and marketers. They deploy bots to generate fraudulent impressions on videos and other media. On this social media platform, 5.61% of total views in a day were generated by bot traffic.
Fake Likes and Shares
Bots are also used to gain social influence or spread propaganda. These automated accounts promote posts as directed by their bot herders or disseminate misinformation. We observed that 2.97% of total likes and shares in a day came from non-human traffic.
Unauthorized Tagging

Automated accounts are also used to tag genuine users in posts without their consent. This could be an attempt to disseminate propaganda or promote a brand or product. 2.25% of total tags in a day came from automated accounts.
Recommendations: How Social Networks Can Root Out Bad Bots
In an article published by MIT Sloan, Tauhid Zaman, associate professor of operations management at MIT Sloan, referred to the social media bot problem as “the newest arms race.”
As bad bots evade every filter that social networks install, these platforms must move from conventional defenses to bot management technologies. Social media platforms need to understand bots’ behavior and intent in order to stop them. Our study shows that bots on social media networks are capable of mimicking human behavior to masquerade as genuine users. In such a sophisticated scenario, it is a must that social networks deploy a dedicated bot mitigation and management solution that looks beyond mouse movements and keystrokes to analyze each visitor’s intent and catch sophisticated bots.
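Intent analysis of this kind typically combines several weak behavioral signals into a single score. The sketch below is a minimal, hypothetical illustration of that idea; the feature names, weights, and the 0.7 blocking threshold are all assumptions, not the workings of any real bot-management product.

```python
# A minimal sketch of intent-based bot scoring. Every feature, weight,
# and threshold here is a hypothetical assumption for illustration only;
# commercial bot-management solutions use far richer signals and models.

def bot_score(session):
    """Combine simple per-session behavioral signals into a score in [0, 1]."""
    score = 0.0
    if session["requests_per_minute"] > 60:            # inhuman request rate
        score += 0.4
    if session["avg_seconds_between_actions"] < 0.5:   # no time spent reading
        score += 0.3
    if not session["has_mouse_or_touch_events"]:       # likely headless client
        score += 0.2
    if session["accounts_created"] > 1:                # bulk account signup
        score += 0.1
    return score

session = {
    "requests_per_minute": 200,
    "avg_seconds_between_actions": 0.1,
    "has_mouse_or_touch_events": False,
    "accounts_created": 5,
}
action = "block" if bot_score(session) >= 0.7 else "allow"
print(action)  # block
```

A rule list like this is easy for attackers to probe and evade, which is why the text above stresses dedicated solutions that model intent rather than any fixed set of filters.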