Every few months, social media companies claim to have removed another billion fake accounts. So how did a 21-year-old delivery man in Pennsylvania pose as Trump family members on Twitter for nearly a year, and eventually fool the president himself?
The answer has to do with the size of the social networks, the complexity of catching fakes, and the business incentives of the companies that run the sites.
The bot problem
Facebook said it suspended 4.5 billion accounts in the first nine months of the year and intercepted more than 99 percent of them before users could flag them. That number of accounts, equivalent to nearly 60 percent of the world’s population, is mind-boggling. It is also puffed up.
The vast majority of these accounts were so-called bots, or automated accounts, often created en masse by software programs. Bots have been used for years to artificially amplify certain posts or topics so that more people can see them.
In the past few years, Facebook, Twitter, and other tech companies have become much better at catching bots. They use software that often detects and blocks them during the registration process by looking for digital evidence that suggests the accounts are automated.
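The registration-time checks described above are proprietary, but the general idea can be illustrated with a toy scoring heuristic. Everything here, including the signal names and thresholds, is a hypothetical sketch, not any platform’s actual detection logic.

```python
# Toy sketch of registration-time bot scoring. The signals and
# thresholds below are hypothetical illustrations, not any real
# platform's detection logic.

def bot_score(signup: dict) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    # Many signups from one IP in a short window suggests automation.
    if signup.get("signups_from_ip_last_hour", 0) > 5:
        score += 0.4
    # Forms filled in under a second are rarely typed by a human.
    if signup.get("form_fill_seconds", 60) < 1:
        score += 0.3
    # Throwaway email domains are a weak but common signal.
    if signup.get("email_domain") in {"mailinator.com", "tempmail.test"}:
        score += 0.2
    # A missing browser user agent adds suspicion.
    if not signup.get("user_agent"):
        score += 0.1
    return min(score, 1.0)

def should_block(signup: dict, threshold: float = 0.5) -> bool:
    """Block the signup if the combined score crosses the threshold."""
    return bot_score(signup) >= threshold
```

In practice such systems combine far more signals, usually with trained models rather than hand-set weights, but the shape of the decision is the same: score the signup, block above a threshold.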
As Facebook has caught more bots, it has also reported increasingly colossal statistics on how many fake accounts it removed. These numbers have made a lot of positive headlines for the company, but “they’re not used that much internally,” said Alex Stamos, former chief information security officer of Facebook, who left the company in 2018. “One person can blow out the statistics because there is no cost to try.”
In other words, a person can write a software program that tries to create millions of Facebook accounts. When Facebook’s software blocks these bots, the number of fakes reported as deleted swells.
Facebook has admitted that these statistics aren’t that helpful. “Simplified attacks greatly skew the number of fake accounts,” said Alex Schultz, Facebook’s vice president of analytics, last year. The prevalence of fake accounts is a more telling metric, he said. And it shows that the company still has a big problem. Despite the removal of billions of accounts, Facebook estimates that 5 percent of its profiles are fake, or more than 90 million accounts, a number that hasn’t changed in over a year.
Handmade fakes
Social media companies have a much harder time with fake accounts that are created manually – that is, when a person is sitting at a computer or typing on a phone.
Such fakes don’t carry the tell-tale digital markings of a bot. Instead, the companies’ software must look for other clues, such as an account that sends the same message to several strangers. This approach is imperfect, though, and works better for certain types of fakes than others.
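One such clue, identical messages sent to many unconnected accounts, can be sketched as a simple counting rule. The data shapes, the notion of “stranger,” and the threshold below are all hypothetical illustrations, not any company’s real system.

```python
from collections import defaultdict

# Flag senders who push the same message to many distinct strangers.
# "Stranger" here means a recipient not in the sender's friend set;
# the threshold of 5 is an arbitrary illustration.

def flag_mass_messagers(messages, friends, threshold=5):
    """messages: iterable of (sender, recipient, text) tuples.
    friends: dict mapping each sender to a set of friend ids.
    Returns the set of senders to flag for closer review."""
    strangers_reached = defaultdict(set)  # (sender, text) -> recipients
    for sender, recipient, text in messages:
        if recipient not in friends.get(sender, set()):
            strangers_reached[(sender, text)].add(recipient)
    return {sender for (sender, _), rcpts in strangers_reached.items()
            if len(rcpts) >= threshold}
```

A rule like this catches spam-style fakes well, but an impostor account that simply posts believable content, like the fake Trump relatives, trips no such wire, which is exactly why manual fakes are harder to catch.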
This partly explains why the Pennsylvania delivery man Josh Hall was repeatedly able to pose as President Trump’s relatives on Twitter and attract tens of thousands of followers before the company took notice.
Manually created fakes can be more harmful than bots because they look more believable. Political activists use them to spread disinformation and conspiracy theories, while scammers use them to deceive people. Criminals have posed as celebrities, soldiers, and even Mark Zuckerberg on social media to trick people into handing over money.
Twitter’s efforts to catch impostor accounts are complicated by its policy of allowing parody accounts, which the company requires to be clearly labeled.
Facebook also still has problems with accounts posing as public figures, but regular reviews by The New York Times suggest the company has gotten better at removing them. Instagram, which Facebook owns, hasn’t made as much progress.
Asking users for help
One way to fight the fakes is to require more documentation to create an account. Companies have increasingly begun asking for a phone number, but they are reluctant to make it harder for people to join their sites. Their businesses depend on adding more users so they can sell more ads. Twitter, in particular, values the anonymity of its users; the company has said anonymity enables dissidents to speak out against authoritarian governments.
To reduce the number of questionable accounts they must check, companies rely on users to flag them. The strategy is far more efficient and cost-effective for the companies. But it also means that a fake account tends to receive closer inspection only after it has been flagged enough times to stand out.
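Relying on user reports effectively turns review into a priority queue: the accounts that draw the most flags get looked at first. A minimal sketch of that idea, with hypothetical account names:

```python
# Minimal review queue: accounts with the most user flags are
# reviewed first. Account names and flag counts are hypothetical.

class ReviewQueue:
    def __init__(self):
        self._flags = {}  # account -> number of user reports

    def flag(self, account: str) -> None:
        """Record one user report against an account."""
        self._flags[account] = self._flags.get(account, 0) + 1

    def next_for_review(self) -> str:
        """Pop the most-flagged account (ties broken arbitrarily)."""
        account = max(self._flags, key=self._flags.get)
        del self._flags[account]
        return account
```

The design choice this illustrates is the trade-off described above: review effort scales with reports rather than with total accounts, so a fake that attracts little attention may sit at the back of the queue indefinitely.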
Still, it sometimes takes companies a while to act. Mr Hall gained 77,000 followers posing as President Trump’s brother and 34,000 followers as the President’s 14-year-old son before Twitter closed the accounts that Mr Hall used to spread conspiracy theories. And from 2015 to 2017, Russian government officials posed as the Tennessee Republican Party on Twitter, drawing 150,000 followers, including senior members of the Trump administration, while posting racist and xenophobic messages, according to a federal investigation.
A Twitter spokesman said in a statement: “We work hard to ensure that violations of our impersonation rules, especially when people try to spread misinformation, are resolved quickly and consistently.”
Still, most fakes don’t attract many followers. Mr Stamos argued that fake accounts that few people notice don’t have much of an impact. “It’s getting pretty zen, but, if no one follows a fake account, does the fake account exist?” he said.
Mr Stamos said tech companies face so many threats that they have to make tough decisions about what topics to work on and sometimes it’s not worth rooting out every fake account.
“Companies generally try to make sure that the things they work on are the things that are actually the worst, not just the things that look bad,” he said. “How do you apply the ever-limited resources you have to the problems that actually do harm?”