Ryan, a college student with no journalism background, accomplished something many a fact-checker, myself included, dreams of doing. On June 15, he fact-checked Twitter CEO Jack Dorsey on Dorsey's own platform, and Dorsey followed up with a clarification. “If he sees this, sorry <3,” Ryan messaged me in a recent Twitter DM. He’s flagged more than 270 tweets with “notes” through Birdwatch, the platform’s crowdsourced fact-checking feature, making Ryan the No. 2 most prolific user. (We are not using Ryan’s last name because he’s not a professional fact-checker and fears being doxxed or harassed.)
Birdwatch is Twitter’s latest and most public effort to address misinformation on its platform. The program, still in its testing phase, allows users (or “Birdwatchers”) to attach notes that provide additional context to a contested tweet. For example, a Birdwatcher may append a link from NASA or a fact-checking organization like Science Feedback to a tweet claiming the sky is green.
I’ve watched Birdwatch blossom from a ranking system of just a few lines of code into a complex one that weighs the helpfulness of a user’s previous notes and, soon, their ideological perspective.
Of the top 10 rated notes on the platform, seven include at least one link to a reputable source like the U.S. Centers for Disease Control and Prevention or PolitiFact. By contrast, in an analysis I ran earlier this year, fewer than half of the notes in my dataset included a source at all (and most of those were just links to other tweets).
And although you will occasionally come across juvenile refuse-themed notes like this one, “This is ,” idiotic and partisan bickering is far less prevalent among the helpful notes Birdwatch is surfacing.
I was curious to know who the heck these people are who log so many hours correcting misleading tweets, and why they do it. So I followed and reached out to the 10 most active Birdwatchers to learn more.
Besides Ryan, I also got in touch with Celeste Labedz, a California Institute of Technology Ph.D. candidate who spends most of her Birdwatching correcting phony earthquake predictions, and Aaron Segal, a 33-year-old software engineer from the Bronx, New York. (Their comments have been lightly edited for length and clarity.)
Celeste Labedz: I’m a seismologist and I like to try to combat misinformation about earthquakes, so I keep tabs on a few highly followed earthquake “prediction” charlatans and put the same Birdwatch note onto every one of their pseudoscientific tweets.
Ryan: For a long time, at least a few years, I’ve gotten very annoyed whenever I see something with thousands of engagements that I know is false, whether it be serious or even just something little about a video game. I always tried to tell people in the comments, but it was no use. The damage (small or not) had already been done.
Aaron Segal: These days I spend an hour or two a week on Birdwatch. I used to go on most days but I’ve stopped bothering so much recently because I don’t actually think it’s effective.
Ryan: Most of my Birdwatch notes are spur-of-the-moment. I write them when I see misinformation that needs to be corrected during my typical Twitter browsing. But sometimes I do go down long rabbit holes, searching for keywords and monitoring repeat offenders, and that could last upwards of an hour. (Ryan keeps a massive text document with keywords to search.)
Labedz: For my particular usage (debunking obvious misinformation in my field), it would be cool if experts in fields could somehow be verified as such so their notes could be prioritized. But, of course, I recognize that that’s a very complicated and potentially gameable system that could give unfair advantage and so probably shouldn’t be implemented!
Segal: They should add more moderation. They should also take people who keep writing unhelpful notes off of Birdwatch, and people who keep writing misinformation off Twitter.
Ryan: I’ve suggested the ability for normal users to report tweets for misinformation, and to have those reports reviewed by Birdwatchers.
Ryan: I suggested they should do either some form of payment or, more realistically, a Birdwatch badge for your profile. The dopamine from helpful votes already works for me, though.
Segal: If they want a community-based system like the one they’re testing, what they should really do to incentivize people is make the system more effective. If writing and rating Birdwatch comments could have the effect of removing misinformation or hatred from Twitter, and people could see the effect they were having, that would probably be enough incentive for people.
Labedz: The best way to get more nerds like me would just be to advertise Birdwatch more. A non-money option could be things like spotlight features by Twitter on experts debunking misinformation in their fields. Like, “This month’s Birdwatch star is Dr. Jane Doe of the University of Wherever, who’s busting myths about wildfires. Here’s the top 10 things she wants you to know about safety, ecology, and more!” People can learn things, it’s good publicity on both sides, it’s simple and cute.
Ryan: I think misinformation is a huge problem in our world, and governments are using it to their advantage. Everyone’s probably heard how it can be a “threat to our democracy,” and I agree with that. Most pieces of misinformation can be debunked easily with a Google search, but people seem to share it more than the truth because it reinforces their beliefs.
Labedz: For my opinion on misinformation in society as a whole, I think it’s a really big deal. It’s already doing major damage and has the potential to get a whole lot worse. I think more platforms need to crack down on it, but I recognize that that’ll be difficult to make them do, since misinformation is great for engagement and therefore quite profitable for platforms. I think media literacy needs to be emphasized for kids in schools and for the general public of all ages to help out on the individual level in addition to regulation at the platform level.
Segal: I think misinformation is affecting humanity like a drug. People want to believe it. Adding notes and comments won’t really help because people will just believe whatever they hear if it confirms their prior beliefs. And it does nothing about hateful content, which is as big or bigger a problem.