In 1907, the statistician Francis Galton observed something strange at a county fair: attendees were playing a game in which they guessed the weight of an ox, with the guess closest to the truth winning a prize. To Galton's surprise, while individual attendees' guesses varied greatly, the crowd's guesses were, on average, only about a pound away from the animal's actual weight, closer than any individual's guess.
The journalist James Surowiecki coined a name for this phenomenon, in which noisy individual judgments can be aggregated to produce remarkably accurate results: "the wisdom of the crowd."
And therein lies a possible answer to a conundrum: how to combat misinformation on Facebook. Sometimes it seems that fact-checking on social media has become impossible for humans. There is far too much content for people to verify (and AI is not yet up to the task). In the United States, for example, Facebook's fact-checking partners employ only a small number of people, a total of 26 staff according to a 2020 report, although the number may be larger today. They must try to monitor the content of more than 2 billion users, spanning some 8 billion unique URLs per year. These experts, who are trained to scrutinize content and label its accuracy, can fact-check only a small fraction of the content posted each day. Facebook uses an automated system to flag content similar to material a fact-checker has already determined to be false, but even the most generous estimate of that system's bandwidth leaves a large amount of potentially misleading content unchecked.
Fact-checking certainly pays off. Research has consistently shown that fact-checkers' corrections reduce belief in false information and make people less likely to share it. Content flagged by fact-checkers can also be downranked in the News Feed, reducing the number of people exposed to it in the first place. But on a giant platform like Facebook, relying on professional fact-checkers is like turning on a faucet in a burning building: right idea, wrong scale.
What if the solution were ordinary people? Tech companies are betting that the wisdom of the crowd can solve the scaling problem. Both Facebook and Twitter recently launched crowdsourced fact-checking products, hoping to harness the same power of the masses that Galton discovered at that county fair. The wisdom of the crowd has been applied successfully in many other areas: prediction markets, chess, medical diagnosis.
The question is whether fact-checking should join that list. People are rightly skeptical of the idea. Digital literacy, the ability to sift through and evaluate information from digital sources, is low among internet users, and the relevant topics, politics and science among them, can be fraught and polarizing. Ordinary people fall for falsehoods all the time; that's why misinformation is such a problem in the first place.
However, new research shows that members of a crowd can work together to separate fact from fiction. In a recent paper published in Science Advances, we found that, when assessing the truthfulness of headlines, the ratings of a small, politically balanced group of laypeople corresponded closely with those of professional fact-checkers. The crowd's performance was all the more remarkable because, unlike the fact-checkers, who were asked to carefully research each claim, the crowd was shown only the headline and lede sentence of each article and asked for its assessment without doing any outside research. The laypeople used a far less intensive process, yet their answers closely matched those of the fact-checkers, at much lower cost and much greater speed.
Here's how the study worked: We started with a set of articles that Facebook's algorithm had flagged for fact-checking because they were potentially misleading, were going viral, or were about important topics such as politics or health. We then asked three professional fact-checkers to research each article and rate its accuracy on a scale of one to seven. Separately, we asked a group of laypeople recruited through the Amazon Mechanical Turk website to rate the accuracy of just the headlines and ledes of those same articles, without doing any further research.
Here's the punchline: Fact-checkers agree with one another more than they agree with any given individual in the crowd, but once the crowd's answers are averaged together, that is no longer true. We found that after collecting just 10 to 15 responses from laypeople, our politically balanced crowd's average answer corresponded with the average fact-checker's answer as well as the fact-checkers' answers corresponded with one another. Crowds are efficient, too. Raters took an average of 30 seconds per headline and were paid around $10 an hour. At 10 ratings per article, fact-checking cost less than a dollar per headline.
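The aggregation effect at work here can be sketched with a toy simulation (all of the numbers below are illustrative assumptions, not the study's actual data): any single noisy rater tracks the fact-checker consensus only loosely, but the average of 15 such raters tracks it closely.

```python
import random
import statistics

random.seed(0)

# Illustrative setup: 100 headlines, each with a latent "true" accuracy
# on the study's 1-to-7 scale. These values are made up for the sketch.
true_scores = [random.uniform(1, 7) for _ in range(100)]

def noisy_rating(truth, noise_sd):
    # A rater perceives the truth plus personal noise, clipped to the scale.
    return min(7.0, max(1.0, random.gauss(truth, noise_sd)))

def pearson(xs, ys):
    # Plain Pearson correlation, no external libraries needed.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Benchmark: the average of three low-noise "fact-checker" ratings.
checker_avg = [statistics.mean(noisy_rating(t, 0.8) for _ in range(3))
               for t in true_scores]

# A single high-noise layperson, rating every headline alone.
single_lay = [noisy_rating(t, 2.5) for t in true_scores]

# A crowd of 15 equally noisy laypeople, averaged per headline.
crowd_avg = [statistics.mean(noisy_rating(t, 2.5) for _ in range(15))
             for t in true_scores]

print(f"single layperson vs. checkers: r = {pearson(single_lay, checker_avg):.2f}")
print(f"15-person crowd  vs. checkers: r = {pearson(crowd_avg, checker_avg):.2f}")
```

Averaging shrinks each rater's independent noise by roughly the square root of the crowd size, which is why a modest crowd of 15 is enough for the averaged ratings to rival the benchmark.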
True, some stories are easier to check than others. In 2017, the false story that received the most "engagement" on Facebook was a hoax with the absurd headline "Babysitter transported to hospital after inserting a baby in her vagina." It doesn't take an expert to know that headline is preposterous. But will a crowd be as wise about a false claim made by a politician on one side and contradicted by the other? Whose side will the members of the crowd take?
Our research found reason for optimism. Even in this polarized environment, it took no more crowdsourced responses to match the fact-checkers' performance on political articles than on articles about other topics.
Polls show that Republicans tend to accuse fact-checkers of liberal bias. So you might expect our politically balanced crowd, which included Republicans, to produce answers that resembled the professional fact-checkers' less closely than a crowd made up only of Democrats would. But while we found that individual Democrats did tend to agree more with the fact-checkers, once the crowd reached a critical mass of about 15 responses, the politically balanced crowd corresponded with the fact-checkers just as well as the Democrat-only crowd did.
We found a similar pattern for other characteristics. While individuals who were more politically knowledgeable and scored higher on cognitive reasoning tests agreed more with the fact-checkers, once the crowd was large enough, a crowd composed of such high-performing individuals was no better than an ordinary crowd. A larger, more balanced crowd can compensate for weaker individual performance.
So crowdsourcing holds promise, but it must be done carefully; much depends on the design. In our study, crowd members were not given a choice of which stories to rate. An "opt-in" design, by contrast, opens the door for people to actively seek out content they disagree with and flag it as "fake news." Indeed, when we recently analyzed Twitter's opt-in crowdsourced fact-checking program, Birdwatch, we found that users from one party were more likely to flag content from the other party as misleading. While these dynamics may be evidence of politically motivated brigading, it is also possible that each side is simply policing misinformation from the other, and that partisanship is helping to drive participation on the platform. There are still many questions to explore before we declare crowdsourced fact-checking an unqualified success.
Nor are we suggesting that platforms replace their professional fact-checkers with laypeople. We envision a system that combines crowds of laypeople, professional fact-checkers, and machine-learning techniques to scale fact-checking meaningfully. Furthermore, we see fact-checking as just one tool among the many needed to limit the spread of misinformation. Other tools, such as accuracy prompts, algorithmic downranking, and digital literacy interventions, have a role to play in combating the larger problem.
The sheer number of people on social media is often blamed for its woes: vaccine-conspiracy groups with thousands of members, a fake story shared by millions. But one way to combat the apparent madness of these online crowds is to harness an equally powerful phenomenon: their wisdom.
https://time.com/6124637/misinformation-fact-checking-facebook/ How crowd wisdom can solve Facebook’s authenticity-checking problem