From Voice of America
As Notre Dame Cathedral burned, a posting circulated on Facebook: a grainy video of what appeared to be a man in traditional Muslim garb up in the cathedral.
Fact-checkers worldwide jumped into action and pointed out that the video and postings were fake, and the posts never went viral.
But this week, the Sri Lanka government temporarily shut down Facebook and other sites to stop the spread of misinformation in the wake of the Easter Sunday bombings in the country that killed more than 250 people. Last year, misinformation on Facebook was blamed for contributing to riots in the country.
Facebook, Twitter, YouTube and others are increasingly being held responsible for the content on their sites as the world tries to grapple with events as they unfold. From lawmakers to the public, there has been a rising cry for the sites to do more to combat misinformation, particularly if it targets certain groups.
Shift in sense of responsibility
For years, some critics of social media companies, such as Twitter, YouTube and Facebook, have accused them of doing the minimum to monitor and stamp out misinformation on their platforms. After all, the internet platforms are generally not legally responsible for the content there, thanks to a 1996 U.S. federal law, Section 230 of the Communications Decency Act, which says they are not publishers. This law has been held up as a key protection for free expression online.
And that legal protection has been key to the internet firms’ explosive growth. But there is a growing consensus that companies are ethically responsible for misleading content, particularly if the content has an audience and is being used to target certain groups.
Tuning into dog whistles
At a recent House Judiciary Committee hearing on white supremacy and hate crimes, Congresswoman Sylvia Garcia, a Texas Democrat, questioned representatives from Facebook and Google about their policies.
“What have you done to ensure that all your folks out there globally know the dog whistles, know the keywords, the phrasing, the things that people respond to, so we can be more responsive and be proactive in blocking some of this language?” Garcia asked.
Each company takes a different approach.
Facebook, which perhaps has had the most public reckoning over fake news, won’t say it’s a media company. But it has taken partial responsibility for the content on its site, said Daniel Funke, a reporter at the International Fact-Checking Network at the Poynter Institute.
The social networking giant uses a combination of technology and human reviewers to address false posts and messages that appear to target groups. It collaborates with outside fact-checkers to weed out objectionable content, and has hired thousands of people to grapple with content issues on its site.
Swamp of misinformation
Twitter has targeted bots, automatic accounts that spread falsehoods. But fake news often is born on Twitter and jumps to Facebook.
“They’ve done literally nothing to fight misinformation,” Funke said.
YouTube, owned by Google, has altered its algorithms to make problematic videos harder to find and to surface relevant factual content higher in search results. YouTube is “such a swamp of misinformation just because there is so much there, and it lives on beyond the moment,” Funke said.
Other platforms of concern are Instagram and WhatsApp, both owned by Facebook.
Some say what the internet companies have done so far is not enough.
“To use a metaphor that’s often used in boxing, truth is against the ropes. It is getting pummeled,” said Sam Wineburg, an education professor at Stanford University.
What’s needed, he said, is for the companies to take full responsibility: “This is a mess we’ve created and we are going to devote resources that will lower the profits to shareholders, because it will require a deeper investment in our own company.”
Fact-checking and artificial intelligence
One of the fact-checking organizations that Facebook works with is FactCheck.org. It receives misinformation posts from Facebook and others. Its reporters check out the stories, then report on their own site whether the information is true or false. That information goes back to Facebook as well.
Facebook is “then able to create a database now of bad actors, and they can start taking action against them,” said Eugene Kiely, director of FactCheck.org. Facebook has said it will make it harder to find posts by people or groups that continually post misinformation.
The groups will see fewer financial incentives, Kiely points out. “They’ll get less clicks and less advertising.”
Funke predicts companies will use technology to semi-automate fact-checking, making it better, faster and able to match the scale of misinformation.
That will cost money, of course. It also could slow the internet companies’ growth.
Does being more responsible mean making less money? Social media companies are likely to find out.