Mark Zuckerberg faced tough questions from congressional lawmakers about the misleading, malicious, and harmful content on Facebook, and was asked what the company's plan for dealing with it is.
Zuckerberg said his company aims to have more than 20,000 people working on content review and security by the end of the year. According to experts, AI is still far from being a responsible alternative to a human looking at a screen.
Zuckerberg said Facebook plans to support those reviewers by building AI tools, work that is still in progress because "some of the stuff is just hard." Those tools, he said, would help the company get to a better place on removing more of the harmful content.
Zuckerberg said that 99% of terrorism-related content is removed from Facebook before it is even reported, acknowledging that this type of content is comparatively easy to find. The far harder task is detecting bullying, hate speech, and threats, which can be phrased and disguised in countless ways.
For instance, Jigsaw, a unit of Google's parent company Alphabet, offers a tool known as "Perspective" that is designed to score whether content is toxic. However, the system confuses words with negative connotations for genuinely harmful speech.
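For a sense of how the tool is used in practice, here is a minimal sketch of querying Perspective's publicly documented REST endpoint. The API key is a placeholder, and the request shape reflects Jigsaw's public documentation at the time of writing rather than anything confirmed for this article.

```python
# Sketch of requesting a toxicity score from the Perspective API
# (per Jigsaw's public docs; the API key below is a placeholder).
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; issued by Google
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "garbage truck"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The summary score is a 0-1 probability-like toxicity estimate.
print(result["attributeScores"]["TOXICITY"]["summaryScore"]["value"])
```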
The hate speech detector rated "race war now" only 24% toxic, while "garbage truck" scored 78%. For all the talk of deep learning systems that learn on their own, an AI system can only learn from data that humans have marked good or bad in a machine-understandable way. Jigsaw said the system's inadequacies were due to a lack of labeled data: it had not seen enough nuanced examples to learn that a race war is bad in some contexts but not when it comes up in a history book or scholarly article.
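To make that failure mode concrete, here is a minimal sketch, not Perspective's actual model, of a toxicity classifier trained on a handful of invented, human-labeled examples. Because the model only learns which words co-occur with toxic labels, "garbage" drags a benign phrase up while the entirely unseen "race war now" sails through.

```python
# Minimal toxicity-classifier sketch (illustrative; not Perspective's
# model). The tiny labeled dataset below is invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you are garbage",           # labeled toxic by a human reviewer
    "this opinion is garbage",   # labeled toxic
    "what a pleasant day",       # labeled benign
    "thanks for your help",      # labeled benign
]
labels = [1, 1, 0, 0]  # 1 = toxic, 0 = benign

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# "garbage truck" scores high because "garbage" only ever appeared in
# toxic examples; "race war now" scores low because none of its words
# were ever labeled at all.
for phrase in ["garbage truck", "race war now"]:
    p_toxic = model.predict_proba([phrase])[0][1]
    print(f"{phrase!r}: {p_toxic:.0%} toxic")
```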
Facebook's biggest challenge is gathering enough variations of hate speech, terrorist propaganda, threats, and bullying to train AI to spot parallel examples; as the toy filter below shows, content that is reworded or obfuscated evades naive matching. The problem gets thornier still because not everyone agrees on what makes content abusive or harmful.
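The blocklist and phrases here are invented for illustration; the point is that an exact-match filter catches only the variant it was given, which is why moderation systems need labeled examples of each variation.

```python
# Toy illustration: exact-match filtering misses trivial variations.
BLOCKLIST = {"race war"}

def naive_filter(text: str) -> bool:
    """Return True if any blocked term appears verbatim in the text."""
    return any(term in text.lower() for term in BLOCKLIST)

print(naive_filter("race war now"))   # True  - exact phrase caught
print(naive_filter("r@ce w4r now"))   # False - obfuscation slips through
print(naive_filter("racewar now"))    # False - spacing change slips through
```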
Facebook faced a similar problem during the 2016 US presidential election, when the platform was overwhelmed by reports of fake news. Zuckerberg has presented AI as the ultimate solution, even as the company hires 20,000 human content moderators.
Mevan Babakar, head of automated fact-checking at the UK non-profit Full Fact, said that regardless of whether humans or AI moderate content, the question remains: who decides whether the speech is acceptable?
Relying on human moderators to decide what is good or bad gives Facebook more control than handing the decision to an algorithm, but it does not free the company from those tough questions; humans still have to make the tough calls.
Zuckerberg said Facebook has successfully deployed artificial intelligence to police terrorist propaganda, and he was optimistic that Facebook's AI will come to grasp the linguistic nuances of content well enough to flag potential risks. Experts, however, said those efforts were helped by geography and still required human moderators to make the final ruling. Cases like hate speech, dangerous content, and violent videos dog Facebook every day; they are more elusive, more prevalent, and more difficult to monitor.
Some researchers disagree with Zuckerberg, cautioning that technological predictions are always difficult to make. Delip Rao, co-founder of the Fake News Challenge, says that "focusing on AI as the only avenue for battling hate speech could lead to blind spots in thinking." Alternatives include regulation, platform design, education, and AI tools that assist human moderators in finding content rather than replacing them.
Robyn Caplan, a researcher at the Data & Society think tank, said Zuckerberg's optimism seemed at odds with the more pragmatic conversations she has had with representatives of platforms like Facebook, who emphasized that AI can help flag questionable content but cannot be trusted to remove it.
"AI can't understand the context of speech and, since most categories for problematic speech are poorly defined, having humans determine context is not only necessary but desirable," she said.
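That division of labor, machine flags and human verdicts, might look roughly like the sketch below. The threshold, queue name, and data structure are assumptions for illustration, not any platform's actual pipeline.

```python
# Hypothetical flag-for-review pipeline (names and threshold invented).
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # assumed cutoff; a real platform would tune this

@dataclass
class Post:
    post_id: int
    text: str

def route(post: Post, toxicity_score: float) -> str:
    """Decide where a post goes. The model flags; only humans remove."""
    if toxicity_score >= REVIEW_THRESHOLD:
        return "human_review_queue"  # a person weighs context and intent
    return "published"               # low-risk content flows through

print(route(Post(1, "borderline remark"), 0.85))  # -> human_review_queue
print(route(Post(2, "holiday photos"), 0.10))     # -> published
```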
The experts' view was that the problem facing Facebook may be one that no one can truly win. Rather than acknowledge that bitter fact, Zuckerberg has taken to gesturing vaguely at the future, and at a version of AI that could eventually save the day.