(Headline USA) A quasi-independent review board is recommending that Facebook parent company Meta overturn two decisions it made this fall to remove posts “informing the world about human suffering on both sides” of the Israel-Hamas war.
In both cases, Meta had already reinstated the posts on its own (one showing Palestinian casualties, the other an Israeli hostage), although it added warning screens to both due to violent content. Because the posts were already restored, the company isn’t obligated to take any further action in response to the board’s decisions.
The board added, however, that it disagrees with Meta’s decision to bar the posts in question from being recommended by Facebook and Instagram, “even in cases where it had determined posts intended to raise awareness.”
And it said Meta’s use of automated tools to remove “potentially harmful” content increased the likelihood of taking down “valuable posts” that not only raise awareness about the conflict but may contain evidence of human rights violations. It urged the company to preserve such content.
The Oversight Board, established three years ago by Meta, issued its decisions Tuesday in what it said was its first expedited ruling—taking 12 days rather than the usual 90.
In one case, the board said, Instagram removed a video showing what appears to be the aftermath of a strike on or near Al-Shifa Hospital in Gaza City. The post shows Palestinians, including children, injured or killed.
Meta’s automated systems removed the post, saying it violated the company’s rules against violent and graphic content. Although Meta eventually reversed its decision, the board said, it placed a warning screen on the post and demoted it, meaning it was not recommended to users and fewer people saw it. The board said it disagreed with the decision to demote the video.
The other case concerns video posted to Facebook of an Israeli woman begging her kidnappers not to kill her as she is taken hostage during the Hamas raids on Israel on Oct. 7.
Users appealed Meta’s decision to remove the posts, and the cases went to the Oversight Board. The board said it saw an almost three-fold increase in the daily average of appeals marked by users as related to the Middle East and North Africa region in the weeks following Oct. 7.
Meta said it welcomes the board’s decision.
“Both expression and safety are important to us and the people who use our services. The board overturned Meta’s original decision to take this content down but approved of the subsequent decision to restore the content with a warning screen,” the company said.
“Meta previously reinstated this content so no further action will be taken on it,” the statement continued. “There will be no further updates to this case, as the board did not make any recommendations as part of their decision.”
In a briefing on the cases, the board said Meta confirmed it had temporarily lowered thresholds for automated tools to detect and remove potentially violating content.
“While reducing the risk of harmful content, it also increased the likelihood of mistakenly removing valuable, non-violating content from its platforms,” the Oversight Board said, adding that as of Dec. 11, Meta had not restored the thresholds to pre-Oct. 7 levels.
Meta, then called Facebook, launched the Oversight Board in 2020 in response to criticism that it wasn’t moving fast enough to remove misinformation, hate speech and influence campaigns from its platforms. The board has 22 members, a multinational group that includes legal scholars, human rights experts and journalists.
The board’s rulings, such as in these two cases, are binding, but its broader policy findings are advisory, and Meta is not obligated to follow them.
Meta has come under severe criticism and scrutiny for its cooperation with U.S. intelligence services and political figures on the Left during the lead-up to the 2020 election.
At the same time that Meta CEO Mark Zuckerberg was giving hundreds of millions of dollars to two nonprofit groups as part of an effort to meddle in elections at the local level in support of Democratic candidates, Facebook and other social-media sites cooperated with the FBI, which was coordinating with the Biden campaign, to actively suppress true but damaging stories that interfered with their narrative, such as the Hunter Biden laptop scandal.
The company has given several indications that it intends to suppress content again in the 2024 election, despite a recent legal ruling in Missouri v. Biden that formally blocks the Biden administration from colluding with Big Tech.
It recently announced tighter censorship standards on its new platform, Threads, which was designed to be an alternative to the Elon Musk-owned Twitter. The new policy, announced on Meta’s blog last week, will let users know if content has been “fact checked” by its team of leftist gatekeepers and will algorithmically push that content lower in their feeds.
One thing unlikely to register with the so-called fact-checkers, however, is the plethora of disinformation being pushed by the Hamas-run government in Gaza, which has regularly used crisis actors and false reporting of statistics to inflate the civilian impact of Israel’s response to the Oct. 7 terrorist attack.
Adapted from reporting by the Associated Press