Often, when graphic content appears on these social media platforms, algorithms or content reviewers censor or remove the posts.
In the letter, Reps. Carolyn Maloney, D-N.Y., Stephen Lynch, D-Mass., Gregory Meeks, D-N.Y., and Bill Keating, D-Mass., acknowledge that the companies sometimes need to remove this content for the safety and well-being of their users. Still, the lawmakers expressed concern that outright removal could permanently erase evidence that may be necessary for investigating war crimes and atrocities such as genocide.
As of April 12, TikTok said it had "removed 41,191 videos" related to the war in Ukraine, "87% of which violated our policies against harmful misinformation. The vast majority (78%) were identified proactively."
Still, there are cases where mass reporting by users or bots can get content taken down even when it violates no community guidelines. And how platforms set up and enforce those rules is not always uniform.
"We are concerned that the processes by which social media platforms take down or block this content," part of the letters read.
Removing said content "can result in the unintentional removal and permanent deletion of content that could be used as evidence of potential human rights violations," the lawmakers added.
The lawmakers also asked the companies to provide access to any evidence they have preserved, so that international organizations and the US government can use it in their investigations. Many of those investigations are already underway, but conclusive results could take years.
"Organizations will require access to the entirety of information and evidence available, including content posted on social media platforms, to conduct full and complete investigations," the lawmakers wrote.
The problem is that social media companies are not very transparent about which content they remove. A similar issue arose during the Syrian civil war: according to the BBC, Human Rights Watch asked throughout the conflict for a centralized system for preserving uploads, to no avail.
The companies have also responded to such situations in different ways. TikTok's focus is currently on accounts that fabricate content or spread mis- and disinformation, while Google and Twitter have not made public any policies addressing these concerns.
Meta said it removes "content when it glorifies the violence or celebrates the suffering of others." The company, which owns Facebook, also said it is exploring "ways to preserve this type and other types of content when we remove it."