New York University's Stern School of Business has published an extensive study of how Facebook moderators handle content on a daily basis, and of how posts are filtered, both by the moderators themselves and by artificial intelligence systems, before or after publication.
According to the estimates and data that emerge, about 300,000 errors are made every day in deciding which content does or does not appear in users' feeds on the popular platform.
Mark Zuckerberg himself has admitted:
“The wrong choices are often made. Maybe more than one in ten cases.”
The report, which was republished in Forbes, examines in detail the origins of these content-moderation strategies and is also rich in statistics on the types of content that have been removed.
For example, in the first quarter of 2020, 40% of the content removed (a figure that excludes spam and fake accounts) involved nudity and pornography, violent content, terrorism, hate speech, weapons, and drugs.
Of that content, 88% was blocked automatically before even a single user could see it.
An extremely interesting aspect concerns the role of the companies that run the social media platforms themselves, Facebook among them, which rely on external contractors to evaluate and approve content.
“Social media platforms have marginalized the people who do this job, assigning most of it to outside vendors,”
notes one section of the research, which also includes data on Twitter and YouTube.
What could be done to improve the situation? According to the same report, social media companies should hire moderators directly, pay them well, support them in this particularly demanding work and, in effect, reduce their reliance on outside vendors.