In the second quarter of this year, Facebook moderators took action against malicious or potentially dangerous content in user accounts 22.5 million times. In the first quarter, the figure was 9.6 million cases, according to a report posted on the company's website on Tuesday.
The report indicates that the rate of proactive detection of posts containing malicious content rose by 6 percentage points over the same period, from 89% to 95%. On Instagram, this figure rose by 39 percentage points, from 45% to 84%. The company attributes this improvement to "expanding automated technology" in the sections of the platforms where users communicate in English, Spanish, Arabic, and Indonesian.
According to the company, it restored deleted or temporarily blocked content when, after additional review, it concluded that the account owners had not violated its rules. "We want people to know that the data we provide about malicious content is accurate, so we will conduct an independent audit of our decisions with the involvement of a third party, starting in 2021," Facebook promised.
In early March, Facebook CEO Mark Zuckerberg said that the social network removes false information about the spread of coronavirus and "conspiracy theories", and blocks ads from companies that try to exploit the situation for their own purposes.