Published: Wed, May 16, 2018
Science | By Cecil Little

Facebook, Aiming for Transparency, Details Removal of Posts and Fake Accounts

To distinguish the many shades of offensive content, Facebook separates them into categories: graphic violence, adult nudity/sexual activity, terrorist propaganda, hate speech, spam and fake accounts.

"We're not releasing that in this particular report", said Alex Schultz, the company's vice president of data analytics.

On Tuesday, Facebook said it took action on some 2.5 million pieces of hate speech in the first three months of 2018, up from 1.6 million in the last three months of 2017. Some 3.4 million pieces of content were either removed or labelled with a warning during the period covered by the report, with Facebook's improved detection systems picking up 85.6 per cent of the content it subsequently took action on.
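As a rough check of the proactive-detection figure (the counts are as reported; the arithmetic and variable names are ours):

```python
# Reported Q1 2018 figures: 3.4 million pieces actioned,
# 85.6% caught by Facebook's detection systems before user reports
total_actioned = 3_400_000
proactive_rate = 0.856

flagged_proactively = total_actioned * proactive_rate    # ~2.91 million
flagged_by_users = total_actioned - flagged_proactively  # ~0.49 million
print(f"Flagged proactively: {flagged_proactively:,.0f}")
print(f"Reported by users:   {flagged_by_users:,.0f}")
```

In other words, under half a million of those pieces reached the moderation queue only because a user reported them.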

Fake accounts disabled during the first quarter hit 583 million, and the majority of them were removed "within minutes of registration", Facebook reported.

Facing regulatory pressure from Congress over its role in the Cambridge Analytica data scandal, Facebook says it will double its safety and security team to 20,000 this year. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards". "However, we don't have a sense of how many incorrect takedowns happen, or how many appeals result in content being restored".

Over the past year, the company has repeatedly touted its plans to expand its team of reviewers from 10,000 to 20,000.

Using new artificial-intelligence-based technology, Facebook can find and moderate content more rapidly and effectively than human reviewers alone, at least when it comes to detecting fake accounts or spam.

For every 10,000 posts on Facebook, users posted roughly 22 to 27 pieces of content featuring violent images.
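Expressed as a rate, the reported range works out as follows (a quick conversion; the variable names are ours):

```python
# 22 to 27 violent pieces per 10,000 posts, as reported
low, high = 22, 27
base = 10_000
print(f"{low / base:.2%} to {high / base:.2%} of posts")  # 0.22% to 0.27%
```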

In some cases, Facebook's automated systems did a good job finding and flagging content before users could report it.

The report comes in the face of increasing criticism about how Facebook controls the content it shows to users, though the company was clear to highlight that its new methods are evolving and aren't set in stone, CNET's Parker reports.

The company says this is because there is little such content in the first place and because most is removed before it is seen. "Hate speech content often requires detailed scrutiny by our trained reviewers to understand context", explains the report, "and decide whether the material violates standards, so we tend to find and flag less of it". But only 38 per cent of the hate speech Facebook acted on had been detected through its own efforts; the rest was flagged by users.

Facebook today revealed for the first time how much sex, violence and terrorist propaganda has infiltrated the platform, and whether the company has successfully taken the content down. Facebook boasts 2.2 billion monthly active users, and if Facebook's AI tools hadn't caught these fake accounts flooding the social network, it would have grown by more than a quarter of its total population in just 89 days.
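The "more than a quarter" comparison checks out against the figures in this article (a sketch using the reported numbers; the arithmetic is ours):

```python
fake_accounts_disabled = 583_000_000  # Q1 2018, as reported
monthly_active_users = 2_200_000_000  # Facebook's stated MAU

share = fake_accounts_disabled / monthly_active_users
print(f"{share:.1%}")  # 26.5%, i.e. more than a quarter of MAU
```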
