Facebook CTO explains how AI protects the platform

At today’s F8 conference, Facebook’s Chief Technology Officer revealed how artificial intelligence (AI) has been helping to keep the social networking service and its community safe.

According to Facebook, the most important role AI plays at the company is keeping its community safe.

Speaking at the company’s annual F8 conference, CTO Mike Schroepfer said this is because AI has proved extremely useful for combating abuse on the platform, such as bullying, terrorist content and hate speech.

Terrorist content

He revealed that the AI systems the company has been using have removed up to two million pieces of terrorist content.

Even so, he said Facebook is aware that much work remains to be done and that the technology still needs to evolve.

He added that the company will be investing more heavily in AI and looking for ways to make it work with less human supervision, or none at all.

However, Facebook research scientist Isabel Kloumann said that AI is hard to test for fairness, and that it is difficult to ensure it incorporates a diverse set of voices in its decisions.

Developing the best AI

“How can AI tell the difference between an unpopular opinion that may show up on a Facebook or Instagram post and a comment that’s intended to spread hate?” she said at F8.

It was also revealed at F8 that Facebook will be writing a ‘Security Playbook’ for other technology companies to follow, and will work with research scientists and academics to develop the best AI systems.

By releasing the playbook, Facebook also hopes to gather feedback on its policies, along with real-world examples of how issues manifest themselves in the community.

Written by Leah Alger
