The group will focus its engagement on requesting more details from the tech companies on how they monitor and censor "objectionable and extremist content", their policies on user accounts that share such content, and their investment in safety measures.
So how might Facebook prevent another shooting from going viral?
"We are in the process of contacting other New Zealand and leading global investors, seeking their support for this initiative", Whineray said. The attacker live-streamed the shooting on Facebook, and the video continued to appear elsewhere on social media in the ensuing hours.
"An independent board chair is essential to moving Facebook forward from this mess, and to re-establish trust with Americans and investors alike", said New York City Comptroller Scott M. Stringer, speaking last year during the Facebook data breach. To be fair, however, the live stream was viewed fewer than 200 times, while the original video was watched 4,000 times overall. In an attempt to explain users' behaviour and alarm triggers, Rosen argued that Facebook may not have accounted for more accurate and specific reasons users could list in their reports. Witness.org, which works with advocates and dissidents to document human rights abuses, is advising Facebook on AI and content moderation.
Artificial intelligence systems rely on "training data": Facebook and other companies feed their software examples of content to take down.
"Of course we hate it when bad people, including this attacker, use our service in this way".
It's unclear if the initial report of the shooting came from New Zealand police.
Facebook spokesperson Simon Dilner said the company could have done better and was prepared for regulatory action. Popular financial media site Zerohedge was recently blocked in New Zealand, along with online video site LiveLeak and the Reddit-like website 4chan. "The first call we received was at 1:41 p.m. on Friday, and the first armed police unit arrived at the scene at 1:47 p.m., six minutes later".
There are millions of live broadcasts every day.
Rosen said one issue the company has faced is that its A.I. system finds it hard to distinguish between video game footage and real life: "Another challenge is to automatically discern this content from visually similar, innocuous content - for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground". "[But] there are millions of Live broadcasts daily, which means a delay would not help address the problem due to the sheer number of videos", Rosen said.