
Thank you for sharing

  • boffin2coffin
  • Jun 30, 2019
  • 3 min read

Written for Funeralcare magazine, June 2019




“Thank you for sharing” has become a phrase delivered dripping with sarcasm – a reprimand for too-much-detail. Those of us with Facebook-free teenage years smugly tsk-tsk the oversharing we escaped.




Inadvertently, though, we may make publicly available what we meant to share only with those closest to us – with far greater risks than just embarrassing our children. But does using restricted audience filters guarantee that our photos, posts and likes will not be used by those we don’t know? Can we safely read our News Feeds without stumbling across stuff we’d rather not see? Seems not.


In the past year alone, privacy breaches and social media content scandals have become as prevalent as the common cold. Facebook in particular has been held to account over numerous incidents of unauthorised data sharing, fake news and toxic content. Many incidents are blamed on security holes or programming bugs, such as the one that gave around 1500 apps access to your photos, whether you shared them or not. Many more, it seems, are intentional.


Facebook’s first big publicly acknowledged privacy breach involved British political consulting firm Cambridge Analytica. The company had developer access to Facebook data and harvested, without consent, the personal information of users for political advertising. Further investigation revealed Facebook had struck deals to share personal data with Apple, Amazon, Microsoft, Netflix and Spotify – to name a few of the more than 150 companies involved.


Fake news sounds harmless – a trap for the gullible few rather than the well-informed many. A tool for election interference? Surely it’s all just “Dancing Cossacks” (if you remember National’s 1975 election campaign). Fast-forward 40 years: Russia’s Internet Research Agency was established to do exactly that. The agency used not only Facebook but also Instagram, Twitter and YouTube to disseminate targeted advertising aimed at disrupting the 2016 US election.


But that was over there, right? Now that our protective bubble has burst, we know bad stuff can happen here. It did.


Shockingly, the livestream of the Christchurch terror attack was not the first such international incident on the Facebook platform. Last year, the social media company was accused of enabling the spread of fake news and hate speech that led to violence and deaths in the Muslim communities of Myanmar and Sri Lanka.


The fallout from these incidents has done more than just damage Facebook’s reputation. It has led to a global call to make tech companies more responsible for the content they host. The Christchurch Call to Action summit, held in Paris in May, saw unprecedented agreement between governments and major tech companies to make the internet a safer place. Acknowledging the difficulties of global regulation, the summit established a voluntary framework committing signatories to adopt and enforce laws banning violent extremist content. Seventeen countries, the European Commission, and eight major tech companies – including Facebook, Google, Microsoft, Twitter and YouTube – have adopted the call. The United States, citing its free-speech protections, has not signed on; dialogue is ongoing.


Facebook announced early in April that they would not be placing restrictions on livestreaming. In a volte-face less than a fortnight later, they announced interventions to counter violent extremism and restrictions to prevent rule-breakers from “going live”.


All eight tech companies are taking steps to moderate and restrict content, and to firm up user privacy. They have committed to developing artificial intelligence technology to prevent the upload of objectionable content, and to detect and remove it if it has been posted. Facebook has committed to establishing incident management teams to respond urgently to such content, and an independent board to manage content decisions.


Updated moderation tools are being introduced to capture user trust indicators in posts and advertising. User feedback will be used to remove fake news and hate speech, and to reduce the reach of groups that repeatedly share misinformation. Collaborative tools are being added for groups, and encryption will support private interactions. New restrictions are being placed on developer access. And past over-sharing will soon be able to be wiped from the record as effectively as from drunken memory, with the long-wished-for introduction of the Clear History tool.


Over the last two years, Facebook’s user base has declined by 15 million in the US. The proposed changes go some way towards restoring trust, if not revenue, for the social media giant. But don’t assume CEO Mark Zuckerberg is crying all the way to the bank. Facebook also owns Messenger, WhatsApp, Instagram and Oculus VR, as well as a plethora of companies specialising in artificial intelligence and image recognition.


All of which leaves them well-placed to lead the changes we are calling for.


 
 
 
