ChildLine says cyberbullying is behind rise in children seeking counselling for low self-esteem. Photograph: Alamy

What are four of the top social media networks doing to protect children?


With reports of cyberbullying on the rise, a guide to what Facebook, Twitter, Snapchat and Instagram are doing, and whether it’s enough

According to a recent report from the NSPCC, ChildLine conducted 35,000 counselling sessions for low self-esteem between April 2014 and March 2015. The report cites “a constant onslaught from cyber-bullying, social media and the desire to copy celebrities” as key reasons.

Julia Fossi, senior analyst for online safety at the NSPCC, says that while most platforms are taking steps to improve safety, social networks must be held more accountable for the content they host.

She says that social sites, which often use tracking technology for adverts and marketing, could use similar technology “to identify potential bullying issues and help determine what an effective intervention would look like.”

With reports of cyberbullying on the rise and girls more likely to be affected, Will Gardner, CEO of Childnet International, says the area is “challenging” but agrees that sites must continue innovating with technology to tackle the issue.

Here, we look at what four of the biggest social media networks are currently doing.

Facebook

Photograph: Karen Bleier/AFP/Getty Images

Facebook’s rules state that under-13s can’t sign up, but research from EU Kids Online and the LSE found half of 11 to 12-year-olds are on Facebook.

Announcing the recent formation of the Online Civil Courage Initiative – a partnership between Facebook and NGOs to fund counter-speech campaigns against terrorism and bullying – Facebook COO Sheryl Sandberg said that “hate speech has no place in our society – not even on the internet”. Facebook polices content on its site on a report-by-report basis, relying on users to flag posts to its “around the clock” global support teams.

While Facebook claims it has improved its reporting transparency with a dashboard that shows users how their complaint is being handled, there is no open data available on how many reports are resolved satisfactorily or how many abusive users and pages are removed.

The network does have a family safety centre with information aimed at teens and parents, and encourages users to block or unfriend anyone who is abusive.

Twitter

Photograph: Lauren Hurley/PA

In a leaked memo in February last year, former Twitter CEO Dick Costolo claimed that Twitter “sucks at dealing with abuse and trolls”.

Since then, the company says it has streamlined the process of reporting harassment and has made improvements around reporting other content issues including impersonation and the sharing of private and confidential information.

Crucially, the site has updated its enforcement procedures too, claiming to use both automated and human responses to investigate reports and act swiftly. The site says it will take action against abusers depending on severity, ranging from requiring specific tweets to be deleted to permanently suspending accounts. As with Facebook, there is no public data showing the effectiveness of its policies and reporting.

Last year Twitter launched a safety centre where users can learn about staying safe online, with sections created especially for teens, parents and educators. It also recently announced a partnership with mental health charity Cycle Against Suicide to promote online safety.

Snapchat

Photograph: Peter Macdiarmid/Getty Images

A report from last year’s Safer Internet Day found that Snapchat is the third most popular messaging or social media app among the 11 to 16 age group (behind Facebook and YouTube).

The app has community guidelines outlining what users shouldn’t send to others, including harassment, threats and nudity. As with other sites and apps, users can block people and report abuse.

It has a safety centre with safety tips and advice, produced in partnership with experts from iKeepSafe, UK Safer Internet Centre and ConnectSafely. And in November, the app partnered with Vodafone to raise awareness of cyberbullying by offering users emojis designed to be shared as an act against online abuse.

Still, according to the NSPCC’s Net Aware guide, “64% of the young children we asked think Snapchat can be risky”.

Instagram

Photograph: Thomas Coex/AFP/Getty Images

Owned by Facebook since 2012, Instagram has community guidelines and tips for parents that address questions such as, “who can see my teen’s photos?”. As on Facebook, users have to be 13 or over, though it’s easy to lie about your age and sign up. Instagram encourages users to report underage accounts via an online form or through in-app reporting, which also covers abusive content, impersonation and hate accounts.

The company claims it monitors reports 24/7 to investigate abuse, shut down accounts and report to the relevant authorities. Again, there are no public stats to enable an accurate measure of effectiveness.

The NSPCC has highlighted the ability to follow accounts of strangers and to access unsuitable material on the app, although the charity says most content is deemed low risk.
