Most online grooming offences in UK committed on Facebook-owned apps

More than half of online grooming offences recorded under a law that made it illegal to send sexual messages to children were committed on Facebook-owned apps, figures reveal.

The data, obtained by the NSPCC under freedom of information laws, shows 10,019 offences of sexual communication with a child have been recorded since the legislation was introduced in April 2017.


In the incidents where the platform used was recorded, more than half took place on Facebook-owned apps: Facebook, Facebook Messenger, Instagram and WhatsApp.

The revelations come at a difficult time for Facebook, which faces calls to pause plans to implement end-to-end encryption across all its messaging platforms. This would mean that messages – including text and images – would be digitally scrambled so that only the sender and recipient can read them. The company has been warned that this could increase the risk of child sexual exploitation.
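To illustrate the property described above, here is a minimal sketch using the open-source PyNaCl library. It is not Facebook's implementation; it simply shows why end-to-end encryption prevents a platform from inspecting messages it relays: only the sender and the intended recipient hold the keys needed to read them.

```python
# Illustrative sketch of end-to-end encryption using PyNaCl (a libsodium binding).
from nacl.public import PrivateKey, Box

# Each party generates a key pair; only public keys are ever exchanged.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts using their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"hello")  # scrambled bytes travel over the network

# Only the recipient, holding recipient_key, can decrypt. The platform relaying
# the ciphertext cannot read or scan its contents.
recipient_box = Box(recipient_key, sender_key.public_key)
plaintext = recipient_box.decrypt(ciphertext)
assert plaintext == b"hello"
```

Because the platform in the middle never sees the plaintext, the kind of content scanning described later in this article would no longer be possible on encrypted messages.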

The NSPCC urged the prime minister to urgently press ahead with new legislation in the online harms bill, which it says would help prevent offenders from using social media to target children for sexual abuse.


Peter Wanless, chief executive of the NSPCC, who met Boris Johnson at a "hidden harms" summit last week, said: "Child abuse is an inconvenient truth for tech bosses, who have failed to make their sites safe and allowed criminals to use them as a playground to groom our children.

“Last week, the prime minister signalled to me his determination to stand up to Silicon Valley and make the UK the world leader in online safety. He can do this by committing to an online harms bill that places a legal duty of care on big tech firms to proactively identify and manage safety risks.


"Now is the time to complete the regulation and raise a watchdog with teeth to hold technology managers criminally liable if their platforms allow children to suffer serious but preventable damage."

The number of offences recorded under the 2017 law is accelerating, the figures show, with 23% recorded in the final six months of the period covered, which runs from April 2017 to October 2019.

There were 5,784 offences where the platform used was recorded. Facebook, Messenger, Instagram or WhatsApp was used in 3,203 instances – 55% of the total for which the platform is known – while Snapchat was used 1,060 times.

In February, the then digital minister, Matt Warman, promised to bring forward an online harms bill during the current parliamentary session, following proposals set out in a white paper. The proposals included independent regulation of social networks, with possible criminal sanctions if tech directors fail to keep children safe on their platforms.


However, a full response to the consultation on the white paper is now not expected until the end of the year, and the NSPCC has expressed concerns that a regulator will not be in place until 2023.


The charity wants to see a bill within 18 months that places a duty of care on technology companies to make their sites safer for children.

Susie Hargreaves, chief executive of the Internet Watch Foundation, a UK charity responsible for finding and removing child sexual abuse material online, said: “The length of time this is taking is creating uncertainty for all of us, which hinders progress. We support calls for the government to speed up the timetable for introducing online harms legislation.”

Facebook said it used built-in PhotoDNA technology, which scans all images and videos on Instagram and Facebook and flags known child exploitation material so it can be removed. It also said it used technology that detects child nudity and previously unknown child exploitative content when it is uploaded. This means 99% of child nudity content is detected and removed by this technology, Facebook said.
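PhotoDNA itself is proprietary, but the general approach it represents – matching uploads against hashes of known illegal material – can be sketched with the open-source imagehash library. The function name, file path and hash value below are hypothetical placeholders, not part of any real system.

```python
# Illustrative sketch of hash-matching against a list of known material.
import imagehash
from PIL import Image

# Hashes of known material, as would be supplied by a body such as NCMEC.
# (Placeholder value; real lists are distributed under strict controls.)
KNOWN_HASHES = [imagehash.hex_to_hash("ffd8a1b2c3d4e5f6")]

def is_known_material(path: str, max_distance: int = 5) -> bool:
    """Return True if the uploaded image closely matches a known hash."""
    upload_hash = imagehash.phash(Image.open(path))
    # Perceptual hashes tolerate small edits (resizing, re-compression),
    # so matching uses a Hamming-distance threshold rather than equality.
    return any(upload_hash - known <= max_distance for known in KNOWN_HASHES)
```

A scan like this can only run where the service can see the image in the clear, which is why child protection groups link it to the encryption debate described earlier.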

A spokesman added: “There is no place for grooming or child exploitation on our platforms and we use technology to proactively find and remove it. We have a content and safety team of over 35,000 people investigating reports from our community and working to keep our platforms safe. Our teams also work closely with child protection experts and the police, reporting content directly to specialists such as CEOP [the Child Exploitation and Online Protection Command, part of the UK’s National Crime Agency] and NCMEC [the US National Center for Missing and Exploited Children].”
