Lewis Leigh has launched the campaign with Ofcom urging people to report harmful content because ‘nans are the best judges’.

A TikTok star has launched an Ofcom-backed campaign, “Only Nans”, encouraging young people to report harmful content that would upset their grandmothers.
Lewis Leigh amassed millions of followers on the popular video-sharing app after posting adorable videos of himself dancing with his elderly grandmother, Phyllis.
He has now launched a campaign urging people to report content that would offend their grandmothers, because “nans are the best judges out there”.
Lewis rose to fame on the app during lockdown and decided to work with Ofcom on the campaign after coming across some dubious content himself.
“Nans always offer the best guidance,” he said. “Next time you’re scrolling through your phone and come across something, think about what your grandmother would make of it.
“If Nan says ‘no’, perhaps think about reporting it.”
The communications regulator Ofcom found that 67% of teenagers and young adults aged 13 to 24 have come across at least one piece of potentially harmful content online.
In a TikTok post, Lewis (@lewisleighh) wrote: “I love social media, but there can be some harmful content out there! We’re all guilty of scrolling past it but the only way to get rid of it is to report it! So, I’ve teamed up with @Ofcom to play OnlyNans, and I’ve called in the big guns to help me… My Nanny Phyllis! #ad”
However, the study also found that only 17% actually reported it, with more than a fifth of respondents saying that reporting it would not change anything.
It also found that 13% of those surveyed were unsure who to notify, or what to do, when they came across harmful content.
Misinformation, scams and offensive language were the most common types of harmful content young people encountered.
His campaign coincides with the progress of the Online Safety Bill, which will give Ofcom the power to penalise social media platforms that breach their duty of care.
Ofcom will be able to impose fines of up to £18 million, or 10% of a qualifying company’s revenue.
As well as being required to remove illegal content, such as child abuse images, social media companies will also have to tackle some “hate crime” material, even where it would be lawful offline because of freedom-of-expression protections.
Since the white paper’s release three years ago, news publishers have fought for a total exemption from the Online Safety Bill.
They are concerned that the latest draft of the Bill does not appear to adopt a recommendation from MPs for an amendment to safeguard press freedom.
The joint parliamentary committee that scrutinised the Bill recommended that it include a ban on tech companies censoring news content, unless that content breaks the law.
Under the new legislation, social media executives who refuse to work with regulators to protect vulnerable users online risk going to jail.
An earlier draft of the Online Safety Bill, published last year, stated that tech companies could face massive fines, potentially running into billions of pounds, if they breached their duty of care.
Ministers had previously refrained from holding executives personally accountable for their companies’ failures, but senior managers will now face prosecution for negligence.
The legislation has been dubbed the “Nick Clegg bill”, after the former deputy prime minister, who is currently Facebook’s vice-president of global affairs and communications.
Children’s groups and concerned families have long pushed for social media companies to face legal consequences if they fail to act against self-harm content.
It comes after the father of a teenage girl who took her own life after viewing hundreds of online posts about self-harm and suicide spoke out.
Molly Russell, 14, killed herself in 2017 after viewing graphic images on Instagram. Her father, Ian Russell, told MPs that he had had “frustratingly limited success” in getting companies to remove content.
The online safety campaigner said tech firms only appeared to act when “news stories emerge” or when the government changed the law.
Mr Russell said the platforms’ “business culture” needs to change so that they respond to harmful content “proactively” rather than “reactively”.