Social media firms must stop pushing harmful content to children online with “aggressive” algorithms, the online regulator has demanded.

Ofcom has published a new set of guidelines for tech companies to ensure that children are better protected online.

The suggestions include more robust age-checks for young users, and changes to social media algorithms to stop the promotion of harmful material.

It comes as online safety campaign group Internet Matters reveals that one in seven teenagers aged 16 and under has experienced image-based sexual abuse online.

The mother of murdered schoolgirl Brianna Ghey, Esther Ghey, who has campaigned to protect children from online harm following her daughter’s death, welcomed the new guidelines as “extremely positive” but said they could go a step further.

Mrs Ghey told The Independent: “Sitting down with Ofcom was really positive. I feel they really want to make a change and want this to be as successful as possible. They have got young people’s best interests at heart.”

She said the guidance could be improved by requiring social media companies to allow parents to view content accessed by children, as well as the option of reporting problematic material on a child’s behalf.

She added: “Brianna was accessing self-harm sites and eating disorder pages on Twitter. If she wasn’t able to access this, she probably wouldn’t have been encouraged to harm herself in such a way.”

The new guidance from Ofcom has been written to help companies comply with their duties under the Online Safety Act, which makes platforms legally responsible for keeping people safe online.

Under the rules, online media companies will need to assess whether children are likely to access their service and then complete a risk assessment to identify the risks that their products pose to children.

Ofcom has also said that firms must prevent children from seeing the most harmful content relating to suicide, self-harm, eating disorders and pornography. They should also minimise a child’s exposure to serious harms such as violent, hateful or abusive material and bullying content.

Ofcom has set out a number of steps it suggests firms take to meet their legal obligations under the Online Safety Act, such as tracking unusual increases in harmful content on their platforms and implementing “highly effective age assurance” to make sure children are old enough to use their apps.

Social media firms do not have to follow the recommendations in full, but if they choose not to, they will have to show how they have met their legal duties in another way.

Other proposals from the regulator include making sure that children are not recommended increasingly harmful or violent content on their social media feeds. Ofcom said that social media companies use algorithms to determine how content is shown to users based on their characteristics, inferred interests and behaviour. This is the key way that children come across content about suicide, self-harm or eating disorders, the report said.

Ofcom wants social media companies to ensure that this type of content is not shown to children. The regulator also wants the rules to make it easier for children to report harmful content. Children should be able to accept or decline an invitation to a group chat, disable comments on their own posts, and block or mute other people, Ofcom suggested.

Tech firms also need to get better at moderating content on their platforms and removing it faster when it is flagged as age-inappropriate, Ofcom said.

Research from Internet Matters showed that 14 per cent of teenagers aged 16 and under said that they had experienced image-based sexual abuse. The findings come from a survey of 1,000 children aged 9-16.

The National Crime Agency recently issued a rare warning to schools about the rising dangers of criminals targeting children on social media and coercing them into sharing nude images. The fraudsters then threaten to share the photos unless money is paid.

Dame Melanie Dawes, Ofcom chief executive, said that for “too long” children’s experiences online “have been blighted by seriously harmful content which they can’t avoid or control”.

Referring to the new code of conduct, Dame Melanie said tech firms “will need to tame aggressive algorithms that push harmful content to children in their personalised feeds and introduce age-checks so children get an experience that’s right for their age”.

She said, once the guidelines are in force, “we won’t hesitate to use our full range of enforcement powers to hold platforms to account”.

Technology Secretary Michelle Donelan said that the Ofcom rules were “clear”, adding: “Platforms must introduce the kinds of age-checks young people experience in the real world and address algorithms which too readily mean they come across harmful material online.”

Sir Peter Wanless, chief executive of the NSPCC, said the draft codes set “high standards” for tech companies to keep children safe.