New Online Safety Rules
The UK is introducing new rules to make the internet a safer place for children. These rules are part of the Online Safety Act and are overseen by the regulator, Ofcom. Tech companies and online services that can be accessed in the UK, regardless of where they are based, will need to follow these new requirements.
A key part of the new rules is the need for stronger age verification. Services that host content like pornography or material promoting self-harm, suicide, or eating disorders must implement effective ways to prevent children from seeing it. Ofcom has provided guidance on what counts as "highly effective" age assurance, including methods like photo ID matching or facial age estimation. Simply asking users for their birth date is not considered sufficient.
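As an illustration only, the sketch below shows how a service might gate access behind "highly effective" age assurance: a matched photo ID or a facial age estimate can pass, but a self-declared birth date on its own never does. The function names, margin, and data structure are hypothetical and are not taken from Ofcom's guidance.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical evidence a user might supply; field names are illustrative only.
@dataclass
class AgeEvidence:
    self_declared_age: Optional[int] = None       # e.g. a birth-date form field
    id_document_age: Optional[int] = None         # age read from a matched photo ID
    estimated_facial_age: Optional[float] = None  # output of a facial age estimator

ADULT_AGE = 18
FACIAL_ESTIMATE_MARGIN = 5  # hypothetical buffer to allow for estimation error

def passes_age_assurance(evidence: AgeEvidence) -> bool:
    """Return True only if a strong signal confirms the user is an adult.

    A self-declared birth date alone is never treated as sufficient.
    """
    if evidence.id_document_age is not None:
        return evidence.id_document_age >= ADULT_AGE
    if evidence.estimated_facial_age is not None:
        # Require a margin above 18 to reduce false positives near the boundary.
        return evidence.estimated_facial_age >= ADULT_AGE + FACIAL_ESTIMATE_MARGIN
    # Only a self-declared age (or nothing) was provided: treat the user as unverified.
    return False

print(passes_age_assurance(AgeEvidence(self_declared_age=21)))       # False
print(passes_age_assurance(AgeEvidence(estimated_facial_age=26.0)))  # True
```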
Another focus is on changing content feeds. Platforms that use algorithms to recommend content to young people will need to adjust these algorithms to filter out harmful material from children's feeds. This aims to stop children from being led towards dangerous or inappropriate content.
Tech firms are now legally required to take proactive steps to find and remove illegal content, such as child sexual abuse material. They also need to act quickly to tackle harmful content when they become aware of it. This is a shift towards a more preventative approach to online safety.
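One common proactive technique, shown here only as a simplified sketch, is screening uploads against a hash list of known illegal images before they are published. Real systems rely on perceptual hashing, shared industry databases, and human review; the SHA-256 matching, sample hash, and function names below are simplifications for illustration.

```python
import hashlib

# Hypothetical hash list of known illegal images (placeholder value, not a real hash).
KNOWN_ILLEGAL_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of(file_bytes: bytes) -> str:
    return hashlib.sha256(file_bytes).hexdigest()

def screen_upload(file_bytes: bytes) -> str:
    """Screen an upload before publication.

    Returns "blocked" on a match against the known-hash list, otherwise "allowed".
    A production pipeline would also queue matches for human review and reporting.
    """
    if sha256_of(file_bytes) in KNOWN_ILLEGAL_HASHES:
        return "blocked"
    return "allowed"

print(screen_upload(b"example image bytes"))  # "allowed" unless the hash matches
```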
Failing to comply with the new rules can result in significant penalties for tech firms. Ofcom has the power to issue large fines, up to £18 million or 10% of a company's global revenue, whichever is higher. In severe cases, Ofcom could even seek a court order to block access to a service in the UK.
The rules also aim to give children more choice and support online. This includes making it easier for them to report harmful content or activity and providing tools for them to have more control over their online experience, such as managing who can contact them.
Concerns about the risks posed by AI-generated content are also being addressed. New measures are being introduced to criminalize the creation and distribution of AI-generated child sexual abuse material.
People Also Ask
- What is the UK Online Safety Act?
- What does Ofcom do regarding online safety?
- What content is illegal under the Online Safety Act?
- What happens if tech companies don't comply with the Online Safety Act?
Ofcom Sets Safety Rules
The UK's online safety regulator, Ofcom, has finalized a set of rules aimed at providing significant new protections for children online. These measures, under the Online Safety Act, require tech firms to implement changes to make their platforms safer for younger users.
A key aspect of these rules involves changing how algorithms recommend content to young people. Platforms must also introduce enhanced age verification methods. These steps are designed to prevent children from easily accessing harmful material, such as content promoting self-harm, suicide, eating disorders, or pornography.
Tech companies are expected to comply with these new safety measures by July 25th, 2025. Failure to adhere to the regulations can result in significant penalties, including fines of up to £18 million or 10% of a company's global revenue, whichever amount is greater. In severe cases, Ofcom could even seek a court order to block access to a service in the UK.
Beyond algorithmic changes and age checks, the rules also require platforms to have systems in place for quickly reviewing and addressing harmful content, providing children with more control over their online experience (like managing content they dislike or blocking users), and making reporting and complaint processes easier to navigate.
People Also Ask
- What is the UK Online Safety Act?
  The Online Safety Act 2023 is a UK law designed to make online services safer. It places new duties on social media companies and search services to protect their users, particularly children, from harm and illegal content.
- When do the new Ofcom rules for children's online safety come into effect?
  Subject to parliamentary approval, tech firms are expected to start applying the safety measures set out in Ofcom's codes from July 25th, 2025.
- What kind of content do the rules aim to tackle?
  The rules target illegal content and content harmful to children, including material related to suicide, self-harm, eating disorders, pornography, and content encouraging serious violence or bullying. Platforms also need to address illegal harms like child sexual abuse material (CSAM).
- What happens if tech companies don't comply?
  Non-compliant companies can face significant fines, up to £18 million or 10% of their global revenue. In extreme cases, Ofcom has the power to seek a court order to block the service in the UK.
Tech Firms Must Comply
Under new rules finalized by the UK regulator Ofcom, tech companies are now required to implement significant changes to better protect children online. These regulations aim to create a safer digital environment for young users.
Key requirements for platforms include:
- Adjusting the algorithms used to recommend content to young people to ensure they are not exposed to harmful material.
- Introducing stronger age verification measures to prevent children from accessing age-inappropriate content, particularly on sites hosting pornography or content promoting self-harm, suicide, or eating disorders.
- Taking proactive steps to identify and remove illegal harmful content, as outlined in the Online Safety Act.
Tech firms must comply with these new rules by July 25th, 2025. Failure to do so could result in significant penalties. While some view these rules as a "gamechanger" for online safety, others argue they do not go far enough in protecting vulnerable users.
Stronger Age Checks
New rules require online platforms to implement more robust systems for verifying user ages. This is a crucial step in protecting children from accessing content or features that are not appropriate for them.
Previously, relying solely on a user's self-declared age proved insufficient. The updated regulations aim to ensure tech companies take reasonable steps to confirm a user's age, providing a better safeguard for younger users.
Platforms will need to consider various methods for age verification, ensuring they are effective while also respecting user privacy. The goal is to create a safer online environment by limiting children's exposure to harmful material and interactions.
Changing Content Feeds
Under the new UK online safety rules, tech companies must make significant adjustments to how content is presented to younger users. A key focus is on the content feeds and recommendations that children see every day.
This means the algorithms used by platforms will need to be altered. These algorithms are the systems that decide which videos, posts, or other content appears in a user's feed based on what they've viewed or interacted with before.
The goal is to prevent children from being recommended or easily accessing harmful material. Content that encourages or depicts things like self-harm, suicide, eating disorders, or pornography must be blocked or filtered out more effectively for young audiences.
By changing how content feeds work, the new regulations aim to create a safer online environment where children are better protected from seeing potentially damaging material through recommendations.
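A minimal sketch of the idea, using hypothetical category labels: before ranking, a recommender could drop any candidate item tagged with a harmful category whenever the account belongs to a child. The class names, fields, and thresholds are assumptions made for illustration, not any platform's actual system.

```python
from dataclasses import dataclass

# Hypothetical content categories a platform's classifiers might assign.
HARMFUL_TO_CHILDREN = {"self_harm", "suicide", "eating_disorder", "pornography"}

@dataclass
class Item:
    item_id: str
    categories: set[str]
    score: float  # engagement score from the existing ranking model

def recommend(candidates: list[Item], is_child_account: bool, top_n: int = 10) -> list[Item]:
    """Rank candidates, filtering out harmful categories for child accounts."""
    if is_child_account:
        candidates = [it for it in candidates
                      if not (it.categories & HARMFUL_TO_CHILDREN)]
    return sorted(candidates, key=lambda it: it.score, reverse=True)[:top_n]

feed = recommend(
    [Item("a", {"sports"}, 0.9), Item("b", {"self_harm"}, 0.95)],
    is_child_account=True,
)
print([it.item_id for it in feed])  # ['a'] — the harmful item is never recommended
```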
Stopping Harmful Content
A key focus of the new online safety rules in the UK is stopping children from seeing harmful content. The regulator, Ofcom, is putting in place specific requirements for tech companies to address this issue.
Platforms that host content like pornography, or material that promotes self-harm, suicide, or eating disorders, are required to take stronger action. This means they must implement measures to prevent children from accessing these types of content.
One significant change involves the algorithms used by these platforms. These algorithms often recommend content to users based on their activity. Under the new rules, platforms will need to change how these algorithms work for young people to reduce the risk of exposing them to harmful material.
Concerns have been raised by families and campaigners about the potential for children to still encounter dangerous content, even with existing protections. Reports have highlighted instances where young users could still be shown sexualised content or hateful comments. The new rules aim to strengthen the systems in place to prevent such occurrences.
The rules also address emerging risks, such as those posed by AI-generated content. There are concerns about AI being used to create harmful deepfakes or distressing videos, and platforms are expected to have systems to identify and remove such material promptly.
Ultimately, the goal is to create a safer online environment for children by making tech firms more accountable for the content available on their platforms and requiring them to actively work towards stopping the spread of harmful material.
Penalties for Failing
Tech firms that do not comply with the new online safety rules in the UK face serious consequences. The regulator, Ofcom, has the power to take action against companies that fail to meet their obligations under the Online Safety Act.
The penalties can be substantial. Ofcom can impose fines of up to £18 million or 10% of a company's annual global turnover, whichever amount is greater. These fines are designed to act as a strong incentive for companies to prioritize user safety, especially the protection of children.
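To make the "whichever is greater" rule concrete, here is a quick illustrative calculation; the turnover figures are hypothetical.

```python
def maximum_fine(global_annual_turnover_gbp: float) -> float:
    """Upper limit of an Ofcom fine: £18m or 10% of global annual turnover, whichever is greater."""
    return max(18_000_000, 0.10 * global_annual_turnover_gbp)

# A firm with £50m turnover: 10% is £5m, so the £18m floor applies.
print(f"£{maximum_fine(50_000_000):,.0f}")    # £18,000,000
# A firm with £2bn turnover: 10% is £200m, which exceeds £18m.
print(f"£{maximum_fine(2_000_000_000):,.0f}") # £200,000,000
```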
In the most severe cases of non-compliance, Ofcom also has the ability to seek a court order. This could potentially lead to a service being blocked from access in the UK. Additionally, senior managers can be held criminally liable if they fail to ensure companies comply with certain requests from Ofcom, particularly concerning child safety duties.
Family Concerns
Families across the UK share deep concerns about their children's safety in the online world. While new rules aim to provide stronger protections, many parents and guardians remain anxious about the potential risks children face daily.
A primary worry is the exposure to harmful content. This includes material promoting self-harm, suicide, or eating disorders, as well as sexualized content. Despite platforms having existing rules, the ease with which children can still encounter such material is a significant point of distress for families.
There are also questions about the effectiveness of safety measures implemented by tech firms. For instance, concerns have been raised that features like "Teen Accounts" might not go far enough to prevent exposure to risks, and that age verification methods can be easily bypassed, allowing younger users to access content intended for older audiences.
Emerging threats, such as the misuse of AI-generated content, add another layer of concern. The potential for AI to create distressing or harmful material involving minors, or to replicate victims without consent, highlights the constantly evolving nature of online dangers that families worry about.
Ultimately, for many families, the new regulations represent a step towards a safer online environment, but the focus remains on whether tech companies will effectively implement and enforce these rules to genuinely protect children from harm.
AI Generated Risks
Artificial intelligence (AI) brings many changes, including new risks for children online. One concern is the creation of disturbing AI-generated content.
Reports have highlighted distressing examples, such as AI-generated videos depicting victims of crimes. These can cause significant harm to families and individuals.
The UK's Online Safety Act considers such harmful AI-generated content illegal. Tech platforms are required to detect and remove this material quickly under the new rules.
Ensuring platforms effectively use AI themselves to find and remove harmful AI-generated content is a critical challenge. Companies need to take robust action to prevent children from encountering such material.
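As a rough sketch of how prompt removal might be organised: content flagged by classifiers as both synthetic and harmful could be pushed into a queue that actions the most confident cases first. Every score, threshold, and name below is a hypothetical assumption, not a description of any real moderation system.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Flagged:
    priority: float                      # lower value = actioned sooner
    content_id: str = field(compare=False)

def enqueue_for_removal(queue: list[Flagged], content_id: str,
                        synthetic_score: float, harm_score: float) -> None:
    """Queue likely AI-generated harmful content, most confident cases first.

    Both scores are hypothetical classifier outputs in [0, 1].
    """
    if synthetic_score > 0.9 and harm_score > 0.9:
        # Negate so the highest combined confidence is popped first.
        heapq.heappush(queue, Flagged(-(synthetic_score * harm_score), content_id))

queue: list[Flagged] = []
enqueue_for_removal(queue, "video_123", synthetic_score=0.97, harm_score=0.95)
if queue:
    print("remove first:", heapq.heappop(queue).content_id)  # video_123
```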
Platform Safety Checks
New rules in the UK require tech companies to implement significant safety checks on their platforms to protect children. These checks are designed to make online spaces safer for young users.
Key requirements include:
- Reviewing and changing algorithms that recommend content to children to avoid showing harmful material.
- Introducing stronger measures for age verification to prevent children from accessing age-restricted content, such as pornography or content promoting self-harm.
- Taking robust action to remove or limit access to illegal or harmful content quickly.
Regulators, like Ofcom in the UK, have finalized codes of practice outlining these requirements. Platforms that fail to comply by the specified deadlines could face significant penalties, including large fines.
While regulators call these changes a "gamechanger", some campaigners argue that the rules may not go far enough to fully protect children, highlighting ongoing issues with harmful content still being accessible or difficult to remove.
People Also Ask
- What are the new UK online safety rules for tech companies?
  The new rules under the UK's Online Safety Act require tech firms to identify and manage risks of harm to their users, particularly children. This includes tackling illegal content and protecting children from harmful material like content promoting suicide, self-harm, eating disorders, and pornography. They also need to implement measures such as stronger age checks and changing how content is recommended to young people.
- How do the new UK online safety rules protect children?
  The rules provide transformational new protections for children online. Tech firms must prevent children from accessing harmful and age-inappropriate content, implement effective age assurance, configure algorithms to filter harmful content from children's feeds, and make it easier for children to report problems and have more control over their online experience.
- What penalties do tech companies face under the UK Online Safety Act?
  If tech companies fail to comply with the Online Safety Act, they can face significant penalties. Ofcom, the regulator, can issue fines of up to £18 million or 10% of a company's annual global turnover, whichever amount is greater. In severe cases, Ofcom can even seek a court order to block a service from being accessed in the UK.
- When do tech firms need to comply with the new rules?
  Tech firms have deadlines to comply with different aspects of the Act. For illegal harms, they had until March 16, 2025, to complete risk assessments and begin implementing safety measures. For child safety measures, including changes to age checks and content feeds, firms must introduce these by July 25, 2025.