Ofcom's latest draft guidance on tackling misogynistic content online promotes a safety-by-design approach: what other duties are coming for tech companies under the Online Safety Act 2023?

Suyash Srivastava
April 1, 2025

Introduction

The Office of Communications ("Ofcom") has, in its latest draft guidance, proposed new standards for tech companies operating in the UK: practical steps to protect women and girls against, among other things, online misogyny, domestic abuse, image-based sexual abuse, gender-based harassment, and other safety risks women may face online. The consultations on the draft guidance, still ongoing, were scheduled as part of Phase II of the three phases for implementing the Online Safety Act 2023 (the "Act"). While some tech companies have already taken steps towards making online spaces safe for women and girls, others have not.

The Act applies, inter alia, to search services, social media companies, gaming companies, dating apps, and online forums with a significant number of UK users or targeting the UK market, regardless of where they are based, in order to protect people from illegal content and from online harms against women and, especially, children (defined in the Act as anyone under the age of 18).

So, what duties and obligations has Ofcom set for tech companies to make the online experience safer for women and girls? Let's break down the different phases in which Ofcom articulates the interventions and undertakings required of service providers:

Phase I

Illegal Contents Code of Practice

In Phase I, the first legal obligation for tech companies is compliance with the Illegal Contents Code of Practice, under which providers of regulated user-to-user services and search services ("Part 3 services" under the Act) have a duty to protect people from illegal harm. Ofcom was required under the Act to issue these Codes of Practice for providers of Part 3 services, recommending measures for complying with the duties the Act imposes. From March 17th, 2025 onwards, tech companies need to start implementing the safety measures under the Code, or implement alternative (and comparable) measures in their place, maintaining in both cases an annual record of risk management activities in relation to online safety. For compliance with the Code, Ofcom has published an 84-page guidance for user-to-user services and a separate 55-page guidance for search services. While Ofcom will oversee companies' compliance with the online safety measures, implementing those measures will involve the processing of personal data, for which the Information Commissioner's Office ("ICO") is the statutory regulator.

Illegal content risk assessment

Another legal duty Ofcom has put forward as part of Phase I is the illegal content risk assessment, which requires tech companies to assess the risks on their services associated with priority offences and other illegal content under the Act. In essence, the risk assessment should gauge how likely a service's users are to encounter illegal content. This milestone obligation came into force on December 16th, 2024, and tech companies had until March 16th, 2025 to complete the four-step risk assessment process set out in Ofcom's 84-page risk assessment guidance.
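The Act does not prescribe a format for the annual record of risk management activities, but to make the record-keeping duty concrete, here is a minimal, purely illustrative sketch of how a provider might log the outcome of an assessment for one category of harm. The class and field names are hypothetical and are not drawn from Ofcom's guidance.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessmentRecord:
    """Hypothetical entry in the annual risk-management log the Act
    requires providers to keep. All field names are illustrative."""
    harm_category: str          # e.g. a priority offence such as harassment
    likelihood: str             # e.g. "low" / "medium" / "high"
    affected_groups: list[str]  # e.g. ["women", "girls", "children"]
    mitigations: list[str]      # safety measures adopted or planned
    assessed_on: date = field(default_factory=date.today)

# Example entry a user-to-user service might record:
record = RiskAssessmentRecord(
    harm_category="intimate image abuse",
    likelihood="medium",
    affected_groups=["women", "girls"],
    mitigations=["hash-matching of known images", "user reporting flow"],
)
print(record)
```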
Companies to whom these duties apply must comply with them, and Ofcom will start asking specific service providers to disclose their risk assessments by the end of March 2025.

Phase II

Age assurance for pornography providers

Under Phase II, providers of pornographic content covered by s. 81 of the Act are obliged to carry out age verification or age estimation of their users (Ofcom calls this "age assurance"), through a process designed and suitable to prevent children from encountering pornographic content. Ofcom has, under s. 82 of the Act, published a 50-page guidance illustrating how Part 5 providers under the Act can take effective and immediate steps to assure the age of their users (a simplified sketch of what such an age gate might look like appears at the end of this Phase II overview). It should be noted that these duties already came into effect on January 17th, 2025, so, probably in the coming months, we will have concrete data on the implementation of the measures and procedures Ofcom recommends.

Children's Access Assessment

This obligation falls on all Part 3 services under the Act (meaning user-to-user services and search services) and requires them to complete a Children's Access Assessment (the "Assessment") by April 16th, 2025. The Assessment essentially concerns whether or not their services are likely to be accessed by children. It should be fairly straightforward for tech companies, and service providers that already have, or are likely to have, underage users will accordingly be required to implement safety measures to protect children online. Note that this assessment requirement is different from the assessment under the ICO's Children's Code, which serves the purpose of complying with data protection law.

Children's Risk Assessment

Service providers that have, or are likely to have, child users (as determined by the Children's Access Assessment they will have conducted) will be required to carry out a Children's Risk Assessment under the Act, on which Ofcom is still consulting, with final guidance expected in spring 2025. This Assessment will require tech companies to understand and identify the risk and likelihood of harm to children, and subsequently to implement safety measures to protect children online. Once the final guidance is out, tech companies will have three months to complete their first children's risk assessment. Further, a Part 3 service provider can only conclude that children are not normally able to access its service if it uses some form of age assurance and implements effective access controls to concretely assure the ages of its users. On this, Ofcom has provided a 26-page guidance on highly effective age assurance methods; this guidance is different from (but consistent with) the highly effective age assurance guidance for Part 5 service providers, mentioned above.

Children Protection Code

While Ofcom has published final codes and risk assessment guidance for platforms on how to tackle illegal content (as mentioned above), it has yet to publish the final codes and guidelines concerning the protection of children, which is a priority for Ofcom. Once these codes come into force, Ofcom's role will be to hold online service companies accountable.
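Ofcom's age assurance guidance describes methods (photo-ID matching, facial age estimation, and so on) rather than implementations, but the gating logic it implies can be sketched. The following is a minimal, hypothetical illustration of a server-side age gate for s. 81 content; the names (verify_age, AgeCheck) are invented for this example and do not come from the guidance.

```python
from enum import Enum

class AgeCheck(Enum):
    PASSED = "passed"    # highly effective age assurance succeeded
    FAILED = "failed"    # user assessed as under 18
    UNKNOWN = "unknown"  # no check completed yet

def verify_age(user_id: str) -> AgeCheck:
    """Stand-in for a call to an age assurance provider
    (photo-ID matching, facial age estimation, etc.)."""
    # A real service would call out to an external provider here.
    return AgeCheck.UNKNOWN

def serve_adult_content(user_id: str) -> str:
    """Serve s. 81 pornographic content only after age assurance passes."""
    if verify_age(user_id) is AgeCheck.PASSED:
        return "content"
    # Self-declaration ("click to confirm you are 18") does not count as
    # highly effective assurance, so anything short of PASSED is blocked.
    return "blocked: complete age verification first"
```

The key design point the guidance drives at is that the default state is "blocked": access is only opened once a highly effective check has affirmatively passed.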
Comments on Ofcom's draft practical guidance for tech companies on the protection of women and girls

Ofcom is focusing on nine areas in which it expects online platforms to improve safety for women and girls by taking responsibility and adopting a safety-by-design approach, building protections into their features and functionalities. Below is a schematic summary of the online safety harms to women and girls, followed by the key areas of action Ofcom proposes tech companies implement.

Breakdown of Ofcom's focus on online safety harms to women and girls:

Online misogyny
1. Technology, more than ever before, is facilitating the spread of misogynistic ideas to young men (70% of boys aged 11-14 have been exposed to misogynistic views) and to women. Online misogyny is on the rise, and it can reach you in various forms.
2. Women and girls are often subjected to hate speech, such as sexist insults or degrading comments, on social media platforms (e.g. TikTok, Facebook, and Instagram).
3. It has become very common for women and girls to receive harassment and threats from unknown people in the form of abusive messages, doxxing (sharing personal information such as a phone number, address, or email to inflict harm), and the like. Sexist misinformation is also often used to create a narrative about women's role and worth in society, aiming to reduce their impact in their community.
4. Objectification of, and exclusion of, women and girls from online spaces is another classic example: bullying based on appearance and bodies, as well as slut-shaming, both of which result in female users being forced to leave online spaces, forums and discussions.

Gender-based online harassment
1. Online spaces can expose women and girls to worldwide attacks from unknown people in the form of cyberstalking. Messages can be persistent, with invasive monitoring of the victim's online and offline activities, coupled with threats.
2. Trolling and pile-ons are other forms of coordinated attack that can come from multiple users or an entire group.

Online image abuse
1. There has been an increasing trend of impersonation of women and girls on online platforms: fake profiles created out of hate or jealousy to spread false information and damage reputations.
2. Revenge porn is used to share private intimate (often sexual) images without consent.
3. Deepfake images generated with AI technologies, often sexual images and videos, are used to blackmail the victim or to retaliate against them through humiliation.

Online domestic abuse
1. Known and unknown people often stalk their victims, tracking their location via apps or spyware.
2. Threats and blackmail are often carried out using digital evidence, such as messages and images, to control or manipulate the victim.

Actions Ofcom proposes for tech companies:
1. Governance and accountability processes to address gender-based online harms, such as conducting surveys or consulting subject-matter experts.
2. Risk assessments on gender-based online harms, such as conducting user surveys or talking to victims of online harms.
3. Transparency, in the form of publishing the different kinds of gender-based harms assessed on the platform, along with the user reports processed, actions taken, and punitive outcomes of that reporting.
4. Abusability evaluation and product testing, conducted by service providers to ensure their products and features are not prone to exploitation by malicious actors.
5. Safer defaults, such as prompts asking users to confirm before sharing potentially harmful gender-based material.
6. Reduced circulation of online gender-based harms, for example by ensuring that harmful content does not recirculate on the platform and removing explicit pictures shared without the subject's consent (see the sketch after this list).
7. Better user control over the experience, such as bundling settings in a single place so that women affected by pile-ons can more easily protect their accounts.
8. Ensuring victims of gender-based harms can report in a safe (and anonymous) manner, with the service provider taking swift action on their complaints, such as immediately removing from the platform explicit images shared without consent.
9. Ensuring action is taken on complaints, with appropriate mechanisms and safety features in place to handle large volumes of complaints and to remove malicious users from the platform.
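As a simplified illustration of action 6 above, the sketch below blocks re-uploads of images already removed as non-consensual intimate imagery by matching against stored hashes. Production systems use perceptual hashes (such as PhotoDNA or PDQ) that survive resizing and re-encoding; the exact SHA-256 match here is a deliberate simplification, and all function names are hypothetical.

```python
import hashlib

# Hashes of images previously removed as non-consensual intimate imagery.
# Real deployments use perceptual hashes (PhotoDNA, PDQ) that tolerate
# resizing and re-encoding; SHA-256 is an exact-match simplification.
known_ncii_hashes: set[str] = set()

def register_removed_image(image_bytes: bytes) -> None:
    """Record a removed image so identical re-uploads can be blocked."""
    known_ncii_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload matching previously removed imagery (action 6)."""
    return hashlib.sha256(image_bytes).hexdigest() not in known_ncii_hashes

# Example: once an image is removed after a valid complaint (action 8),
# an identical re-upload is rejected automatically.
register_removed_image(b"...image bytes...")
assert not allow_upload(b"...image bytes...")
```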
Phase III

Under Phase III, some service providers will, based on their number of users and their functionality, be identified, categorized and subjected to additional obligations depending on their categorization by Ofcom. These additional obligations will be brought in via further legislation still to be tabled before Parliament; at the moment, no action concerning these additional obligations is pending for tech companies.

Final thoughts

While the online safety regime itself is hugely ambitious, Ofcom, through its various Codes and guidance, aims to hold tech companies accountable in a bid to make the online experience safer for women, girls and children. How actively tech companies will comply, and how many enforcement actions Ofcom will actually take against service providers for non-compliance, only time will tell.

About the Author

Suyash Srivastava is a privacy lawyer based in London. https://berkeleyglobalsociety.com/members/suyash-srivastava/