Digital Trust and Safety and Internet Fragmentation

Online safety regulations and government actions can severely impact Internet connectivity and people's online presence. In addition to undertaking fundamental rights reviews of these regulations, it is important to better understand how digital trust and safety measures can create hurdles for online presence and Internet connectivity.
What do we mean by Internet connectivity and online presence?
By Internet connectivity, we mean that users should be able to connect to the global Internet, access online services, and maintain an online presence on it. Under the framework of Internet fragmentation, fragmentation occurs when users' online presence is severely diminished and no alternatives exist for maintaining that presence or accessing the Internet.
Under what circumstances do digital trust and safety measures put connectivity at risk?
It is difficult to enumerate all the instances in which digital trust and safety measures put connectivity at risk, but this piece attempts to highlight some of them.
Regulation without carefully considering the impact on access and connectivity
Some safety regulations affect Internet access and connectivity simply because they fail to consider their impact on access and online presence. For example, the Digital Services Act (DSA) includes provisions covering hosting companies, which form part of the Internet's infrastructure, and blocking access at that level can be disproportionate. Article 6(1)(b) of the DSA requires that a hosting company:
“(b) upon obtaining such knowledge or awareness [of illegal content or activity], [the hosting company] acts expeditiously to remove or to disable access to the illegal content.”
The problem with asking infrastructure providers and hosting companies to take vaguely defined actions is that providers may resort to disproportionate methods or use techniques that impair access to other, unrelated online services.
However, not all safety regulations are equally problematic. Notably, the user access considerations in the UK Online Safety Act and Ofcom's codes of practice are well thought out. In its consultation, Ofcom proposes that users' access to online services should only be disrupted under egregious circumstances, such as terrorism.
Trust and Safety and Digital Sovereignty Claims
Digital sovereignty claims are not new. Since the early 2000s, countries have tried to assert their "digital sovereignty" over global social media platforms. The case of LICRA and UEJF v. Yahoo! is a famous early example of an attempt to enforce local laws (in this case, French law) on a global platform.
Over time, nation-states increasingly sought to enforce their local laws—often related to trust and safety—on global platforms. Initially, these efforts targeted ISPs, requiring them to block and filter websites that failed to comply with national trust and safety measures. Typically, less democratic countries applied careless filtering and blocking under the guise of "maintaining morality" online and protecting users.
This trend eventually spread to countries with higher levels of Internet freedom. Democratic nations, confident in their democratic processes, began adopting similar censorship techniques to “go tough on crime” while assuming they could maintain Internet freedom.
For example, the European Union adopted the DSA, emphasizing fundamental rights and asserting that what is illegal offline should also be illegal online. However, this approach overlooks the fundamental differences between the two realms and risks enabling government overreach: safety hazards offline often have different consequences than those online.
As anti-censorship tools became more sophisticated, asserting "digital sovereignty" grew more challenging, and governments responded with more creative measures. Even laws celebrated for protecting user rights, such as Brazil's Marco Civil, required digital platforms to have a legal representative within the country. While this initially seemed democratic and supportive of digital sovereignty, cases like India's demonstrated that such requirements often led to blocked access and discrimination against users rather than penalties for platform providers.
In Brazil, the government's zeal to enforce content moderation orders escalated to threats to shut down a satellite Internet system when a platform failed to comply with a trust and safety order.
What is the way forward?
How can we maintain a safe online environment, protect human rights, and ensure unfettered Internet access for everyone?
Digital trust and safety should be provided to everyone on the Internet, regardless of where they are located. Setting a benchmark that platforms can follow (without disrupting their unique governance models) could help provide a global Internet that respects human rights.
Moreover, we should move away from assessing decisions based on politically charged contexts or on whether a country is democratic, undemocratic, or somewhere in between. Instead, we should use the following frameworks:
1. Human Rights Impact Framework
– This framework evaluates trust and safety decisions based on their impact on fundamental human rights, including freedom of expression, privacy, and non-discrimination.
2. Internet Impact Framework
– This framework assesses how trust and safety measures affect Internet access, connectivity, and the integrity of the global Internet.
By applying these frameworks, we can prevent Internet fragmentation, uphold our commitment to providing global Internet access, and ensure a safe, free, and open online environment for everyone, regardless of political allegiances or digital sovereignty claims.
