
Contributing to Open Source Digital Governance 

Open source digital governance is the talk of the town these days. The Internet community has been focusing on openly sharing best practices and solutions to governance problems. Practitioners and scholars have advocated for open source tools in trust and safety. Some tech companies have used open source toolkits and domain name abuse initiatives to address governance and compliance issues in the domain name space. Others have adopted open source governance, risk, and compliance software. Another kind of "open source" initiative is Tech Against Terrorism, which is issue-specific (it works only on terrorist content) and helps companies by sharing information and knowledge. In a similar vein, the Prosocial Design Network rates and reviews prosocial interventions and their effectiveness at encouraging healthy behavior online and meaningful human connection. There are also general open source initiatives, such as OpenSanctions, that tech companies can use to comply with sanctions and provide their services globally.

These are important initiatives. However, open source digital governance is currently fragmented and missing key services for increasing trust and safety. Nor does it address governance holistically, ensuring that a fix to one part of the system does not harm another.

What is open source digital governance? 

Open source initiatives provide governance solutions openly and transparently, usually licensed for public use free of charge. They go beyond open source tools and recommendations to provide the actual processes and policies, ranging from human rights impact assessments to compliance systems to governance and privacy impact assessments. Besides reducing the cost of governance for Internet platforms and Internet infrastructure providers, open source mechanisms can be more transparent and community-oriented than their commercial counterparts: because they evolve with their community of users, their processes, advice, and digital governance methods can be constantly refined over time. The designers of open source services also understand the importance of a global and interconnected Internet.

Where do we need open source digital governance? 

We need holistic digital governance that tech companies and technology providers throughout the Internet stack can use for general governance purposes as well as for specific issues. Here are some examples of open source governance solutions for trust and safety, sanctions compliance, and human rights impact assessment.


  • Trust and Safety

Platforms that are large enough to meet the Digital Services Act's user-number threshold have to comply with many of that European Union law's provisions. However, trust and safety practices are not just for bigger platforms. To keep operating, smaller platforms also need certain governance structures in place to govern their platforms. There is a myriad of commercial digital trust and safety providers and third-party vendors, but few open source compliance services that could guide companies that cannot afford them. Open source compliance mechanisms can help bring trust and safety to digital services and products. There can also be specific open source digital audit processes and risk assessments for what certain regulations require.

  • Sanctions and connectivity 

Many Internet service providers (ISPs) and online service providers have to comply with economic sanctions laws and regulations. Smaller players, and companies with risk-averse lawyers, might either decide not to provide their services to sanctioned countries or hire third-party compliance vendors. Third-party sanctions compliance vendors can be expensive, their processes can be opaque, and they may be risk averse and lack a sound understanding of how access to the Internet can amount to access to essential services. Open source compliance can help solve these issues, allowing companies to provide services to sanctioned countries while remaining compliant with economic sanctions.

  • Human rights impact assessment

Human rights impact assessment (HRIA) processes measure and analyze the impact of digital products on human rights. They draw especially on international human rights principles, but also use social science research methods. Human rights experts and consultants usually undertake the HRIA, and socially minded and large platforms can afford to commission one. HRIA principles and processes are known to experts and mentioned in their reports, but they are not easy for non-experts to use and replicate. HRIA is a very important process: it helps evolve the policies and processes of tech companies so that they do not repeat past mistakes.

Small companies, and companies that cannot afford a human rights impact assessment, could use open source HRIA tools to measure the impact of their digital products on human rights. Open source HRIA can also help standardize HRIA processes and methods, and prompt review of the methods themselves. Communities and vulnerable groups can use open source HRIA to measure how certain digital products and services affect human rights from their perspective. This can help us understand how different rights are impacted in different contexts and by different communities.

What is next? Evolving digital governance processes

We should contribute to and build open source digital governance processes. Many initiatives already contribute: the Integrity Institute, the Trust and Safety Professional Association, and many civil society organizations provide best practices, recommendations, and toolkits for governing digital products. We should map these processes, analyze the gaps, and ask what other open source toolkits might help us provide Internet and digital trust and safety to everyone. Open source digital governance processes can help map these toolkits and provide concrete, holistic governance models, and, through human rights impact assessment, contribute to the evolution and reform of our governance mechanisms. In the next blog, we will explain the importance of open source human rights impact assessment processes.


Bringing Accountability and Transparency to Under-Scrutinized Digital Platforms

Editor’s Note: This blog was published by the National Democratic Institute 

The policies and products of major tech platforms such as Facebook, Instagram, Twitter, and YouTube receive a significant amount of attention and engagement from researchers, journalists, and civil society organizations alike. The National Democratic Institute (NDI) has previously engaged these platforms to recommend interventions for ending online violence against women in politics and advocate for robust data access for non-academic researchers, among other topics. However, there are other digital platforms—including those that might be smaller in scale, are commonly used in only a few countries or by specific communities, or are relatively new to the market—that are also important to political processes around the world.

NDI is exploring how lessons learned from engagement with the aforementioned “legacy” platforms can inform recommendations to help other platforms ensure their policies and products make a positive impact for democracy. As the larger, better-resourced platforms walk back their commitments to protecting users from disinformation and online harassment, advocacy to encourage “alternative” or “emerging” platforms to uphold (or even factor into their design) the Democratic Principles for the Information Space is more important than ever.

NDI recently organized a roundtable discussion with civil society representatives and researchers to gather feedback about the risks and opportunities these platforms present in diverse contexts, including during pivotal democratic moments such as elections.


During the discussion, participants generally agreed there is significant value in dedicating time and resources toward researching and engaging with under-scrutinized platforms. However, the group grappled with which platforms to prioritize and how to develop terminology for talking about these platforms that is inclusive but not overly broad. NDI distributed a survey prior to the roundtable that asked respondents about their use of a range of platforms: audio-based apps such as Clubhouse and Discord, apps with primary user bases in one country or region such as Line and KakaoTalk, recently developed apps such as BeReal and Lemon8, encrypted messaging apps like Telegram, and widely popular but relatively new (compared to legacy platforms) apps like TikTok. There is significant diversity among these platforms in terms of user base, longevity, and primary function, which makes assessing them as a whole all the more challenging.

The terms “alternative” and “emerging” were considered as potential classifiers, but not all of these platforms are “emerging” in the sense that they are new to the market or even rising in popularity, and a platform that is “alternative” in one context may be mainstream in another. The majority of these apps are social media or communication platforms, but participants also considered how other digital products like cloud services could be used in contexts where access to these platforms is restricted. Though no consensus was reached on the scope of platforms under consideration or the best terminology to use, it was evident throughout the discussion that any recommendations attempting to target a variety of platforms should be appropriately nuanced to facilitate adoption across a range of contexts.


One characteristic that unifies these platforms is their relative inexperience in building up systems and policies compared to the legacy platforms, and a lack of diverse regional expertise (though the regional expertise of legacy platforms arguably leaves much to be desired). Channeling engagement through coalitions may be a useful strategy, as these platforms’ capacity to engage with civil society organizations and researchers around the world may be limited. Established trust and safety associations, such as the Digital Trust & Safety Partnership, the Trust & Safety Professional Association, and the Integrity Institute, offer different models for information sharing and collective action. Some coalitions may facilitate direct participation from platforms themselves, though the willingness of platforms to voluntarily commit to engagement may vary depending on the platforms’ resources and the political context in the country where the platform is based. Connections between a platform and government authorities may also shape how the platform approaches engagement with civil society and researchers on topics like content moderation, data privacy, and election integrity policies.

Different modalities of engagement will likely be required depending on a platform’s user base (whether national, regional, or global), whether a platform’s moderation teams are open to having discussions about identified threats, and the existing rules a platform has in place. A decision tree may be a useful tool to help civil society organizations determine which method of engagement is most effective and which recommendations to prioritize in advocacy to a given platform.


In addition to direct engagement with platforms, the roundtable participants also considered other mechanisms to incentivize platforms to incorporate recommendations from civil society into their policies and products. For example, pressuring investors to demand compliance with international human rights standards could be an effective strategy for incentivizing smaller platforms funded by Silicon Valley venture capital groups. App stores and payment processors could also be a potential tool for incentivizing platforms to take certain actions, but there is a risk of app stores arbitrarily blocking apps (including in compliance with government requests) without transparency around the decision. Litigation against platforms is becoming increasingly common, but may be abused in illiberal contexts to entrench state power by imposing restrictions on free expression.

Platforms are not immune to misuse and abuse just because they have a smaller user base or have not received as much attention from the international research community. After Meta's Oversight Board recommended the Cambodian Prime Minister's Facebook account be suspended for posting a video threatening his political opponents with violence, the Prime Minister announced he would be leaving the platform, instead relying on Telegram to share his message (the Prime Minister also has a TikTok account). Tech companies of all shapes and sizes need to be prepared to mitigate the risk of bad actors and harmful content migrating to their platforms. NDI will leverage insights from this roundtable discussion, one-on-one conversations with relevant stakeholders, and desk research as it continues to refine its approach to these important questions.


As there was a lot of interest in sharing knowledge and expertise about under-scrutinized platforms, Digital Medusa has convened an informal mailing list to share knowledge among civil society members and explore research ideas. If you are interested, please join us: 

Domain name registries and registrars: the new digital trust and safety wardens in Bluesky

Just recently, Bluesky, the decentralized social network running on an open protocol called the AT Protocol, announced that, as a mechanism for supporting its business financially, it will directly sell domain names as handles for its users. The sales will be processed through Namecheap, a registrar accredited by the Internet Corporation for Assigned Names and Numbers (ICANN). Currently, handles on social media platforms are internal handles, not independent domain names. As I will explain, using domain names as handles on social networks might change the digital trust and safety landscape.

ICANN, Registries and Registrars

ICANN is the organization that globally coordinates the policies for allocating domain names such as Alice.LOL and Farzaneh.TEAM. It sets high-level policies on domain name allocation and, through contractual agreements with registries (the operators of top-level domains such as .LOL) and registrars (the companies that register names such as EXAMPLE.LOL), it governs these entities globally. Registry operators can also assert some control over registrars and impose extra governance criteria on them.

Understanding this detail is important for comprehending how the governance of Bluesky handles could be affected in the future, and what the positive and negative aspects are. Domain name registries and registrars have had interesting, if scarce, governance stories. For example, NTIA and Neustar (the registry for .US) had a policy of using the Pacifica list of seven words to police domain name registrations, and they canceled someone's domain name that contained one of the seven words. Although they overturned their decision, it took some time to 1) find the policy under which they had canceled the domain and 2) argue against having a Pacifica-list policy at all. So, theoretically, if someone's handle contains a problematic word, the handle could be removed because of that specific registry's policies.

The contractual agreement between Namecheap and Bluesky

It is not clear what sort of contractual agreement exists between Namecheap and Bluesky, but registrars usually also have "resellers". The hierarchical structure thus looks like this: ICANN imposes certain contractual obligations on registries and registrars; registries can impose further obligations on registrars; and registrars can impose obligations of their own on their resellers. Registrars are also obliged to enforce ICANN policies on resellers. We do not know the exact nature of the contractual relationship between Namecheap and Bluesky, but it could be a reseller relationship.
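The hierarchy described above can be sketched as a simple cumulative chain: whatever is imposed at a higher tier also binds everyone below it. This is an illustrative toy model, not any real contract; the tier names and example policies are my own placeholders.

```python
# Toy model of the domain-name supply chain described above: ICANN binds
# registries and registrars, registries add rules for registrars, and
# registrars pass obligations on to resellers (possibly including Bluesky).
# Policy strings are illustrative placeholders, not real contract terms.

CHAIN = ["icann", "registry", "registrar", "reseller"]

POLICIES = {
    "icann": ["baseline registry/registrar agreements"],
    "registry": ["extra registry criteria (e.g. content rules for a TLD)"],
    "registrar": ["registrar acceptable-use policy"],
    "reseller": ["reseller terms of service"],
}


def obligations_at(level: str) -> list[str]:
    """Every policy imposed at or above `level` applies at that level."""
    idx = CHAIN.index(level)
    rules: list[str] = []
    for tier in CHAIN[: idx + 1]:
        rules.extend(POLICIES[tier])
    return rules
```

Run `obligations_at("reseller")` and you get all four tiers' rules at once, which is the practical point: a Bluesky user whose handle is a domain name sits at the bottom of this stack and is subject, directly or indirectly, to every layer above.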

Trust and safety at Bluesky and the new trust and safety wardens?

It seems Bluesky will have its own trust and safety and abuse policies, but as I explain below, the result might be a hybrid of registries' policies, registrars' policies, and Bluesky's own internal policies.

Namecheap will have a very pronounced role in governing the behavior of those Bluesky users who obtained their handle through it. We know that Bluesky domain name registrants have to follow the registration agreement (acceptable use). Namecheap transparently states what those processes are. However, some policies are not very detailed. For example, it does not say how it decides whether a certain handle is involved with terrorist activities or promoting them. It simply states: "Abuse Reports stand for any other inappropriate content, including but not limited to: identity theft, unauthorized redirect/frame/IP pointing, defamation, terrorism propaganda, HYIP, warez, etc." It has more elaborate policies on copyright infringement.

It is very possible that Namecheap will get involved with "trust and safety" and content issues on Bluesky, as well as intellectual property infringement disputes. Bluesky might have inadvertently outsourced part of its trust and safety function to Namecheap.

This might go beyond Namecheap and involve other registries and registrars. At the moment, Bluesky allows users to use their own domains as handles, and those domains do not have to be registered at Namecheap. They can be held at a totally different registrar, in a different jurisdiction, with a different set of rules. But since there is no contractual relationship between Bluesky and those other registrars, they may take little action.

What would be the role of registries? 

Registries sometimes have their own internal content governance policies, which they impose on the registrars that register domain names, or directly on the registered name holder. This is especially the case for operators of newer names (such as .LOL, as opposed to .COM), since they might have a direct relationship with the registrant. For example, UNIREGISTRY (section VI) requires .SEXY domain name holders not to permit content unsuitable for viewing by a minor to be viewable from the main top-level directory (the content immediately visible if a user navigates to the domain name). In another case, for domain names ending in .HIV that deny the existence of HIV, the registry reserves the right to delete the domain name. (To be clear, many other registries have similar policies; I am just trying to give examples that make the issue more tangible.)

And what’s the role of ICANN?

Hopefully, ICANN will have no role in this. We have worked long and hard to keep ICANN away from content regulation, and it should remain that way. We certainly do not want ICANN to moderate and govern online behavior. However, when it comes to domain name registrants' privacy, or purely technical Domain Name System abuse, ICANN will have some role to play, as it governs registries and registrars in those respects.

What does this mean for the governance of the handles and behavior of the users and users’ access? 

This might actually be a good deal for a decentralized Internet. It could improve authentication and interoperability, and bring the discussion of domain names back into public discourse. It could also decentralize trust and safety, as decision-making power on trust and safety and other issues would be redistributed. However, the registrars' involvement with "handle" governance on social media platforms could cause a lot of problems:

  • If Bluesky scales, some registrars are not ready to cope with the high volume of complaints that is common on social media platforms.
  • Deleting or disabling a domain name can be less proportionate than other moderation methods. For example, subdomains engaged in legal activities could be affected if the domain name is deleted for violating the respective policies.
  • Questions of access to domain names arise here as well. Due to economic sanctions, registrars sometimes confiscate or delete domain names, and some over-comply with sanctions and do not allow residents of a sanctioned country to register a domain name at all. So if using domains as social network handles takes off, sanctions could spread further into social network governance and affect users' access to their handles.

These are speculations, since Bluesky has not yet amassed a large network of users. However, it is worth monitoring the issue as it relates to trust and safety at the domain name level and the new roles and responsibilities it could create for registries and registrars in the digital trust and safety space.




Internet multistakeholder model: A trade association with multistakeholder theater

Did you know that we are swimming in Domain Name System abuse? As an Internet user, you probably were not aware. Apparently, doomsday is near and the Internet will explode in our faces if we do not do something about "domain name system abuse". This doomsday narrative has nearly jeopardized multistakeholder governance. However, it may also compel us to reconsider the multistakeholder model and its relevance to governing the Internet and its associated technologies.

The DNS! 

One of the most important properties of the Internet is the domain name system (DNS). You almost always interact with it when you go online. A very limited part of the DNS is governed by an organization called the Internet Corporation for Assigned Names and Numbers (ICANN). The organization runs on a multistakeholder governance structure in which the Governmental Advisory Committee and the non-commercial and commercial stakeholder groups get involved in setting policies for the allocation of domain names.

The DNS abuse

I won't tell you what DNS abuse is, because different policy positions result in different definitions. You can read my opinion here. However, domain name abuse should not be regulated by ICANN insofar as it relates to the regulation of content and services online. This is a very delicate distinction, and sticking to a technical definition of abuse is difficult, especially as certain stakeholders, such as intellectual property interests, almost always want to expand ICANN's mission and argue that intellectual property abuse is DNS abuse.

We don't want ICANN to be a content regulator because we don't want a centralized body acting as a global police force, taking down domain names and policing the registries and registrars (the operators of top-level domains and the companies that register domain names). Autonomy is good, and suspension of services on the Internet is bad for speech, creativity, and business, and for keeping the Internet global and open.

I am not denying that DNS abuse exists. We operate on a global Internet, and most of us want to keep it that way. When technical abuse happens, the actors involved are mostly global and the abuse occurs in a shared space, so collective action to clean house is important.

However, it is worth noting that DNS abuse did not increase even during and after COVID-19, when we saw unprecedented usage of the Internet. Despite this, intellectual property lawyers, commercial security companies, and the Public Safety Working Group of ICANN's Governmental Advisory Committee urged the registries and registrars to do something about DNS abuse. The registries and registrars held many meetings with the non-commercial and other stakeholders.

The non-commercial group maintained that, since abuse had not increased even during COVID-19, there was nothing to be done, and asked why it had to be done at ICANN anyway. But political will won out. Registries and registrars have started bilateral negotiations with ICANN (not through the multistakeholder model, though after getting a blessing from a multistakeholder council at ICANN) to amend their contracts with more provisions on DNS abuse.

The contractual amendment is now up for public comment. It 1) does not include content in the definition of DNS abuse (which would not have been valid anyway, because ICANN's bylaws do not allow it); 2) adds a limited definition of spam; and 3) changes the term "security threat" to "DNS abuse". The rationale for this change is: "This clarifies that registry operators must periodically conduct a technical analysis to assess whether domains in the top-level domain (TLD) are being used to perpetrate DNS Abuse and maintain statistical reports on identified DNS Abuse." It is not clear why the term needs to change in order to require registry operators to conduct technical analysis; they can be required to conduct the analysis without calling it "DNS abuse".

As the call for comments puts it: "The proposed amendments would enhance obligations by requiring registrars and registry operators to promptly take reasonable and appropriate action to stop or otherwise disrupt DNS Abuse." It then claims to rely on a definition from the Security and Stability Advisory Committee (SSAC, a closed group of security and stability professionals at ICANN) to add spam to the definition, but only insofar as spam is used to deliver DNS abuse. Looking at the paragraph cited in the SSAC document, SSAC actually took the definition from other initiatives. This matters because it suggests there was not much direct consultation with technical experts in producing the limited definition of spam. As the definition stands, it is very prone to over-reporting, since spam might not be used for DNS abuse right away but could later be used to deliver malware, phishing, and other technical abuse. It should be noted that this definition of spam is also used in other ICANN security programs.

My impression is that ICANN is increasingly turning into a trade association with multistakeholder theater to grant legitimacy to its decisions. We have clearly failed to solve "DNS abuse" through a multistakeholder approach, but through multistakeholder pressure we have made registries and registrars take action (unnecessary, but still). I believe we did not need to do anything at ICANN. There are arguments that the 2013 contractual clause was ambiguous about what registrars should do, which limited the ability of ICANN's compliance team to enforce the contract. But would it have been better to work on interpreting the compliance obligations and get collective action going (as we did in the case of Conficker in 2008), or to open up a can of worms that goes against the spirit of the multistakeholder model? The sooner we admit how the multistakeholder model of the Internet has evolved, the sooner we can decide whether it makes sense to have a trade association with multistakeholder theater that provides registries and registrars with advice they can take or leave.

XR: Innovate and Protect with Community Governance

The World Economic Forum published a report in 2019 that tackled the importance of building beneficial and reliable governance standards for the Fourth Industrial Revolution. Of course, obtaining proper governance within the complex environment of regulatory bodies, private companies, and the speed at which technology advances is difficult. Nevertheless, engaging in ongoing discussions regarding appropriate governance mechanisms for emerging technologies that are adaptable and protective of users remains crucial.

Community governance is a participatory approach to decision-making where a community of stakeholders collaboratively governs and manages a particular domain. This approach emphasizes a bottom-up, inclusive process that considers all stakeholders’ diverse perspectives. In emerging technologies such as extended reality (XR), community governance is particularly relevant as it allows for innovation while protecting communities from abuse and evolving private governance mechanisms. However, many current XR environments have terms and conditions that are not bottom-up or community-oriented, highlighting the importance of developing community governance mechanisms to ensure that these technologies are used to benefit everyone.

Implementing community governance in the early stages of XR development can set the tone for future innovations and help mitigate potential problems. Community governance allows for a more democratic and inclusive decision-making process where various perspectives and interests are considered. This can lead to more informed and equitable decision-making, setting a precedent for future regulations prioritizing the users’ interests. Additionally, it can help protect users and communities from potential abuses of the technology and help promote innovation and creativity, which can lead to new and innovative approaches to technology development and use.

The rapid fruition of Extended Reality (XR) ideas and products has proven to be much more than a gimmick for the average consumer, becoming lucrative and pervasive in government and private spaces. XR, an umbrella term for augmented reality, virtual reality, and mixed reality, has been around since the 1800s and appears in many everyday use cases, such as video games and Google Earth. Yet governments and private companies still find themselves at a crossroads when discussing further development, research, and governance within XR.

The current terms and conditions of many XR environments are developed and enforced by a centralized authority, such as the platform owner or developer. For example, the terms and conditions of Resolution Games, a Stockholm-based VR and AR studio, state that by "accessing the Game, you agree to abide by the Terms, and a legally binding agreement is created between you and Resolution Games." In many cases, XR companies include a sentence in their terms and conditions explicitly stating that the relationship is only between the user and the owning company. This approach can lead to a lack of transparency, accountability, and inclusivity in decision-making, as the perspectives and interests of users and other stakeholders may not be considered. This can result in issues such as content moderation policies that disproportionately impact certain groups, or the use of user data without consent.

Like the Internet or other emerging technologies, XR has various facets that contribute to a user’s experience, ultimately bringing into question individual rights and protections. For example, a clear definition of a “virtual crime” within XR has not been established. Even though it is more apparent in other technologies, like account hacking or ransomware, defining virtual crimes becomes murky when differentiating real-life experience and the type of reality that XR provides. Additionally, privacy and security concerns have emerged considering XR usage of a user’s data for its functionality. Bloomberg Law published an article discussing eye-tracking and its ability to collect information, such as mental and emotional state, age, gender, and sexual orientation.

According to the General Services Administration, no laws or policies directly govern XR usage. Governing XR technologies is a multifaceted challenge, with governments creating policies and regulations for aspects of XR but not for XR itself. For instance, the EU's General Data Protection Regulation (GDPR) defines items like personal data that are tied to XR. Similarly, the U.S. has created an "AI Bill of Rights" that touches on aspects of XR without directly addressing it.

Conversations around XR governance have been prevalent but neither collaborative nor community-based. Many existing proposals for XR governance have come only from private companies or international organizations. Meta, formerly known as Facebook, is dedicated to building the “Metaverse,” an immersive XR environment. In doing so, it has published its own expectations and regulatory practices on data and safety, essentially leaving oversight of these concerns to itself.

International organizations, on the other hand, have provided a much more in-depth analysis of the negative implications of XR and how to address them. The XR Safety Initiative (XRSI) published a paper addressing many issues within XR environments, such as digital divides and profiling, and offering regulatory suggestions ranging from hardware to language. XRSI introduces the idea of “Internet hindsight”: the lack of regulation and oversight of the Internet that led to misuse, abuse, and a breakdown of trust, and that, XRSI argues, is happening or could happen to XR systems. One example is the revelation in 2018 that Cambridge Analytica had misused Facebook users’ data during the 2016 U.S. elections.

In April 2022, the Bipartisan Policy Center published a report estimating substantial growth in the market for XR, not only in everyday uses such as video games but also in healthcare and engineering. This growth not only shows the popularity of XR across fields but also strengthens the case for creating laws and policies now. While support for XR is becoming more apparent, XR without proper governance can be detrimental to individuals.

The Bipartisan Policy Center, the XRSI, and other organizations continue to demonstrate that community governance is needed to ensure the protection of XR users. Community governance calls for implementing rules and regulations through the collective effort of various bodies, from governments to individuals. Digital environments like the Internet or XR are vast, and the task of governing them can be daunting given the varying jurisdictions, opinions, customs, and other factors involved.

Such community governance would involve an organized coalition of stakeholders, including governments, XR professionals, and international organizations. A community governance model can benefit current users, protect future users, and mitigate issues that have not yet materialized. Ways of thinking like “Internet hindsight” would assist here by supplying cases of harms that XR has not yet experienced but plausibly could. Furthermore, community governance would provide insight from various corners of the XR experience, as well as from important developments and innovations. This way, future developments can be prioritized correctly and handled ethically, reducing the biases that come with private companies.

Digital technologies will continue evolving as more users participate and creators develop new ideas. But that does not mean users should have to wait for a lawsuit to find redress for negative experiences, nor that private companies should take the lead in governing XR environments and products. XR is growing quickly and being adopted in many areas, but without community governance its users will remain unprotected.

Digital trade protectionism does not protect workers’ rights and could cost us the Internet

Cross-border data flows, and with them the global Internet, are being threatened by usual and unusual suspects. Some nation-states have been proposing digital sovereignty laws for some time. But now even governments that have been advocates for the global Internet are changing course. Think tanks and others who once favored a global, interoperable Internet have also turned against it. On top of all this, opponents of digital free trade have risen once again, with the same tired arguments: cross-border data flow is bad for workers’ and people’s rights. This blog is hopefully a first step in a conversation that reminds us of the myriad benefits of a global Internet that allows cross-border data flow. It also offers a few counterarguments to the digital trade agenda recently published by America’s Unions (AFL-CIO). Let us be clear: ordinary workers have faced trouble in recent decades. But the AFL-CIO proposals are detrimental to the global Internet and Internet connectivity. Worse, they are unlikely to protect the tech workers the AFL-CIO wants to protect.

Back in 2017, some civil society organizations, including some labor unions, claimed to speak on behalf of “the global civil society” and argued that a WTO prohibition on data localization as a trade barrier would prevent countries from adopting laws to protect the public interest, privacy, and other fundamental rights. These arguments resurfaced recently as the Biden administration announced a worker-centered trade policy.

The recent AFL-CIO workers’ agenda starts from the mistaken premise that digital trade policies prevent governments from regulating data or big tech companies, and that this leads to a lack of protection for workers. Digital trade agreements do not prohibit countries from regulating data or any digital services and products. Every year, myriad laws and regulations around the world increasingly regulate data and tech companies: consider Europe’s Digital Markets Act and Digital Services Act, or India’s IT Act. As UNCTAD reports, 71% of countries worldwide have adopted data protection and privacy legislation.

The workers’ agenda is filled with assertions that trade agreements do not consider workers’ and people’s fundamental rights, and it presents data localization and more regulation as the solution. One argument is that digital apps and social media platforms have eroded privacy. However, trade agreements, unlike human rights instruments, can be effective and binding. Civil society groups have even used trade agreements and regional economic cooperation groups, such as Asia-Pacific Economic Cooperation, to advance privacy in cross-border data flows.

Data localization measures tend to work for the benefit of existing, vested interests and against those who might try something new. It is not clear how such measures would make the Internet a healthier space.

This takes us to another argument in the statement, one that purports to care about workers’ creativity. To protect creative rights, it asks for more aggressive copyright protection and a narrower fair use doctrine, even though fair use supports the interests of creators. Fair use is the mechanism creators use to assert their rights in the face of large, well-resourced corporations. It is unclear how eroding fair use through digital trade agreements helps workers’ rights and creativity, especially as the statement itself acknowledges that, by advancing the intellectual property rights of big-tech corporations, the US government expanded their access to the global market at the expense of others.

There are many more arguments in the statement that support a protectionist digital trade agenda in the name of protecting workers’ and users’ rights. It even goes as far as endorsing contested positions: that online platforms should be held accountable for third-party user-generated content, and that governments need bulk access to source code and algorithms to address harmful practices and content.

Protecting workers and users on the Internet when dealing with digital products and services is a wonderful goal. However, it is unclear how the AFL-CIO’s solution can actually help reach it. Fortunately, there are both private and regulatory initiatives around the world that address the well-being of workers in tech companies. For example, the Digital Trust and Safety Partnership, an industry consortium, includes investing in the wellness and resilience of teams dealing with sensitive materials as part of its Best Practices Framework. Tech workers have also started creating unions inside tech companies. We need to think about supporting such alternatives instead of attacking the critical characteristics of the Internet. It is those characteristics that make meaningful connectivity possible.

Access to the Internet is not just access to another form of communication. It is access to essential services and a lifeline during a crisis, as the years of the global pandemic made quite clear. We need to stand up against whatever hampers our interconnectivity. Our solutions should not cost us the Internet.

Defeating Digital Perseus: 2022 Version

2022 was a tough year for the Internet. Digital Perseus came out in full force to fragment the Internet and to stop unfettered access, and sometimes even friends turned into Digital Perseuses. Yet despite all the trouble and barriers Digital Medusa faced, it was a productive year, filled with exciting projects.

Sanctions and the Internet

We have been dealing with sanctions and their effect on the Internet for years. Ordinary Internet users in sanctioned countries, sanctions regulators, and those who must comply have all been struggling. As Digital Medusa had already done some work on sanctions in Iran and Afghanistan, it made sense to make them an agenda item for 2022, as mentioned in last year’s blog. Little did we know that sanctions would become the talk of the town, with everyone wanting to get involved one way or another after Russia started its unfortunate, barbaric war in Ukraine. Then the Iranian uprisings happened, and more sanctions ensued. As Internet governance organizations and other service providers increasingly deal with sanctions imposed on many countries, RIPE NCC funded Digital Medusa to undertake preliminary research on the effect of sanctions on access to the Internet. Read more about the projects and progress here.

Christchurch Call and Global Internet Forum to Counter Terrorism

Digital Medusa was more active this year as a member of CCAN (a network that provides advice to governments about handling terrorist and violent extremist behavior), attending the multistakeholder leaders’ summit as well as writing a report about the human rights impact of crisis protocols during terrorist attacks with an online angle.

Human Rights Impact Assessment: DNS Over HTTPS (DoH)

DNS over HTTPS (DoH) is a protocol that brings privacy to Domain Name System queries. Taraaz and Digital Medusa worked on a project that assessed the human rights impact of a product that used DoH. Our partners plan to publish this report in the coming months.
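For readers unfamiliar with how DoH works under the hood, here is a minimal sketch of how a standard RFC 8484 GET request is formed: the ordinary DNS wire-format query is base64url-encoded and carried inside an HTTPS URL, so on-path observers see only encrypted HTTPS traffic. The code below uses only the Python standard library and does not send anything over the network; the Cloudflare resolver URL is just an illustrative public DoH endpoint, not the product we assessed.

```python
import base64
import struct

def build_dns_query(hostname: str) -> bytes:
    """Build a minimal wire-format DNS query for an A record."""
    # Header: ID=0 (RFC 8484 recommends 0 for HTTP cache friendliness),
    # flags=0x0100 (recursion desired), 1 question, 0 other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=1 (A record), QCLASS=1 (IN).
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

def doh_get_url(resolver: str, hostname: str) -> str:
    """Encode the query per RFC 8484: base64url, no padding, in ?dns=."""
    query = build_dns_query(hostname)
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver}?dns={encoded}"

url = doh_get_url("https://cloudflare-dns.com/dns-query", "example.com")
print(url)
```

Fetching that URL over HTTPS (with an `Accept: application/dns-message` header) would return the DNS answer inside the encrypted channel, which is precisely the privacy property a human rights impact assessment of a DoH product has to weigh against other considerations.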

United Nations and Internet Governance Syllabus

The Internet Governance Forum at the United Nations commissioned Digital Medusa to develop an Internet governance syllabus as a guide for Internet governance educators. The syllabus can be found here.

Digital Medusa is an organization

I promised Digital Medusa would not remain a one-woman show, and I more or less made it happen: GEORGIA EVANS finalized a report on how Canada upholds its Christchurch Call commitments; ZHENYE (RYAN) PAN helped map the actors in the sanctions and Internet space and attended the workshop on sanctions at the IGF; LAURA VUILLEQUEZ did preliminary work on a literature review of sanctions and the Internet and mapped the European trust and safety actors; ANGIE OREJUELA helped with many aspects of preparing and presenting the research updates on trust and safety and on the Internet and sanctions; and RITHIKA SHENOY works on the humanitarian aspect of access to the Internet and has co-authored several funding proposals with us. Working with the other Medusans was the best part of 2022. Their ideas, enthusiasm, and words of encouragement got us where we are.


In 2023, Digital Medusa will continue to protect the core values of our digital space: interconnectivity, interoperability, security, and the global and open nature of the Internet. We will do so by promoting decentralization of the Internet, increasing access to the global Internet, especially during crises, and contributing to governance mechanisms that support connectivity and trust and safety. To that end, we will vigorously work on and provide the following services in 2023:

  1. Outreach and engagement: Digital Medusa hopes to continue the Digital Trust and Safety Partnership’s outreach and engagement, and will also try to offer this work as a service.

  2. Research and impact assessment: we will provide various governance impact assessment analyses and conduct research on the Internet stack.

  3. Policy and advocacy: we will promote policies and work with vulnerable communities around the world that lack access to the Internet or are in crisis, such as in Afghanistan, helping them get connected and use the Internet to access essential services and education.

Dear Digital Perseuses

2022 was only the beginning for Digital Medusa. Despite your every effort to weaken the Internet and Internet governance organizations, the Internet is here to stay: “Don’t ever say it’s over if I am breathing”.

A multistakeholder summit? The case of the Christchurch Call

Too many summits are high-level, ineffective meetings filled with well-meaning but empty speeches. The multistakeholder Christchurch Call summit this year, however, differed greatly from the usual UN General Assembly meetings. New Zealand and France used their political capital to bring together representatives of different stakeholders to discuss countering terrorism online. To the participants’ surprise, most of the interventions were conversational, and civil society was included on an equal footing.

This blog includes Digital Medusa’s opinions about the summit and the Christchurch Call. 


In 2019, New Zealand and France convened the Christchurch Call to Action after the terrorist attack at mosques in Christchurch that killed 51 Muslims and injured 50 more. The horrendous attack was live-streamed on Facebook and other tech corporations’ platforms.

When the Call was launched in 2019, civil society around the world criticized its own lack of presence and complained of being treated as an afterthought. Over the past three years, however, we have witnessed a slow but sustainable change and a move towards a true multistakeholder forum: one that can go beyond mere lip service, that is genuinely multistakeholder, and that takes part in the Christchurch Call commitments to preserve a secure, open, interoperable Internet while fighting terrorist behavior online.

The governments of New Zealand and France convened the Christchurch Call Advisory Network (CCAN), which comprises civil society organizations, including impacted communities, digital rights organizations, and the technical community, and which aims to defend a global, interoperable Internet. While the role of civil society in other similar forums is often contested and unclear, CCAN has made real progress towards meaningful participation. It is important now that this progress continue beyond attending a high-level meeting with the leaders of the world.

Crisis Response Management and Internet Infrastructure

During the summit, we discussed crisis response management and protocols. Various countries have made a lot of progress towards a voluntary but cohesive protocol that can adapt to the global nature of the Internet. However, we increasingly see calls for content moderation at the Internet architecture level (domain names, hosting providers, cloud services, etc.). Proportionate moderation of content at the infrastructure level might not be possible in all cases. Especially during a crisis, we have to be extremely careful with the techniques we use to filter and block access to domains and websites, as it might not be possible to do so proportionately, and such techniques might hamper access to other services online. We also need to evaluate the human rights impact of each measure at each stage of crisis response. A global, interoperable, and interconnected Internet needs a holistic approach to safety: one that does not focus exclusively on blocking, take-downs, and after-the-incident responses, but that offers a systematic way of tackling the issues raised by horrific content online.

Researchers’ Access to Data

Perhaps surprisingly, Digital Medusa deviates from various researchers and civil society organizations in their calls for researchers’ access to data. While the Digital Services Act will facilitate such access, I do not believe we have the governance structures in place to validate research, nor the privacy-enhancing structures to prevent abuse of personal and private data. New Zealand, France, and a few others announced an initiative that could address this issue while also facilitating researchers’ access to data. Its effectiveness remains to be seen, especially as it tries to fix the problem primarily through technology.

Internet of the Future

It is natural to think that, if bad content online is the source of danger, then all that is needed is to remove that content through moderation. But content removal does not on its own bring safety to the Internet. For our future efforts, we need holistic approaches, and we need to work with impacted communities and operators on the Internet. Content removal and take-downs can have a major impact on the well-being of individuals: careless removal can cut off access to a host of essential online services, and it can hamper uprisings and information sharing during a crisis. I hope that content moderation will become only one tool (and not even the most important one), and that we come up with more innovative ways to govern our conduct on the Internet.


The Domain Name Multistakeholder Theatre

In the early stages of the Internet, domain names were the point of entry for most people’s online presence. As a result, the allocation of domain names mattered for the Internet. The general public, using the Internet for personal growth, development, and citizen journalism, cared about their domain names. Small businesses cared about their domain names too, if the amusing case of is any indication.

1998 saw the creation of a new body that, at a high level, would govern the overall allocation of domain names: the Internet Corporation for Assigned Names and Numbers, or ICANN. For the reasons outlined above, ICANN mattered a lot to the Internet, and its policies affected a large number of people online.

That has changed. The Internet has developed such that, while the Domain Name System is ever more important to technical operation, the role of ICANN (while still critical) has been diminished. Most of the time, people’s point of entry to the Internet does not expose them to domain names anymore.* 

Despite this change, some at ICANN still believe that ICANN is in charge of the security and stability of the Internet as a whole, and that much is at stake there. They also believe that the multistakeholder model ICANN runs can only operate through bloated layers of process and bureaucracy. This was evident at the recent ICANN meeting in The Hague last week.

For any issue ICANN has to deal with, it devises an elaborate process involving a wide variety of stakeholders (though it is dubious what stakes some of them have) that takes months to operationalize. This was obvious from the re-opening of the issue of Closed Generics, a term for an obscure operational wrinkle in which the allocation of generic names such as .books to corporations like Amazon is disputed. When the policy development group did not reach a resolution on Closed Generics, the Board, instead of making a decision, sent the issue back to the community to decide. The community, predictably, came up with an elaborate process involving a facilitator, a set of representatives, and so on.

The problem with a multistakeholder theater is that it leads ICANN away from its important but narrowly limited mission. Regulators around the world want to see that things are happening. If ICANN does not carry out the narrow, limited mission it has, the regulators will regulate. So while it is very gratifying to be transcribed, to hold high-level panels with distinguished stakeholders, and to discuss and reopen issues many times over, we might soon have an irrelevant multistakeholder body.


*This might be anecdotal, but a cursory look at the number of domain name registrations (from the Verisign report) is indicative of the change: “New .com and .net domain name registrations totaled 10.6 million at the end of the fourth quarter of 2021, compared to 10.5 million domain name registrations at the end of the fourth quarter of 2020.” Compare this to Facebook, which gains 500,000 new users every day. Another reason might be that, while apps and other Internet services use the Domain Name System extensively, the general Internet user doesn’t use it directly.

How to multistakeholder wash Internet disconnection: On the multistakeholder Internet governance sanction regime

Demilitarization of the Internet is a goal we should all aspire to. This can be done in various ways, such as effective nongovernmental attribution of cyberattacks, emphasizing the importance of bringing in stakeholders other than governments and the military as well as self-governance.

Recently, some have used the awful war Russia started against Ukraine to come up with a statement proposing how to impose sanctions that would demilitarize the Internet and overcome propaganda. I call that statement multistakeholder washing of Internet disconnection. Multistakeholder washing is the process of taking a process controlled by an elite group and dressing it up in the clothes of multiple stakeholders. The statement about sanctions is a good case study:

1.  Stages of multistakeholder washing 

First step: In order to multistakeholder wash an idea, a limited set of stakeholders— sometimes excluding affected communities, and frequently including people with a lot of resources and power— get together and come up with their own solution. We can call this group the Wise Ones. They do this initial step behind closed doors, to get the statement out; otherwise it will get too noisy and involve too many people. 

Second step: the Wise Ones publish the statement and use their connections to promote the idea (e.g. ensuring that media outlets have early access). The Wise Ones also tell everyone that they are open to feedback, while shutting down opposing views and at the same time starting to operationalize their idea. These approaches are not unknown to those who practice multistakeholderism. Including only “insiders” in the preliminary stages of a statement is a tactic that serves to control the process as much as possible.

Some consortiums or other unilateral processes start first as single stakeholder initiatives and later on try to adopt a multistakeholder approach. That is not the approach used here. In this model, the Wise Ones claim they are multistakeholder already. 

Third step: Save the whole world (which most of the time actually means the West). 

2. Who are the stakeholders?

Part of the legitimacy of the Wise Ones depends on a pose of neutrality and inclusion. Usually, the Wise Ones’ solutions are proposed for highly contentious issues on which there is a lot of disagreement. So the Wise Ones often claim broad legitimacy from unnamed supporters who, unfortunately, cannot name themselves publicly. We can call this step “inclusion of the unnamed”.

In the current example, for instance, we don’t know who the stakeholders are, other than the ones who signed the statement. Those on the statement are mostly Western, mostly male, and have mostly never lived in sanctioned countries or operated networks there.

We can see inclusion of the unnamed in action in a claim that one of the leaders of this statement, Bill Woodcock, made on LinkedIn:

“Ten days and 87 authors, from every part of the Internet governance community… This is how we do multistakeholderism, and ensure that the Internet is not used as a tool of war or oppression.”

Only 36 people have signed this open letter; we have no way of knowing who the other 51 are. Perhaps they do bring the perspective of people who have lived in sanctioned countries and dealt with the results, but the list of those who did sign does not inspire much confidence. Mr Woodcock should have clarified which parts of the Internet community they actually managed to convince to agree with this initiative. And we shouldn’t have to wonder why this document came together so quickly.

3. That unprecedented challenge we knew about for so many years 

A third element of multistakeholder washing is the assertion that the issue being confronted is entirely new and therefore requires the heroic intervention of the Wise Ones. In the present case, the document makes such an assertion from the very beginning: “The invasion of Ukraine poses a new challenge for multistakeholder Internet infrastructure governance.”

The invasion of Ukraine does not pose a new challenge for multistakeholder Internet infrastructure governance. It is a challenge that those behind the statement have chosen to pay attention to, and to act on collectively, only now. Many people have raised the challenges the Internet faces during conflicts and wars; Afghanistan and Syria are only two conflicts that raised very similar ones.

Let’s reframe that sentence as what it really is: the invasion of Ukraine reinforced this challenge, which finally triggered the West to pay attention to it in a collective manner. It is good to have people paying attention to these issues, but only if we actually recognize that, like the Internet, this challenge has a global dimension and involves more communities than Western-based entities.

4. Adopting tired, old approaches that have been tried, tested, and failed

A peculiar element of multistakeholder washing is that it frequently presents, as new and revolutionary, solutions or approaches that have previously been tried and found wanting. This may be because the Wise Ones group excludes too many participants who would have been able to point out the similarity to previous approaches to a problem.

In this case, for instance, the technocrats who make up the Wise Ones claim,

“The effectiveness of sanctions should be evaluated relative to predefined goals. Ineffective sanctions waste effort and willpower and convey neither unity nor conviction.” and “Sanctions should be focused and precise. They should minimize the chance of unintended consequences or collateral damage. Disproportionate or over-broad sanctions risk fundamentally alienating populations.”

Sanctions have to be effective and precise; this is not a unique and ingenious principle. Governmental sanctions were never adopted without predefined goals (US sanctions, for instance, had human rights goals in mind). Nor were they meant to be ineffective, hence the fines; and there were attempts at precision too, hence the lists. But those lists have historically hurt the vulnerable communities oppressed by dictatorships more than the dictators themselves, as those who work on the issue have documented through years of monitoring. Because businesses want to do business and don’t want to get fined, they act cautiously wherever sanctions might affect them. This leads to over-compliance, which on the Internet means the disconnection of whole communities of people. Even though the US Treasury emphasized at every step that ordinary people should not be affected by sanctions, and even though the Specially Designated Nationals (SDN) list was meant to keep sanctions proportionate and limit collateral damage, businesses (tech and non-tech) simply stopped doing anything with residents of sanctioned countries. Internet companies sometimes even refuse to provide their products to businesses that are not resident in sanctioned countries but that provide services to them. Read more about that here. None of this is a new development, and if there is something truly new in this sanctions proposal, it is pretty hard to see what.

5. Only military and propaganda agencies and their information infrastructure are potential targets of sanctions

Another odd element of multistakeholder washing is that the proposals usually make exaggerated promises of effectiveness. Part of the reason that multistakeholder processes can be frustrating is because they include so many participants, which can slow progress. But that wide inclusion tends to make for an effective system because, as the open source software advocates like to say, with enough eyeballs all bugs are shallow. When an exclusive group pretends to be multistakeholder, the advantage of different perspectives is lost.

In the sanctions proposal under discussion, part of the supposed virtue is the narrow target. But blocking “propaganda agencies” will not lead to demilitarization of the Internet; it will lead to its politicization. For some, the Voice of America is a propaganda agency; for others, other countries’ outlets are. What are the parameters for deciding what counts as a propaganda agency? Who decides, and how?

The claim that the sanctions will only target certain entities and networks is also naive; one does not even need experience to see this. Militaries in dictatorships, and especially in sanctioned countries, will use civilian networks (by force if necessary). They own many channels of communication and sometimes place their representatives inside those networks. Just today, the London Internet Exchange announced that it had to comply with sanctions and suspended the membership of two Russian AS numbers belonging to telecommunication agencies in Russia. This might be because the owners of those ASes were on the legal sanctions list, but the disconnection will potentially hamper many more people.

This is why the “list-based approach” has never worked for sanctions. Since the powerful in sanctioned countries can navigate around the list, they will not be affected. The sanctions can’t catch the powerful, but they do catch the “small flies” who lack the resources of oligarchs or the military.

6. The multistakeholder community is here to save the day

Part of the reason multistakeholder washing is attractive is that the idea of a multistakeholder process conveys a certain kind of legitimacy. In the worst examples, that legitimacy is held up against governments, asking them not to impose sanctions! This issue shows up prominently in this principle: “It is inappropriate and counterproductive for governments to attempt to compel Internet governance mechanisms to impose sanctions outside of the community’s multistakeholder decision-making process.”

Governments impose sanctions as a means of implementing their foreign policies, so sanctions are inherently government action. An optional, non-governmental refusal to interact with someone is not a sanctions regime; it is a consumer boycott (in this case, a military consumer boycott). That would be an interesting regime, and if that is what this group means, they should actually say so.

But learning from governments’ experience with sanctions is crucial. Governments have been imposing sanctions on various countries and groups in ways that have hampered ordinary people’s access to services on the Internet and to Internet infrastructure. You can’t stop them with a principle saying they shouldn’t impose sanctions. And if you provide governments with a list, they will add it to their sanctions lists and fine every network that communicates with the sanctioned networks. That is how sanctions work.

Networks themselves have already been complying with sanctions or enabling customers to comply with them. Networks on the Internet have to follow the laws of the countries they are based in. In fact, Content Delivery Networks and others already allow businesses not to serve certain regions or countries, and they don’t consult any imaginary multistakeholder community, because they have to follow the law. The US regularly confiscates domain names that were owned by or related to some military force and used in disinformation campaigns (see one example). What is this multistakeholder community going to do when the US does something like that again, using this new multistakeholder-approved list? Is that the outcome this group wants? 

The Wise Ones also recommend due process and consensus to come up with the magic list. Due process is usually provided after the fact. This must mean that they will have a list of organizations, IP addresses, and domain names, and if those affected complain, there will be a process to unblock them. That is good, but it is yet another tried-and-tested method that is neither efficient nor fair. (You see many “due process” arguments in content take-down debates that completely ignore the deprivation of people’s access to crucial services.) What is not well thought out here is how wrongful disconnection will be prevented, and what the remedies are. These are the fundamental questions that the proposal assures us will be solved by consensus among the multistakeholder community. But waving these problems away as a simple matter of consensus is wishful thinking. The entire problem of sanctions is a political one: who is to be sanctioned, by whose authority, and with what effect? In answer to that problem, the Wise Ones offer “due process and consensus.” In other words, on the basic central issue, this proposal makes no proposal at all. 

How to move ahead?

Multistakeholder washing creates the illusion of a multistakeholder process when the process is actually exclusionary. It is probably not surprising that it would be used to build a recommendation for a sanctions regime, for sanctions regimes are inherently exclusionary. They treat nation states as the unit of analysis, and if you have decided to sanction a ruling party in a country, you are naturally not going to include it in the discussions. That is fine, but then your process is not multistakeholder; pick another name for it. You can call it the Networks We Don’t Like!

Many of us agree that we need to stop the militarization of the Internet and attempt to demilitarize it. But can we do that with a “sanctions regime” and a “list-based” approach that can be abused and lead to the disconnection of ordinary people from the Internet? The evidence so far suggests not, which would have been evident to the people who proposed this sanctions model had they actually engaged the wide range of stakeholders necessary to meet the principles the authors laid out. Businesses, network operators, and others are free to take private action, talk to the networks they like, and boycott the networks they don’t. But perhaps it is better to acknowledge that this is not a multistakeholder process and that it will not be possible to uphold the principles laid out in the document, i.e. people’s access to the Internet will be hampered. 



About The Author

Farzaneh Badii

Digital Medusa is a boutique advisory providing digital governance research and advocacy services. It is the brainchild of Farzaneh Badi[e]i. Digital Medusa’s mission is to provide objective and alternative digital governance narratives.