
Contributing to Open Source Digital Governance 

Open source digital governance is the talk of the town these days. The Internet community has been focusing on sharing best practices and solutions to governance problems openly. Practitioners and scholars have advocated for open source tools in trust and safety. Some tech companies have used open source toolkits and domain name abuse initiatives to address governance and compliance issues in the domain name space. Others have adopted open source governance, risk, and compliance software. Another kind of "open source" initiative is Tech Against Terrorism. That initiative is issue-specific (it works only on terrorist content) and helps companies by sharing information and knowledge. In a similar vein, the Prosocial Network Design initiative rates and reviews prosocial interventions and their effectiveness in encouraging healthy behavior online and meaningful human connections. There are also some general open source initiatives, such as OpenSanctions, that tech companies can use to comply with sanctions and provide their services globally.

These are important initiatives. However, open source digital governance is currently fragmented and missing key services for increasing trust and safety. It also does not address governance holistically: we need approaches that fix one part of the system without harming another.

What is open source digital governance? 

Open source initiatives provide governance solutions openly and transparently, usually licensed for public use free of charge. They go beyond open source tools and recommendations to provide actual processes and policies, ranging from human rights impact assessments to compliance systems to governance and privacy impact assessments. As well as reducing the cost of governance for Internet platforms and Internet infrastructure providers, open source mechanisms can be more transparent than their commercial counterparts and can evolve over time with their community of users. The designers of open source services understand the importance of a global and interconnected Internet, and open source services can constantly refine their digital governance methods in a community-oriented way.

Where do we need open source digital governance? 

We need holistic digital governance that tech companies and technology providers throughout the Internet stack can use for general governance purposes as well as for specific issues. Here are some examples of open source governance solutions for trust and safety, sanctions compliance, and human rights impact assessment.


  • Trust and Safety

Platforms that are large enough to meet the user-number thresholds of the Digital Services Act (a European Union law) have to comply with many of its provisions. However, trust and safety practices are not just for bigger platforms. To keep operating, smaller platforms also need certain governance structures in place. There is a myriad of commercial digital trust and safety providers and third-party vendors, but few open source compliance services that could guide companies that cannot afford them. Open source compliance mechanisms can help bring trust and safety to digital services and products. There can also be specific open source digital audit processes and risk assessments for the requirements that certain regulations impose.

  • Sanctions and connectivity 

Many Internet service providers (ISPs) and online service providers have to comply with economic sanctions laws and regulations. Smaller players, and companies with risk-averse lawyers, might either decide not to provide their services to sanctioned countries or hire third-party compliance vendors. Third-party sanctions compliance vendors can be expensive, their processes can be opaque, and they might be risk-averse and lack a sound understanding of how access to the Internet can amount to access to essential services. Open source compliance can help solve these issues and allow companies to provide services to sanctioned countries while remaining compliant with economic sanctions.

  • Human rights impact assessment

Human rights impact assessment (HRIA) processes measure and analyze the impact of digital products on human rights. They draw especially upon international human rights principles but also use social science research methods. Human rights experts and consultants usually undertake the HRIA, and socially minded and well-resourced platforms can afford one. HRIA principles and processes are known to experts and mentioned in their reports; however, they are not easy for non-experts to use and replicate. Human rights impact assessment is a very important process, especially because it helps evolve the policies and processes of tech companies so that they do not repeat past mistakes.

Small companies and companies without the budget for a human rights impact assessment could use open source HRIA tools to measure the impact of their digital products on human rights. Open source HRIA can also help standardize HRIA processes and methods, and prompt review of the methods themselves. Communities and vulnerable groups can use open source HRIA to measure how certain digital products and services affect human rights from their perspective. This can help us understand how different rights are impacted in different contexts and by different communities.

What is next? Evolving digital governance processes

We should contribute to and build open source digital governance processes. Many initiatives already contribute: the Integrity Institute, the Trust and Safety Professional Association, and many civil society organizations provide best practices, recommendations, and toolkits for the governance of digital products. We should map these processes, analyze the gaps, and ask what other open source toolkits might help us provide Internet and digital trust and safety to everyone. Open source digital governance processes can help map these toolkits, provide concrete and holistic governance models, and, through human rights impact assessment, contribute to the evolution and reform of our governance mechanisms. In the next blog, we will explain the importance of open source human rights impact assessment processes.


Critical Trust & Safety practices for Tech platforms

From a quick Google search, one can uncover hundreds of best practices in the field of trust and safety that help keep users safe on the Internet while browsing, dating, or purchasing goods. Since it can be economically and technically challenging for tech companies (regardless of their size) to adopt every trust and safety practice in existence, we need to identify the most critical practices that tech companies and our social systems cannot do without. We at Digital Medusa have taken a first step towards this goal with evidence-based research that identifies five of the most critical trust and safety practices that companies can integrate into their products and features to minimize risk and maintain the trust and safety of their digital products. We explain our method in more detail below; like every research methodology, it has its limitations. The two indicators we chose for this preliminary research to prioritize the practices were regulatory frameworks and public perception. In future research we need to understand which other indicators could be relevant to achieve a more accurate prioritization method.


Our baseline was the Digital Trust and Safety Partnership's (DTSP) Safe Framework, which outlines 35 trust and safety best practices. The framework is structured around five commitments: Product Development, Governance, Enforcement, Improvement, and Transparency, with practices listed in no particular order.

To determine which practices should be implemented first, we developed two criteria. Our aim was to rank them based on the cost of non-compliance, measured by the metric of ‘severity.’ The two criteria we used were:

  1. Regulatory risks: We analyzed 15 global regulations governing the trust and safety space.
  2. Public perception: We analyzed how civil society organizations and the public at large prioritize the practices, by measuring the number of civil society organizations working on a particular best practice, along with the number of lawsuits arising from non-adherence to specific practices.

For each best practice and commitment, we assigned a severity score ranging from 0 to 5 based on the above criteria. A higher severity score indicates that integrating the practice is more important, as non-compliance might result in significant costs for companies and a loss of legitimacy among the public at large and civil society organizations.
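The scoring and ranking described above can be sketched in a few lines of code. This is only an illustration: the practice names and scores below are hypothetical, and how the two criteria combine into a single cumulative score (here, a simple average) is an assumption, not the study's published formula.

```python
# Illustrative sketch of the severity-scoring method described above.
# Practice names and scores are hypothetical, not the study's actual data.

practices = {
    # practice: (regulatory_risk, public_perception), each scored 0-5
    "Transparency Reports": (4.0, 2.5),
    "User Controls": (4.5, 2.0),
    "Complaint Intake": (3.5, 1.5),
    "Abuse Detection": (3.0, 4.5),
}

def cumulative_severity(scores: tuple) -> float:
    """Combine the two criteria into one 0-5 severity score (assumed: average)."""
    regulatory, perception = scores
    return (regulatory + perception) / 2

# Rank from "most severe" (red) down to "good to have" (green).
ranked = sorted(practices, key=lambda p: cumulative_severity(practices[p]),
                reverse=True)

# Practices at or above the 3.5 threshold are flagged as most critical.
critical = [p for p in ranked if cumulative_severity(practices[p]) >= 3.5]
```

With these toy numbers, only "Abuse Detection" (average 3.75) clears the 3.5 threshold; swapping in real per-regulation and per-lawsuit counts would just change the inputs, not the ranking logic.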

After conducting an in-depth analysis, we ranked the top trust and safety practices for each commitment. Non-compliance with these practices could lead to penalties such as service suspension in affected jurisdictions or even divestiture of the business, and the public also perceives them as important. The following practices received a cumulative severity score of 3.5 and above across all commitments:

We were also able to rank all DTSP practices from "most severe" to "good to have," with red text indicating the most severe and green indicating good to have:

Insights and Lessons

Our research revealed an interesting divergence between the practices deemed important by regulators and those shaping public opinion. This finding offers valuable insights for both companies and policymakers worldwide. Certain practices, such as User Control, Transparency Reports, Research and Academic Support, and Complaint Intakes, attract more regulatory interest than public attention.

Trust and safety practices are vital for tech companies and tech organizations. By understanding which practices to prioritize, companies can mitigate risks, ensure compliance, self-govern better, and respond to internal and external emerging safety issues. As the regulatory landscape and public opinion continue to evolve, it is crucial to keep this research up to date. We also hope to expand the research to overcome its current shortcomings and integrate other crucial indicators, such as technical feasibility and human rights considerations.

Note: Digital Medusa undertakes outreach and engagement for DTSP, but this particular research project is an independent study by Shubhi Mathur and does not represent the views of DTSP or its members.

Bringing Accountability and Transparency to Under-Scrutinized Digital Platforms

Editor’s Note: This blog was published by the National Democratic Institute.

The policies and products of major tech platforms such as Facebook, Instagram, Twitter, and YouTube receive a significant amount of attention and engagement from researchers, journalists, and civil society organizations alike. The National Democratic Institute (NDI) has previously engaged these platforms to recommend interventions for ending online violence against women in politics and advocate for robust data access for non-academic researchers, among other topics. However, there are other digital platforms—including those that might be smaller in scale, are commonly used in only a few countries or by specific communities, or are relatively new to the market—that are also important to political processes around the world.

NDI is exploring how lessons learned from engagement with the aforementioned “legacy” platforms can inform recommendations to help other platforms ensure their policies and products make a positive impact for democracy. As the larger, better-resourced platforms walk back their commitments to protecting users from disinformation and online harassment, advocacy to encourage “alternative” or “emerging” platforms to uphold (or even factor into their design) the Democratic Principles for the Information Space is more important than ever.

NDI recently organized a roundtable discussion with civil society representatives and researchers to gather feedback about the risks and opportunities these platforms present in diverse contexts, including during pivotal democratic moments such as elections.


During the discussion, participants generally agreed there is significant value in dedicating time and resources toward researching and engaging with under-scrutinized platforms. However, the group grappled with which platforms to prioritize and how to develop terminology for talking about these platforms that is inclusive but not overly broad. NDI distributed a survey prior to the roundtable that asked respondents about their use of a range of platforms: audio-based apps such as ClubHouse and Discord, apps with primary user bases in one country or region such as Line and KakaoTalk, recently developed apps such as BeReal and Lemon8, encrypted messaging apps like Telegram, and widely popular but relatively new (compared to legacy platforms) apps like TikTok. There is significant diversity among these platforms in terms of their user base, longevity, and primary functions that make assessing them as a whole all the more challenging.

The terms “alternative” and “emerging” were considered as potential classifiers, but not all of these platforms are “emerging” in the sense that they are new to the market or even rising in popularity, and a platform that is “alternative” in one context may be mainstream in another. The majority of these apps are social media or communication platforms, but participants also considered how other digital products like cloud services could be used in contexts where access to these platforms is restricted. Though no consensus was reached on the scope of platforms under consideration or the best terminology to use, it was evident throughout the discussion that any recommendations attempting to target a variety of platforms should be appropriately nuanced to facilitate adoption across a range of contexts.


One characteristic that unifies these platforms is their relative inexperience in building up systems and policies compared to the legacy platforms, and a lack of diverse regional expertise (though the regional expertise of legacy platforms arguably leaves much to be desired). Channeling engagement through coalitions may be a useful strategy, as these platforms’ capacity to engage with civil society organizations and researchers around the world may be limited. Established trust and safety associations, such as the Digital Trust & Safety Partnership, the Trust & Safety Professional Association, and the Integrity Institute, offer different models for information sharing and collective action. Some coalitions may facilitate direct participation from platforms themselves, though the willingness of platforms to voluntarily commit to engagement may vary depending on the platforms’ resources and the political context in the country where the platform is based. Connections between a platform and government authorities may also shape how the platform approaches engagement with civil society and researchers on topics like content moderation, data privacy, and election integrity policies.

Different modalities of engagement will likely be required depending on a platform’s user base (whether national, regional, or global), whether a platform’s moderation teams are open to having discussions about identified threats, and the existing rules a platform has in place. A decision tree may be a useful tool to help civil society organizations determine which method of engagement is most effective and which recommendations to prioritize in advocacy to a given platform.
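The decision-tree idea above can be sketched as a small function over the three factors the text names (user base, moderation-team openness, existing rules). The branching order and the engagement modalities returned here are hypothetical illustrations, not NDI's tool.

```python
# Hypothetical sketch of a decision tree for choosing an engagement modality.
# The factors come from the text; the branch order and modality names are
# illustrative assumptions only.

def engagement_modality(user_base: str, moderation_open: bool,
                        has_rules: bool) -> str:
    """Suggest an engagement approach for a platform.

    user_base: "national", "regional", or "global"
    moderation_open: whether moderation teams will discuss identified threats
    has_rules: whether the platform already has rules in place
    """
    if not moderation_open:
        # No direct channel: work through coalitions or external pressure.
        return "coalition advocacy"
    if not has_rules:
        # Open to dialogue but no policies yet: recommend baseline policies.
        return "policy co-design"
    if user_base == "global":
        return "direct engagement with trust and safety teams"
    # National or regional platforms: route through local civil society.
    return "locally led engagement"
```

For instance, a regional platform with open moderation teams and existing rules would map to locally led engagement, while a closed-off platform of any size would be approached through coalitions.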


In addition to direct engagement with platforms, the roundtable participants also considered other mechanisms to incentivize platforms to incorporate recommendations from civil society into their policies and products. For example, pressuring investors to comply with international human rights standards could be an effective strategy for incentivizing smaller platforms funded by Silicon Valley venture capital groups. App stores and payment processors could also be a potential tool for incentivizing platforms to take certain actions, but there is a risk of app stores arbitrarily blocking apps (including in compliance with government requests) without transparency around the decision. Litigation against platforms is becoming increasingly common, but may be abused in illiberal contexts to entrench state power by imposing restrictions on free expression.

Platforms are not immune to misuse and abuse just because they have a smaller user base or have not received as much attention from the international research community. After Meta's Oversight Board recommended the Cambodian Prime Minister's Facebook account be suspended for posting a video threatening his political opponents with violence, the Prime Minister announced he would be leaving the platform, instead relying on Telegram to share his message (the Prime Minister also has a TikTok account). Tech companies of all shapes and sizes need to be prepared to mitigate the risk of bad actors and harmful content migrating to their platforms. NDI will leverage insights from this roundtable discussion, one-on-one conversations with relevant stakeholders, and desk research as it continues to refine its approach to these important questions.


As there was a lot of interest in sharing knowledge and expertise about under-scrutinized platforms, Digital Medusa has convened an informal mailing list to share knowledge among civil society members and explore research ideas. If you are interested, please join us: 

Domain name registries and registrars: the new digital trust and safety wardens in Bluesky

Just recently, Bluesky, the decentralized social network running on an open protocol called the AT Protocol, announced that, to support its business financially, it will directly sell domain names as handles for its users. The sales will be processed through an Internet Corporation for Assigned Names and Numbers (ICANN) accredited registrar called Namecheap. Currently, handles on social media platforms are internal handles, not independent domain names. As I will explain, using domain names as handles on social media networks might change the digital trust and safety landscape.

ICANN, Registries and Registrars

ICANN is the organization that globally coordinates policies related to the allocation of top-level domains such as .LOL and .TEAM, under which names like Alice.LOL and Farzaneh.TEAM are registered. It sets some high-level policies regarding domain name allocation and, through contractual agreements with registries (the operators of .LOL) and registrars (the companies that sell registrations like EXAMPLE.LOL), governs these entities globally. Registry operators can also assert some control over registrars and impose extra governance criteria on them.

Understanding this detail is important for comprehending how the governance of Bluesky handles could be affected in the future, and what the positive and negative aspects are. Domain name registries and registrars have had interesting governance stories, albeit scarce ones. For example, NTIA and Neustar (the registry for .US) had a policy of using the Pacifica list of seven words to police domain name registrations, and they canceled someone's domain name that contained one of the seven words. Although they overturned their decision, it took some time to 1) identify the policy on which the cancellation was based and 2) argue against having a Pacifica-list policy. Theoretically, then, if someone has a problematic word in their handle, the handle could be removed because of a specific registry's policies.

The contractual agreement between Namecheap and Bluesky

It is not clear what sort of contractual agreement exists between Namecheap and Bluesky, but registrars usually also have "resellers." The hierarchical structure thus looks like this: ICANN imposes certain contractual obligations on registries and registrars; registries can impose further contractual obligations on registrars; and registrars can impose yet more contractual obligations on their resellers. Registrars are also obliged to enforce ICANN policies on resellers. We do not know the exact nature of the contractual relationship between Namecheap and Bluesky, but it could be a reseller relationship.
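The chain of obligations described above can be pictured as a simple layered structure, where each layer inherits everything above it and may add its own terms. The policy labels below are illustrative placeholders, and the Bluesky-as-reseller layer is the article's hypothesis, not a confirmed arrangement.

```python
# Sketch of the contractual delegation chain described above. Each actor
# passes obligations downward and may add its own. Policy labels are
# illustrative; the reseller layer reflects the article's hypothesis only.

chain = [
    ("ICANN",     ["baseline obligations on registries and registrars"]),
    ("Registry",  ["extra registry policies (e.g. content restrictions)"]),
    ("Registrar", ["registrar acceptable-use policies"]),
    ("Reseller",  ["reseller terms (e.g. a Bluesky-Namecheap arrangement)"]),
]

def obligations_at(level: str) -> list:
    """Accumulate every policy imposed at or above a given level."""
    acc = []
    for name, policies in chain:
        acc.extend(policies)
        if name == level:
            return acc
    raise ValueError(f"unknown level: {level}")
```

The point of the sketch is that a registrant at the bottom of the chain (a Bluesky handle owner, in this scenario) is bound by every layer at once, which is why registry and registrar policies matter for handle governance.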

Trust and safety at Bluesky and the new trust and safety wardens?

It seems that Bluesky will have its own trust and safety and abuse policies, but as I explain below, the result might be a hybrid of registries' policies, registrars' policies, and Bluesky's own internal policies.

Namecheap will have a very pronounced role in governing the behavior of Bluesky users who used Namecheap as the provider for their handle. We know that Bluesky domain name registrants have to follow the registration agreement (acceptable use), and Namecheap transparently mentions what those processes are. However, some policies are not very detailed. For example, it does not mention how it decides whether a certain handle is involved with terrorist activities or with promoting them. It simply states: "Abuse Reports stand for any other inappropriate content, including but not limited to: identity theft, unauthorized redirect/frame/IP pointing, defamation, terrorism propaganda, HYIP, warez, etc." It has more elaborate policies on copyright infringement.

It is very possible that Namecheap will get involved with "trust and safety" and content issues on Bluesky, as well as intellectual property infringement disputes. Bluesky might have inadvertently outsourced some of its trust and safety function to Namecheap.

This might go beyond Namecheap and involve other registries and registrars. At the moment, Bluesky allows users to use their own domains as handles, and those domains do not have to be registered at Namecheap. Users can have a totally different registrar, in a different jurisdiction, with a different set of rules. But since there is no contractual relationship between Bluesky and those other registrars, perhaps they will take little action.

What would be the role of registries? 

Registries sometimes have their own internal content governance policies, which they impose on the registrars that register domain names or directly on the registered name holder. This is especially the case if they operate newer names (such as .LOL, as opposed to .COM), since they might have a direct relationship with the domain name registrant. For example, UNIREGISTRY (section VI) requires that .SEXY domain name holders not permit content unsuitable for viewing by a minor to be viewed from the main top-level directory (content that is immediately visible if a user navigates to the domain name). In another case, for domain names ending in .HIV that deny the existence of HIV, the registry reserves the right to delete the domain name. (To be clear, many other registries have similar policies; I am just trying to give examples that make the issue more tangible.)

And what’s the role of ICANN?

Hopefully, ICANN will have no role in this. We have worked long and hard to keep ICANN away from content regulation, and it should remain that way. We certainly do not want ICANN to moderate and govern online behavior. However, when it comes to domain name registrants' privacy, or to purely technical issues of Domain Name System abuse, ICANN will have some role to play, as it governs registries and registrars on those aspects.

What does this mean for the governance of the handles and behavior of the users and users’ access? 

This might actually be a good deal for a decentralized Internet. It could improve authentication and interoperability, and bring discussion of domain names back into public discourse. It could also potentially decentralize trust and safety, as decision-making power on trust and safety and other issues would be redistributed. However, registrars' involvement with "handle" governance on social media platforms could cause problems, including: 1) (if Bluesky scales) some registrars are not ready to cope with the high volume of complaints common on social media platforms; 2) domain name deletion and disabling can be less proportionate than other methods; for example, sub-domains engaged in legal activities could be affected if the domain name is deleted for violating the respective policies; and 3) access to domain names could become an issue here as well. Sometimes, due to economic sanctions, registrars confiscate or delete domain names, and some registrars over-comply with sanctions and do not allow residents of a sanctioned country to register a domain name at all. So if using domains as social network handles takes off, sanctions might affect access to those handles too, spreading sanctions further into social network governance and onto users.

These are speculations, since Bluesky has not yet amassed a large network of users. However, it is worth monitoring the issue as it relates to trust and safety at the domain name level and the new roles and responsibilities it could create for registries and registrars in the digital trust and safety space.




XR: Innovate and Protect with Community Governance

The World Economic Forum published a report in 2019 on the importance of building beneficial and reliable governance standards for the Fourth Industrial Revolution. Of course, achieving proper governance is difficult within a complex environment of regulatory bodies, private companies, and rapidly advancing technology. Nevertheless, engaging in ongoing discussions about appropriate governance mechanisms for emerging technologies, ones that are adaptable and protective of users, remains crucial.

Community governance is a participatory approach to decision-making where a community of stakeholders collaboratively governs and manages a particular domain. This approach emphasizes a bottom-up, inclusive process that considers all stakeholders’ diverse perspectives. In emerging technologies such as extended reality (XR), community governance is particularly relevant as it allows for innovation while protecting communities from abuse and evolving private governance mechanisms. However, many current XR environments have terms and conditions that are not bottom-up or community-oriented, highlighting the importance of developing community governance mechanisms to ensure that these technologies are used to benefit everyone.

Implementing community governance in the early stages of XR development can set the tone for future innovations and help mitigate potential problems. Community governance allows for a more democratic and inclusive decision-making process where various perspectives and interests are considered. This can lead to more informed and equitable decision-making, setting a precedent for future regulations prioritizing the users’ interests. Additionally, it can help protect users and communities from potential abuses of the technology and help promote innovation and creativity, which can lead to new and innovative approaches to technology development and use.

The rapid fruition of XR ideas and products has proven to be much more than a gimmick for the average consumer, becoming lucrative and pervasive in government and private spaces. XR, an umbrella term for augmented reality, virtual reality, and mixed reality, traces its roots to the 1800s and appears in many everyday applications, such as video games and Google Earth. Yet governments and private companies still find themselves at a crossroads when discussing further development, research, and governance within XR.

The current terms and conditions of many XR environments are developed and enforced by a centralized authority, such as the platform owner or developer. For example, the terms and conditions of Resolution Games, a Stockholm-based VR and AR studio, state that by "accessing the Game, you agree to abide by the Terms, and a legally binding agreement is created between you and Resolution Games." In many cases, XR companies include a sentence within their terms and conditions explicitly stating that the relationship is only between the user and the owning company. This approach can lead to a lack of transparency, accountability, and inclusivity in decision-making, as the perspectives and interests of users and other stakeholders may not be considered. This can result in issues such as content moderation policies that disproportionately impact certain groups, or the use of user data without consent.

Like the Internet and other emerging technologies, XR has various facets that contribute to a user's experience, ultimately raising questions of individual rights and protections. For example, a clear definition of a "virtual crime" within XR has not been established. Even though crimes are more apparent in other technologies, like account hacking or ransomware, defining virtual crimes becomes murky when differentiating real-life experience from the type of reality that XR provides. Additionally, privacy and security concerns have emerged, given XR's use of a user's data for its functionality. Bloomberg Law published an article discussing eye-tracking and its ability to collect information such as mental and emotional state, age, gender, and sexual orientation.

According to the General Services Administration, no laws or policies directly govern XR usage. Governing XR technologies is a multifaceted challenge, with governments creating policies and regulations for aspects of XR but not for XR itself. For instance, the EU's General Data Protection Regulation (GDPR) covers items like the personal data tied to XR. Similarly, the U.S. has created an "AI Bill of Rights" that touches upon aspects of XR but does not directly discuss it.

Conversations around governance within XR have been prevalent but not collaborative or community-based. Many existing proposals for XR governance have come only from private companies or international organizations. Meta, formerly known as Facebook, is dedicated to building the "Metaverse," an immersive XR environment. In doing so, it has published its expectations and regulatory practices on data and safety, essentially leaving the responsibility for overseeing these concerns to itself.

International organizations, on the other hand, have provided a much more in-depth analysis of the negative implications of XR and how to address them. The XR Safety Initiative (XRSI) published a paper addressing many issues within XR environments, like digital divides and profiling, and providing multiple regulatory suggestions ranging from hardware to language. XRSI introduces "Internet hindsight": the lack of regulation and oversight of the Internet that led to misuse, abuse, and the breakdown of trust, arguing that the same is happening, or can happen, with XR systems. One example was the data concerns that arose when it was revealed in 2018 that Cambridge Analytica had misused Facebook users' data during the 2016 U.S. elections.

In April 2022, the Bipartisan Policy Center published a report revealing a large estimated increase in the market size of XR, not only in everyday uses like video games but also in healthcare and engineering. This not only shows the popularity of XR in various fields but also underscores the need for laws and policies to keep pace with that popularity. While support for XR is becoming more apparent, without proper governance XR can be detrimental to individuals.

The Bipartisan Policy Center, XRSI, and other organizations continue to make the case that community governance is needed to protect XR users. Community governance calls for implementing rules and regulations through the collective effort of various bodies, from governments to individuals. Digital environments like the Internet or XR are vast, and the task of governing them can be daunting, given varying jurisdictions, opinions, customs, and other factors.

Such community governance would involve an organized coalition of stakeholders, including governments, XR professionals, and international organizations. A community governance model can benefit current users, protect future users, and mitigate issues that have not yet materialized. Ways of thinking like “Internet hindsight” would help mitigate these issues by pointing to problems that have not yet been experienced in XR but remain possible. Community governance would also draw insight from many corners of XR experience, as well as from important developments and innovations. In this way, future developments can be prioritized correctly and handled ethically, reducing the biases that come with leaving governance to private companies.

Digital technologies will continue to evolve as more users join these experiences and creators develop new ideas. But users should not have to wait for a lawsuit to find redress for negative experiences, nor should private companies take the lead in governing XR environments and products. XR is growing quickly and being adopted in many areas, but without community governance its users remain unprotected.

A multistakeholder summit? The case of the Christchurch Call

Too many summits are high-level, ineffective meetings filled with well-meaning but empty speeches. The multistakeholder Christchurch Call summit this year, however, differed greatly from the usual UN General Assembly meetings. New Zealand and France used their political capital to bring together representatives of different stakeholder groups for a discussion about countering terrorism online. To the participants’ surprise, most of the interventions were conversational, and civil society was included on an equal footing.

This blog includes Digital Medusa’s opinions about the summit and the Christchurch Call. 


In 2019, New Zealand and France convened the Christchurch Call to Action after the terrorist attack at a mosque in Christchurch that killed 51 Muslims and injured 50. The horrendous attack was live-streamed on Facebook and other tech-corporations’ platforms.

When the Call was launched in 2019, civil society around the world criticized its own lack of presence and complained of being treated as an afterthought. But over the past three years, we have witnessed a slow but sustained change and a move towards a true multistakeholder forum. The Call has become a forum that can go beyond mere lip service, that is genuinely multistakeholder, and that pursues the Christchurch Call commitments to preserve a secure, open, interoperable Internet while fighting terrorist behavior online.

The governments of New Zealand and France convened the Christchurch Call Advisory Network (CCAN), which comprises civil society organizations, including impacted communities, digital rights organizations, and members of the technical community, and which aims to defend a global, interoperable Internet. While the role of civil society organizations in similar forums is often contested and unclear, CCAN has made real progress towards meaningful participation. It is important that this progress now continue beyond attending a high-level meeting with world leaders.

Crisis Response Management and Internet Infrastructure

During the summit, we discussed crisis response management and protocols. Various countries have made considerable progress towards a voluntary but cohesive crisis response protocol that can adapt to the global nature of the Internet. However, we increasingly see calls for content moderation at the Internet architecture level (domain names, hosting providers, cloud services, etc.). Proportionate moderation of content at the infrastructure level might not be possible in all cases. Especially during a crisis, we have to be extremely careful with the techniques used to filter and block access to domains and websites, because blocking at this level can hamper access to other services online. We also need to evaluate the human rights impact of each measure at each stage of crisis response. A global, interoperable, and interconnected Internet needs a holistic approach to safety: one that does not focus exclusively on blocking, take-downs, and after-the-incident responses, but that offers a systematic way of tackling the issues raised by horrific content online.

Researchers’ Access to Data

Perhaps surprisingly, Digital Medusa deviates from many researchers and civil society organizations on calls for researchers’ access to data. While the Digital Services Act will facilitate such access, I do not believe we yet have the governance structures in place to validate research, nor the privacy-enhancing structures to prevent abuse of personal and private data. New Zealand, France, and a few others announced an initiative that could address this issue while also facilitating researchers’ access to data. Its effectiveness remains to be seen, especially as it primarily seeks technological fixes.

Internet of the Future

It is natural to think that, if bad content online is the source of danger, then all that is needed is to remove that content through moderation. But content removal does not on its own make the Internet safe. Our future efforts need holistic approaches, and we need to work with impacted communities and with the operators of the Internet. Content removal and take-downs can have a major impact on the well-being of individuals. Careless removal can cut off access to a host of essential online services, and it can hamper uprisings and information sharing during a crisis. I hope that content moderation will become only one tool (and not even the most important one), and that we come up with more innovative ways to govern our conduct on the Internet.


Plans for the new year: defeating Digital Perseus

I officially launched Digital Medusa in September 2021. It has been challenging but also very fulfilling, and any step towards defeating digital Perseus is worthwhile. Below, I summarize some of what Digital Medusa has done over the past four months and a limited list of what will happen in the new year:

Social Media Governance 

  1. I joined the co-chairs of the Christchurch Call Advisory Network, a civil society group that advises the governments of New Zealand and France on the Christchurch Call commitments, which aim to moderate terrorist and violent extremist content. 
  2. We (Jyoti Panday, Milton Mueller, Dia Kayyali, and Courtney Radsch) developed a framework for analyzing multistakeholder governance initiatives in content governance. The framework will be published as a White Paper of the Internet Governance Project. Let us know if you have any comments. 
  3. I joined a panel of the Paris Peace Forum on the Christchurch Call. Read all about it. Watch.
  4. My research on Telegram governance attracted more attention after the Capitol riot in January 2021. A New York Times piece mentions my research.
  5. I found an amazing network of people who work on prosocial design. Prosocial design and governance are alternative approaches to heavy content moderation and punitive measures for platform governance. We plan to discuss prosocial governance more in 2022. 

Internet Infrastructure

  1. I joined a group convened by Mark Nottingham to discuss how legislative efforts can hamper interoperability of the Internet, and the available remedies. 
  2. Because of the Taliban’s reign in Afghanistan, I wrote about how sanctions will affect Afghanistan’s access to the Internet. We also held a webinar (thanks to the Urban Media Institute) with Afghan colleagues to discuss the developments and setbacks. The video will be available on this website.
  3. Fidler and I published an article in the Journal of Information Policy about Internet protocols and controlling social change. We argue that to understand the effect of Internet protocols on society, we need to put them in context: implementation matters, and aligning Internet protocols with human rights without considering context might not bring about the needed social change. The paper generated a lot of discussion on the Internet History mailing list, with some very interesting insights (the thread is filled with ad hominem attacks against the authors, but even those attacks make good anthropological research material).


What will happen in 2022?


  1. I am helping draft an Internet governance syllabus that the community can use to convene Internet governance schools and trainings. I am doing this work for the Internet Governance Forum, in a consultative manner. The plan is to come up with a global syllabus that includes both core and elective modules. There will be a strong focus on what Schools on Internet Governance (SIGs) do and on helping developing countries convene schools and trainings on Internet governance more easily. 
  2. Digital Medusa will undertake more vigorous research on sanctions that affect access to the Internet.
  3. Along with the Christchurch Call Advisory Network members, Digital Medusa is planning to be very active and find effective ways to contribute to CCAN and the Christchurch Call community. 
  4. Digital Medusa will undertake research and advocacy for prosocial governance, instead of focusing solely on “content moderation” in social media governance.


Digital Medusa, for now, includes my (FB) activities. Hopefully, in the new year we can go beyond one Digital Medusa and attract more partners. 

Happy new year to all! To a year with fewer Digital Perseus moments and fresher points of view on digital governance. 


Multistakeholder Content Governance

With multistakeholder governance gaining popularity in content governance, some initiatives have been keen on using the term to describe their governance model. The conversations around the multistakeholder nature of these processes motivated us to provide a draft framework to assess multistakeholder models in content governance.

We also held a session about multistakeholder content governance at the Internet Governance Forum 2021.*

During the session we discussed three initiatives: the Christchurch Call to Action, the Global Internet Forum to Counter Terrorism, and the Facebook Oversight Board. Notably, the first two have a narrow mandate: to eradicate and prevent terrorist and extremist content across platforms and online service providers. The Oversight Board, however, has a much broader mandate relating to content in general, but it is limited to Facebook and Instagram.

There are a few important points that emerged from the session:

  1. Multistakeholder governance goes beyond nation-states
  2. Multistakeholder participation can happen at various stages of decision-making
  3. Authority of the stakeholder groups is not directly related to their influence

Going beyond nation-states in content governance

Imagine if, instead of just opening “public policy” offices in different countries, online platforms considered using a multistakeholder model to govern the users on their platforms. This is not to say that we can or should give every issue a global dimension and apply the multistakeholder model to it. But some issues, because of the global nature of these platforms, can be addressed with a multistakeholder model.

The Internet has revealed that the arbitrary borders of nation-states are not an optimal unit of governance. Multistakeholder models allow us to use other units. Sometimes “local” issues do not belong to a particular geography but are shared with many others around the world.

Also, platforms’ content policies can and will affect other parts of the Internet and its architecture, so including the stakeholders who operate the infrastructure in these discussions can help preserve the open and global nature of the Internet.

When does multistakeholder participation start? 

The participation of different stakeholder groups in governance processes does not always start from the very beginning. Sometimes initiatives begin as public-private partnerships or as industry efforts.

At the Christchurch Call, because governments and tech-corporations negotiated the commitments bilaterally, other stakeholder groups were left out and their role was unclear. The governments later decided to give civil society a more formal role: they convened the Christchurch Call Advisory Network, made up of civil society members, and focused on including civil society in the implementation phase of the commitments.

Another example is the Global Internet Forum to Counter Terrorism, which began as an industry-led initiative and is now trying to infuse some multistakeholder structure into its processes.

Converting some or all of a top-down process into a multistakeholder one comes with its own challenges. For example, the Christchurch Call Advisory Network has to work with the Christchurch Call text, which uses broad terms such as “online service providers” as well as terms with highly contested definitions, such as “terrorism.”

Should stakeholder groups have authority or influence?

This is an interesting debate, since it is about soft versus hard power. Looking at the history of the Internet Corporation for Assigned Names and Numbers (ICANN), we might argue that over time stakeholders became more powerful and gained a vote in policy-making decisions. But at the same time, the Governmental Advisory Committee, which was supposedly set up only to give “advice,” became more and more powerful, to the extent that its advice became de facto binding on the ICANN board of directors (with some minimal exceptions).

While there is no clear-cut answer to what role different stakeholders should have, authority does not always bring influence. An initiative can tick all the multistakeholder boxes and yet, in the end, have its decisions made by one powerful stakeholder.

So why do we need this framework?

We need to rescue “multistakeholder governance” by demystifying it. “Multistakeholder” processes are not all the same: the degree of involvement of different stakeholders in decision-making differs from one initiative to another. A framework for identifying these differences can help us understand what is working and what needs to improve, and give us a clearer picture of governance models in content governance. It is not about who is more or less multistakeholder; it is about how these initiatives operationalize multistakeholder models, how effective those approaches are, and how they can be improved.




*We, a group of academics and civil society actors including Dr. Courtney Radsch, Dia Kayyali, Dr. Milton Mueller, and Jyoti Panday, suggested a framework for multistakeholder content governance. During the session we had a conversation with Dr. Ellen Strickland from the New Zealand government, Rachel Wolbers, Public Policy Manager at the Facebook Oversight Board, and Dr. Erin Saltman from the Global Internet Forum to Counter Terrorism, to discuss the framework for multistakeholder content governance.





Threatening Social Media Platforms With Traffic Throttling

Recently, I prepared a lecture for the Asia Pacific School of Internet Governance. In the midst of my research, I came across an old piece of news: last year, Facebook claimed that it had agreed to comply with the Vietnamese government’s requests to take down anti-state material only because the government had threatened to throttle traffic to Facebook. Content removal, automated take-downs, and the like are not the only ways that governments and other actors regulate social media platforms. One aspect we should think more about is the role of governments in regulating social media platforms via the Internet infrastructure. When governments feel free to use Internet infrastructure to regulate actors on the Internet, we need to think about the appropriate ways for social media platforms to respond. Should they, like Facebook, give in to government requests in the face of such threats?

About The Author

Farzaneh Badii

Digital Medusa is a boutique advisory providing digital governance research and advocacy services. It is the brainchild of Farzaneh Badi[e]i. Digital Medusa’s mission is to provide objective and alternative digital governance narratives.