
A multistakeholder summit? The case of the Christchurch Call

Too many summits are high-level, ineffective meetings filled with well-meaning but empty speeches. But this year's multistakeholder Christchurch Call summit differed greatly from the usual UN General Assembly meeting. New Zealand and France used their political capital to bring together representatives of different stakeholder groups for a discussion about countering terrorism online. To the participants' surprise, most of the interventions were conversational, and civil society was included on an equal footing.

This blog includes Digital Medusa’s opinions about the summit and the Christchurch Call. 

Background

In 2019, New Zealand and France convened the Christchurch Call to Action after the terrorist attack that killed 51 and injured 50 Muslim worshippers at mosques in Christchurch. The horrendous attack was live-streamed on Facebook and other tech corporations' platforms.

When the Call was launched in 2019, civil society organizations around the world criticized their lack of presence and complained of being treated as an afterthought. But over the past three years, we have witnessed a slow but sustained change and a move towards convening a true multistakeholder forum. It has become a forum that can go beyond mere lip service, that is genuinely multistakeholder, and that advances the Christchurch Call commitments to preserve a secure, open, interoperable Internet while fighting terrorist behavior online.

The governments of New Zealand and France convened the Christchurch Call Advisory Network (CCAN), which comprises civil society organizations, including impacted communities, digital rights organizations, and members of the technical community who aim to defend a global, interoperable Internet. While the role of civil society organizations in similar forums is often contested and unclear, CCAN has made real progress towards meaningful participation. It is important that this progress now continues beyond merely attending a high-level meeting with world leaders.

Crisis Response Management and Internet Infrastructure

During the summit, we discussed crisis response management and protocols. Various countries have made a lot of progress toward a voluntary but cohesive crisis response protocol that can adapt to the global nature of the Internet. However, we increasingly see calls for content moderation at the Internet architecture level (domain names, hosting providers, cloud services, etc.). Proportionate moderation of content at the infrastructure level might not be possible in all cases. Especially during a crisis, we have to be extremely careful with the techniques we use to filter and block access to domains and websites: such techniques can be blunt and might hamper access to other services online. We also need to evaluate the human rights impact at each stage of crisis response. A globally interoperable and interconnected Internet needs a holistic approach to safety, one that does not focus exclusively on blocking, take-downs, and after-the-incident responses, but that offers a systematic way of tackling the issues raised by horrific content online.

Researchers' Access to Data

Perhaps surprisingly, Digital Medusa deviates from various researchers and civil society organizations in their calls for researchers' access to data. While the Digital Services Act will facilitate such access, I do not believe we have the governance structures in place to validate research, nor the privacy-enhancing structures to diminish abuse of personal and private data. New Zealand, France and a few other governments announced an initiative that could address this issue while also facilitating researchers' access to data. The effectiveness of such an initiative remains to be seen, especially as it leans primarily on technological fixes.

Internet of the Future

It is natural to think that, if bad content online is the source of danger, then all that is needed is to remove that content through moderation. But content removal does not, on its own, bring safety to the Internet. For our future efforts, we need holistic approaches, and we need to work with impacted communities and Internet operators. Content removal and take-downs can have a major impact on the well-being of individuals. Careless removal and take-downs can affect access to a host of essential online services, and they can hamper uprisings and information sharing during a crisis. I hope that content moderation will become only one tool among many (and not even the most important one), and that we come up with more innovative ways to govern our conduct on the Internet.

 

Plans for the new year: defeating Digital Perseus

I officially launched Digital Medusa in September 2021. It has been challenging but also very fulfilling, and any step towards defeating Digital Perseus is worthwhile. Below, I summarize some of what Digital Medusa has done over the past four months, along with a short list of what will happen in the new year:

Social Media Governance 

  1. I joined the co-chairs of the Christchurch Call Advisory Network, a civil society group that advises the governments of New Zealand and France on the Christchurch Call commitments, which aim to curb terrorist and violent extremist content online.
  2. We (Jyoti Panday, Milton Mueller, Dia Kayyali and Courtney Radsch) developed a framework for analyzing multistakeholder governance initiatives in content governance. The framework will be published as a white paper of the Internet Governance Project. Let us know if you have any comments.
  3. I joined a panel on the Christchurch Call at the Paris Peace Forum.
  4. My research on Telegram governance attracted more attention after the Capitol riot in January 2021; a New York Times piece mentioned it.
  5. I found an amazing network of people who work on prosocial design. Prosocial design and governance are alternative approaches to heavy content moderation and punitive measures for platform governance. We plan to discuss prosocial governance more in 2022. 

Internet Infrastructure

  1. I joined a group convened by Mark Nottingham to discuss how legislative efforts can hamper the interoperability of the Internet, and what remedies are available.
  2. Following the Taliban takeover of Afghanistan, I wrote about how sanctions will affect Afghanistan's access to the Internet. We also held a webinar (thanks to the Urban Media Institute) with Afghan colleagues to discuss the developments and setbacks. The video will be available on this website.
  3. Fidler and I published an article in the Journal of Information Policy about Internet protocols and controlling social change. We argue that to understand Internet protocols' effect on society, we need to put them in context: implementation matters, and aligning Internet protocols with human rights without considering context might not bring the social change needed. The paper generated a lot of discussion on the Internet History mailing list, with some very interesting insights (the thread is filled with ad hominem attacks against the authors, but even those attacks make good anthropological research material).

 

What will happen in 2022?

 

  1. I am helping draft an Internet governance syllabus that the community can use to convene Internet governance schools and trainings. I am doing this work for the Internet Governance Forum, and it will be carried out in a consultative manner. The plan is to come up with a global syllabus comprising core modules as well as elective ones, with a strong focus on what Schools on Internet Governance (SIGs) do and on helping developing countries more easily convene schools and trainings on Internet governance.
  2. Digital Medusa will undertake more rigorous research on sanctions that affect access to the Internet.
  3. Along with the other members of the Christchurch Call Advisory Network, Digital Medusa plans to be very active and to find effective ways to contribute to CCAN and the Christchurch Call community.
  4. Digital Medusa will undertake research on and advocate for prosocial governance, instead of focusing solely on “content moderation” in social media governance.

 

Digital Medusa, for now, comprises my (Farzaneh Badii's) activities. Hopefully, in the new year we can go beyond one Digital Medusa and attract more partners.

Happy new year to all! To a year with fewer Digital Perseus moments and fresher points of view on digital governance.

 

Multistakeholder Content Governance

With multistakeholder governance gaining popularity in content governance, some initiatives have been keen on using the term to describe their governance model. The conversations around the multistakeholder nature of these processes motivated us to provide a draft framework to assess multistakeholder models in content governance.

We also held a session about multistakeholder content governance at the Internet Governance Forum 2021.*

During the session we talked about three initiatives: the Christchurch Call to Action, the Global Internet Forum to Counter Terrorism, and the Facebook Oversight Board. Notably, the first two initiatives have a narrow mandate: to eradicate and prevent terrorist and violent extremist content across platforms and online service providers. The Oversight Board, however, has a much broader mandate relating to content in general, but it is limited to Facebook and Instagram.

There are a few important points that emerged from the session:

  1. Multistakeholder governance goes beyond nation-states
  2. Multistakeholder participation can happen at various stages of decision-making
  3. The authority of stakeholder groups is not directly related to their influence

Going beyond nation-states in content governance

Imagine if, instead of just opening “public policy” offices in different countries, online platforms considered using a multistakeholder model to govern their platforms. This is not to say that we can or should give a global dimension to every issue and apply the multistakeholder model everywhere. But there are some issues that, because of the global nature of these platforms, we can address with a multistakeholder model.

The Internet has revealed that arbitrary nation-state borders are not an optimal unit for governance. Multistakeholder models allow us to use other units. Sometimes “local” issues are not confined to a certain geography and are shared with many others around the world.

Platforms' content policies also can and will affect other parts of the Internet and its architecture, so including the stakeholders who operate the infrastructure in these discussions can help preserve the open and global nature of the Internet.

When does multistakeholder participation start? 

The participation of different stakeholder groups in governance processes does not always start from the very beginning. Sometimes initiatives start as public-private partnerships or as industry-led efforts.

During the Christchurch Call, the governments and tech corporations negotiated the commitments bilaterally, so other stakeholder groups were left out and their role was unclear. The governments then decided to give civil society a more formal role: they convened the Christchurch Call Advisory Network, which included civil society members and focused on involving civil society in the implementation phase of the commitments.

Another example is the Global Internet Forum to Counter Terrorism, which began as an industry-led initiative and is now trying to infuse some multistakeholder structure into its process.

Converting some or all parts of a top-down process into a multistakeholder process comes with its own challenges. For example, the Christchurch Call Advisory Network has to work with the Christchurch Call text, which uses broad terms such as “online service providers” as well as terms with very contested definitions, such as “terrorism.”

Should stakeholder groups have authority or influence?

This is an interesting debate, since it is about soft versus hard power. If we look at the history of the Internet Corporation for Assigned Names and Numbers (ICANN), we might argue that over time stakeholders became more powerful and gained a vote in policy-making decisions. But at the same time, the Governmental Advisory Committee, which was supposedly set up only to give “advice,” became more and more powerful, to the extent that its advice became de facto binding on ICANN's board of directors (with some minimal exceptions).

While there is no clear-cut answer to what role different stakeholders should have, authority might not always bring influence. An initiative can tick all the multistakeholder boxes while, in the end, the decisions are made by one powerful stakeholder.

So why do we need this framework?

We need to rescue “multistakeholder governance” by demystifying it. “Multistakeholder” processes are not all the same: the degree of involvement of different stakeholders in decision-making differs from one initiative to another. Using a framework to surface these differences can help us understand what is working and what needs to improve, and it might give us a clearer picture of the governance models in content governance. It is not about who is more or less multistakeholder; it is about how these initiatives operationalize multistakeholder models, how effective those approaches are, and how they can be improved.


*We, a group of academics and civil society actors including Dr. Courtney Radsch, Dia Kayyali, Dr. Milton Mueller and Jyoti Panday, suggested a framework for multistakeholder content governance. During the session we had a conversation with Dr. Ellen Strickland from the New Zealand government, Rachel Wolbers, Public Policy Manager at the Facebook Oversight Board, and Dr. Erin Saltman from the Global Internet Forum to Counter Terrorism, to discuss the framework.


Threatening Social Media Platforms With Traffic Throttling

Recently, I prepared a lecture for the Asia Pacific School of Internet Governance. In the midst of my research, I came across an old piece of news: last year, Facebook claimed that it had agreed to comply with the Vietnamese government's requests to take down anti-state material only because the government had threatened to throttle traffic to Facebook. Content removal, automated take-downs, and the like are not the only ways that governments and other actors regulate social media platforms. One aspect I think we should consider more is the role of governments in regulating social media platforms via Internet infrastructure. When governments feel at liberty to use Internet infrastructure to regulate actors on the Internet, we need to think about the appropriate ways for social media platforms to respond. Should they, like Facebook, agree to government requests in the face of such threats?

About The Author

Farzaneh Badii

Digital Medusa is a boutique advisory providing digital governance research and advocacy services. It is the brainchild of Farzaneh Badi[e]i. Digital Medusa's mission is to provide objective and alternative digital governance narratives.