Risking Human Rights is Risking Digital Trust and Safety

The field of Digital Trust and Safety is gaining traction, especially as regulators increasingly aim to regulate people’s behavior online. It is therefore important to look closely at trust and safety practices and consider their impact on human rights and the Internet (I will discuss the Internet’s impact in another blog). Every time we celebrate a product, a service, or a practice that serves digital trust and safety, we should ask: what are the potential human rights implications of these initiatives?
I refer to the human rights framework here not because it is the easiest to implement but because it is a concept globally known to governments through the Universal Declaration of Human Rights (UDHR) and to businesses through the UN Guiding Principles on Business and Human Rights. Digital trust and safety practices can jeopardize certain rights on the Internet, such as privacy and freedom of expression.
As the Internet has become the medium for exercising and infringing on human rights, we should learn from the past and understand the implications of trust and safety on human rights by looking at other corners of the Internet (I don’t know if the Internet is round or square, but in this blog, it is square).
Take security, for example. Cybersecurity became a hot topic in the early 2000s. As in the field of trust and safety, third-party cybersecurity vendors sprang up. Brenden Kuerbis and I wrote a paper in 2017 about the institutional landscape of cybersecurity, in which we discussed the important role of the market in providing security, especially after a cyberattack has taken place.
The security market has evolved, and many third-party vendors are now under scrutiny for the products and services they sell to governments. Sometimes these vendors are incentivized to paint an exaggerated picture of the security landscape in order to create a market for their products and services. Various governments (even democratic ones) have hired them to label activists as threat actors and to profile and monitor them.
The field of digital trust and safety resembles the cybersecurity field in many ways, and we can learn lessons from that field to avoid repeating past mistakes.
I will provide an easy example now (at least easier than encryption), and in the future we can continue drawing parallels between these two fields:
In the past, the personal, sensitive information of domain name registrants (for example, the registrant of digitalmedusa.org), such as their email address, phone number, and mailing address, was published on the Internet through a protocol called WHOIS. WHOIS was, unfortunately, a misleading name: the protocol’s purpose was not at all to identify who is behind a domain name but to make contacting the registrant easier in case a cyber incident happened.
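To make concrete what WHOIS exposed, here is a minimal sketch in Python. The record below is made up (real WHOIS responses vary by registry and registrar), but the key–value layout and the kinds of contact fields shown are typical of what used to be published for anyone to read:

```python
# Hypothetical, simplified WHOIS record; real responses vary by registry/registrar.
SAMPLE_WHOIS = """\
Domain Name: EXAMPLE.ORG
Registrant Name: Jane Doe
Registrant Email: jane@example.org
Registrant Phone: +1.5555550100
Registrant Street: 123 Main Street
"""

def parse_whois(record: str) -> dict:
    """Parse the 'Key: Value' lines of a WHOIS response into a dict."""
    fields = {}
    for line in record.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

contacts = parse_whois(SAMPLE_WHOIS)
print(contacts["Registrant Email"])  # jane@example.org
```

Notice that nothing in the record verifies identity; the fields exist purely to make the registrant reachable, which is exactly why publishing them in full was a privacy problem rather than a security feature.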
For a long time, the sensitive information of domain name registrants was published on the Internet, and some third-party cybersecurity firms used it to provide their security services and products. These services were legitimate and effective. However, for a while there was a big push by certain actors and organizations not to redact this private information. One reason was providing security. Another was easier and less costly access to people’s private and personal information. There are some tradeoffs between privacy and security (though most of the time they go hand in hand), but we should not make the tradeoff so easy and cheap that protecting one right leads to the infringement of another. In this case, we could have redacted domain name registrants’ private and personal information (and we did start doing so for a while with privacy and proxy services) while putting a tiered access mechanism in place to allow legitimate access to such information. But we didn’t, until the General Data Protection Regulation came into force and made us do so.
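A tiered access mechanism of the kind described above could, in principle, look something like the following sketch. The field names, the single "accredited" tier, and the redaction rule are all hypothetical simplifications; real gated-access programs involve vetting, logging, and legal process, but the core idea is the same: redact by default, disclose only to legitimate requesters.

```python
from dataclasses import dataclass

# Hypothetical list of fields treated as sensitive personal data.
SENSITIVE_FIELDS = {"Registrant Email", "Registrant Phone", "Registrant Street"}

@dataclass
class Requester:
    name: str
    accredited: bool  # e.g., a vetted security researcher or law enforcement

def view_record(record: dict, requester: Requester) -> dict:
    """Return the full record only to accredited requesters; redact
    sensitive fields for everyone else."""
    if requester.accredited:
        return dict(record)
    return {k: ("REDACTED" if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}

record = {"Domain Name": "EXAMPLE.ORG", "Registrant Email": "jane@example.org"}
public_view = view_record(record, Requester("anonymous", accredited=False))
print(public_view["Registrant Email"])  # REDACTED
```

The design point is that the default is privacy-preserving: access to personal data is the exception that must be justified, not the baseline that must be opted out of.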
What does this have to do with digital trust and safety? Take abuse monitoring, for example. It is one of the digital trust and safety practices, and it is helpful in preventing and mitigating harm. Those who monitored WHOIS (the cybersecurity researchers) also wanted to monitor abuse and find patterns of abuse in domain registration and the domain name system. But monitoring abuse can intrude on privacy (because it effectively involves surveillance), and individuals can also carry it out using open source intelligence techniques that gather public data, which, just like the WHOIS database, may include private information and metadata. Sometimes it can lead to profiling a group of people and, in turn, to discrimination.
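The difference between monitoring patterns and profiling people can be sketched in a toy example. The keywords, domains, and threshold below are all invented for illustration: the point is that aggregating newly registered domains by suspicious keyword surfaces an abuse pattern (say, a phishing wave) without building a per-registrant profile at all.

```python
from collections import Counter

def flag_keyword_bursts(domains, keywords, threshold=3):
    """Count new registrations per suspicious keyword, not per registrant.
    Surfaces abuse patterns without retaining any personal data."""
    counts = Counter()
    for domain in domains:
        for kw in keywords:
            if kw in domain:
                counts[kw] += 1
    # Report only keywords that cross the burst threshold.
    return {kw: n for kw, n in counts.items() if n >= threshold}

# Hypothetical feed of newly registered domain names.
new_domains = ["paypa1-login.com", "paypal-secure-update.net",
               "mybank-paypal.org", "flowers.example"]
print(flag_keyword_bursts(new_domains, ["paypal", "paypa1"], threshold=2))
# {'paypal': 2}
```

A real abuse-monitoring pipeline is far richer than this, but the design choice it illustrates is the one that matters for rights: what you choose to count determines whether you are detecting abuse or surveilling people.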
Considering these privacy implications, how should third-party vendors monitor abusive behavior? A success story is not simply that monitoring led to dismantling a large number of networks and accounts. There must also be qualitative indicators to understand the success of the project. During the monitoring, how did we preserve the privacy of users? Did we measure the potential impact of monitoring abusive patterns on freedom of expression? Did we stop or correct the practices that increased the risk of privacy violations?
Every time we talk about abuse monitoring as a mechanism for trust and safety, or about other trust and safety practices, we must conduct a human rights impact assessment and consider the privacy and other rights implications. We should also think about what remedies we offer when privacy or other rights are put at risk.
In our trust and safety practices and efforts to counter abuse, we must have qualitative measures alongside quantitative ones. Trust and safety will not be maintained if these practices endanger human rights.
