Can Encrypted Messaging Be Safe, Inclusive, and Respectful of Human Rights?

By Wafa Ben-Hassine, Principal, Responsible Technology

I believe it can. And it must. As a human rights attorney, I come at this issue thinking about people first.

Whether it's messaging a family member to ask them to bring home a loaf of bread or discussing next year's business strategy with a colleague, secure encryption lets us have private conversations and guarantees the integrity of our communication. Users' rapid adoption of encrypted, private messaging speaks volumes about its importance: billions of people worldwide use it daily as an indispensable tool in their lives.

While the examples above are innocuous, we know that encrypted messaging platforms can also be put to more nefarious uses. And given these platforms' breadth and reach, and the fact that the content of conversations is undetectable, we are deeply concerned about the harms their deployment can create. Examples include lynch mobs in India, widespread disinformation about the coronavirus pandemic in Mexico and the US, the swaying of election results in Brazil, the facilitation of global child abuse, and channels that let white supremacists spread Nazi propaganda in Germany and the US.

But we can't simply blame encryption for these harms. Encryption is widely regarded as a prerequisite for exercising rights such as freedom of expression and the right to privacy online. It is a means to a fundamental end. With the Internet increasingly permeating every aspect of our lives, encryption is a technology that will not, and should not, go away anytime soon. Health and financial institutions also rely on encryption to keep individuals' personal information safe from identity theft and fraud.

We need to disrupt a status quo that materially benefits the large tech companies providing these platforms (often for free), which then turn a blind eye to the harms those platforms can cause. Researchers, civil society groups, and governments should have access to select information about how the technology is operated and used (for example, market sizes by geography, metadata about how information travels, and peak-use dynamics), all appropriately aggregated to protect individual privacy, so they can better understand the nature, type, and impact of these harms and help propose ways to fix them. And the tech platforms that provide these services should ultimately be accountable for ensuring their safety and quality.

I joined Omidyar Network to lead and support the kind of work that can change that status quo. Our goal is to work with our partners to advance a robust, evidence-driven, and sensible public conversation, one that ensures encrypted messaging is safe, trustworthy, and inclusive without sacrificing encryption or privacy. And one that, true to my professional background, respects and upholds human rights.

To do that, we must start by envisioning the kind of messaging we want to see. The debate over whether or not we should encrypt is a non-starter. Instead, we should ask: What kind of information do researchers need to understand and prevent some of the issues we see? What pressure points can we identify to tilt the balance with policymakers? What does accountability look like in an encrypted ecosystem that has, in some instances, verifiably enabled rampant, organized disinformation and propaganda campaigns? What can different stakeholders do to stop the exploitation of technologies that were built to benefit privacy?

Answers to these questions will help us collectively conceive of more effective ways to mitigate any harmful byproducts of private, secure messaging, and begin to truly enjoy all that it has to offer.

As social activist and author Alice Walker said: “Look closely at the present you are constructing: it should look like the future you are dreaming.”