In April, we introduced a blog series exploring the application of existing data protection laws to emerging technologies, related power dynamics, and implications for future design and thinking on personal data. Since then, we’ve seen many changes to global data protection frameworks. These include a new data protection bill in Zimbabwe, the passage of a data protection act by Jamaica’s House of Representatives, and new proposed regulations for the California Consumer Privacy Act. We are hopeful that the final versions of these instruments, after public engagement, will reflect and anticipate new uses of data and the rise of emerging technology.
In this second post, we explore the relationship between data protection laws and social payment programs.
Rapid digitization of social payments in response to the global pandemic
Governments are rapidly deploying social payments (such as unemployment or food benefits) to address the crushing personal financial losses and economic stress resulting from the pandemic. As of June, the World Bank estimated $589 billion in government COVID-19 relief spending on social protection programs globally. About 195 countries have introduced, scaled, or adapted social payment programs, reaching upwards of 1.1 billion people. While some people in need may already be covered by existing government social payment programs, many around the world are excluded, including those working in the growing gig economy.
With equal speed, governments around the world are seeking to digitize their social payment programs and accelerate digital payments in support of social distancing, government efficiency, and ease of use for recipients. These efforts face challenges, however, including a continued preference for cash in many societies, existing strains on digital payments infrastructure, liquidity challenges for financial agents, and disparities in access to digital infrastructure.
In addition, the excessive and unnecessary collection of biometric and other sensitive data, shortcomings in legislative oversight, and mission creep heighten the risk of surveillance and privacy violations for people receiving social payments digitally.
The UN Special Rapporteur’s 2019 report on the digital welfare state cautioned:
[S]ystems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish . . . [T]he irresistible attractions for Governments to move in this direction are acknowledged, but the grave risk of stumbling, zombie-like, into a digital welfare dystopia is highlighted . . . [Big technology companies] operate in an almost human rights-free zone, and this is especially problematic when the private sector is taking a leading role in designing, constructing and even operating significant parts of the digital welfare state.
We share this concern and offer suggestions for a better way forward.
Pitfalls to avoid
Several governments have shifted data protection priorities in response to COVID-19. The Thai government delayed the launch of its personal data protection regulations for a year, and UK data protection authorities paused important investigations into violations of the law. In contrast, the Nigerian government accelerated its digitization strategy, now requiring all public institutions to digitize their databases within 60 days. The new risks introduced by these decisions affirm why we cannot rely solely on general data protection and privacy laws to mitigate the risks posed by new and emerging technologies. They also emphasize the importance of laws that foresee potential risks and track the arc of technological evolution to build in protections that will outlast this moment in history. Case law suggests that we need not always reinvent the wheel to provide adequate safeguards in the face of new forms of digitization, including social payments. Some recent examples include:
- The District Court of The Hague recently ruled that the use of an automated fraud detection system known as System Risk Indication (“SyRI”) in the enforcement of welfare benefits constituted an unjustified interference with the right to privacy. The court found that SyRI’s broad application and lack of transparency went beyond what was necessary or proportionate for the intended purpose of fraud detection, violating the fundamental international human rights law principles of necessity and proportionality. This ruling speaks to the potential strength of established human rights laws and fundamental principles to provide safeguards against rapidly emerging technologies.
- Last year, the Irish Data Protection Commissioner found that the widespread use of the Department of Employment Affairs and Social Protection’s Public Services Card (PSC) violated European data protection law. The Commissioner determined that mandating the use of the PSC, or processing data from it, by any public or private entity other than the issuing Department had no basis in law. The Commissioner further determined that the PSC, issued for a narrow social purpose, could not be used by the private sector for age verification or similar purposes. The PSC example demonstrates the need to establish a clear, lawful basis for collecting personal data in social protection programs, and to strictly limit additional uses of that data.
- In 2018, India’s Supreme Court upheld the constitutionality of the Aadhaar biometric identity system but struck down the ability of commercial actors to require Aadhaar-based authentication for purposes such as opening a bank account or mobile phone account. Though subsequently challenged, this ruling set an important precedent for limiting private sector uses of data harvested in public social protection schemes.
Pathways to social payment programs that respect people’s rights
Data protection should be enshrined in law, with clear rules for data sharing between government agencies and other public-sector stakeholders. These rules should explicitly prohibit sharing participants’ personal data with, or selling it to, private companies. Any digital social payment program should:
- Contain grievance and redress mechanisms for payment recipients, who are rarely in a position of political power to challenge government decisions. Moreover, the lawful basis for their participation in these programs, typically consent, is often dubious.
- Mandate disclosures on the use of data and facilitate individual rights with respect to those uses, recognizing that meaningful consent is often not possible. Payment recipients often have little choice but to provide their data in exchange for desperately needed benefits. Informed consent may also prove challenging if the legal terms and conditions of enrolling in such programs are unclear, especially when no alternatives are available.
- Make non-digital alternatives available to recipients and recognize that digitizing social payments may raise significant concerns about social control and stigmatization. There is no meaningful choice between compulsory digitization and exclusion.
- Be mindful of the risk of private sector leakages in public sector social protection programs, particularly through functions that are outsourced to private companies. Recent examples of outsourcing include the use of electronic payment welfare cards in Australia, New Zealand, and South Africa.
- Consider whether there is an opportunity to delink SIM registration from national ID programs in certain settings to mitigate the risks of surveillance humanitarianism. Additionally, clear sunset clauses and data deletion requirements must be written into authorizing legislation.
- Plan for post-crisis audits to assess how data was actually used and to restrict any uses that were justified in the short term but are unnecessary in the long term. In these instances, it is critical to safeguard the interests and fundamental rights of individual data subjects.
The role of technology and enhanced digital infrastructure in social payments should be to meet the immediate needs of recipients and serve as a platform for emergency support, not to further entrench the welfare surveillance state or curtail the fundamental rights of those in need of economic support.
This post is the second in a blog series on the pitfalls and pathways in the global data protection landscape and an exploration of how technology continues to evolve faster than the laws and norms that govern it. You can find the initial post here.