Biden Administration Made Two Announcements on AI; Don’t Let One Overshadow the Other

This article was originally published on The Messenger.

By Michele Lawrence Jawando

President Biden’s Executive Order on AI is appropriately ambitious. From outlining new standards for AI safety to supporting workers and promoting innovation, the president’s announcement signals a leading role for the United States in the development of this transformative technology. Just as important, Biden’s EO reflects a deep understanding that harnessing AI’s benefits — and protecting against its harms — requires responsible action now.

But even the order’s loudest supporters, myself included, recognize that putting pen to paper is just the beginning. There is still a significant gap between the ambitions outlined in the president’s announcement and the work needed to coordinate the constellation of agencies, institutions, and sectors to make responsible AI a reality.

Enter Vice President Kamala Harris, who followed the president’s announcement by unveiling $200 million in philanthropic funding for responsible AI at the UK AI Safety Summit at Bletchley Park.

To be sure, a $200 million philanthropic commitment may pale in comparison to the $32 billion Senate Majority Leader Chuck Schumer (D-N.Y.) is calling on Congress to allocate for AI. But Harris’s announcement — and the administration’s embrace of philanthropic support — represents a crucial step towards executing on our AI goals.

As a senior vice president at social change venture Omidyar Network, a Google alum, and former chief counsel and senior policy advisor in the U.S. Senate, I’ve built my career bringing diverse voices together to support technological innovation. And I believe that by providing a check on private interests, unearthing best practices, and aligning around global standards, philanthropy has a pivotal role to play in building a responsible AI future. Here’s why:

First, philanthropy can serve as a bridge between the public and private sectors’ interests.

History has shown us that it’s essential to measure a technology’s impact on individuals and society holistically — not just through economic gains or losses alone.

For example, while automation fostered entirely new industries and innovations that strengthened company bottom lines, it also led to massive worker displacement and long-term economic inequality. And while social media has become a trillion-dollar global industry, it has also fueled an unprecedented public health crisis of loneliness.

When it comes to AI governance, philanthropy can push beyond the traditional incentive structures of profits and market dominance and help bridge the gap between private interests and the common good. Research initiatives like the Institute for Security & Technology’s work to understand AI’s impact on human memory, reasoning, and trust are intentionally designed to mitigate AI’s effects on our humanity. These projects may not be a top priority for private companies, but they are crucial to ensuring AI technology works in service of society.

Philanthropy can also serve as “proof of concept” for AI guardrails. Although public support for AI safeguards is overwhelming, the industry’s high barriers to entry can stifle the range of product options available — from auditing tools to transparency mechanisms and safety checks. Rather than accept that incumbents have the best expertise in checking their own AI tools, philanthropy can help fund a more dynamic landscape of potential AI guardrails and identify those that work best.

Funding projects designed to give civil society “go-to” options for model evaluation and improvement, such as red-teaming, algorithmic audits, and public rating systems for large language models, will go a long way towards ensuring we scale the best, most effective policy solutions. Good work is being done across the board here, from Data & Society’s Algorithmic Impact Methods Lab to Dr. Rumman Chowdhury’s Humane Intelligence to Stanford’s Institute for Human-Centered AI and the AI Democracy Project, and philanthropy can and must sustain it.

Finally, philanthropy can help convene global players around consistent standards. Bringing harmonization and interoperability to the approaches governments around the world are pursuing on AI is critical, and there is much the philanthropic sector can do to augment ongoing diplomatic work.

Consider, for example, that philanthropy is pitching in with urgent, catalytic funding for the UN High-level Advisory Body on Artificial Intelligence. The advisory body is bringing together up to 32 global experts across disciplines to offer diverse perspectives and options for how AI can be governed for the common good, in alignment with the UN’s Sustainable Development Goals. Philanthropy’s support for this effort in the short term, before member-state contributions become available, can ensure that we are not caught flat-footed when it comes to the international governance of AI.

In this moment of profound technological transformation, the United States is setting a clear, bold vision to harness the power of AI. But in order to execute on our ambitious goals — and do so at the scale and speed this moment requires — philanthropy must get involved. I urge philanthropic funders to recognize the gravity of this moment and go all-in on ensuring AI serves the many, not the few. A responsible AI future is possible, but it will take all of us.