OpenAI saga raises important questions about the future of generative AI


By Anamitra Deb, Managing Director, Responsible Technology

Many questions remain about the sudden ouster of Sam Altman as CEO of OpenAI. As of now, it is still unclear whether Altman (and many others) will be absorbed by Microsoft, return to OpenAI (at the expense of the board), or land somewhere else altogether. But TV-worthy plot twists aside, the weekend also raised major questions about AI governance, concentrated power, and the best way to ensure we put safety, responsibility, and societal impact at the heart of the AI conversation.

OpenAI’s tumultuous weekend is the most visible indication of several growing rifts in the AI ecosystem over what AI development should prioritize, how it should be governed, and by whom. People have widely differing opinions about what failed in terms of corporate governance this past weekend; whether there can be sufficient alignment between the stated mission and position of OpenAI’s not-for-profit board and the for-profit entity it manages; and whether this episode might have industry-wide ramifications for corporate governance, setting precedents and prompting changes at similar companies.

Even among those who acknowledge that some degree of guardrails and governance is necessary to ensure responsible development, there’s one camp that argues AI’s potential existential threat to humanity means governance and regulation should focus solely on long-term harms. The other (predominantly researchers and civil society organizations) is focused on addressing AI’s immediate and known harms, from algorithmic bias to increasingly complex financial scams. The reality is that we can’t guard against AI’s long-term risks, nor can we best harness its enormous potential, without first confronting its known harms and risks in the here and now. Might we use this moment to require that the governing boards of all GenAI companies (and their major investors) make specific, public disclosures about their safety and responsibility commitments, audited by civil society?

But as the battle lines in the AI arms race continue to evolve, there is an even more important question we should be asking: how can we ensure that power and influence over AI’s development rest in the hands of the many, not the few? One thing this weekend’s news made clear is that far too much power is held by only a few people in Silicon Valley, and that’s a recipe for disaster.

Omidyar Network has been working for years to ensure technology benefits the many instead of the few. For that to happen, the technology’s decision-making, value capture, and market development cannot be concentrated. Nor can the entire field of AI startups that we laud for their innovation and performance remain, as they are today, completely dependent on a few Big Tech companies for access to critical infrastructure: cash, cloud, compute, and market distribution. We have seen the tale-as-old-as-time story of power and money blinding even the most well-intentioned tech visionaries, with society left to pick up the pieces.

Although we may not know exactly how generative AI will evolve, one thing we can do is invest in an inclusive, participatory infrastructure that prioritizes human impacts and ensures meaningful, diverse decision-making over the current and future trajectory of this powerful technology. Over the last several months, we have seen leaders come together in various forums around the need for responsible governance and guardrails. The question is whether we will seize this moment as an opportunity to rewrite the status quo. There is an opportunity before us to use generative AI to push for new governance, economic, and social paradigms that prioritize distributed innovation and decision-making and, critically, individual, community, and societal well-being (and even climate impacts) above profits alone.

As Stanford’s renowned AI scientist Fei-Fei Li recently wrote, the North Star for AI now must be “reimagining AI from the ground up as a human-centered practice… AI must become as committed to humanity as it’s always been to science.”

To do that, at least one short-term priority must be ensuring that the roots of what makes us human (our relationships, emotions, and collective experience) are included in any analysis of generative AI’s impacts. The harms and benefits of generative AI can’t be measured just in terms of economic gains or losses. Any effective infrastructure must be intentional about augmenting human capacity while protecting humanity itself: our personal relationships, feelings of belonging, and interconnectedness.

Empowering a diverse set of stakeholders to make meaningful, human-centered decisions on AI now will build the foundation for greater human flourishing to come. Getting that right will require an all-hands-on-deck effort in multiple areas, especially as so many scramble to reorient themselves around the new normal that the changes at OpenAI have likely precipitated.

In this moment of upheaval, we all must commit to rejecting an either/or mindset and resolve to develop AI in service of the many instead of the few. This is a moment we can’t afford to miss.

Here’s our position on bending generative AI’s trajectory toward a responsible technology future.