Besides Sutskever, the remaining directors include Adam D’Angelo, an early Facebook employee who has served since 2018 and is CEO of Q&A forum Quora, which licenses technology from OpenAI and AI rivals; entrepreneur Tasha McCauley, who took her seat in 2018; and Helen Toner, an AI safety researcher at Georgetown University who joined the board in 2021. Toner previously worked at the effective-altruism group Open Philanthropy, and McCauley is on the UK board of Effective Ventures, another effective-altruism-focused group.
During an interview in July for WIRED’s October cover story on OpenAI, D’Angelo said that he had joined and remained on the board to help steer the development of artificial general intelligence toward “better outcomes.” He described the for-profit entity as a benefit to the nonprofit’s mission, not one in contention with it. “Having to actually make the economics work, I think is a good force on an organization,” D’Angelo said.
The drama of the past few days has led OpenAI leaders, staff, and investors to question the governance structure of the project.
Amending the rules of OpenAI’s board isn’t easy—the initial bylaws place the power to do so exclusively in the hands of a board majority. As OpenAI investors encourage the board to bring Altman back, he has reportedly said he would not return without changes to the governance structure he helped create. That would require the board to reach a consensus with the man it just fired.
OpenAI’s structure, once celebrated for charting a brave course, is now drawing condemnation across Silicon Valley. Marissa Mayer, previously a Google executive and later Yahoo CEO, dissected OpenAI’s governance in a series of posts on X. The seats that went vacant this year should have been filled quickly, she said. “Most companies of OpenAI’s size and consequence have boards of 8-15 directors, most of whom are independent and all of whom have more board experience at this scale than the 4 independent directors at OpenAI,” she wrote. “AI is too important to get this wrong.”
Anthropic, a rival AI firm founded in 2021 by ex-OpenAI employees, has undertaken its own experiment in devising a corporate structure to keep future AI on the rails. It was founded as a public-benefit corporation legally pledged to prioritize helping humanity alongside maximizing profit. Its board is overseen by a trust with five independent trustees chosen for experience beyond business and AI, who will ultimately have the power to select a majority of Anthropic’s board seats.
Anthropic’s announcement of that structure says it consulted with corporate experts and tried to identify potential weaknesses but acknowledged that novel corporate structures will be judged by their results. “We’re not yet ready to hold this out as an example to emulate; we are empiricists and want to see how it works,” the company’s announcement said. OpenAI is now scrambling to reset its own experiment in designing corporate governance resilient to both superintelligent AI and ordinary human squabbles.
Additional reporting by Will Knight and Steven Levy.
Updated 11-19-2023, 5:30 pm EST: This article was updated with a past comment by Adam D’Angelo.