On March 1, 2024, Elon Musk filed a lawsuit against OpenAI in San Francisco. Musk, one of the original founders of OpenAI who later resigned from the organization, alleges that the maker of ChatGPT violated its founding agreement as a nonprofit venture dedicated to developing AI for the benefit of humanity. The suit contends that, through its deep relationship with companies such as Microsoft, OpenAI is flagrantly pursuing profit and shirking the humanitarian mission Musk claims he originally invested in.
Worse, the failed ouster of Sam Altman in November 2023 suggested that the cautious faction within OpenAI had lost its influence. For an organization whose stated founding mission was to advance society through AI, it may now be placing society in far greater danger. This is a clear example of a project that went astray and failed to account for the values and concerns of a major stakeholder, Elon Musk.
Organizations and large-scale projects can go awry for many reasons. At one extreme, perhaps OpenAI always harbored a hidden agenda to advance the technology for profit while proclaiming an altruistic mission of advancing society. Founded in 2015, OpenAI's publicly stated motive was pure: developing "safe and beneficial" artificial general intelligence (OpenAI, 2018). But in 2019, with its partnership with Microsoft and a huge cash infusion of $1 billion, OpenAI transitioned to a hybrid structure, a "capped-profit" venture in which investor returns are limited to 100 times the original investment. This structure allows the organization's for-profit subsidiary, OpenAI Global, LLC, to legally attract outside investment. It also allows OpenAI to distribute equity to its employees, which in a high-tech industry is arguably essential to attracting top talent.
At the other extreme, perhaps OpenAI was largely innocent of these profit motives in 2015. Founded by a group of tech visionaries including Elon Musk, Greg Brockman, Sam Altman, Ilya Sutskever, John Schulman, and Wojciech Zaremba, its main motivation was to collectively create an environment for the ethical development of artificial intelligence. But along the journey, things changed. Musk, who provided much of the original capital under the impression that the technology was being created for the betterment of society, left in 2018, citing a potential future conflict of interest with his role as CEO of Tesla, which was developing its own AI for self-driving cars.
In addition, as any technology company can attest, talent is the biggest ingredient for success. OpenAI is no different, and when Microsoft decided to invest $1 billion in 2019, the window to profit opened wide. In its response to the lawsuit on March 5, 2024, OpenAI cited the high cost of computing, requiring "billions of dollars per year." Thus, the original project went sideways as far as Musk was concerned. From OpenAI's perspective, the 180-degree change in the company's mission appears to have been driven largely by the everyday actions and reactions of its competitive environment.
The truth is most likely somewhere in the middle. As with any grand plan involving many major players, motivations vary. It is entirely plausible that all the founders were genuinely interested in the rapid development of AI and genuinely sought ways to benefit humanity. But once the journey started, there were infinite paths and possibilities. The conflict aired in this lawsuit, between a nonprofit mission and for-profit motives, is not unique to OpenAI. Goodwill, for example, runs a for-profit retail operation that sells donated goods. The American Automobile Association (AAA) has numerous for-profit subsidiaries that sell insurance and travel services. AARP, the largest membership organization in the U.S. with about 38 million members, operates mainly as a nonprofit but has many for-profit subsidiaries offering insurance products and financial services. As CEO of PMO Advisory, I work closely with nonprofits whose employees have likewise questioned the revenue motives of their development offices. But unlike OpenAI, none of these nonprofits has received a $1 billion infusion from a mammoth corporation like Microsoft, forming an intimate partnership in the process.
Projects can go astray for many reasons. In the case of OpenAI, and especially with the introduction of ChatGPT, the impact on our society has already been significant. Whether the future turns darker in the pursuit of profit, as Musk alleges, or brighter through the humanitarian work OpenAI cites in Kenya and India, boosting farmer incomes and lowering costs, or settles into more complex shades, only time will tell. One thing is certain, at least from an ex-founder's perspective: Musk, a major stakeholder in OpenAI's early days, donated over $44 million between 2016 and 2020, and that money appears to have helped create one of his greatest competitors in the field of AI.
The key lesson for executives and boards is the necessity of being open and transparent with stakeholders about motivations, and of managing subsequent changes to the charter. This is why developing and maintaining an up-to-date project charter is vitally important: it helps align the interests and expectations of key stakeholders. And when the guiding charter promises the "best interests of humanity throughout its development," it is confusing at best, and an invitation to lawsuits at worst, when the organization's actions and activities suggest something else.
References:
"OpenAI Charter: Our Charter describes the principles we use to execute on OpenAI's mission." OpenAI, April 9, 2018.
"OpenAI and Elon Musk." OpenAI, March 5, 2024.