Recent departures from OpenAI's economic research team have sparked concerns that the company is shifting toward AI advocacy. According to four sources familiar with the matter, OpenAI has pulled back from publishing research that highlights the potential negative economic impact of AI.
One such departure was that of Tom Cunningham, who left OpenAI in September. Cunningham cited growing tension between conducting thorough analysis and functioning as an unofficial advocate for the company, a conflict of interest that he believed had made it difficult to publish high-quality research.
OpenAI's chief strategy officer, Jason Kwon, addressed these concerns in an internal memo. Kwon emphasized the company's responsibility as a leader in the AI sector, arguing that they should not only identify problems but also actively build solutions. He stated, "My perspective on difficult subjects is not to avoid discussing them, but rather to take ownership of the outcomes since we are the ones deploying AI into the world."
OpenAI spokesperson Rob Friedlander defended the company's approach, noting that it has expanded the scope of its economic research and hired its first chief economist, Aaron Chatterji. Friedlander pointed to the team's rigorous analysis of AI's impact on the economy, which he said aims to benefit OpenAI, policymakers, and the public.
However, critics argue that OpenAI's research is becoming increasingly biased towards positive findings, potentially downplaying the economic downsides of AI. An anonymous outside economist who previously worked with the company alleges that OpenAI is selectively publishing work that portrays their technology in a favorable light.
This alleged shift in research focus comes at a time when OpenAI is strengthening its multibillion-dollar partnerships with corporations and governments, solidifying its position as a key player in the global economy. Experts believe that the technology OpenAI is developing could revolutionize the way people work, but there are significant questions about the timing and extent of this transformation.
Since 2016, OpenAI has regularly released research on how AI is reshaping labor and shared data with external economists. Over the past year, however, two sources claim the company has grown more hesitant to release work highlighting AI's potential for job displacement and other negative economic impacts.
Earlier this week, OpenAI published a report claiming that their AI products save enterprise users an average of 40 to 60 minutes daily and that there is "significant headroom" for increased AI adoption across the economy. This report has raised eyebrows, as it seems to contradict the concerns expressed by some of OpenAI's own researchers and economists.
Research politics and self-reporting by AI labs remain contentious. Companies routinely highlight research that flatters them, but the leading AI labs enjoy an unusual degree of autonomy in reporting the risks and capabilities of their own technology. That power dynamic has coincided with a lobbying push worth $100 million to preserve the status quo, with Silicon Valley leaders fighting proposed state-level AI regulations.
OpenAI's cautious approach stands in contrast to its rival, Anthropic, whose CEO, Dario Amodei, has repeatedly warned about the potential automation of entry-level white-collar jobs by 2030. Amodei's predictions have been criticized by the Trump administration, with David Sacks, the White House special adviser for AI and crypto, accusing Anthropic of fear-mongering as part of a regulatory capture strategy.
OpenAI's economic research efforts are currently led by Aaron Chatterji, who oversaw a significant report on ChatGPT usage. Sources indicate that Chatterji reports to Chris Lehane, OpenAI's chief global affairs officer, reflecting the team's close integration with the company's political and policy strategy.
This story raises important questions about the role of AI companies in shaping public perception and policy: should these companies be allowed to self-report the risks and benefits of their technology, or is more independent oversight needed?