OpenAI Expands Lobbying Army To Influence Regulation

A new report from the Financial Times reveals that OpenAI is expanding its international lobbying army, aiming to sway the politicians and regulators who are tightening their grip on artificial intelligence. 

OpenAI has grown its lobbying team from three people at the start of 2023 to 35, with plans to reach 50 by the end of the year. The expansion comes on the heels of sweetheart deals with Microsoft and Apple to infiltrate billions of smartphones worldwide, and just weeks after the startup dissolved a team focused on ensuring AI safety. 

“We are not approaching this from a perspective of we just need to get in there and quash regulations . . . because we don’t have a goal of maximizing profit; we have a goal of making sure that AGI benefits all of humanity,” said Anna Makanju, OpenAI’s vice-president of government affairs, referring to artificial general intelligence.  

OpenAI is positioning its lobbyists to counter the spread of AI legislation around the world, sending them to Belgium, the UK, Ireland, France, Singapore, India, Brazil, and the US, the countries where AI legislation is most advanced. 

“Initially, OpenAI recruited people deeply involved in AI policy and specialists, whereas now they are just hiring run-of-the-mill tech lobbyists, which is a very different strategy,” said one person who has directly engaged with OpenAI on creating legislation. 

“They just want to influence legislators in ways that Big Tech has done for over a decade,” the person said. 

Makanju said the startup is attempting to address some hangovers from the social media age, which sparked great “distrust of Silicon Valley companies.” 

“Unfortunately, people are often seeing AI with the same lens,” she said, adding, “We spend a lot of time making sure people understand that this technology is quite different, and the regulatory interventions that make sense for it will be very different.”

David Robinson, head of policy planning at OpenAI, said the global affairs team has ambitious goals: “The mission is safe and broadly beneficial, and so what does that mean? It means creating laws that not only let us innovate and bring beneficial technology to people but also end up in a world where the technology is safe.” 

It is unbelievable that these executives pretend to hold themselves accountable while writing the very laws that will govern their own products. 

OpenAI needs to start following existing laws, and the first area it can start with is copyright. 

It’s clear that the moment the safety team was dissolved, OpenAI started ramping up its lobbying efforts to exert influence on AI legislation worldwide. This is typical Silicon Valley behavior, nothing more. 
