AI Firms Acknowledge Their Reputation Issues. Can Policy Papers and Think Tanks Help Improve Their Image?

This week, OpenAI surprised the tech community not with another iteration of ChatGPT or an expensive data center, but with a policy paper aimed at reshaping the social contract through what it calls “people-first ideas.” The release is part of a broader strategic push by leading AI companies to counter rising public skepticism about artificial intelligence, as surveys show persistent disapproval of the technology.
OpenAI’s document, a concise 13-page policy paper titled Industrial Policy for the Intelligence Age, follows the company’s unexpected acquisition of the tech-focused podcast TBPN. OpenAI has also revealed plans to open an office in Washington D.C., which will feature a space called the OpenAI Workshop, intended to help nonprofits and policymakers understand and discuss the company’s technologies.
Concurrently, OpenAI’s competitor, Anthropic, announced the launch of its own think tank known as the Anthropic Institute, dedicated to examining how AI’s expansion will transform societal dynamics.
As the societal impacts of AI become more apparent and demands for scrutiny over major tech firms intensify, it seems the industry is acknowledging widespread concerns and seeking to redefine the discussion around AI.
Sam Altman, the CEO of OpenAI, discussed the challenges facing AI firms regarding public perception at a recent BlackRock conference in Washington D.C. He noted, “AI is not very popular in the US right now. Datacenters are being blamed for rising electricity prices, and many companies attributing layoffs to AI may not be accurate in their claims.”
However, the marketing campaign isn’t merely an effort to improve public opinion. Experts suggest that the formation of think tanks and the allocation of millions towards lobbying efforts might also aim to diminish independent regulatory initiatives.
“The OpenAI document hints at a desire for more regulatory scrutiny,” stated Sarah Myers West, co-executive director of the AI Now Institute, a non-profit that champions heightened public accountability in the AI sector. “However, on further inspection, their lobbying efforts have successfully aligned with an administration that favors deregulatory policies for AI.”
Neither OpenAI nor Anthropic responded to requests for comment.
PR by policy proposal: a four-day workweek and a public wealth fund
OpenAI’s policy paper represents a tonal shift, revealing the company’s concerns about public reception of its technology. Instead of focusing on how workers can adapt to avoid losses in the labor market, the document calls for creating “a resilient society” and urges policymakers to implement safeguards for the safe development of AI.
Among the proposals are eye-catching recommendations like a four-day workweek and the establishment of a “public wealth fund” designed to redistribute profits back to the public – echoing the common tech industry discussion around universal basic income.
The paper emphasizes that these proposals should not be viewed as definitive solutions but rather as “a foundational step for broader dialogue on ensuring AI serves the interests of all.”
“If policy fails to keep pace with technological advancements, the institutions and safety nets required to navigate this transition will lag behind,” states the document. “Facilitating access, autonomy, and opportunities in AI is a crucial task as we progress towards a future with superintelligent systems.”
Critics of the report argue that it serves primarily as a public relations tool rather than an actionable policy guideline, contending that its emphasis shifts accountability away from the company and onto societal institutions and lawmakers. Experts assert that OpenAI paints an AI-centric world as inevitable, assigning grand objectives to government and society while resisting the notion that its own technology can be regulated.
“What they have skillfully done is outline welfare objectives while avoiding any genuine commitment to allocating resources to achieve those goals,” Myers West observed.
Critics further argue that while OpenAI calls on lawmakers and the public to shoulder the burden of responsibility, they are simultaneously lobbying vigorously for lenient regulations that would shield them from scrutiny.
“Relying on Congress for action allows these companies to operate without regulation,” Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, cautioned. “This scenario indeed serves their purpose.”
An intensifying AI lobby
Illustrating its growing influence, OpenAI revealed it spent nearly $3 million on lobbying in 2025. OpenAI president Greg Brockman co-founded a pro-AI Super PAC that raised over $125 million last year. The PAC has already attacked congressional candidate Alex Bores, an advocate of AI regulation, through advertisements in New York. The company is also supporting legislation in Illinois designed to shield AI firms from liability when their technologies cause significant societal harm, such as triggering mass casualties or enabling the creation of dangerous chemicals.
The nascent awareness regarding AI at the state level has provided the tech industry with an opportunity to shape how regulations may be established, Fitzgerald noted.
“They are exploiting the limited time and resources of state legislators to argue that any regulation will hinder innovation,” Fitzgerald added.
OpenAI is not the only player aggressively lobbying for its interests: Anthropic has also spent more than $3 million on lobbying and has backed a separate Super PAC whose goals are more amenable to regulatory frameworks.
Despite Anthropic’s recent challenges with the Department of Defense regarding limits on military applications of its models, the AI sector still maintains strong ties to the current administration, which continues to advocate for its interests.
The Trump administration has sought to thwart state-level regulation of AI through various tactics, leveraging the industry’s argument that a patchwork of laws could impede technology advancement and economic growth. Last year, Trump signed a controversial executive order aimed at preventing states from implementing restrictions on AI. Recently, the White House pressured a Republican senator in Utah to refrain from proposing a bill that would establish transparency and protections for children concerning AI.
A public relations problem
The expansion of think tanks, public relations initiatives, and increased lobbying efforts coincides with an AI industry grappling with a significant perception issue within its home market. The impending midterm elections are likely to place AI under greater scrutiny.
Recent surveys indicate rising public distrust of AI, not just over its potential impact on employment but as a broader societal influence. A Pew Research Center survey conducted last September found that merely 16% of Americans believe AI encourages enhanced creativity, and only 5% feel it fosters meaningful social connections. An NBC News poll from last month found that just 26% of voters hold a favorable view of AI, putting its net approval 2 percentage points below that of U.S. Immigration and Customs Enforcement (ICE).
Determining the precise reasons behind this negativity toward AI – whether rooted in anxieties about job displacement, the industry’s initial warnings about its own technologies’ dangers, or a general wariness of major tech corporations – remains complex. However, it is evident that the AI industry has begun to assess the emerging movement against data centers, proposals for AI regulations, and overall public unease with significant concern.
In recent years, the industry has ramped up efforts to press its views on both lawmakers and the public. Corporate research labs have recruited many formerly independent academics and researchers, shifting how research is published toward in-house materials rather than peer-reviewed publications. Myers West notes that while respected researchers have moved from academia to these corporations, the trend raises serious questions about the independence of their research.
“I would argue their independence is negligible at best,” Myers West concluded.
