Tensions Rise in Silicon Valley Over AI Safety Advocacy
Key Figures Cast Doubt on AI Safety Groups
Prominent Silicon Valley figures, including David Sacks, the White House AI and Crypto Czar, and Jason Kwon, Chief Strategy Officer of OpenAI, have stirred controversy with remarks about organizations that advocate for AI safety. Both suggested that these advocates may not be acting purely in the public interest, but may instead be serving their own agendas or those of wealthy backers.
Allegations Prompt Defensive Responses from AI Safety Advocates
AI safety organizations have expressed concern that the comments from Sacks and Kwon are part of a broader effort to intimidate critics within the tech industry. This is not an isolated incident. In 2024, some venture capital firms spread misleading claims that California's Senate Bill 1047 could send startup founders to jail; the Brookings Institution refuted that narrative, but Governor Gavin Newsom ultimately vetoed the bill anyway.
Whatever Sacks and Kwon intended, their remarks have alarmed leaders across the AI safety community, many of whom would speak to TechCrunch only on condition of anonymity. That reluctance points to a growing fear of retaliation.
The AI Safety Debate: A Rising Conflict
The conflict highlights a deepening divide in Silicon Valley between developing artificial intelligence responsibly and racing to turn it into a mass-market consumer product. The latest episode of the Equity podcast explores this theme further, digging into a new California law regulating chatbot safety and OpenAI's controversial policies on explicit content.
Sacks Critiques Anthropic’s Regulatory Efforts
On Tuesday, Sacks made headlines with a social media post accusing Anthropic of using fear tactics to shape legislation in its favor. He charged the AI lab with pursuing a "regulatory capture strategy" that benefits larger firms at the expense of smaller startups. The criticism followed Anthropic's endorsement of California's Senate Bill 53, which mandates safety reporting requirements for large AI companies.
Sacks was responding to an essay by Anthropic co-founder Jack Clark that voiced concerns about AI's potential societal impacts. While attendees considered Clark's reflections sincere, Sacks questioned the motives behind them, framing the essay as a strategic maneuver.
OpenAI’s Legal Actions Raise Transparency Concerns
In a related development, Kwon confirmed that OpenAI had issued subpoenas to several AI safety nonprofits, including Encode, which advocates for responsible AI policy. The subpoenas came amid Elon Musk's lawsuit against OpenAI, which alleges the organization has strayed from its nonprofit mission. Kwon expressed skepticism about these organizations' motivations, raising questions about their funding and possible coordination with OpenAI's critics.
NBC News reported that these subpoenas requested detailed communications related to Musk, Meta CEO Mark Zuckerberg, and Encode’s support for legislative changes.
Internal Struggles Within OpenAI
Reports indicate a growing rift within OpenAI between its government affairs team and its research division. While researchers openly publish on the risks of AI systems, the policy team lobbied against SB 53, arguing for a single set of federal regulations instead.
Joshua Achiam, OpenAI’s head of mission alignment, candidly expressed concerns about the implications of the subpoenas, suggesting they may not reflect OpenAI’s best interests.
Industry Leaders Weigh In
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI, remarked that OpenAI’s actions appear aimed at silencing critics. He emphasized that the AI safety community is not merely a vocal opposition led by Musk but rather a diverse group calling for accountability within the industry.
Meanwhile, Sriram Krishnan, the White House's senior policy advisor for AI, joined the conversation by urging safety advocates to engage more with the everyday users of AI technologies, arguing for a more balanced perspective.
Public Sentiment and the Future of AI
Recent surveys show that much of the American public is more apprehensive than enthusiastic about AI, citing concerns over job displacement and misinformation. That anxiety poses a challenge for a tech sector facing growing demands for ethical AI practices.
As the AI landscape evolves, the AI safety movement appears to be gaining momentum heading into 2026. Silicon Valley's pushback may itself be a sign that these safety-focused organizations are starting to have an impact.