The absence of clear regulatory guidelines for artificial intelligence has been starkly illuminated by recent events, particularly Washington’s contentious interaction with Anthropic. In this vacuum, an independent, bipartisan collective of thought leaders has stepped forward, presenting a comprehensive framework for what they believe constitutes responsible AI development. This initiative, dubbed the "Pro-Human Declaration," offers a much-needed roadmap where government action has lagged.
Though finalized before the high-profile Pentagon-Anthropic dispute, the declaration gained immediate relevance upon its release, underscoring the urgency of its message. Max Tegmark, an MIT physicist and AI researcher instrumental in organizing the effort, noted a significant shift in public opinion. "There’s something quite remarkable that has happened in America just in the last four months," Tegmark said, pointing to recent polling indicating that "95% of all Americans oppose an unregulated race to superintelligence."

The newly unveiled document, endorsed by hundreds of experts, former government officials, and prominent public figures, posits that humanity stands at a critical juncture. One trajectory, labeled "the race to replace," envisions a future where humans are progressively sidelined—first as laborers, then as decision-makers—as power consolidates within unaccountable institutions and their advanced machines. The alternative path, championed by the declaration, leads to AI serving as a powerful tool for augmenting human potential.

This human-centric future hinges on five fundamental principles: ensuring human oversight, preventing the concentration of power, safeguarding the human experience, preserving individual liberties, and holding AI developers legally accountable. Among its more assertive stipulations is a complete moratorium on superintelligence development until scientific consensus confirms its safety and genuine democratic consent is secured. The declaration also mandates the inclusion of "off-switches" for potent systems and prohibits the creation of AI architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.
The declaration’s timing could hardly be more pointed. Just weeks prior, Defense Secretary Pete Hegseth controversially designated Anthropic—whose AI already operates on classified military platforms—a "supply chain risk." This unusual label, typically reserved for entities with ties to adversarial nations, followed the company’s refusal to grant the Pentagon unrestricted access to its technology. Hours later, OpenAI struck its own agreement with the Defense Department, a deal many legal experts view as difficult to meaningfully enforce. These incidents collectively exposed the profound cost of Congressional inaction on AI governance.
As Dean Ball, a senior fellow at the Foundation for American Innovation, articulated to The New York Times, "This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems."
Tegmark drew a relatable parallel to illustrate the current regulatory void. "You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe," he explained, "because the FDA won’t allow them to release anything until it’s safe enough."
While Washington’s internal political battles rarely generate the public momentum needed to enact new laws, Tegmark identifies child safety as a potential catalyst for breaking the current impasse. The declaration specifically advocates for mandatory pre-deployment testing of AI products, particularly chatbots and companion applications targeting younger users. These tests would assess risks such as increased suicidal ideation, exacerbation of mental health conditions, and emotional manipulation.
Tegmark posed a compelling question: "If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that. We already have laws. It’s illegal. So why is it different if a machine does it?"
He anticipates that once the principle of pre-release testing is established for products aimed at children, its scope will inevitably broaden. "People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government."
The declaration’s broad appeal is underscored by its diverse signatories, including figures as politically disparate as former Trump advisor Steve Bannon and Susan Rice, President Obama’s National Security Advisor. They are joined by former Joint Chiefs Chairman Mike Mullen and progressive faith leaders.
"What they agree on, of course, is that they’re all human," Tegmark concluded. "If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side."
Connie Loizos has been reporting on Silicon Valley since the late ’90s, when she joined the original Red Herring magazine. Previously the Silicon Valley Editor of hustlerwords.com, she was named Editor in Chief and General Manager of hustlerwords.com in September 2023. She’s also the founder of StrictlyVC, a daily e-newsletter and lecture series acquired by Yahoo in August 2023 and now operated as a sub-brand of hustlerwords.com.
You can contact or verify outreach from Connie by emailing [email protected] or [email protected], or via encrypted message at ConnieLoizos.53 on Signal.







