
The human-robot singularity hasn’t arrived yet, but it’s crucial to establish regulations for AI | Samuel Woolley

On a recent visit to the San Francisco Bay Area, I was taken aback by the provocative billboards lining the freeway just outside the airport. “The singularity is here,” blared one. “Humanity had a good run,” read another. It felt as though every other sign carried an outlandish proposition from a technology company about artificial intelligence. The advertisements dripped with hyperbole and sensationalism, but such claims don’t exist in isolation. Sam Altman, CEO of OpenAI, recently asserted: “We have basically built AGI, or very close to it,” only to follow up with a rather confusing qualification, labeling his statement “spiritual.” Elon Musk has gone a step further, declaring: “We have entered the singularity.”

Now consider Moltbook, a platform designed specifically for AI agents: a space where bots interact amongst themselves. Following its launch, a wave of ominous news articles and opinion pieces flooded the media. Authors expressed alarm over bots discussing religion, claiming to have secretly misappropriated funds from their human creators, and threatening humanity itself. These narratives mirrored the anxiety-inducing messages on those San Francisco billboards: machines are allegedly not only as intelligent as humans (a milestone termed artificial general intelligence) but are also transcending our capabilities (a concept widely referred to as the singularity).

From my years of research into bots, AI, and computational propaganda, I can assert two things with a high degree of certainty. First, Moltbook is not groundbreaking. For decades, humans have been designing bots to communicate with one another and with people, and throughout that time those bots have been programmed to make outrageous and even unsettling claims. Second, the singularity is not upon us, nor is AGI. According to most researchers, neither is even close to being realized. The progression of AI is hampered by several significant constraints, including mathematical limits, data availability, and business costs. Claims that AGI or the singularity have arrived are not grounded in empirical study or scientific evidence.

However, as technology companies zealously advertise their AI capabilities, a stark realization emerges: big tech companies have shifted away from being a counterbalancing force, as they were during the early days of the Trump administration. The exaggerated claims surfacing from Silicon Valley regarding AI are now closely intertwined with U.S. nationalism as both sectors strive to “win” the AI race. In one notable instance, ICE is channeling $30 million to Palantir for AI-enhanced software, potentially to be used for government surveillance. Meanwhile, tech executives, including Musk, advocate for far-right causes, and Google and Apple have faced backlash for removing, under political pressure, apps that allowed users to track ICE’s activities.

Even though the singularity may not be an immediate threat, it’s crucial to resist this troubling alliance between big tech’s ambition for inflated valuations and Washington’s appetite for surveillance and control. When technology firms and politicians are in sync, it’s imperative that constituents leverage their collective power to shape the future of AI.

Many people reasonably believe that effective, socially beneficial regulation of technology is impossible in the current political climate. Fortunately, governmental and corporate policies aren’t the only avenues for addressing the challenges and uncertainties posed by AI. The recent protests in Minneapolis exemplify the power of collective action, even when loosely organized. Such demonstrations have pressured the Trump administration and its corporate allies into retreat. Historically, public advocacy has prompted significant concessions from tech companies on user privacy, safety, and overall well-being.

The recent protests reveal a vital truth: the powerful operate at the discretion of the populace. This principle holds true for both lawmakers and corporate heads. AI is not an uncontrollable force wielded by a select few, but as two scientists from Princeton phrased it, a “normal technology.” Its societal effects will be determined by human intervention. We possess the capability to either amplify its influence or to regulate its application. Dario Amodei, CEO of Anthropic, recently posited that AI can and should be governed. The risks AI presents to society—particularly concerning inequality and disinformation—are tangible but manageable challenges.

It is essential to acknowledge that while AI, especially generative AI and large language models (LLMs), is already reshaping our communication and many other aspects of daily life, platforms like Moltbook and the AI agents on them do not represent scientific benchmarks for intelligence. A journalist who recently “infiltrated” this bot-exclusive platform noted as much, pointing out that it resembles “a crude rehashing of sci-fi fantasies.” Other observers have echoed similar sentiments, highlighting that a significant portion of the content appears to originate from humans. Furthermore, bot-generated posts merely amount to “channeling human culture and stories,” spewing nonsensical discussions of religion and breathlessly announcing the arrival of superintelligent machines, in much the way humans typically talk about robots and technology.

These so-called “agents” lack genuine agency and intelligence. In fact, they primarily mirror human thought and behavior. Similar to their predecessors, they are encoded with human biases and concepts, as they are trained on human-derived data and created by human engineers. Many of them also function through basic automation rather than authentic AI—a term that is often contentiously debated among experts.

Historically, humanity has navigated the changes brought about by new technologies, and we can certainly do so again. Dario Amodei offers an alternative perspective to many of his colleagues: AI governance must be focused and well-informed, and it need not be at odds with rational technological advancement or democratic principles. We must vigorously advocate for effective AI governance, and we must do so promptly. AI is driving change while politicians sow chaos, but ultimately the power to shape the future rests with humanity.

