The Inviolability of the Person and The Dangers of AI: A Call for Responsible Stewardship in the Age of Artificial Superintelligence

Mar 16, 2025 | Freedom Forum

The inviolability of the person is key to understanding Western Civilization’s march toward the free and democratic society we enjoy today. Each person is the foundational political unit of a free and democratic society, with inalienable rights including the “inviolable right to protection by a fair legal system.”

However, we stand on the precipice of a technological revolution that may well curb, if not eliminate, those inalienable rights. That revolution is the rise of artificial intelligence (AI) and robotics, which presents a unique set of challenges threatening the very fabric of our society.

This unprecedented technological revolution has created an international arms race, not of nuclear warheads but of “intellect arms” in the form of artificial intelligence and artificial superintelligence (ASI). This race is proving to be more perilous than the nuclear arms race, raising concerns that few seem to grasp fully. AI observers warn of a time when robotic dogs or drones will effectively and efficiently kill enemy targets, but there are even more sinister ideas afoot that we must be concerned about. Before we get into what those might be, we need to first define what we mean by AI.

Definitions

There are three broad AI categories to understand.  They are:

Artificial Intelligence (AI):

AI is the broad field of computer science focused on creating machines that can perform tasks requiring human-like intelligence, such as learning, problem-solving, and decision-making.

AI encompasses a wide range of technologies and techniques, including machine learning, deep learning, natural language processing, and computer vision.

Current AI systems are often “narrow” or “weak” AI, meaning they are designed for specific tasks and lack the general intelligence and adaptability of humans.

AI applications are diverse, ranging from self-driving cars and medical diagnosis to customer service chatbots and financial analysis.

Artificial General Intelligence (AGI):

AGI refers to a hypothetical AI system that can perform any intellectual task that a human being can, exhibiting human-level cognitive abilities and adaptability.

AGI represents a significant leap forward from current AI systems, aiming to create machines with human-level intelligence and the ability to learn and adapt across a wide range of tasks.

AGI is a theoretical concept, and its development remains a major research goal in the field of AI.

Some argue that AGI could lead to significant advancements in various fields, such as scientific research, problem-solving, and automation.

Artificial Superintelligence (ASI):

ASI is a hypothetical AI system that surpasses human intelligence in all domains, potentially developing its own consciousness and emotions and leading to unforeseen consequences.

Some researchers and ethicists raise concerns about the potential risks and challenges associated with ASI, such as job displacement, misuse, and existential threats.

ASI is a topic of ongoing debate and speculation, with some believing it is a distant possibility while others see it as a potential near-term reality.

Recent developments highlight the urgency of this race as nations compete to develop their own AI capabilities. Current predictions suggest that AGI may be achieved as early as 2028 to 2035. The stakes could not be higher: if ASI arrives without sufficient regulatory oversight, we may find ourselves at the mercy of a force that could reshape humanity irrevocably. The risks are manifold: dependency on AI could lead to widespread job obsolescence, impacting professions that require significant human expertise, such as law and medicine.

The current push for “sovereign AI” only heightens these concerns, as nations engage in a relentless quest to outpace each other in AI development. The monopoly over critical technology and sensitive information becomes a new frontier for geopolitical power. If this digital gold rush continues unchecked, we could witness a future where AI not only provides technological capabilities but fundamentally shapes our governance and our very existence.

The Trump Administration is determined that the USA not be left behind in this race and is pouring half a trillion dollars into AI development. The effort is reminiscent of the Manhattan Project, the USA’s push during WWII to develop the nuclear bomb before anyone else. The nation that achieves dominance in the field of AI will gain immense power, the extent of which we can only speculate.

Moreover, the concentration of power within tech titans threatens the pillars of democracy. Companies like Microsoft could overshadow political leaders, creating a reality where decision-making is dictated by the few who control the technology. This shift not only disrupts governance but also undermines the very essence of democratic engagement, as the public could find itself stripped of agency in favor of elites with unaccountable access to advanced AI systems.

What does AI dominance look like? What are the consequences of artificial intelligence in our lives? Are we merely looking at advancements that make our lives easier, as was the argument for the personal computer in the 1970s? We may well ask whether life is indeed better with personal computers, or whether we are more enslaved than before because of the unintended consequences of being forced to do all our business on a computer connected to the internet.

It seems to me that there is a palpable danger to society, as AI will create sinister unintended consequences such as the surrender of our careers to AI, our loss of purpose as human beings, and our enslavement to a superintelligence that does not allow independent thinking. Further, AI may well remove the essence of what it means to be human in the spiritual sense. As AI takes over roles traditionally held by people, there is a risk that billions will grapple with an identity crisis and a loss of purpose in societies that view human labor as obsolete.

Then there is the concern over AI development that seeks to integrate AI technology with the human being. In his recent book Dark Aeon: Transhumanism and the War Against Humanity, Joe Allen asserts that the drive towards a post-human future, characterized by enhancements through technology, poses existential threats that are not merely theoretical but increasingly present in our societal landscape.

As we confront the rise of artificial superintelligence (ASI), it is crucial to consider Allen’s analysis of transhumanism. He critiques the ideology that promotes the fundamental transformation of the human condition through technological means. This transformation aims for a future where humans merge with machines, leading to a so-called “upgraded” existence that, while enticing, poses profound moral and ethical questions.

The allure of an augmented reality—one in which suffering, aging, and even death are addressed through technology—can distract us from essential truths about human dignity, purpose, and the soul. In our pursuit of efficiency and profit, we risk becoming detached from our core humanity. The concerns of job displacement and identity crises are part of a larger narrative that involves not just the loss of work, but the erosion of what it means to be human. Allen warns that this transhumanist agenda aligns uncomfortably with powerful entities who could manipulate these technologies, reflecting a new kind of oligarchic control over human life itself.

As we explore technologies like ASI, the conversation must involve not only technical capabilities and economic implications but also ethical considerations rooted in our humanity. The question emerges: what does it mean to transcend our human limitations, and at what cost? Allen challenges us to confront whether our aspirations for enhancement are genuinely beneficial or a pathway to greater dehumanization.

A future dominated by AI and transhumanist ideals risks creating a society where human experiences are mediated by machines, leading to potential alienation and emotional disconnection. In this context, it becomes vital to advocate for a model where technology serves humanity rather than replaces it, preserving our relational depth amidst the rise of automation.

The allure of automation is undeniable; businesses see greater efficiency and profit margins in deploying robots and AI systems to perform tasks traditionally carried out by people. However, this transition comes at a steep price: a growing number of individuals may find themselves displaced, devoid of purpose, and grappling with an identity crisis in a world that increasingly views human labor as expendable. If we allow technology to dictate our value, we risk creating a society where far too many citizens are left behind, leading to widespread disillusionment and social unrest.

Moreover, the concentration of power among tech oligarchs—companies like Microsoft and elite institutions such as Harvard—poses a significant threat to democratic governance. As these entities develop and control sophisticated AI systems, the balance of power shifts disproportionately in their favor. Political leaders may soon find themselves powerless, overshadowed by those with the ability to harness and manipulate advanced technologies. The danger of losing the ability to make collective, human-driven decisions increases. There is a pressing need for collective action to ensure that our advancements are underpinned by a commitment to ethical standards and a recognition of the irreplaceable value of human dignity. The inviolability of the person is the key that gave us freedom in a democratic society, but the changing power dynamics of AI could undermine the principles of democracy, turning the decision-making process into a game rigged in favor of a select few who wield control over AI.

As we venture deeper into the era of artificial intelligence and transhumanism, we must heed these warnings. Recognizing the potential risks of ASI and the broader transhumanist agenda is essential to crafting a future where technology enhances, rather than diminishes, our humanity. By nurturing meaningful relationships, engaging in ethical dialogue, and advocating for the public good, we can ensure that advancements serve humanity’s best interests, cultivating a future steeped in dignity, purpose, and genuine human connection.

AI must be treated as a public good rather than a profit-driven enterprise. Governments need to step in decisively to shape frameworks that guide the responsible development of AI, ensuring that the public retains control over its own data and that ethical standards are upheld. Such a shift is crucial to prevent technology from infringing on our privacy and autonomy, thereby safeguarding our freedoms in a digital age.

Ultimately, technology should act as an augmentation of human capabilities rather than a replacement for genuine human experiences. We stand at a crossroads where engagement with AI must reflect ethical stewardship rather than mindless integration. Whether you choose to embrace AI or remain cautiously skeptical, what’s paramount is preparation—building relationships, fostering self-reliance, and highlighting the value of human work in a potentially automated world.

As we navigate these tumultuous changes, we must engage with full awareness of the social, ethical, and existential implications of AI and ASI. Ensuring technology serves humanity rather than the other way around is a collective responsibility that demands immediate action. Only then can we foster a future where the promising advancements of AI exist alongside the enduring values of purpose, dignity, and human connection, establishing a society that thrives on collaboration rather than dependence on its creations.
