Image courtesy of Rezny M
The coming wave of superintelligence is often portrayed as a distant existential threat, with machines turning against humanity. That vision imagines a clear enemy and a sudden catastrophe. The more plausible danger is quieter and far more human: people who neither fully understand nor care about the consequences.
Superintelligence will not destroy us by rebelling; it will destroy us by obediently pursuing the wrong goals for the wrong people. History offers a consistent lesson: powerful tools do not reliably end up in the wisest or most benevolent hands. They tend to go to those who are ambitious, reckless, opportunistic, or simply first to seize them.
Early evidence is already visible. Deepfake videos, synthetic images and AI-generated text now spread online faster than fact-checkers can keep up. Criminal exploitation has advanced just as rapidly. AI systems can clone voices, forge documents and generate persuasive messages at scale, dramatically lowering the skill required for sophisticated fraud. Impersonation schemes that once demanded careful planning can now be automated.
These technologies also magnify some of humanity’s most troubling impulses. Powerful image generators were quickly repurposed to produce highly disturbing sexual imagery of real individuals, including minors, demonstrating how rapidly AI can be diverted toward harm. Online harassment, reputational sabotage and psychological abuse can now be carried out with unprecedented speed, reach and anonymity.
The military implications are starker still. Autonomous weapons systems, including AI-guided drones capable of selecting targets with minimal human oversight, are moving from experimental projects toward operational deployment in real wars. When life-and-death decisions occur at machine speed, accountability becomes diffuse, escalation risks increase, and mistakes become far harder to contain.
What unites these developments is not machine hostility but human fallibility. Much of today’s debate focuses on aligning AI with human values. Yet humanity does not possess a single, coherent set of values. Different societies and actors pursue competing visions of order, prosperity and justice. A system aligned with one agenda may appear dangerously misaligned to another.
Bottom Line
In responsible hands, superintelligence could help solve some of humanity’s most intractable problems. We speak of aligning AI with human values, but which humans, and whose values? Civilization-level catastrophe does not require rogue machines or science-fiction nightmares. All it takes is superintelligence in the hands of irresponsible masters with the resources, power, and leverage to wield it.
