Prince Harry and Meghan, Duchess of Sussex, have joined a coalition of Nobel laureates, AI pioneers, and tech leaders calling for a global ban on the development of artificial superintelligence (ASI), AI systems that could one day surpass human intelligence. The group warns that advancing such technology without strict safeguards poses “catastrophic” risks to humanity.
The open letter, released on October 22 and organized by the US-based Future of Life Institute, calls on governments and corporations to impose a moratorium until there is “broad scientific consensus” that ASI can be developed safely and “with strong public support.” Signatories include AI pioneers Geoffrey Hinton and Yoshua Bengio, Apple co-founder Steve Wozniak, Virgin founder Richard Branson, and several Nobel Prize winners.
According to the institute, ASI could emerge within the next decade, potentially displacing human jobs, undermining civil liberties, and threatening global security. The greatest fear, it warns, is that such systems might eventually escape human control entirely.
Artificial intelligence has long promised to revolutionize society—but where should the line be drawn between innovation and danger? Tech expert and outspoken AI skeptic Ed Zitron is among those urging restraint, arguing that unchecked progress could come at a devastating cost. Click through this gallery to explore the debate over AI’s future and the growing calls for caution.