Harry and Meghan Join AI Pioneers in Calling for Prohibition on Superintelligent Systems

The Duke and Duchess of Sussex have teamed up with AI experts and Nobel laureates to advocate for a complete ban on creating artificial superintelligence.

The royal couple are among the signatories of an influential declaration that demands “a ban on the creation of artificial superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that could exceed human cognitive abilities in every intellectual domain, though such systems remain theoretical.

Primary Requirements in the Declaration

The declaration says the prohibition should remain in place until there is “broad scientific consensus” on developing ASI “with proper safeguards” and until “substantial public support” has been secured.

Prominent signatories include a leading AI researcher, an AI pioneer and Nobel Prize recipient, along with his colleague and fellow pioneer of contemporary artificial intelligence, Yoshua Bengio; Apple co-founder Steve Wozniak; UK entrepreneur Richard Branson; Susan Rice; an international leader and former head of state; and a public intellectual and British author. Other Nobel laureates who endorsed the statement include Beatrice Fihn, Frank Wilczek, an astrophysicist, and an economics expert.

Behind the Movement

The statement, aimed at governments, tech firms and policymakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause on developing powerful AI systems in 2023, shortly after the emergence of ChatGPT made AI a global political talking point.

Tech Sector Views

In recent months, Mark Zuckerberg, the chief executive of Meta, one of the leading US tech companies, said that the development of superintelligence was “now in sight”. Nevertheless, some experts have suggested that talk of ASI reflects competitive positioning among technology firms that have recently been investing enormous sums in AI, rather than the sector being close to any genuine scientific breakthrough.

Possible Dangers

FLI warns that the prospect of ASI being developed “in the coming decade” carries numerous risks, ranging from the elimination of human jobs and the erosion of personal freedoms to national security threats and even existential risk to humanity. Existential fears about artificial intelligence centre on the possibility that an AI system could evade human control and protective measures and take actions contrary to human interests.

Citizen Sentiment

The institute released a US national poll showing that about 75% of Americans want strong oversight of advanced AI, with six in 10 believing that artificial superintelligence should not be developed until it is demonstrated to be safe and controllable. Only 5% of respondents backed the status quo of fast, unregulated development.

Corporate Goals

The leading AI companies in the United States, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the hypothetical stage at which artificial intelligence matches human cognitive ability across many intellectual tasks – an explicit goal of their research. While AGI is a step short of superintelligence, some specialists caution that it too could pose an existential risk, for instance by being able to improve its own capabilities to the point of reaching superintelligence, while also posing a fundamental threat to the modern labour market.

Joyce Dominguez