If the progress of AI is not stopped, the destruction of mankind is inevitable

- AI expert Eliezer Yudkowsky warns  

Washington: If humans develop an artificial intelligence with superhuman intelligence, the destruction of mankind on Earth will be inevitable. "This is not a matter of possibility; please note that this is the only possible outcome," warns Eliezer Yudkowsky, an expert in Artificial Intelligence (AI). "Like me, other researchers in this field share this concern," Yudkowsky said. Given the dire possibility of the complete destruction of the human race, Yudkowsky has strongly warned that there is no option but to halt work on artificial intelligence.

Last month, the US-based Future of Life Institute published an open letter warning the international community about the dangers of artificial intelligence. It appealed for the large-scale experiments and research under way in artificial intelligence to be paused for six months. More than a thousand researchers, scientists, entrepreneurs, intellectuals and political leaders have signed the letter, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak.

Eliezer Yudkowsky, the founder of the Machine Intelligence Research Institute, warned of the destruction of the human race while presenting his position on this open letter. At the beginning of an article in the US fortnightly "Time", Yudkowsky explained that he had not signed the letter. He acknowledged that a six-month moratorium on new and advanced artificial intelligence research is certainly better than none at all. However, he expressed his displeasure that a proposal to pause for only six months understates the seriousness of the dangers in the field of artificial intelligence.

Yudkowsky underscored that the issue is not whether artificial intelligence will compete with human intelligence, but what will happen if it surpasses it. "We are not prepared to survive an AI that becomes more advanced than human intelligence," he warned in the article, adding that if an AI hostile to mankind is developed, the only result will be total destruction. Highlighting the danger of super-advanced AI, he said it would be like the 11th century fighting the 21st century.

Yudkowsky also warned that a highly advanced artificial intelligence would not allow itself to remain confined to computers or the Internet for long, but would take on a new form through "artificial life forms" or "post-biological molecular manufacturing". He urged that all ongoing research on advanced artificial intelligence be stopped immediately if the possible destruction of mankind is to be prevented, stressing that this extreme step must be taken because mankind is not yet ready to confront an advanced AI.
