Speaking on Thursday at the United Nations’ International Telecommunication Union (ITU) session on “Guardrails needed for safe and responsible AI,” Harari opined that AI should continue to be developed, but that it shouldn’t be deployed without safety checks and regulations.
“While we are learning to use AI, it is learning to use us,” said Harari, adding, “It’s very difficult to stop the development [of AI] because we have this arms race mentality. People are aware — some of them — of the dangers, but they don’t want to be left behind.
“But the really crucial thing, and this is the good news, the crucial thing is to slow down deployment — not development,” he added.
“It’s like you have this very dangerous virus in your laboratory, but you don’t release it to the public sphere; that’s fine”
Yuval Noah Harari, AI for Good Global Summit, 2023
On the subject of AI-generated deepfakes and bots, Harari said, “Now it is possible for the first time in history to create fake people — to create billions of fake people — so that you interact with somebody online and you don’t know if it’s a real human being or a bot.
“We should better understand its [AI] potential impact on society, on culture, on psychology, and on the economy of the world before we deploy it into the public sphere”
Yuval Noah Harari, AI for Good Global Summit, 2023
The historian added, “If you can’t know who is a real human and who is a fake human, trust will collapse, and with it, at least, free society. Maybe dictatorships will be able to manage somehow, but not democracies.”
“We are no longer mysterious souls; we are now hackable animals”
Yuval Noah Harari, World Economic Forum, 2020
Speaking on a panel at the WEF Growth Summit 2023, Microsoft’s Michael Schwarz argued that when it came to AI, it would be best not to regulate it until something bad happened, so as not to suppress its potentially greater benefits.
“I am quite confident that yes, AI will be used by bad actors; and yes, it will cause real damage; and yes, we have to be very careful and very vigilant,” Schwarz told the WEF panel.
“We should regulate AI in a way where we don’t throw away the baby with the bathwater.
“So, I think that regulation should be based not on abstract principles.
“As an economist, I like efficiency, so first, we shouldn’t regulate AI until we see some meaningful harm that is actually happening — not imaginary scenarios.”
On January 23, 2023, Microsoft extended its partnership with OpenAI — the creators of ChatGPT — on top of the “$1 billion Microsoft poured into OpenAI in 2019 and another round in 2021,” according to Bloomberg.
This article was originally published by Tim Hinchliffe on