Stephen Hawking was often described as a vocal critic of AI. Headlines were filled with predictions of doom from the scientist, but the reality was more complex.
Hawking was not convinced that AI would become the harbinger of humanity's end; rather, he took a balanced view of its risks and rewards. In a compelling talk broadcast at Web Summit, he outlined his perspective and what the tech world can do to ensure the end results are positive.
Stephen Hawking on the potential challenges and opportunities of AI
Beginning with the promise of artificial intelligence, Hawking highlighted the level of sophistication the technology could reach.
“There are many challenges and opportunities facing us at this moment, and I believe that one of the biggest of these is the advent and impact of AI for humanity,” said Hawking in the talk. “As most of you may know, I am on record as saying that I believe there is no real difference between what can be achieved by a biological brain and what can be achieved by a computer.
“Of course, there is unlimited potential for what the human mind can learn and develop. So if my reasoning is correct, it also follows that computers can, in theory, emulate human intelligence and exceed it.”
Moving on to the potential impact, he began on an optimistic note, identifying the technology as a possible tool for health, the environment and beyond.
“We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one: industrialisation,” he said.
“We will aim to finally eradicate disease and poverty; every aspect of our lives will be transformed.”
However, he also acknowledged the negatives of the technology, from warfare to economic destruction.
“In short, success in creating effective AI could be the biggest event in the history of our civilisation, or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined or conceivably destroyed by it,” he said.
“Unless we learn how to prepare for – and avoid – the potential risks, AI could be the worst event in the history of our civilisation. It brings dangers like powerful autonomous weapons or new ways for the few to oppress the many. It could bring great disruption to our economy.
“Already we have concerns that clever machines will be increasingly capable of undertaking work currently done by humans, and swiftly destroy millions of jobs. AI could develop a will of its own, a will that is in conflict with ours and which could destroy us.
“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity.”
In the vanguard of AI development
In 2014, Hawking and several other scientists and experts called for increased levels of research to be undertaken in the field of AI, which he acknowledged has begun to happen.
“I am very glad that someone was listening to me,” he said.
However, he argued that there is much to be done if we are to ensure the technology doesn't pose a significant threat.
“To control AI and make it work for us and eliminate – as far as possible – its very real dangers, we need to employ best practice and effective management in all areas of its development,” he said. “That goes without saying, of course, that this is what every sector of the economy should incorporate into its ethos and vision, but with artificial intelligence this is vital.”
Addressing a thousands-strong crowd of tech-savvy attendees at the event, he urged them to think beyond the immediate business potential of the technology.
“Everyone here today is in the vanguard of AI development. We are the scientists. We develop an idea. But you are also the influencers: you need to make it work. Perhaps we should all stop for a moment and focus our thinking not only on making AI more capable and successful, but on maximising its societal benefit,” he said. “Our AI systems must do what we want them to do, for the benefit of humanity.”
In particular he raised the importance of working across different fields.
“Interdisciplinary research can be a way forward, ranging from economics and law to computer security, formal methods and, of course, various branches of AI itself,” he said.
“Such considerations motivated the American Association for Artificial Intelligence Presidential Panel on Long-Term AI Futures, which up until recently had focused largely on techniques that are neutral with respect to purpose.”
He also gave the example of calls at the start of 2017 by Members of the European Parliament (MEPs) for the introduction of liability rules around AI and robotics.
“MEPs called for more comprehensive robot rules in a new draft report concerning the rules on robotics, and citing the development of AI as one of the most prominent technological trends of our century,” he summarised.
“The report calls for a set of core fundamental values, an urgent regulation on the recent developments to govern the use and creation of robots and AI. [It] acknowledges the possibility that within the space of a few decades, AI could surpass human intellectual capacity and challenge the human-robot relationship.
“Finally, the report calls for the creation of a European agency for robotics and AI that can provide technical, ethical and regulatory expertise. If MEPs vote in favour of legislation, the report will go to the European Commission, which will decide what legislative steps it will take.”
Creating artificial intelligence for the world
No one can say for certain whether AI will truly be a force for positive or negative change, but – despite the headlines – Hawking was positive about the future.
“I am an optimist and I believe that we can create AI for the world that can work in harmony with us. We simply need to be aware of the dangers, identify them, employ the best possible practice and management and prepare for its consequences well in advance,” he said. “Perhaps some of you listening today will already have solutions or answers to the many questions AI poses.”
However, he stressed that everyone has a part to play in ensuring AI is ultimately a benefit to humanity.
“We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race,” he said.
“We need to take learning beyond a theoretical discussion of how AI should be, and take action to make sure we plan for how it can be. You all have the potential to push the boundaries of what is accepted or expected, and to think big.
“We stand on the threshold of a brave new world. It is an exciting – if precarious – place to be and you are the pioneers. I wish you well.”