
HLF Review 2016

Tuesday, September 20

than direct experience, for example from reading. It was left open what technologies we will have to add to Deep Learning to achieve these higher reasoning functions.

Thomas Dreier

In the second hour, critics of AI voiced their concerns about the legal and ethical consequences of an uninhibited and unregulated development of AI for society. Thomas Dreier, Professor of Law at the Karlsruhe Institute of Technology (KIT), talked about the legal consequences of AI systems deployed in the real world. He focused on questions of liability. Liability has traditionally been a consequence of faulty behavior: if you cause damage, you should compensate the person you harmed. Behind this lies the idea of an autonomous human being who can make decisions. But in the modern world there is also the idea of liability without fault, for example for the operators of inherently dangerous technical devices. Does this have to change for AI devices? Are they more like humans? Are they legal personalities? His answer: intelligent robots are still essentially machines; humans have the duty to supervise them and are ultimately liable for the actions of their intelligent creations.

Dirk Helbing

Dirk Helbing, Professor of Computational Social Science at ETH Zurich, gave a presentation based on the Digital Manifesto that he published in 2015 together with other German researchers. He envisioned a big AI machine capable of predicting the actions of individuals and of society as a whole. Will machines become our benevolent dictators or wise kings? Could societies be run like a giant machine? Governments have plans to create such a machine, Helbing said, based on Big Data and Deep Learning. In his opinion, we have to stop these plans. Instead, we should use the technology in a decentralized approach to enable our collective intelligence to create a better world.
We can build an efficient, liberal and participatory economic system, a citizen web that rewards social and ecological production and behavior.

Noel Sharkey

Noel Sharkey from the University of Sheffield talked about the concerns that a growing number of scientists have about robotic weapons that make decisions about the life and death of humans. The Foundation for Responsible Robotics was established last year to discuss these issues. A major subject of its discussions is the use of autonomous robots by the military and the police: the USA is developing autonomous drones and submarines, and Russia is working on tanks and fighter jets. Sharkey himself has been advocating a ban on autonomous killer robots with international organizations like the UN, with support from Nobel laureates and religious leaders. His main argument against these weapons is that nobody can guarantee that they will comply with the established laws of war. And nobody can predict what would happen if two swarms of killer robots governed by secret algorithms were to fight each other. As a consequence, we should uphold the rule that Isaac Asimov formulated in 1942: a machine should never be allowed to kill a human. A new bill of human technological rights should be created that determines how much control we cede to technology.

In the concluding discussion, Jim Hendler pointed out that the ethics cuts both ways: the question is not only whether it is ethical to replace humans with autonomous machines, but also whether it is ethical to send people into harmful situations when we have robotic technology that could do the job instead. The discussion then turned to the question of whether AI is increasingly becoming a proprietary technology of the big companies that hire all the scientific talent. Vint Cerf from Google and Holger Schwenk from Facebook both emphasized that their researchers publish their findings and that the companies make much of their technology openly available.

The moderator then mentioned the new European General Data Protection Regulation, which guarantees citizens a right to know the logic of algorithms that make important decisions about them. Can we fulfill that in the future, when AI algorithms based on Deep Learning do not operate with fixed rules? Can we point out biases in those algorithms? While Cerf and Schwenk denied any such biases and insisted on keeping their companies’ source code secret, other participants disagreed. Dirk Helbing gave examples of algorithms biased against women and people of color. A member of the audience, Jennifer Tour Chayes from Microsoft Research, talked about technical solutions to discover biases in algorithms and even techniques to de-bias them.

On a closing note, a question from the audience raised the issue of whether machines could one day be smarter than us.
The scientists on the podium agreed that there is no evidence today that machines are acquiring consciousness or becoming more intelligent than humans. Jim Hendler pointed out that in many situations it is the combination of a human and a computer that will outperform either of them alone. It was left to Vint Cerf to close the debate by saying that our problem today is not allowing too much autonomous AI, but “giving too much autonomy to artificial idiots.”
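The bias audits that Chayes alluded to can be made concrete with a minimal sketch. The report does not describe her methods, so the following is only an illustration of one common fairness metric, demographic parity, which compares an algorithm's positive-decision rate across groups; the function names and data are hypothetical.

```python
# Hedged sketch of a simple bias audit: "demographic parity" compares a
# model's positive-decision rate across groups. All names and data here
# are hypothetical illustrations, not the techniques discussed at the HLF.

def positive_rate(decisions):
    """Fraction of decisions that are positive (e.g. loan approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-decision rate between any two groups.

    A gap near 0 suggests the algorithm treats groups similarly on this
    one metric; a large gap flags a potential bias worth investigating.
    """
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical audit: recorded decisions (1 = approve, 0 = deny) per group.
audit = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 approved = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 approved = 0.375
}
print(f"demographic parity gap: {demographic_parity_gap(audit):.3f}")  # 0.375
```

De-biasing techniques then go one step further, adjusting the model or its decision thresholds until such a gap shrinks, which is exactly the kind of remedy Chayes said is technically possible.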