
Smart Industry 1/2018 - The IoT Business Magazine - powered by Avnet Silica


Smart Business Title Story: Self-driving cars (allowing see-through and 360° non-line-of-sight sensing), 3D HD maps, and precise positioning using the Global Navigation Satellite System (GNSS). V2X communications should improve driving comfort and have the potential to save lives by reducing accidents caused by human error. More than 1.3 million people die on the roads every year. Automated driving, supported by safe and dynamic driving algorithms, could change this by delivering Vision Zero, an EU project with a target of reducing the number of traffic fatalities to zero.

Challenges ahead

Automated driving calls for extremely complex systems. Service-oriented, end-to-end vehicle control architectures require a holistic approach embracing cloud services and the delivery of software updates over the air. Safety and system architectures need to be developed in tandem if they are to rely on one another. Reliability, safety, and availability in particular depend on the real-time analysis of traffic situations, road conditions, weather, and other variables.

Increasingly, carmakers are addressing concerns about “carjacking,” where hackers gain control of vehicles via wireless transmission to cause new kinds of problems, from vandalism by intentionally crashing a vehicle to holding passengers for ransom. Car hacking is a hot topic. Researchers have hacked cars before, demonstrating how to hijack a car remotely, how to disable a car’s crucial functions such as airbags, and even how to steal cars. The latest car hacking trick doesn’t require any extraordinary skills to accomplish: a research team from the University of Washington demonstrated in 2016 how anyone could print stickers at home, put them on road signs, and trick autonomous cars into misreading the signs, potentially causing serious accidents.
The European Union Agency for Network and Information Security (ENISA) published a guide in December 2016, Cyber Security and Resilience of Smart Cars, which contains good practices and recommendations for the sector.

Interview with Janina Loh

The ethics of self-driving cars: Kant Cars vs Aristotlemobiles

Driverless vehicles will have to deal with those tricky life-or-death decisions philosophers have argued about for years. So who will the artificial intelligences behind the wheel choose to favor? Janina Loh offers a drive-through of the neighborhood.

Janina Loh teaches philosophy at the University of Vienna. Together with her husband, she recently contributed an essay on digital ethics to Patrick Lin’s anthology Robot Ethics 2.0 (Oxford University Press, 2017). ■ This interview was conducted by our editor, Tim Cole.

Will autonomous vehicles represent a step toward greater safety, or are they an added risk?

Since 90% of all traffic accidents are due to human error, chances are that self-driving cars will reduce the current number of pileups. But even the best autonomous vehicles will eventually be involved in serious incidents. Today, the driver makes decisions spontaneously and reacts by reflex, because there is neither the time nor the information to make an informed ethical decision before it’s too late. Essentially, the same will be true of self-driving cars, but their decisions will be largely automated, so it will actually be the algorithms used and their programming that decide what actions to take. That’s why we need to make sure certain ethical principles are built into our technical systems.

So in the worst case, machines will have to decide over life and death, won’t they?

It will be hard to program autonomous vehicles to include every conceivable scenario that could occur in traffic. That’s why we need to make sure certain moral principles are being followed.
A car programmed to protect its own passengers at the cost of everyone else would be just as socially unacceptable as one that willingly sacrifices its occupants to save others.

So to which rules should a self-driving car adhere?

Let’s consider the classic case. A car is driving through a residential area when a group of small children runs out from behind a parked car. To avoid them, the car would have to pull to the left, but by doing so it would hit an 80-year-old man approaching on his bicycle.

In your opinion, what should the car decide to do?

[Photo captions: “Bentham - Bentley,” “Kant - Chrysler,” “Aristoteles - Audi”; photos © Mecum Auctions]

That depends on which school of ethics you choose to follow. The utilitarian school of thought, founded by Jeremy Bentham in the early 19th century, states that the best action is the one that maximizes utility. According to Bentham’s Greatest Happiness Principle, we should be governed by the credo “the greatest happiness of the greatest number.” This means the old man must die, because his usefulness to society is probably less than that of the children, one of whom might grow up to become the next Einstein. However, according to the school of deontological ethics associated with Immanuel Kant, assigning different values to different human lives is completely unethical, because human dignity is absolute, and you can’t compare absolutes. In fact, human dignity is the bedrock of most legal systems in liberal democracies today.

Sounds like there is no real ethical solution after all?

Philippa Foot, a famous British philosopher and ethicist, called this kind of situational dilemma a “trolley problem” [based on a dilemma similar to the one described above, using a trolley rather than a car and adding further complications]. Philosophers often engage in thought experiments such as these, where they describe hypothetical situations, sometimes realistic and sometimes theoretical, designed to investigate our “moral intuitions” when dealing with dilemmas. There is no right or wrong answer to a trolley problem, which means we don’t need to solve some kind of puzzle but can choose wisely before we let autonomous vehicles loose on mankind.

So which school of ethics should carmakers follow?

The automotive industry is focused on making sure accidents don’t happen in the first place. Driver assistance systems are programmed to react defensively, meaning that when in doubt they will slow down or stop and ask questions later. If an accident becomes unavoidable, the European Ethics Commission has laid down that under no circumstances may a vehicle be programmed to choose between potential human victims.
Instead, the steering systems must be programmed to seek to avoid an accident by all means, or at least to reduce speed (and thus collateral damage) as much as possible.

And if worst comes to worst, who is considered liable?

It’s certainly not the car itself, since algorithms cannot be held legally accountable.

Do we need some kind of Digital Road Traffic Act?

The European Parliament has suggested that autonomous driver assistance systems should be considered legal entities – a kind of electronic persona. After all, companies can be taken to court, so why not an algorithm? This is especially true for self-learning artificial intelligence (AI) systems, which would need to be issued a digital legal personality of some kind. Of course, this also means they would have to be registered with the authorities and would need to be provided with assets with which to pay compensation, or at the very least they would require liability insurance. We need to think long and hard about what this kind of “digital personhood” means in practice, because we are heading towards a future in which autonomous and self-learning machines will play an increasingly important role.

[Caption: Vital decisions – autonomous vehicles will one day be forced to make life-or-death choices in an instant]

Couldn’t the creators of such systems conceivably argue that, since they are self-learning, it wasn’t they who taught the machine to do what it does, but that it actually taught itself?

It’s certainly true that the damage done by a self-learning system can’t easily be traced back directly to the programmer.
In order to act in a virtue-ethical sense, an autonomous driver assistance system would need to be capable of a far greater degree of self-learning than a simple “Kant Car” or “Bentham Porsche.” One could imagine a kind of “Aristotlemobile” that would require owners to drive themselves around at first, so that the car can learn to drive the way they do.
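The contrast between the decision rules discussed in this interview can be made concrete with a toy sketch. This is illustrative only: the function names, the option structure, and the utility numbers are invented for the example, and no real vehicle implements ethics this way. It simply shows how a Bentham-style rule ranks outcomes while a Kant-style rule, following the European Ethics Commission's guidance, refuses to choose between human victims.

```python
# Toy illustration of the ethical decision rules described above.
# All names and numbers are hypothetical; real driver assistance
# systems do not work like this.

def utilitarian_choice(options):
    """Bentham: pick the maneuver with the greatest total utility
    ('the greatest happiness of the greatest number')."""
    return max(options, key=lambda o: o["utility"])

def deontological_choice(options):
    """Kant / EU Ethics Commission: never rank human victims against
    each other. If every maneuver harms someone, refuse to choose and
    fall back to braking, minimizing speed and collateral damage."""
    harmless = [o for o in options if o["victims"] == 0]
    if harmless:
        return max(harmless, key=lambda o: o["utility"])
    return {"maneuver": "brake hard", "victims": None, "utility": None}

# The 'classic case' from the interview, as hypothetical options:
options = [
    {"maneuver": "swerve left", "victims": 1, "utility": -10},  # hits the cyclist
    {"maneuver": "stay course", "victims": 3, "utility": -30},  # hits the children
]

print(utilitarian_choice(options)["maneuver"])    # -> swerve left (least total harm)
print(deontological_choice(options)["maneuver"])  # -> brake hard (refuses to rank lives)
```

The interesting design point is in `deontological_choice`: it never compares harmful outcomes at all, which mirrors the Commission's rule that a vehicle may not be programmed to choose between potential victims, only to avoid the accident or shed speed.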