
Accountability


person is morally responsible for their actions, even if they were determined (that is, people also give compatibilist answers).

The neuroscience of free will examines experiments that might shed light on questions of free will.

Collective

When people attribute moral responsibility, they usually attribute it to individual moral agents. However, Joel Feinberg, among others, has argued that corporations and other groups of people can have what is called 'collective moral responsibility' for a state of affairs. For example, when South Africa had an apartheid regime, the country's government might have been said to have had collective moral responsibility for the violation of the rights of non-European South Africans.

Psychopaths' Lack of a Sense of Responsibility

One of the attributes defined for psychopathy is "failure to accept responsibility for own actions".

Artificial Systems

The emergence of automation, robotics and related technologies prompted the question, 'Can an artificial system be morally responsible?' [37][38][39] The question has a closely related variant, 'When (if ever) does moral responsibility transfer from its human creator(s) to the system?'.

These questions are closely related to, but distinct from, machine ethics, which is concerned with the moral behavior of artificial systems. Whether an artificial system's behavior qualifies it to be morally responsible has been a key focus of debate.

Arguments That Artificial Systems Cannot Be Morally Responsible

Batya Friedman and Peter Kahn Jr posited that intentionality is a necessary condition for moral responsibility, and that computer systems, as conceivable in 1992 in material and structure, could not have intentionality.

Arthur Kuflik asserted that humans must bear the ultimate moral responsibility for a computer's decisions, as it is humans who design the computers and write their programs. He further proposed that humans can never relinquish oversight of computers.

Frances Grodzinsky et al. considered artificial systems that could be modelled as finite state machines. They posited that if the machine had a fixed state transition table, then it could not be morally responsible. If the machine could modify its table, then the machine's designer still retained some moral responsibility.
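To make the finite-state-machine framing concrete, the following is a minimal sketch of a machine with a fixed transition table. The states, inputs, and table entries are hypothetical examples, not drawn from Grodzinsky et al.; the point is only that with a fixed table, every response to every input was chosen in advance by the designer.

```python
# Illustrative sketch: an artificial system modelled as a finite state
# machine. All state and input names here are invented for the example.

class FiniteStateMachine:
    """A machine whose behavior is fully determined by its transition table."""

    def __init__(self, table, start):
        # table maps (state, input) -> next state. With a fixed table,
        # the machine's behavior is entirely set by its designer.
        self.table = table
        self.state = start

    def step(self, symbol):
        # Look up the next state; the machine has no choice in the matter.
        self.state = self.table[(self.state, symbol)]
        return self.state


# A fixed table: every reaction to every input is predetermined.
fixed_table = {
    ("idle", "request"): "working",
    ("working", "done"): "idle",
    ("working", "request"): "working",
    ("idle", "done"): "idle",
}

machine = FiniteStateMachine(fixed_table, start="idle")
print(machine.step("request"))  # -> working
print(machine.step("done"))     # -> idle
```

A self-modifying variant would be one where `step` is also allowed to rewrite entries in `self.table`; on the argument above, even then the designer who permitted that rewriting retains some responsibility.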
