Breaking News

Self-driving Cars Mimic Human Moral Decision-making

In his book The Descent of Man, and Selection in Relation to Sex, Charles Darwin argued fervently that a sense of morality could only be a uniquely human trait. The notion that moral decisions and ethical dilemmas are unique to the human experience remains a common mode of thought. The claim has been disputed in social, psychological, anthropological and biological studies, as well as in various forms of art from literature to music, but most people would still espouse the idea that humans are, at the very least, the most cognizant of such things; the latest advancements in machine learning, however, threaten even this.

The journal Frontiers in Behavioral Neuroscience has published a new study that examines human behavior and moral assessments with the aim of determining whether computers could make such assessments and behave similarly. Like a car with a human driver, a self-driving vehicle can encounter the same split-second moral decisions that human beings face on the road every day.


The groundbreaking study challenges the assertion that moral decisions are unique to humans by shifting the paradigm of how such decisions are viewed. They have long been conceptualized as highly context-dependent and, thus, impossible to render in algorithms. The new study, however, suggests that simple value-of-life-based models can simulate human behavior in these ethical dilemma scenarios, and the researchers argue that human moral behavior can be described rather efficiently in algorithms and then used by self-driving cars to make the right decisions in traffic.


The question is whether self-driving vehicles can be moral and mimic human behavior, particularly in specific contexts. Previous thinking on the subject held that this could not be done. Science Daily unpacked the implications of the new virtual reality experiments investigating moral assessments and human behavior at the University of Osnabrück’s Institute of Cognitive Science, in which the authors of the study analyzed human behavior in simulated road traffic situations.


“The participants were asked to drive a car in a typical suburban neighborhood on a foggy day when they experienced unexpected unavoidable dilemma situations with inanimate objects, animals and humans and had to decide which was to be spared,” according to Science Daily’s synopsis of the experiments. “The results were conceptualized by statistical models leading to rules, with an associated degree of explanatory power to explain the observed behavior. The research showed that moral decisions in the [broad] scope of unavoidable traffic collisions can be explained well, and modeled, by a single value-of-life for every human, animal, or inanimate object.”

Leon Sütfeld and his team say this is the first time it has been shown scientifically that moral decisions are not as context-dependent as previously assumed, which indicates that they can, therefore, be algorithmically rendered. “But we found quite the opposite,” Sütfeld explains, countering the idea that moral decisions are heavily context-dependent. “Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”
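To make the value-of-life idea concrete, such a decision rule could be sketched roughly as in the short Python example below. The category weights, names and dilemma are invented purely for illustration; they are assumptions, not figures or code from Sütfeld's study, which fitted its values to participants' choices in the virtual reality experiments.

# A minimal, hypothetical sketch of a value-of-life-based decision rule.
# The weights below are invented placeholders, not values from the study.
VALUE_OF_LIFE = {
    "human": 1.0,
    "animal": 0.3,
    "inanimate_object": 0.05,
}

def choose_trajectory(options):
    """Return the trajectory whose unavoidable collision destroys the least total value.

    `options` maps each possible trajectory to the list of entity
    categories that would be struck if that trajectory were taken.
    """
    def value_lost(entities):
        return sum(VALUE_OF_LIFE[e] for e in entities)
    return min(options, key=lambda name: value_lost(options[name]))

# Example dilemma: swerving hits a dog, staying the course hits a pedestrian.
print(choose_trajectory({"swerve": ["animal"], "stay": ["human"]}))  # prints "swerve"

The point of such a sketch is that the entire moral judgment collapses into one scalar per category, which is precisely what would allow the observed human behavior to be expressed algorithmically at all.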


The implication here is that moral behavior in the human mind can be described and modeled with algorithms well enough for machines to mimic it. The study, in fact, suggests the implications go even further for self-driving cars in unavoidable situations, because errors (defining an error as an instance in which the lower value is inexplicably privileged over the higher value) are far more probable among humans than for a correctly programmed, self-driving vehicle.


Science Daily notes that “a leading new initiative from the German Federal Ministry of Transport and Digital Infrastructure (BMVI) has defined 20 ethical principles related to self-driving vehicles, for example, in relation to behavior in the case of unavoidable accidents, making the critical assumption that human moral behavior could not be modeled.”

Prof. Gordon Pipa, a senior author on the study, explains that machine learning appears to have reached a point at which machines can be programmed to make the moral decisions a human would make given the chance to process the dilemma, the very decisions that are central to societal coexistence and to the serious, ongoing debate about automated decision-making. Pipa says, “We need to ask whether autonomous systems should adopt moral judgments; if yes, should they imitate moral behavior by imitating human decisions, should they behave along ethical theories and if so, which ones, and critically, if things go wrong, who or what is at fault?”


The new German ethical principles, for example, classify a child darting into oncoming traffic as a primary contributor to the creation of the risk and go so far as to deem that child less worthy of being saved than an adult who simply stands on the sidewalk, off the road and irrefutably uninvolved. Another senior author on the study, Prof. Peter König, writes, “Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma.


“Firstly, we have to decide whether moral values should be included in guidelines for machine behavior and secondly, if they are, should machines act just like humans.”
