Wednesday 28 May 2014

An inconvenient necessity

Image: 'Morality' by Wasim Muklashy, under a CC license
Artificial intelligence (AI) is nothing new. As a concept, it probably has roots dating back to ancient times. But the corresponding research field is rather young, as it is generally said to have been launched only in 1956.

AI has gained a lot of attention from the public, governments and corporations for a simple reason: it is a very promising field for a variety of goals, ranging from helping humans with daily tasks, protecting the environment and accelerating scientific research and discovery, to boosting surveillance and gaining the upper hand in wars. And, well, it has also found room in a long list of novels, films and TV series, which has contributed to the - good or bad - public image of AI (Blade Runner, the Star Wars series, the Star Trek films and TV series, 2001: A Space Odyssey, Battlestar Galactica and Almost Human being just a few of the many films/series portraying AI in full interaction with the human world).

At the same time, ethics has been developing throughout human "evolution", formulating questions on the moral aspects of this "evolution", seeking answers to those questions and, ideally, providing guidance to resolve ethical dilemmas and move forward. Most would accept, however, that ethics has been a rather "soft" filter for human activities throughout history. But let's not focus on that right now.

As I was browsing Slashdot the other day, a post on autonomous cars caught my attention. The original article on WIRED debated the moral and legal aspects of programming autonomous cars. In brief, the dilemma was what an autonomous car should be programmed to do if an accident is inevitable. Should it choose to crash into the most "robust" target? Should it choose to crash into whatever minimises the damage to itself or its passengers? Should it decide randomly? And who gets the blame, in legal and in ethical terms (the two are not necessarily the same)? Is it the owner, the manufacturer, the original programmer, or the physicist/engineer/mathematician who developed the driving behaviour models (and who may have had nothing to do with the production of the car)?
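To make the dilemma a bit more concrete, here is a toy sketch in Python of how those three policies could be written down as code. Everything in it - the CrashOption fields, the numbers, the policy names - is hypothetical and invented purely for illustration; it is not how any real autonomous car is programmed.

```python
# Toy sketch only: three debated "inevitable crash" policies as code.
# All names and numbers are made up for illustration.
import random
from dataclasses import dataclass

@dataclass
class CrashOption:
    target: str              # what the car would hit
    passenger_harm: float    # estimated harm to the car's own passengers (0-1)
    external_harm: float     # estimated harm to people/property outside (0-1)
    target_robustness: float # how well the target absorbs the impact (0-1)

def pick_target(options, policy):
    """Select a crash option under one of the debated policies."""
    if policy == "most_robust_target":
        return max(options, key=lambda o: o.target_robustness)
    if policy == "protect_passengers":
        return min(options, key=lambda o: o.passenger_harm)
    if policy == "random":
        return random.choice(options)
    raise ValueError(f"unknown policy: {policy}")

options = [
    CrashOption("concrete barrier", 0.7, 0.00, 0.9),
    CrashOption("small car",        0.3, 0.80, 0.4),
    CrashOption("motorcycle",       0.1, 0.95, 0.1),
]

for policy in ("most_robust_target", "protect_passengers", "random"):
    print(policy, "->", pick_target(options, policy).target)
```

Even in this cartoon version, each policy picks a different victim, which is exactly where the legal and moral questions start.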

The problem with such questions is that there are valid arguments both for and against each of the options. Actually, the problem is not in identifying the arguments but, rather, in quantifying their importance in a way compatible with established ethics, public perception and the law. And those three can stand really far apart from each other. Even worse, the distance between them may change over time due to many different factors.
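As a purely illustrative example of why the quantification is the hard part: score the same two candidate outcomes with two different weight profiles and the "best" choice flips. The weights and harm estimates below are arbitrary assumptions of mine, not anyone's actual model.

```python
# Same candidate outcomes, two made-up weight profiles; the ranking flips.
outcomes = {
    "concrete barrier": {"passenger_harm": 0.7, "external_harm": 0.0},
    "small car":        {"passenger_harm": 0.3, "external_harm": 0.8},
}

weight_profiles = {
    "minimise total harm":    {"passenger_harm": 1.0, "external_harm": 1.0},
    "protect own passengers": {"passenger_harm": 1.0, "external_harm": 0.2},
}

def score(outcome, weights):
    # Lower weighted harm is "better" under the given profile.
    return sum(weights[k] * v for k, v in outcome.items())

for name, weights in weight_profiles.items():
    best = min(outcomes, key=lambda t: score(outcomes[t], weights))
    print(f"{name}: crash into the {best}")
```

The arithmetic is trivial; agreeing on the weights - and on who gets to set them - is the part that ethics, public perception and the law will all answer differently.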

We may not realise it, but similar concerns should actually apply to all AI elements of our world (and there are many of them) - even to plain automation systems (problems do arise from time to time). Autonomous cars just happen to be a high-profile case right now (BTW, the new generation of Google's autonomous cars pushes the boundary a bit further).

Should we (the human race) stop for a while to sort out the moral questions before moving forward? I think that may not be a realistic question to ask!
