Wednesday 28 May 2014

An inconvenient necessity

'Morality' by wasim muklashy
under a CC license
Artificial intelligence (AI) is nothing new. As a concept, it probably has roots dating back to ancient times. But the corresponding research field is rather new, said to have been launched only in 1956.

AI has gained a lot of attention from the public, governments and corporations for the single reason that it is a very promising field towards a variety of goals, ranging from helping humans in daily tasks, helping protect the environment, and accelerating scientific research and discovery, to boosting surveillance and gaining an upper hand in wars. And also, well, it has found room in a long list of novels, films, etc., which has contributed to the - good or bad - public image of AI (Bladerunner, the Star Wars series, the Star Trek films and TV series, 2001: A Space Odyssey, Battlestar Galactica and Almost Human being just a few of the many films/series portraying AI in full interaction with the human world).

At the same time, ethics has been developing throughout human "evolution", formulating questions on the moral aspects of this "evolution", seeking answers to those questions and, ideally, providing guidance to resolve ethical dilemmas and move forward. Most would accept, however, that ethics has been a rather "soft" filter for human activities throughout history. But let's not focus on that right now.

As I was browsing Slashdot the other day, a post on autonomous cars caught my attention. The original article on WIRED debated the moral and legal aspects of programming autonomous cars. In brief, the dilemma it elaborated on was what an autonomous car should be programmed to do if an accident is inevitable. Should it choose to crash into the most "robust" target? Should it choose to crash into whatever minimises the damage to itself or its passengers? Should it decide randomly? Who gets the blame in legal and ethical terms (those two are not necessarily the same)? Is it the owner, the manufacturer, the original programmer, or the physicist/engineer/mathematician who developed the driving behaviour models (who may have had nothing to do with the production of the car)?

The problem with such questions is that there are valid arguments both for and against each of the options. Actually, the problem is not in identifying the arguments but, rather, in quantifying their importance in a way compatible with established ethics, public perception and the law. And those three can stand really far apart from each other. Even worse, the distance amongst them may change over time due to many different factors.

We may not realise it, but similar concerns should actually apply to all AI elements of our world (and there are many of them) - even to plain automation systems (problems do arise from time to time). Autonomous cars just happen to be a high-profile case right now (BTW, the new generation of Google's autonomous cars pushes the barrier a bit further).

Should we (the human race) stop for a while to sort out the moral questions before moving forward? I think that may not be a realistic question to ask!

Tuesday 27 May 2014

Distances

'The breakwater' by
Joris Louwes under a CC license
It's been ages since I've last posted a few lines here.

It's not that writing something down has become more difficult than it normally is. It's the lack of motivation for putting that extra effort into wrapping it up - the much-needed step before hitting the 'publish' button.

Don't get me wrong, the paragraph above is not meant to be a kind of humblebrag. Rather, it is just a plain reference to the common, school-grade essay-writing methodology.

But - hmmm - I'm already digressing from the topic I had in mind, which - by the way - has nothing to do with either technology or innovation.

The other day I was talking with an old friend. A person who now lives several thousand miles away, with whom I haven't had a decent, full-scale conversation for months or even longer. Silly, I know; especially in the age of internet/voip/social-something-everywhere that we live in. Not that I didn't have valid excuses ready (I always do). Silly, perhaps also sad, but true.

At any rate, it was nice catching up, spending quite some time describing trivial details of everyday life, as if they did, really, matter.

And that was exactly what got me thinking.

It's not necessarily what one has to say. And it's not necessarily whether it is technically easy to communicate a message at a certain time. It's the feeling we are after that defines the distance to a person and justifies the effort to communicate.

(OK, the lines above are 'a touch' more dramatic than they should be. And, to be fair, plenty of the things we say during the day are part of our working life or inseparable aspects of our operation as 'normally functional individuals'. To such things, the paragraph above does not apply. But if you take that part out of the communication equation, the rest seems to be fully up to us.)