One of the best (and most intense) movies I have ever watched is “Sophie’s Choice”. What is most shocking about this film is how incredibly explicit it makes (thanks also to the amazing virtuosity of Meryl Streep) the pain of a human choice, in this case a mother forced to choose which of her two children to save. It suggests that what is even more tragic than the outcome of the decision (definitely horrible in this case, since it implies the loss of a child) is the process of formulating the decision, i.e. the process of putting a number (or a score) on our sentiments and rationally using that number to decide. The saddest part of the story is that humans (in this case Nazis) force other humans to realize that they can put a measure on situations we would like to remain incommensurable, and act accordingly for the lesser evil.
Why am I using such an analogy in such an arid blog, one expected to target scientific matters? Because I believe that much of the discomfort we feel about accepting autonomous agents comes from the fact that their design inevitably asks humans to pass through a similar path. Whatever autonomous agent we create (e.g. a car, a weapon, a worker), it will be the outcome of a design process that implicitly or explicitly passed through the step of putting a number on events and situations: for instance, how much a human life is worth with respect to passenger comfort, or the cost of suffering a terrorist attack with respect to the risk of killing an innocent. This is related to the consequentialist approach to ethics, which interprets a moral act as the one with the best overall consequences. But any optimization step requires a cost function to be optimized.
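To make the point concrete, here is a minimal sketch of what such a consequentialist cost function might look like in code. Everything in it is hypothetical: the outcomes, the probabilities, and above all the numeric weights, which are precisely the numbers a human designer would be forced to invent.

```python
# Hypothetical costs assigned to outcomes: the "incommensurable" step.
# These weights are pure illustration, not a proposal.
OUTCOME_COST = {
    "harm_to_human": 1_000_000.0,  # a number we would rather not write down
    "passenger_discomfort": 1.0,
    "property_damage": 100.0,
}

def expected_cost(action):
    """Expected cost of an action: sum of probability * outcome cost."""
    return sum(p * OUTCOME_COST[outcome]
               for outcome, p in action["risks"].items())

def choose(actions):
    """Pick the action with the smallest expected cost (the 'lesser evil')."""
    return min(actions, key=expected_cost)

# Two hypothetical maneuvers for an autonomous car, with made-up risk estimates.
actions = [
    {"name": "swerve",
     "risks": {"harm_to_human": 0.001, "property_damage": 0.9}},
    {"name": "brake hard",
     "risks": {"passenger_discomfort": 1.0, "harm_to_human": 0.0005}},
]

print(choose(actions)["name"])  # → brake hard
```

The uncomfortable part is not the `min` at the end, which is trivial arithmetic, but the `OUTCOME_COST` table at the top: once those numbers exist, every tragic dilemma collapses into a ranking of real numbers.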
It appears, then, that we are deliberately acting as our own torturers, forcing a human designer (or the entire society) to put numbers on things we would rather keep out of measurement (e.g. our innermost feelings and our ethical values). Every AI approach will inevitably imply discarding the incommensurability of ethical values (though I would be glad to hear a counter-argument).
It follows that the most worrying consequence of AI is no longer the creation of an artificial being (or intelligent zombie). It is rather that, to reach this goal, we will force ourselves to fathom and measure the innermost and most private secrets of our soul. Once we have quantified our soul to make ethical robots, once incommensurability has disappeared into the ranking of real numbers, could we still claim to be more human than they are?