Robots are starting to be everywhere, and humans are afraid of losing their role (as well as their jobs). But the best human revenge over robots is to grant them no consciousness. They can do wonderful (or horrible) things but, whatever those things are, robots are not (and possibly never will be) conscious. They mine big data for us, they clean for us, and they will soon be driving for us but, no way, they will do it all without consciousness. The rationale of this post is simple. Humans have trouble defining consciousness, yet they feel extremely confident in denying it to other entities. So let us try to define consciousness by focusing on why we deny it to others, in this case robots. It is easy to deny consciousness to rocks and simple organisms, on the basis of their nonexistent or irrelevant behavior, but why are we so sure in denying consciousness to robots? Since when we talk of consciousness we mean human consciousness, why do robots not have it (yet)?
My answer is simple: we deny consciousness to any behavior or functionality that can be described as automatic. So being conscious should mean, first of all, "not being automatic". In Dennett's words, we could define automatic as "something that has competence without comprehension". The behavior can be complex and astonishing (think of Google's recognition of image features) but, since it is somehow programmed, it is necessarily not conscious.
So the idea is the following: let us better define what is automatic and, conversely, we will better define what is conscious.
I will list below what I consider the distinctive traits of an automaton, or automatic agent. Note that I will not put being programmed on this list, since automatic behavior can be the result of a long evolutionary process and not necessarily the output of a programming effort. Also, I will focus on agents, i.e. entities that take actions and evolve in time. So, in my opinion, the most distinctive aspects of an automatic agent are:
- A well-defined and accessible cost function, which maps an internal state to a degree of cost (or gain)
- An accurate observation of the current state
- An accurate model of the impact of an action on the future state: here accurate means that it can predict the effect not only of a single action but of an entire sequence of actions
Overall, an automatic agent is a dynamic agent that knows (or pretends to know) a lot about itself and the surrounding environment. This self-confidence allows it to proceed in an automatic and rapid manner. This means that it does not need to remember or store much of its past or its present; it simply needs to act.
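The three ingredients above can be caricatured in a few lines of code: give an agent a well-defined cost function, a perfect view of its state, and an accurate transition model, and greedy action selection is all it needs. This is only an illustrative sketch; the cost function, transition model, and action set below are hypothetical examples, not anything from a real robot.

```python
def cost(state):
    # A well-defined, accessible cost function: distance from a target state.
    return abs(state - 10)

def transition(state, action):
    # An accurate model of the impact of an action on the future state.
    return state + action

def automatic_agent(state, actions=(-1, 0, 1), steps=8):
    # The agent needs no memory of its past: at each step it simply picks
    # the action whose predicted next state has the lowest cost.
    trajectory = [state]
    for _ in range(steps):
        state = min((transition(state, a) for a in actions), key=cost)
        trajectory.append(state)
    return trajectory

print(automatic_agent(3))  # → [3, 4, 5, 6, 7, 8, 9, 10, 10]
```

Note how little the agent has to "think": with certainty about cost, state, and dynamics, every decision reduces to a fast, automatic lookup.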
All this appears to humans as incredibly (or excessively) simple: humans do not have a simple and well-defined cost function, nor do they have access to a perfect observation of their state or a clear idea of the impact of their actions on their future state. Think, for instance, of the decision to get married: how many doubts, trade-offs, considerations, and hesitations crowd our minds. Think instead of a self-driving car which, in a situation of inevitable accident (e.g. it has to take the gruesome decision of running over either a group of elderly pedestrians or a toddler), nevertheless takes a rapid decision: it can do so only because all the uncertainty has somehow been removed. The robot can be fast and automatic because the myriad cost functions (ethical, practical, economic) that could pop up in a human mind in the same situation are condensed and summarized into a single, certain one.
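The "condensation" of many cost functions into a single, certain one can be sketched as a weighted sum: once the weights are fixed, the decision reduces to taking a minimum, with no room left for hesitation. The options, criteria, and weights below are entirely hypothetical, chosen only to illustrate the mechanism.

```python
def condensed_cost(option, weights):
    # Many criteria (ethical, practical, economic) collapse into one scalar:
    # this single number is what makes the decision automatic and fast.
    return sum(weights[k] * option[k] for k in weights)

# Hypothetical weights and candidate maneuvers (costs in [0, 1], lower is better).
weights = {"ethical": 10.0, "practical": 1.0, "economic": 0.5}
options = {
    "swerve": {"ethical": 0.9, "practical": 0.3, "economic": 0.8},
    "brake":  {"ethical": 0.2, "practical": 0.9, "economic": 0.4},
}

# The automatic agent simply picks the minimum-cost option.
decision = min(options, key=lambda o: condensed_cost(options[o], weights))
print(decision)  # → brake
```

Everything that makes the human deliberation slow and painful has been pushed into the choice of the weights, which the robot receives as given.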
So what? If all this is considered by humans as not conscious, is it not possible that consciousness is precisely the (human) functionality developed to manage all the configurations that cannot be addressed in an automatic manner? In other terms, it is well known that humans do myriad things in an unconscious (i.e. automatic) manner. We could argue that evolution led us to solve a lot of tasks automatically, or equivalently that we have been able to address many functionalities in a robot-like manner. For many others we have not (yet) been able to, and this can be due to several reasons: limited sensory capabilities, limited intelligence, or the excessive complexity of the task to be solved (e.g. related to the fact that we live in multi-agent communities). So, though a big part of us acts automatically and unconsciously like a robot, all the rest still requires consciousness. Now the question is: why is this the case? Why could consciousness help in addressing situations that we are not able to address in an automatic way?
Quoting Dennett: "...the puzzle today is 'what is consciousness for (if anything)?' if unconscious processes are fully competent to perform all the cognitive operations of perception and control".
My answer, as a data scientist, is: data collection. Consciousness could be a way to alert memory functionalities to store situations (and related actions) in which we are aware that an automatic procedure is not good enough (for us as individuals or for our community). Since we are not yet able to solve these situations automatically, why not simply store the configuration, waiting to learn (sooner or later) what to do? Conscious states could thus (also) be data-collection phases, aimed at training a learning capability that could be helpful in the future. Think about driving a car: the amount of consciousness during our first driving days is far greater than what we need after ten years of holding a licence. The task has been learned and made automatic: consciousness is no longer necessary.
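This data-collection hypothesis can be caricatured as an agent that acts automatically when its learned policy covers the situation, and otherwise switches to a "conscious" mode whose main job is to record the configuration for later training. The class and situation names below are hypothetical illustrations, not a real architecture.

```python
class LearningAgent:
    def __init__(self):
        self.policy = {}          # situations already made automatic
        self.conscious_log = []   # situations stored for future learning

    def act(self, situation):
        if situation in self.policy:
            # Automatic (unconscious) response: fast, no memory required.
            return self.policy[situation]
        # "Conscious" mode: no automatic answer yet, so store the
        # configuration, waiting to learn what to do.
        self.conscious_log.append(situation)
        return "deliberate"

    def learn(self, situation, action):
        # Once learned, the task becomes automatic: consciousness
        # is no longer necessary for it.
        self.policy[situation] = action

agent = LearningAgent()
print(agent.act("first driving lesson"))   # → deliberate (and logged)
agent.learn("first driving lesson", "brake gently")
print(agent.act("first driving lesson"))   # → brake gently
```

The driving example from the text maps directly onto this sketch: early on, every situation lands in the conscious log; after years of practice, almost everything is answered by the policy.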
So, will robots ever have consciousness? If we design a robot to solve a number of tasks automatically, it means that we consider those tasks sufficiently well defined not to require consciousness. This is how we conceive robotics nowadays. Using robots implies (implicitly or explicitly) that the task is well defined. Consider, therefore, what a strong assumption is being made when we accept the use of autonomous robotic warriors (drones, ...).
Once robots are used to address ill-defined tasks (e.g. managing an interstellar mission of interaction with new species), consciousness functionalities will be needed.