Every philosopher (and every scientist) is a slave of some formalism and tends to apply it as much as possible to every aspect of reality. This is indeed a form of bias, and my own recent bias is that I tend to interpret everything in terms of bias/variance… So why not push this to the extreme and apply it to nothing less than the hardest issue of science and philosophy? Consciousness, as simple as that…
In particular I will aim here to address questions like: does consciousness really exist, what is its function, can robots have one, and so on…
Let’s go straight to the end of my reasoning: we could use the bias/variance formalism, useful for describing any learning procedure, to support the idea that consciousness is not merely an epiphenomenon but rather a necessary component of every rational cognitive process. In particular, consciousness is required for interacting with a complex, multivariate, multi-agent and uncertain reality, where the criterion of effectiveness/success of such interaction is itself complex, multivariate, uncertain and dependent on others too.
In other terms, what I claim is that surviving in our reality requires any intelligent agent to roughly decompose its intelligent activities into two parts: a first part (made of several submodules if needed) which can be addressed as an optimization problem (according to a univariate cost function) and implemented similarly to a fast regulator or automatic controller; and a second part which has to deal with exploration, exceptions, multiple criteria, interaction, uncertainty and adaptation.
As far as the first part is concerned, think for instance of the visuomotor control regulations which allow us every day to survive in a hostile environment thanks to their fast and unconscious control loops, able to monitor, control or optimize specific tasks. This part corresponds to the bias component of a cognitive effort: a stable, situated and limited module aiming to exploit some previously acquired skill in a specific application domain. This module is rapid and effective as long as the application domain is respected and the addressed cost function is the one of interest.
Let me quote Christof Koch from his book “Consciousness: Confessions of a Romantic Reductionist”:
« The mystery deepens with the realization that much of the ebb and flow of daily life does indeed take place beyond the pale of consciousness. This is patently true for most of the sensory-motor actions that compose our daily routine: tying shoelaces, typing on a computer keyboard, driving a car, returning a tennis serve, running on a rocky trail, dancing a waltz. These actions run on automatic pilot, with little or no conscious introspection. Indeed, the smooth execution of such tasks requires that you not concentrate too much on any one component. »
Consciousness boils down to all that cannot be dealt with in this manner, in other terms to all that escapes the domain of automatic, fast yet biased servomotor modules. The no free lunch theorems have shown that there is no optimization procedure, setting or model that is optimal for all distributions. Any biased approach, though effective in its own application domain, is doomed to failure (or, better, to low performance) in a complex world which cannot be interpreted in the light of a single criterion, or a single cost function.
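This point can be made concrete with a minimal sketch (in Python with NumPy; the nonlinear "world" and the training interval are of course just illustrative assumptions): a strongly biased learner, fitted in a narrow familiar domain where its bias happens to match reality, performs excellently there but fails badly as soon as the distribution of situations shifts.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "world": a nonlinear process, unknown to the agent (illustrative choice).
def world(x):
    return np.sin(x)

# Training regime: a narrow, familiar domain where sin(x) is nearly linear.
x_train = rng.uniform(-0.5, 0.5, 200)
y_train = world(x_train) + rng.normal(0, 0.01, 200)

# A strongly biased learner: a simple linear model (degree-1 polynomial fit).
coeffs = np.polyfit(x_train, y_train, deg=1)

def predict(x):
    return np.polyval(coeffs, x)

# In its own application domain, the biased module is fast and accurate.
x_in = np.linspace(-0.5, 0.5, 100)
err_in = np.mean((predict(x_in) - world(x_in)) ** 2)

# Out of domain (the "curveball"), the very same bias becomes a liability.
x_out = np.linspace(2.0, 3.0, 100)
err_out = np.mean((predict(x_out) - world(x_out)) ** 2)
```

The in-domain error is tiny while the out-of-domain error is orders of magnitude larger: the biased module is not wrong per se, it is simply confined to the setting its bias was tuned for.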
So the two facets of our cognitive process address different aspects: on one side bias, exploitation, unconsciousness, automation, regularity, a univariate criterion, rapidity, optimized solutions; on the other side variance, exploration, awareness, attention, exceptions, multiple criteria, delay, and the assessment of alternatives.
So as Koch said, consciousness is useful because « life sometimes throws you a curveball! »
According to this interpretation, consciousness is a necessary component of high-level cognitive functionality; in other terms, I refute the possibility of a zombie having cognitive capabilities equivalent to those of a conscious being, as I don’t believe that a too-biased learning agent could be effective in the long run in an ever-changing world. Not being conscious would reduce its functionalities to automatic, biased learning or control processes, making the resulting behavior constrained to limited objectives, settings and criteria. Though a zombie could emulate, for a short time and in specific contexts, the activities of a conscious agent, I believe that it is relatively easy for a conscious being to recognize that the zombie is only simulating intelligence (or at least conscious intelligence) and unmask it.
Think for example how easy it is for a young child to expose the limitations of a highly expensive robot after just a few minutes of free interaction.
Think now about the rising success of self-driving cars, and the fact that people often drive their car a very long way only to realize that they were thinking of something else. The growing deployment of self-driving cars seems to confirm that consciousness is not necessarily needed to implement the driving functionality in conventional settings. Think now about the dramatic eventuality of a car deciding, in a fraction of a second, between two potential victims during an accident: no automatic algorithm would be considered adequate to deal with such an ethical issue, and we would be uncomfortable dictating to the robot behavior rules for acting in such a context. We are indeed entering the domain of consciousness, where the conventional mechanistic way of proceeding is no longer relevant.
I consider all these examples as evidence of the impossibility of attaining a high level of cognitive capability without consciousness, just as it is impossible to attain knowledge with a biased, constrained, precooked modeling approach.