Special thought in memory of I. Prigogine

When I heard of the decision of the Catalan parliament to get rid of central state domination, an image popped into my mind,


and when I heard that most banks were leaving Barcelona, I rather thought of an image of a wall…

Forty years after Prigogine's Nobel Prize and 100 years after his birth, I think it is time that politicians learn that the world is complex and nonlinear.


Unknown unknowns: a data science perspective

If you have forgotten Rumsfeld's famous quote on “unknown unknowns”, it is time to read its interesting history on Wikipedia. What I want to discuss here is that the related (in)famous matrix of knowledge can also be interpreted from a data science perspective.

According to the matrix of knowledge, we can distinguish between:

Known Knowns (KK)   | Known Unknowns (KU)
Unknown Knowns (UK) | Unknown Unknowns (UU)

 

To adopt the data science perspective, I will consider here that the target of the knowledge process is a random variable Y. Knowing Y can be interpreted in two ways:

  1. in a predictive perspective, where knowing Y means reducing the uncertainty of Y thanks to a set of explanatory variables X: in information theory this boils down to finding some variables X bringing information about (i.e. reducing the uncertainty of) Y, i.e. such that the mutual information I(X;Y) is greater than zero. For readers unfamiliar with the notation, by I(X;Y) I mean here H(Y)-H(Y|X), i.e. the reduction in the uncertainty of Y once X is observed (a minimal numerical sketch follows this list);
  2. in a causal perspective, where we look for causal variables X, i.e. variables that, once manipulated, change the distribution of Y. We could denote this asymmetric causal relation by I(X->Y).
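
To make the predictive notion concrete, here is a minimal sketch (my own addition, not part of the original post, assuming discrete samples and numpy) of how the plug-in estimate Î(X;Y) = H(X) + H(Y) - H(X,Y), equivalently H(Y) - H(Y|X), can be computed from empirical frequencies:

```python
from collections import Counter
import numpy as np

def entropy(counts):
    """Shannon entropy (in bits) from a Counter of observed symbols."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete samples."""
    return entropy(Counter(x)) + entropy(Counter(y)) - entropy(Counter(zip(x, y)))
```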

For the sake of simplicity I will not distinguish between the two cases here and will limit myself to the predictive one.

Our degree of knowledge is instead related to the correspondence between our estimate Î(X;Y) and reality: we know when Î(X;Y) agrees with I(X;Y).

Given these premises, we can distinguish the four following cases (a small sketch of how to test the significance of Î(X;Y) follows the list):

  • KK: as an example, consider the laws of mechanics. Y (e.g. a planet's position) is measurable, X (the gravitational force) is measurable too, and X provides information about Y, i.e. I(X;Y)>0. Furthermore, we have a good estimate Î(X;Y) of the mutual information I(X;Y), and this estimate is significantly larger than zero. We are then reasonably certain that I(X;Y)>0. There is something to know (I(X;Y)>0) and we know it (Î(X;Y)>0).
  • KU: as an example, consider the understanding of cancer onset (Y): we know that we don't know its causes. We are not able to find (causal or predictive) variables X such that I(X;Y)>0. Nevertheless, we may have access to other variables Z for which we have enough evidence that I(Z;Y)=0: in other words, we have a good estimate of I(Z;Y), but this estimate is not significantly larger than zero. Think also of the dependency between the price of a stock today and its price in 15 days, assessed on the basis of a very long historical record. We have enough evidence from data that, whatever our effort, we cannot predict better than chance. We are reasonably certain that I(Z;Y)=0.
  • UK: as an example, consider the laws of mechanics before the Copernican revolution. Scientists had access to sufficient data to infer the dependencies but were not able to do so. In the UK setting there are some variables X and Y for which I(X;Y)>0 but, because of our lack of data or our fallacious inferential method, we either disregard X or assume that Î(X;Y)=0. We are not able to prove that the regularity exists.
  • UU: as an example, consider possibly wrong models that we take for correct, or fallacious reasoning (e.g. spurious causal relationships) that we take for granted. In this case we are considering variables X for which I(X;Y)=0 but, because of our lack of data or our fallacious inferential method (e.g. selection bias, Simpson's paradox, overfitting, bad assessment of uncertainty), we deem that Î(X;Y)>0. At the same time it could be that I(Z;Y)>0 but either we do not have access to Z or we wrongly deem that Î(Z;Y)=0. We don't know (i.e. our estimate is wrong) that we don't know (i.e. we take the wrong dependency or regularity for granted). This is the situation typical of black swans (e.g. financial crises). We cannot forecast them because the variables X we are taking into account are not the right ones (Z). In other terms, we wrongly believe that some variables (e.g. bank profits) have an impact on our phenomenon of interest (e.g. the stock market) and we disregard the important variables (subprime loans).
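
The distinction between “significantly larger than zero” and “not distinguishable from zero” can be made operational with a resampling test. Here is a hedged sketch of my own (assuming discrete variables and scikit-learn): if the observed Î(X;Y) does not exceed what random shuffling of the pairing produces, we have no evidence that there is something to know.

```python
import numpy as np
from sklearn.metrics import mutual_info_score  # plug-in MI estimator for discrete labels

def mi_permutation_test(x, y, n_perm=1000, seed=0):
    """Return Î(X;Y) and a p-value for the null hypothesis I(X;Y) = 0."""
    rng = np.random.default_rng(seed)
    observed = mutual_info_score(x, y)  # Î(X;Y) on the true pairing
    null = np.array([mutual_info_score(rng.permutation(x), y) for _ in range(n_perm)])
    pvalue = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, pvalue
```

A small p-value puts us in KK/UK territory (there is something to know); a large one leaves us, at best, in KU.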

Self-driving cars, the highway code and clinical trials

Changing the highway code to speed up the introduction of autonomous vehicles sounds like getting rid of clinical trials to speed up the commercialization of new drugs… Is this really what people want? https://lnkd.in/eG3WQPF https://lnkd.in/erjqvfB https://lnkd.in/eePC7fz

By changing the code we are directly authorizing the testing of a potentially dangerous technology in the final user environment. This is in principle not allowed for drugs (or similar technologies): they have to pass a certain number of clinical trials https://en.wikipedia.org/wiki/Clinical_trial and approvals before being tested in real conditions, without forgetting that a clinical trial is typically started only after a peer review process (e.g. scientific publication) has been passed. Clinical study design aims to ensure the scientific validity and reproducibility of the results, since only 10 percent of all drugs entering human clinical trials become approved drugs.

At this stage I have not yet seen either sufficient scientific validity or reproducibility in the autonomous car domain. We are probably at a stage very similar to the preliminary publication stage. For drugs this means (in the best case) that you are still tens of years away from commercialization. In the self-driving car case, if the code is changed now, we will directly allow testing in real conditions by commercial actors who have not passed any scientific or protocol procedure and whose reproducibility is not yet proven. If we think that probably only a small percentage of them deserves the authorization to drive in a real environment, is it really worth taking the risk?

Overall I have the feeling that, under excessive economic pressure, we are trading safety for commercialization: this is a very delicate issue that cannot be addressed without accurate assessment. Unfortunately, autonomous cars will have as many deleterious side effects as drugs: before changing the law, let us create a serious protocol for the assessment of algorithms and solutions. The debate is open…

Atemporal truth, science and moving target

Is the pervasive use and effectiveness of Data Science in many scientific domains leading to the rejection of an atemporal truth as the final target of scientific endeavor?
Even if an atemporal truth existed, the methodological approach of Data Science brings a lot of arguments about the objective impossibility of attaining it on the basis of empirical observations. The adoption of a model is due more to its usefulness than to its degree of absolute certainty. Models describing a phenomenon of interest change and evolve as new measurements are collected, new variables are measured, new validation opportunities are discovered, new computational resources become available and new objectives are set.
The degree of quality of a model is more a conditional property, depending on practical and historical contexts, than an absolute value measuring its presumed distance from the truth. Consider all the models and laws describing a biological organism before and after the invention of measurement devices at the genomic and genetic level: we cannot consider the models before the advent of genomic data as better or worse than the ones created afterwards. These two families of models are simply incommensurable: they refer to two different descriptions of the phenomenon of interest, one containing the notion of genes while the other does not.

However, we already knew that “all models are wrong”; what is still more intriguing is: are we sure we are still talking about the same phenomenon? Once we introduce the notion of genes into our ontology, is the organism we are referring to still the same? Is the phenomenon of interest the same? Of course, what we perceive with our human senses is the same (i.e. the ontology of our senses is the same), but is the object of our scientific endeavor the same entity as before? Even if we are willing to believe in some atemporal true description of the phenomenon, we cannot deny that the introduction of a new measurement technology (or sensor) has also changed the definition of what the phenomenon is. So if our object of interest changes with the measurement device, can we expect an atemporal definitive truth?

An important debate in the philosophy of science concerns the evolution of science: is progress cumulative, or do we witness a discontinuous path characterized by paradigm shifts? From a data science perspective, where the phenomenon of interest is defined by the data we measure, is it still possible to define science as an asymptotic journey converging to the definitive truth if the target itself is moving?

Asimov and self-driving cars

Asimov stated the extended version of the well-known Laws of Robotics  as follows:

  1. Law 1: A tool must not be unsafe to use. It is of course possible for a person to injure himself with one of these tools, but that injury would only be due to his incompetence, not the design of the tool.
  2. Law 2: A tool must perform its function efficiently unless this would harm the user.

Let us now suppose that the tool is a robot acting in a real setting with finite knowledge of its environment (or, equivalently, in a condition of uncertainty). If this robot is a self-driving car, it is expected to act in a closed-loop setting, i.e. to perform a control action continuously. As a rational decision maker, the intelligent robot is expected to:

  1. assess the probability of the potential outcomes of its actions,
  2. assign a cost to each outcome, notably, in a binary case, the costs of false positives and false negatives,
  3. take the action which minimizes the expected cost or maximizes the expected benefit (a toy sketch of this rule follows the list).
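
Here is a toy sketch of this decision rule (my own illustration: the actions, outcome probabilities and costs are invented for the example and do not come from any real robotics stack):

```python
def best_action(actions, outcomes, prob, cost):
    """Pick the action minimizing expected cost: argmin_a sum_o prob[a][o] * cost[a][o]."""
    expected = {a: sum(prob[a][o] * cost[a][o] for o in outcomes) for a in actions}
    return min(expected, key=expected.get)

actions, outcomes = ["act", "wait"], ["harm", "no_harm"]
prob = {"act":  {"harm": 0.01, "no_harm": 0.99},   # acting carries a small risk of harm
        "wait": {"harm": 0.0,  "no_harm": 1.0}}
cost = {"act":  {"harm": 1000.0, "no_harm": 0.0},  # harm is costly, success is free
        "wait": {"harm": 0.0,    "no_harm": 5.0}}  # waiting forgoes the benefit of acting
print(best_action(actions, outcomes, prob, cost))  # -> "wait" with these numbers
```

Setting cost["act"]["harm"] to float("inf") reproduces the deterministic reading of Asimov's laws discussed below: no action carrying any probability of harm, however small, is ever taken.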

In a situation affected by uncertainty (by the way, are you aware of any situation deprived of it?), a rational agent asked to choose between “action” and “no action” should always take the option which minimizes the cost of failure or, equivalently, maximizes the gain.

Asimov's laws, on the contrary, do not foresee any uncertainty. They state that a robot MUST not be unsafe, not that it should maximize the degree of safety. No probability of failure (e.g. of harming a human) is foreseen. The logical consequence of his laws is that the cost of a false positive (taking an action which seems good but is indeed harmful) is infinite. Asimov's laws are deterministic; reality is stochastic.

What is interesting is that Asimov's laws seem self-evident to any science fiction reader as well as to any human being: a robot should not harm, otherwise it is better not to rely on it. Humans tend to forget uncertainty, but they live in an uncertain world, constantly perform actions in uncertain settings and have a non-negligible failure rate.

Every second, human drivers make errors, unfortunately sometimes mortal ones, but no one would consider it acceptable to forbid humans from driving because of their mistakes or the number of casualties. We accept that human decision makers do their best to minimize losses; are we ready to accept robots making the same errors (while minimizing losses)? If we adopt Asimov's laws, the answer should be no: no error is tolerated.

Consider now the following situation. In 2019 Elon Musk wants to show Di Maio (the new Italian prime minister) the value of his brand-new self-driving Tesla car, not only for wandering in the Nevada desert, but also for equipping the Italian police with a smart ally to fight drug dealers riding mopeds in the crowded streets of central Naples. After a smooth start, the Tesla suddenly gets stuck and has no intention of moving further: the car seems afraid to deal with an unceasing flow of motorbikes passing it on the left and right and a crowd of “scugnizzi” in blue T-shirts playing football on the sidewalk. As coded in its software, a self-driving Tesla must perform its function efficiently unless this would harm the user. Elon knew it was a great occasion for Tesla; no error was allowed, and he had asked his Texan engineers to stay on the safe side of the decision. Unfortunately, Texan engineers don't know that driving in Naples is a risky business for brave people. No risk means no gain. Zero (human) pain will mean no gain either.

End of the story: Asimov's laws have been fulfilled. The car could move autonomously only after 8:30 PM, when the match Napoli-Roma began and everyone went home. The expected signature of the contract is postponed to the future. Di Maio exits the car, extremely disappointed but very comforted in his heart (“I knew that a US MA in Physics is not worth an Italian taxi driver without a degree”)… and Elon tweets on the US president's profile to lobby in favor of changing the status of Asimov's laws: it is definitely time to move to stochastic laws.

Truth is a model

The most common misunderstanding about science is that scientists seek and find truth. They don’t. They make and test models…. Making sense of anything means making models that can predict outcomes and accommodate observations. Truth is a model. (Neil Gershenfeld, American physicist, 2011)

 

Why we cannot stand “conscious robots”

Robots are starting to be everywhere, and humans are afraid of losing their role (as well as their jobs). But the best human revenge over robots is to grant them no consciousness. They can do wonderful (or horrible) things but, whatever they do, they are (and possibly will) not (be) conscious. They mine big data for us, they clean for us and they will drive for us but, no way, they will do it without consciousness. The rationale of this post is simple: humans have problems defining consciousness but feel extremely confident in denying it to other entities. So let's try to define consciousness by focusing on why we deny it to others, in this case robots. It is easy to deny consciousness to rocks and simple organisms, on the basis of their nonexistent or irrelevant behavior, but why are we so sure in denying consciousness to robots? Since, when we talk of consciousness, we refer to human consciousness, why do robots not have it (yet)?

My answer is simple: we deny consciousness to any behavior or functionality that can be described as automatic. So being conscious should mean, first of all, “not being automatic”. In Dennett's words, we could define automatic as “something that has competence without comprehension”. The behavior can be complex and astonishing (think of Google's recognition of image features) but, since it is somehow programmed, it is necessarily not conscious.

So the idea is the following: let us define better what is automatic and, conversely, we will define better what is conscious.

I will list below what I consider peculiar to an automaton or automatic agent. Note that I will not put the fact of being programmed in this list, since automatic behavior can be the result of a long evolutionary process and not necessarily the output of a programming effort. Also, I will focus on agents, i.e. entities which take actions and evolve in time. So, in my opinion, the most peculiar aspects of an automatic agent are (see the sketch after this list):

  1. A well-defined and accessible cost function, which maps an internal state to a degree of cost (or gain)
  2. An accurate observation of the current state
  3. An accurate model of the impact of an action on the future state: note that accurate means that it is able to predict not only the effect of a single action but of an entire sequence of actions
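
A hedged sketch of such an automatic agent, phrased as lookahead planning over a perfectly known model (my own illustration; the cost and transition functions are assumed given, exactly as points 1-3 require):

```python
def automatic_act(state, actions, transition, cost, horizon=3):
    """Pick the action minimizing predicted cumulative cost over `horizon` steps.

    cost(s, a): the well-defined cost function (point 1); `state` is assumed
    perfectly observed (point 2); transition(s, a): the accurate model of the
    next state (point 3). Nothing needs to be remembered: the agent just acts.
    """
    def lookahead(s, h):
        if h == 0:
            return 0.0
        return min(cost(s, a) + lookahead(transition(s, a), h - 1) for a in actions)

    return min(actions, key=lambda a: cost(state, a) + lookahead(transition(state, a), horizon - 1))
```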

Overall, an automatic agent is a dynamic agent that knows (or pretends to know) a lot about itself and the surrounding environment. This self-confidence is precisely what allows it to proceed in an automatic and rapid manner: it does not need to remember or store much of its past or its present, it simply needs to act.

All this appears to humans incredibly (or excessively) simple: humans do not have a simple and well-defined cost function, just as they have no access to a perfect observation of their state or a clear idea of the impact of their actions on their future state. Think, for instance, of the decision to get married: how many doubts, trade-offs, considerations and hesitations crowd our minds. Think instead of a self-driving car which, in a situation of inevitable accident (e.g. it has to take the gruesome decision of running over either a group of old walkers or a toddler), nevertheless takes a decision rapidly: it is able to do that only because all the uncertainty has somehow been removed. The robot can be fast and automatic because the myriad cost functions (ethical, practical, economical) that could pop up in a human mind in the same situation are condensed and summarized into a single, certain one.

So what? If all this is considered by humans as not conscious, is it not possible that consciousness is precisely the (human) functionality developed to manage all the configurations which cannot be addressed in an automatic manner? In other terms, it is well known that humans do myriads of things in an unconscious (i.e. automatic) manner. We could argue that evolution led us to solve a lot of tasks in an automatic manner, or equivalently that we have been able to address many functionalities in a robot-like manner. For many others we have not (yet) been able to; this can be due to several reasons: limited sensory capabilities, limited intelligence, excessive complexity of the task to be solved (e.g. related to the fact that we live in multi-agent communities). So, though a big part of us acts automatically and unconsciously like a robot, all the rest still requires consciousness. Now the question is: why is this the case? Why could consciousness help in addressing situations that we are not able to address in an automatic way?

Quoting Dennett: “…the puzzle today is ‘what is consciousness for (if anything)?’ if unconscious processes are fully competent to perform all the cognitive operations of perception and control”.

My answer, as a data scientist, is: data collection. Consciousness could be a way to alert memory functionalities to store situations (and related actions) in which we are aware that an automatic procedure is not good enough (for us as individuals or for our community). Since we are not yet able to solve these automatically, why not simply store the configuration while waiting to learn (sooner or later) what to do? So conscious states could (also) be data collection phases, with the aim of training a learning capability that could be helpful in the future. Think about driving a car: the amount of consciousness during our first driving days is far greater than after ten years of holding a licence. The task has been learned and made automatic: consciousness is no longer necessary.
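
A speculative sketch of this “consciousness as data collection” loop (entirely my own framing; the policy interface and the confidence threshold are invented for illustration):

```python
def step(state, policy, memory, confidence_threshold=0.9):
    """Act automatically when confident; otherwise also record the situation for later learning."""
    action, confidence = policy(state)  # the fast, automatic, learned component
    if confidence < confidence_threshold:
        memory.append((state, action))  # the "conscious" episode, stored as future training data
    return action
```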

So, will robots ever have consciousness? If we design a robot to solve a number of tasks automatically, it means that we think these tasks are sufficiently well defined not to require consciousness. This is how we conceive robotics nowadays: using robots implies (implicitly or explicitly) that the task is well defined. Consider, therefore, what a strong assumption is made when we accept the use of autonomous robotic warriors (drones, …)…

Once robots are used to address ill-defined tasks (e.g. managing an interstellar mission of interaction with new species), consciousness functionalities will be needed.


Data science: an argument for contingentism?

I recently discovered a fascinating debate in the philosophy of science about the contingentism/inevitabilism issue (see this book). Quoting Lena Soler, one of the authors of the book and a major expert in this domain, the debate is roughly this: “Could science have been otherwise? Could it have been dramatically different from science as we know it today? Is there something inevitable in a sound scientific enterprise? Could we have developed an alternative successful science based on different notions, conceptions, results?” Note that the aim here is not to discuss the importance of some exotic pseudo-science, but to reason about whether the scientific notions, concepts and techniques we use today are really necessary, or whether a different scientific path (i.e. still scientific in Popper's terminology) would have been possible.

In my opinion, it is quite natural to conceive that science, being a human enterprise, could have evolved in a very different manner. How many choices, decisions, conclusions (or Nobel Prizes) in the scientific world have been dictated by contingencies, social aspects, historical contexts, politics, economics or nationalistic considerations? Was all of that inevitable? For instance, think simply of the obscure fate that could await today a revolutionary article unfortunately written in very bad English…

This debate evoked in me some considerations about the role of data science in all this. The success of data science is living proof that, starting from the same (or very similar) premises, modeling can have multiple, heterogeneous outcomes. Think, for instance, of addressing a scientific prediction problem in a data-driven manner, and consider the overabundance of techniques, methods and algorithms that you could use to solve it. From a data scientist's perspective, contingentism is pure evidence: the same problem can be tackled in many different manners, yet with roughly the same accuracy from an external prediction perspective. So the question arises spontaneously: what would have happened if Kepler or Newton had had the same attitude (or better computational power)? Would the gravitational laws have the same form, the same aspect? Would notions like mass and gravity be the same? Would the consequent course of science be the same?
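
The “many models, roughly the same accuracy” observation is easy to reproduce. A hedged sketch of my own (the dataset and the three model families are arbitrary choices, assuming scikit-learn is available):

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

X, y = load_diabetes(return_X_y=True)
# Three very different "scientific paths" applied to the same prediction problem
for model in (Ridge(), RandomForestRegressor(random_state=0), KNeighborsRegressor()):
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{type(model).__name__:>22}: mean R2 = {scores.mean():.3f}")
```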

Also, is data science not a formidable way of replaying the history of science? What kind of scientific product would a present-day data scientist return if confronted with the same experimental evidence as the renowned scientists of past centuries?

Bias/variance interpretation of consciousness

Every philosopher (or scientist) is a slave of some formalism and tends to apply it as much as possible to every aspect of reality. This is indeed a form of bias, and my own bias, recently, is that I tend to interpret everything in terms of bias/variance… So why not push this to the extreme and apply it to nothing less than the hardest problem of science and philosophy? Consciousness, as simple as that…

In particular, I will aim here to address issues like: does consciousness really exist, what is its function, may robots have one, and so on…

Let's go straight to the end of my reasoning: we can use the bias/variance formalism, useful for describing any learning procedure, to support the idea that consciousness is not only an epiphenomenon but rather a necessary component of every rational cognitive process. In particular, consciousness is required for interacting with a complex, multivariate, multi-agent and uncertain reality where the criterion of effectiveness/success of such interaction is itself complex, multivariate, uncertain and dependent on others too.

In other terms, what I claim is that surviving in our reality requires any intelligent agent to roughly decompose its intelligent activities into two parts: a part (made of several submodules if needed) which can be addressed as a problem of optimization (according to a univariate cost function) and implemented similarly to a fast regulator or automatic controller; and a second part which has to deal with exploration, exceptions, multiple criteria, interaction, uncertainty and adaptation.

As far as the first part is concerned, think for instance of the visuomotor control loops which allow us to survive every day in a hostile environment thanks to their fast and unconscious regulation, able to monitor, control or optimize specific tasks. This part corresponds to the bias component of a cognitive effort: a stable, situated and limited module aiming to exploit some previously acquired competence in a specific application domain. This module is rapid and effective as long as the application domain is respected and the addressed cost function is the one of interest.

 

Let me quote Christof Koch from his book “Consciousness: Confessions of a Romantic Reductionist”:

“The mystery deepens with the realization that much of the ebb and flow of daily life does indeed take place beyond the pale of consciousness. This is patently true for most of the sensory-motor actions that compose our daily routine: tying shoelaces, typing on a computer keyboard, driving a car, returning a tennis serve, running on a rocky trail, dancing a waltz. These actions run on automatic pilot, with little or no conscious introspection. Indeed, the smooth execution of such tasks requires that you not concentrate too much on any one component.”

 

Consciousness boils down to all that cannot be dealt with in this manner, in other terms to all that escapes the domain of automatic, fast yet biased servomotor modules. No-free-lunch theorems have shown that no optimization strategy or model is optimal for all distributions. Any biased approach, though effective in its own application domain, is doomed to failure (or, better, to low performance) in a complex world which cannot be interpreted in the light of a single criterion or a single cost function.

So the two facets of our cognitive process address different aspects: on one side bias, exploitation, unconsciousness, automation, regularity, a univariate criterion, rapidity, optimized solutions; on the other, variance, exploration, awareness, attention, exceptions, multiple criteria, delay, assessment of alternatives (a toy sketch of this split follows).
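
In machine learning terms, this split echoes the exploitation/exploration dilemma. A toy sketch (my own analogy, not the post's):

```python
import random

def epsilon_greedy(values, epsilon=0.1):
    """values: list of estimated gains per action; explore with probability epsilon."""
    if random.random() < epsilon:
        return random.randrange(len(values))                # variance side: explore alternatives
    return max(range(len(values)), key=values.__getitem__)  # bias side: fast, automatic exploitation
```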

So, as Koch said, consciousness is useful because “life sometimes throws you a curveball!”

According to this interpretation, consciousness is a necessary component of high-level cognitive functionality; in other terms, I reject the possibility of a zombie with cognitive capabilities equivalent to a conscious being's, since I do not believe that a too-biased learning agent could be effective in the long run in an ever-changing world. Not being conscious would reduce its functionalities to automatic, biased learning or control processes, constraining the resulting behavior to limited objectives, settings and criteria. Though a zombie could emulate, over a short time and in specific contexts, the activities of a conscious agent, I believe that it is relatively easy for a conscious being to recognize that the zombie is only simulating intelligence (or at least conscious intelligence) and to unmask it.

Think, for example, how easy it is for a young child to expose the limitations of a highly expensive robot after just a few minutes of free interaction.

Think now about the rising success of self-driving cars, and about the common experience of driving a very long way and realizing that you were thinking of something else the whole time. The growing deployment of self-driving cars seems to confirm that consciousness is not necessarily needed to implement driving functionality in conventional settings. Think, however, about the dramatic eventuality of a car deciding in a fraction of a second between two potential victims during an accident: no automatic algorithm would be considered adequate to deal with such an ethical issue, and we would be uncomfortable dictating to the robot behavioral rules for such a context. We are indeed entering the domain of consciousness, where the conventional mechanistic way of proceeding is no longer relevant.

I consider all these examples as evidence of the impossibility of attaining a high level of cognitive capability without consciousness, just as it is impossible to attain knowledge with a biased, constrained, pre-cooked modeling approach.

Another related consideration is provided by Stephen J. Gould (1977): “Animals become too committed to the peculiarities of their environment by evolving a fine-tuned design for a highly specific mode of life. They sacrifice plasticity for future change. Neoteny (i.e. the human property of starting postnatal life in a less mature state than animals) shifts the emphasis from instincts to the learning process as the dominant factor in the acquisition of the organism's survival and coping skills.”


Bias/variance gnosiology

We learn only when we create a regularity, and all that remains from our learning efforts is some sort of comfortable simplification. Now, reality escapes or diverges from our regular expectations every time we want to use or enforce them to explain or predict the course of nature. Faced with the inescapable gap between our regular Eden and the natural hell of observations, we can take two extreme attitudes: negate or discredit reality, reducing all divergences to some sort of noise (measurement error), or try to incorporate discordant data and measures into our model. Of course, a continuum of intermediate positions is possible between these two extremes, and it is conceivable that we change/adapt our strategy according to the context, the topic, our age or our mood.

However, this post supports the idea that a large part of our approach to the understanding of reality can be simplified (again a regularity 🙂) by making explicit how we position ourselves in this range between the ideological defense of our model and the acceptance of the confutation power of data. This trade-off is well known in (frequentist) statistics, where the process of estimating models from data is described in terms of the bias/variance trade-off. An estimator is a generic name for whatever function/algorithm brings us from data to an estimate: we can generalize here to any data/observation process returning some sort of model, regularity or belief.

A biased estimator is typically an estimator which is insensitive to data: its strength derives from its intrinsic robustness and coherence, while its weakness originates in the (in)sane attitude of disregarding data or incoming evidence. A high-variance estimator adapts rapidly and swiftly to data and observations, but it can easily be criticized for its excessive instability.
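
A minimal numerical sketch of this contrast (my own illustration: a mean shrunk toward zero plays the biased estimator, the plain sample mean plays the high-variance one, and the two are compared by Monte Carlo mean squared error):

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, n, trials = 1.0, 10, 10_000
se_plain, se_shrunk = [], []
for _ in range(trials):
    sample = rng.normal(true_mean, 3.0, size=n)
    se_plain.append((sample.mean() - true_mean) ** 2)         # unbiased, but variable
    se_shrunk.append((0.5 * sample.mean() - true_mean) ** 2)  # shrinking adds bias, cuts variance

print("MSE of the plain mean :", np.mean(se_plain))   # dominated by variance (about 0.9 here)
print("MSE of the shrunk mean:", np.mean(se_shrunk))  # bias traded for lower variance (about 0.5)
```

With these numbers the biased estimator wins; with a larger sample the plain mean would. Which side is better depends, as always, on the context.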

So, nothing really new, but I sometimes feel delighted in mapping attitudes, beliefs and ideologies onto this trade-off (definitely another illusion of almighty regularity), or in characterizing/explaining differences in terms of this classification.

Bias/variance tradeoffs
On the biased side of the world | On the variance side of the world
Right-wing | Left-wing
Old | Young
Parent | Son
Idealism | Empiricism
Self-confident | Doubtful
Optimist | Pessimist
Reformist | Revolutionary
Woytila | Bergoglio
German football team | Italian football team
Classical art | Modern art
Academia | Université du peuple
Official press | Social networks
European institutions | Populism
Mainstream science | Scientific breakthrough
Mathematics | Statistics
Parametric statistics | Nonparametric statistics
Expert driven | Data driven
Faithful | Playboy
Boring | Charming
Bill Gates | Steve Jobs
Long-term | Short-term
Conventional | Breakthrough
Official medicine | Homeopathy
Apple | Start-up
Book | Webpage
Raiuno | Raitre
Classic music | Rock
Rock | Rap
Risk-averse | Risk-taker
Orthodox | Unconventional
Dogma | Unconventional
Aristotle | Galileo
Formal | Informal
Descartes | Popper
Manzoni | Leopardi
Idealism | Relativism
Truth | Opinion
Linearity | Nonlinearity
Simplicity | Complexity
Certainty | Doubt
Exploitation | Exploration
Communist | Populist
Automatic | Conscious (?)
Heuristics | Unbounded rationality

And now up to you…

PS. OK, but after all, is there a better side to be on? Hmm, if you think there is, welcome to the biased side ;-). If you think it depends, welcome to the variance side of the world.