Friday, May 02, 2014

The Human Condition : Free Will

What exactly is the human condition?

Wikipedia says the human condition encompasses the unique features of being human.
It has some suggestions... "meaning of life, the search for gratification, the sense of curiosity, the inevitability of isolation, and the awareness of the inescapability of death."

Popular culture and art have always bothered themselves with this question. What does it mean to be human?

Movies like The Matrix have addressed this in spades, pitting machines against humanity. Fallibility, and the potential to outperform logical perfection, is what supposedly separates humans from machines. The mechanism behind this weakness and strength is said to be the ever-uncharacterizable free will, something no artificial intelligence can replicate.
Free will breaks all computer programs!!
Free will is one of the biggest tropes in artistic definitions of humanity, followed closely by the fighting human spirit and human mortality. Love is the other contender (the undisputed winner in cinema, at least). In a more religious context, the soul is the signature of humanity. Vital essence.

I will ruminate on the various aspects of the human condition in multiple posts. This is the first one, concerning free will.

Is free will human? Can a machine have free will? And what is free will anyway? 

Movies have popularized the notion that a machine would calculate the probabilities of all the options at hand and do the optimal thing, whereas humans are governed by emotional responses, causing them to fight for losing causes and make irrational decisions based on free will.

I am not sure that is true. Consider an example from a movie most people reading this would know: I, Robot (a hugely entertaining movie, even if it's not Oscar material).
Will Smith has punched many a non-human face.

Will Smith's character doesn't like robots because, in an accident long ago, a robot saved his life rather than a 12-year-old girl's. The robot calculated that Smith had a greater chance of survival than the girl, so given the constraint that only one rescue could be attempted, Smith was the logical choice. Free Will Smith thinks the girl's life was worth more regardless of the probabilities. Would a human have made a different choice? And would making the other choice be a defining human feature? I don't think so.

Assuming the robot's calculation is correct, if an impartial human were given the same information, why would he make a different choice? If said human makes the other choice without this information, then his decision is based on ignorance rather than any triumphant free will. In this case, a human might have made the incorrect assumption that Smith, a grown man, could perhaps save himself, and gone after the girl instead. As a result, Smith would die, and with high probability the girl as well. If the human were able to calculate all these scenarios, then undoubtedly he would also make the choice the robot made.

If the girl's father had been trying to save them, he would have gone for the girl regardless, even with full knowledge of the probabilities. But is that irrational? He is only trying to minimize his loss: even though the chances of saving the girl are lower, her loss would be much greater. If he also ends up dying in the process, we might say the act was irrational, but is it? If he could calculate exactly what he could do to save her, he would give up the moment the probability of saving her drops to zero and save himself. Failure to do this is most likely due to his inability to calculate this information... ignorance, not irrationality. Even if he voluntarily gives up his life to keep trying for a lost cause, it might only be because the anticipated loss of his daughter's death is so great that his own survival is "perceived" to be of no consequence.
Catbert knows all.
Ignorance is obviously not a human virtue; it is simply a limitation, one that can be imposed on machines as well. So the valuation system... could that be the source of the humanity of free will? The emotional connections that humans can form with objects, animals, or other humans, which make them act in ways no robot would... is that humanity? The idea that the loss of a loved one could be so devastating that self-preservation is no longer a priority is a perception unlikely to be shared by many logic hierarchies. But that is a subjective matter: one human might be able and willing to form such a bond, while another may not. So are less emotional people less free? Or is free will more fundamental than that, bound to the basic mechanism of how we make decisions?

Maybe... so let me consider an alternate, hypothetical way to explain seemingly irrational behavior while still preserving rationality. 

In the classical physics version of the world, the version we believed before quantum mechanics, if one knew the exact positions and velocities of every single particle in a system at any one time, then one knew the entire future and past of the system with complete certainty. That should apply to humans as well: our every decision could be predicted perfectly if Laplace's demon were able to determine the current state of our brain and the environment. So how can there be free will?

But quantum physics says position and momentum cannot both be determined exactly at the same time, meaning Laplace's demon is a fraud. He cannot predict the future with certainty because he cannot know the present completely. But here's the catch... he is not completely impotent... he can know the present probabilistically. This means he can know the future as well... probabilistically. Let me explain the difference by applying it to the case above.

Say the robot calculates that the probability of saving Smith is 50% and the probability of saving the girl is 25%. Since 50 > 25, the robot chooses to save Smith, no ifs and buts about it. Now say the same situation is presented to a different robot... a quantum robot, or a Qbot. Its brain works differently: probabilistically. Given the same information, it chooses whom to save at random, weighted by the odds: with probability 50/75 = 2/3 the Qbot will save Smith, but with probability 25/75 = 1/3 it will save the girl instead. This is a huge difference. It means that given the same situation and the same information, roughly 1 in 3 Qbots will act "irrationally". This is not some hypothetical idea; we can very easily code robots to behave in this probabilistic manner.
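A minimal sketch of both decision rules, using the hypothetical survival probabilities above (the function names `classical_choice` and `qbot_choice` are my own, purely for illustration):

```python
import random

def classical_choice(p_smith, p_girl):
    """Deterministic robot: always pick the option with the higher
    survival probability."""
    return "Smith" if p_smith >= p_girl else "girl"

def qbot_choice(p_smith, p_girl, rng=random):
    """Probabilistic 'Qbot': pick each option with probability
    proportional to its survival chance (2/3 vs 1/3 here)."""
    return "Smith" if rng.random() < p_smith / (p_smith + p_girl) else "girl"

# The classical robot makes the same choice every single time.
assert all(classical_choice(0.50, 0.25) == "Smith" for _ in range(1000))

# Roughly 1 in 3 Qbots choose the girl, even with identical inputs.
rng = random.Random(2014)
picks = [qbot_choice(0.50, 0.25, rng) for _ in range(100_000)]
print(picks.count("girl") / len(picks))  # close to 1/3
```

Note that `qbot_choice` here still runs on ordinary pseudo-randomness; it only mimics the quantum behavior, a distinction that matters later.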

In some interpretations of quantum physics, consciousness has a role to play, unlike in any scientific theory that came before: it is the entity that collapses a probabilistic object into a deterministic one through the act of measurement. Is that not precisely what the Qbot in the previous example is doing? The looming decision existed in its brain, encoded in the prior probabilities. Then, when the moment for action dawns, it collapses the state and picks one of the two options: save Smith or save the girl. Once it decides, there is no longer a probability; the Qbot has "decided" on one course of action. And what's more, the action seemingly comes from free will, since no one... not even the people who programmed the bot... can predict which decision any given Qbot will make.

So is free will simply probability? Is consciousness just a probabilistic computer?

True randomness cannot be achieved by a classical computer; it can only be approximated. If our brain is indeed a probabilistic classical computer, then we have only the illusion of free will... a close approximation. Even seemingly random decisions have some hidden reason which, if known, would make us predictable.
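To illustrate, a classical pseudo-random generator looks random but is fully determined by its internal state (the seed); anyone who knows that hidden state can reproduce every "decision" exactly. A small sketch using Python's standard `random` module:

```python
import random

# Two "brains" initialized with the same hidden state (the seed).
brain_a = random.Random(1234)
brain_b = random.Random(1234)

# Their "random" decisions are identical, step for step: knowing the
# hidden reason (the seed) makes every choice perfectly predictable.
decisions_a = [brain_a.random() for _ in range(5)]
decisions_b = [brain_b.random() for _ in range(5)]

assert decisions_a == decisions_b
print(decisions_a == decisions_b)  # True
```

This is the sense in which a classical "probabilistic" brain would only have the illusion of unpredictability.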

Or perhaps our brains are different from any computer we can currently build. A quantum system yields a truly random output, not an approximation. 

So is consciousness, or the mind, a quantum computer?

If so, then is building a quantum computer the first true step towards genuine artificial intelligence?
SMBC is the awesome.

This post is not based on scientific fact; it is mere philosophical conjecture.
