Monday, February 19, 2007

Happy to be Human


Being human has never been so troublesome. First, primatologists have asked us to reconsider our premises of humanness: we may simply have been lucky to outlast other hominids, such as the Neanderthals, in safely inhabiting this planet. Peculiarly human characteristics – superior intelligence, and striking faculties like speech and language – have also been displayed, though in varying degrees, by other species such as chimpanzees and parrots.

Then, paleontologists have asked us to redraw the boundaries of the genus Homo, as there no longer seems to be anything exclusively special separating the specimens we class in the same genus as ourselves from those we relegate to the australopithecines – literally, southern ape-like animals. But, as Goethe wrote, "The Ancients said that the animals are taught through their organs; let me add to this, so are men, but they have the advantage of teaching their organs in return."

As fossil records reveal, supposedly human qualities – bipedalism, large brains, sophisticated tool-making – also feature in other species. This has several moral implications, because the very foundation of humanness seems to rest on shifting sands.

We are – according to Darwin – “neither the centre of creation nor its purpose.” So, humans are not heading towards any defined goal; we can only congratulate ourselves on being lucky enough to be secure for now. Or we can revel, as the American philosopher Daniel Dennett does, in the thought that thousands of years of “communication and investigation” have made us “the nervous system of the planet.” Our newfound capacity for long-distance knowledge gives us powers that dwarf those of all the rest of the life on Earth. Further, as the Harvard psychologist Daniel Gilbert puts it:

When people look out on the natural world and declare that there must be a God because all of this could surely not have happened by chance, they are not overestimating the orderly complexity of nature. Rather, they are underestimating the power of chance to produce it.

Humans have no choice but to live – though modern medicine has exercised a measure of control over human life by providing means to prevent the growth of life and, for now, to postpone death. Animals enjoy an instinctual nature that endows them with some capabilities and cripples them in others.

But humans can discover alternative modes of living if they find the current ones troublesome. I think it is audacious to say that “time is running out of our hands.” Rather, we have made our hands so greasy that nothing seems to be held with attention at all. Unlike animals, Man “has to live his life, he is not lived by it.”

Chance has become the lazy luxury that we have been enjoying for quite some time now, and the “blind force” that created that chance can give a lead to other humanlike species, such as Artificial Intelligence (AI)-powered robots. From the pioneer of AI, Alan Turing, through promoters like John McCarthy and Marvin Minsky, to panegyrists like Hans Moravec, researchers have made bizarre claims – the culmination of a pseudoscientific approach of the past century.


The human obsession with transcendence has tinctured history with varying degrees of success. I feel that the study of human attempts at transcendence has been energized by some interesting insights from the “unfolding of human consciousness.” At every stage of human evolution, right from protozoan life to the posthuman life Francis Fukuyama predicts as our future, each stage has enclosed the previous one and risen to the next.

So, as the first hominids or “protohuman” creatures emerged, they did so “upon and around” a basic core of natural and human structures already defined by previous evolution. As the American philosopher-psychologist Ken Wilber (b. 1949) puts it:

Dawn Humans, in other words, began its career immersed in the conscious realms of nature and body, of vegetable and animal and initially “experienced” itself as indistinguishable from the world that has already evolved to that point. Man’s world – nature, matter, vegetable life and animal (mammalian) body – and Man’s self – the newly evolving centre of his experience – were basically undifferentiated, embedded. His self was the naturic world; his naturic world was his self; neither was clearly demarcated, and this, basically, in unconscious homage to the past.

André Leroi-Gourhan points to this enclosed stage as containing “species memory” with an originary technicity – automatic and shared by all life forms. At the next stage, about 200,000 years ago, humans begin to appear “not as hunters or gatherers but as magicians.” But what is magical about this stage? Humans begin to separate themselves from their fusion with animals; that is, consciousness shifts from being based in the animal world to being based in the individual organism.

Though the self is separated from nature, it is “magically intermingled with it.” This intermingling is manifested in images like Typhon, the enormous “half-human, half-snake.” Freud attests to this image – as with his use of the totem – when he says that the ego was “first and foremost a body-ego.” That is, the self is centered on the body and not so much on the mind.

André Leroi-Gourhan points to this evolving stage as containing “ethnic memory,” with two ethnic practices associated with the face and the hand: speech and gesture. Once this symbolization comes into existence, the status of the body changes, because symbols capture memory and try to externalize it. When a gallery of these symbols enters the external world, they automatically engage in a battle of priorities – thereby creating a hierarchy. This scheme of privileging has been an insidious legacy bequeathed to humankind. Instead of accommodating incoming symbols appropriately, the hierarchy has been defiant; instead of considering, it has become inconsiderate.

One of the latest mishaps this hierarchy has inflicted on humankind is Scientism: the rabid resolution that everything in this universe can be explained by science. Friedrich Nietzsche suspected that scientists were motivated by the “will to power” over the external world – and over themselves too – and so were incapable of detached, objective argument. Martin Heidegger viewed science as offering secondary, derivative accounts of the world. He sensed a danger not in viewing things, in “letting things be,” but in technology triumphing over things: science was legitimizing itself over all other ways of thinking.

Opposed to these views is an American “public philosopher” like Daniel Dennett, who – in holding that brains are just computers – disregards conscious subjective experiences because they cannot be measured objectively. This viewpoint suggests that the world is valueless and devoid of any quality. Scientism wishes to capture in the fishnet of science all empirically verifiable values, but lets immeasurable human values – a scientist’s purpose, or pleasure – slip through.

Scientism, or positivism, is not one generation’s wondrous discovery but a concept that has accumulated over generations. To skim a concise chronicle of scientism’s mishaps in the journey of human history, we need to go back to the pre-Socratic age. Heraclitus, a dialectical philosopher, saw an ever-changing world that was the result of “a dynamic and cyclic interplay of opposites,” within which resides a unity he called Logos. Another pre-Socratic philosopher, Parmenides, opposed this view. He felt that Being was invariable and unique, and so there was no change.

In the 5th century BC, Greek philosophers tried to “reconcile the idea of unchangeable Being (of Parmenides) with that of eternal Becoming (of Heraclitus).” They did so “by assuming that the Being is manifest in certain invariable substances, the mixture and separation of which gives rise to changes in the world.” This reconciliation climaxed in the concept of the atom.

With the authority of Aristotle and the Christian Church, this concept, along with the “concept of classification of objects,” prevailed through the Middle Ages. All this was to change around 1600, when Galileo Galilei asked “how things happen” rather than “why things happen” [the way his adversaries thought]. Galileo introduced two distinct practices into doing science: the scientific method and measurement (or empirical knowledge and mathematics). He thereby safely lodged the quantifiable properties of matter into the vault of modern science.

But as the psychiatrist R.D. Laing (1927-1989) reminds us: “Out go sight, sounds, taste, touch and smell and along with them has since gone aesthetics, ethical sensibility, values, quality, form; all feelings, motives, intentions, soul, consciousness, spirit. Experience as such is cast out of the realm of scientific discourse.” A further blow was inflicted by René Descartes, whose dualism led many to equate identity with the mind (the “ghost in the machine,” as Gilbert Ryle later derided the Cartesian picture) instead of with the whole organism. For Descartes, nature was a machine that ran according to mathematical laws.

The Cartesian framework was fervently used by Isaac Newton to provide a mechanistic worldview. His celebrated contribution is summed up by the theoretical physicist Fritjof Capra (b. 1939):

In Newtonian mechanics, all physical phenomena are reduced to the motion of material particles, caused by their mutual attraction, that is, by the force of gravity. The effect of this force on a particle or any other material object is described mathematically by Newton’s equations of motion, which form the basis of classical mechanics. These were fixed laws according to which material objects moved and were thought to account for all changes observed in the physical world. …as a consequence [of Cartesian dualism] the world was believed to be a mechanical system that could be described objectively, without ever mentioning the human observer, and such an objective description of nature became ideal of all science.

Darwin’s evolutionary theory forced scientists to abandon the Cartesian conception of the world as a machine constructed by some creator. Instead, the universe had to be pictured as an evolving, ever-changing system in which complex structures develop from simpler forms. As Richard Dawkins puts it: “the universe is not good or bad; it is just indifferent.” Then, in 1905, Albert Einstein’s relativity began to undo the Newtonian notion of definite, absolute truth in the physical world, and quantum theory soon followed. As Capra so succinctly explains:

Einstein strongly believed in nature’s inherent harmony and throughout his scientific life his deepest concern was to find a unified foundation of physics. He began to move towards a unified framework bringing together electrodynamics and mechanics…[that brought forth various pairs as in light [waves/particles]…However, the new conception of the universe that has emerged from modern physics does not mean that Newtonian theory is wrong, or that quantum theory, or relativity theory is right. Modern science has come to realize that all scientific theories are approximations to the true nature of reality; and that each theory is valid for certain range of phenomena.

So, as the eminent philosopher of science Karl Popper (1902-1994) says: “the criterion of the scientific status of a theory is its falsifiability, or refutability or testability.” This insight into the limits of science’s wars on nature gave rise to new theories that did not fail to include the driving element of all theorizing: speculation. To this day, speculation has driven scientists to string theory, quantum computing, and more.


With the advent of computers in the second half of the 20th century, their rapid intrusion into various disciplines has won them universal acclaim as the One technology. Juxtaposed with this computing prowess, genetics has also redefined the ramifications of the basic questions of personhood. Do genes determine our behavior? If so, with genetic modification, can we realize the long-cherished dream of an Edenic paradise on earth?

By technology, I mean – in the sense of Habermas – “the scientific control of the natural and social processes.” The discourse of technology – especially in arguing that brains are digital computers – has an extensive influence on humanness. A complex concept like consciousness is stripped of its subjective content, such as emotions and feelings.

Sense-making is what conscious minds do. And since sense observations are empirical, it is not impossible to record them and simulate them on a computer – or better, in a robot. Daniel Dennett lures: “…when you realize that the machines that we're made of, that they can do self-repair, they can fight off infections and they can do amazing calculations in the brain, it's stupendous.”

The neuroscientist Susan Greenfield (b. 1950) pictures the situation thus: “technology will erode individuality, as it replaces memory and makes experience vicarious.” By bringing everyone under the single fold of the “singularity,” eminent thinkers like Marvin Minsky, Ray Kurzweil and Rodney Brooks have dehumanized the very makeup of human meaning. On the other hand, ingenious biologists like Edward O. Wilson and Richard Dawkins have tried to explain humanness through genetic makeup.

However, a crucial area commonly surfacing in both philosophy and science is that of the mind. One of the popular theories of mind today is that of the Harvard psychologist Steven Pinker. His theory proceeds by first equating mind with brain and then turning the brain into an information-processing machine; for all this, he uses computing as an analogy. He approaches the mind as an engineer would view a machine: study how it is assembled in order to fix and maintain it. This Pinker calls “reverse engineering.” But metaphors are often misleading.

The 17th-century physician William Harvey used a pump as a metaphor to explain the circulation of blood and the working of the heart. Pinker, though, mistakes his analogy for a metaphor and runs with it. He argues that the use of computing is only a metaphor, and that in the strictly mechanical sense the similarity ends. From this end begins Pinker’s evolutionary thought. The mind he talks about isn't a coherent unity; it's an interacting community of distinct modules, each specialized for a particular function. It evolved, as he explains, as a device for enhancing human survival and reproductive success, according to ultra-Darwinian principles.

In contrast to Pinker’s theory, neurobiologists like Steven Rose emphasize that “real brains transform dead information into living meaning - making sense of the world around us.” It is a meaning given to sensory inputs by the working of the brain, based on experience and provided through its evolutionary and developmental history. For example, Steven Pinker feels that a footprint carries information whose meaning is dead without an observer. But particular readings of a footprint – fear, anxiety, excitement – rest not only on the subject but on human history, culture, personal experience, and so on.


These developments wobble us into the terrain of the challenges in the philosophy of mind. The first is: what is a mental state? Mental states have two dimensions. One is that of sensory qualities, like pains and thirsts; the other entails beliefs. These beliefs are either “propositional attitudes or intentional states.” How these two dimensions are to be reconciled is a major concern.

The American philosopher Donald Davidson (1917-2003) has had an influential stance on mental events. While philosophers in the wake of Ludwig Wittgenstein (1889-1951) had ripped reasons apart from the causes of actions, Davidson offered a fresh view: causal explanations of actions do involve reasons – making mental events basically physical events. He proposed two ideas to substantiate this concept.

One is the ‘primary reason,’ a belief-desire pair. Suppose my desire to switch on a computer arises from the belief that, after some time, I can use it to do several things. The second idea is that an action driven by a primary reason has more than one description. I can also say that I am booting an operating system, as well as (unintentionally) allowing a malicious hacker to dirty my system.

The causal connection of the action is rational, to the extent that the ‘primary reason’ specifies the reason for the action. And, it is also causal, inasmuch as the one event causes the other if it is indeed the reason for it. It is precisely because the reason is causally related to the action that the action can be explained by reference to the reason. Out of several reasons for actions, one reason triumphs over others because this reason has caused the action.

Moreover, there is no strict law that governs the rational connection between reason and action. But there is a law-like regularity that pervades the causal connection of the action. Davidson is thus able to maintain that rational explanation need not involve explicit reference to any law-like regularity, while nevertheless holding that there must be some such regularity underlying the rational connection insofar as it is causal. This theory Davidson calls Anomalous Monism: mental events are identical with physical events, despite the absence of strict laws connecting them. So, Davidson infers, no reductive explanation is available by which science could shrink rational explanations to non-rational ones. Without such laws, scientism is powerless to reduce everything to scientifically quantifiable entities.

But the overarching problem that plagues the philosophy of mind is the mystery of consciousness. If we go by Davidson’s claim that actions have reasons as causes, then how far do conscious actions have definite reasons? And if there are explainable reasons, can they be empirically verified? There are those who accept consciousness as a natural phenomenon arising out of physiological processes but fall short of understanding it. Others argue that objective science has no business with subjective experiences at all. The moments when physical states don the costume of consciousness by turning into mental states are, as of now, incomprehensible.

A different view is provided by the neuroscientist Patricia Churchland. She feels that understanding the neurobiological complexity of pains, through a sane scientific reductionism, does not make pains unreal. Shaky support is found in the way research on dreams has moved from the Jungian conception of the unconscious to biologically grounded accounts. Even John Searle uses the same kind of idea in holding complex macro-level properties (like the liquidity of water) to be reducible to micro-level explanations (of water molecules). It is convincing that such physical and chemical processes can be reduced to close study. But it is overreach to assume that a definite neuronal state causes two seemingly similar experiences like pains and itches. This is not at all to suggest that the deciphering of consciousness is impossible – only that it is improbable.


Barring the duplication of the human mind, Artificial Intelligence has in effect been effective in contemporary technology. In principle, there have been two schools of thinking in AI. One is that of the director of the AI lab at MIT, Rodney Brooks, a maker of living systems whose robots serve useful needs in medicine and the military, such as surgical assistance and bomb defusal. But he feels that when it comes to manufacturing biological systems, robotic research has not yet understood the basic physiological framework – without which no considerable progress can be made. Though our nearest relatives, chimpanzees and orangutans, have been tested for intelligence, the “synthesis of thought-making processes” remains one major shortcoming yet to be decoded.

Brooks uses the bottom-up approach of computation, which begins at lower levels by duplicating intelligence such as that of flatworms and then moves to higher levels, ultimately reaching humanoid capabilities. Evolutionary computation is the current flavor of this approach, and indeed genetic algorithms have used evolutionary logic to solve challenges in fields as diverse as aerospace engineering and financial markets. It is not surprising that what created us is helping what we have created for ourselves.
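The evolutionary logic behind genetic algorithms can be sketched in a few lines. What follows is a minimal, hypothetical illustration (a toy “OneMax” problem, not any researcher’s actual system): candidate solutions are bit-strings, the fittest survive, and new candidates arise by recombination and mutation.

```python
import random

random.seed(0)  # reproducible demo run

# Toy "OneMax" problem: evolve 16-bit strings toward all ones.
GENOME_LEN = 16
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02

def fitness(genome):
    # Fitness = number of 1-bits in the genome.
    return sum(genome)

def crossover(a, b):
    # Single-point crossover of two parent genomes.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: the fitter half survives as parents (elitism).
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # Reproduction: children from recombined, mutated parents.
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # usually 16, or close to it
```

The same loop of variation and selection, scaled up enormously, is what lets such algorithms search spaces like wing shapes or trading strategies without anyone specifying the solution in advance.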

The other school is Marvin Minsky’s, which takes a top-down approach: plan to first conquer the higher reaches of human intelligence and then gradually move down to the lower levels. Minsky is quite generous to machines, displaying a reckless humanism in treating machines as no different from humans. Even Darwinian evolution has not ended with us; the next, post-human level is in the offing. At this level, ‘unnatural selection’ will rule, and robots will be our children.

To explain the ramifications of these two views, I want to use the arguments of the Harvard mathematician-philosopher Hilary Putnam (b. 1926). He was once an ardent supporter of functionalism: the view that brains are functionally equivalent to digital computers. He then retracted this view and modified his argument, which makes him pivotal to a tempered understanding of how critical these claims are. The principal difficulty Putnam has with functionalism is put thus:

…the content of our beliefs and desires [attitudes] is not determined by individualistic properties of the speaker but depends on the social facts about the linguistic community of which the speaker is a member and on facts about the physical environment of the speaker and the linguistic community…reference is not fixed by what is “in the heads of speakers.”

For Putnam, the task of AI manifests in its “notional activity” of simulating intelligence, whereas “its real activity is writing clever programs for a variety of tasks.” This is one of the reasons why AI savants like Raj Reddy and Kurzweil develop systems based on inductive logic. This variety of logic proceeds from a suggested proposition – say, that the working of the brain is strictly computational – which is then tested in various circumstances or instances. To the extent that the proposition survives being disproved within those circumstances, it is confirmed. In all, general laws are derived from specific instances and hold good within the confines of those instances.

All such propositions, or information-content, are fed into the massive databases of expert systems to give them a strong basis for correct functioning. Kurzweil especially feels that ever-increasing storage, rapid computational speeds, and the simplification of complex information-processors empower AI to overcome human hurdles. But Putnam wants researchers to exercise restraint and tread steadily. For Putnam, the crucial question is not whether AI has uprooted personhood or not.

Rather: “What’s all the fuss about now? Why don’t we wait until AI achieves something and then have an issue?” This question is in fact fairly acceptable, because computers based on parallel distributed processing – or neural networks – have enabled researchers, in one sense, to physically map the way the brain works. However, the extent to which AI research can succeed has two perspectives: in principle and in practice. The “in principle” side deals with how to model a digital computer to duplicate the brain’s complexities and capabilities; the “in practice” side deals with the efficiency of such a model.
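What “parallel distributed processing” means at its very simplest can be shown with a single perceptron – a toy sketch, far removed from real brains, written here only to make the idea concrete: a unit that learns the logical AND function by nudging its weights whenever it errs.

```python
# A single perceptron learning logical AND (illustrative sketch only).
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Fire (output 1) if the weighted sum exceeds zero.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Adjust weights in proportion to the error.
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Knowledge here lives not in any symbol or rule but distributed across the weights – which is precisely the sense in which such networks are said to map, however crudely, how brains might work.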

As an example, Putnam uses natural language processing as a model to explain the extent to which such systems have been capable of duplicating intelligence. Mental information expressed in natural language – as Davidson also attests – has varying content and variable contexts. As a simple example, suppose a robot’s task is to analyze statements like “It’s hot.” This context-driven statement has various inflections, ranging from temperature to fashion. Identifying each inflection and classing them into categories draws on various modes of human learning.
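To make the “It’s hot” example concrete, here is a deliberately crude sketch (the sense names and cue-word lists are invented for illustration; real disambiguation is far harder) that guesses a sense from the surrounding context words:

```python
# Crude word-sense disambiguation for "It's hot" (hypothetical cues).
SENSE_CUES = {
    "temperature": {"sun", "summer", "weather", "stove", "burn"},
    "fashion":     {"trend", "style", "look", "runway", "selling"},
    "spiciness":   {"chili", "pepper", "curry", "sauce", "tongue"},
}

def guess_sense(context_words):
    # Score each sense by how many of its cue words appear in context.
    scores = {sense: len(cues & set(context_words))
              for sense, cues in SENSE_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(guess_sense(["the", "summer", "sun", "is", "out"]))  # temperature
print(guess_sense(["this", "chili", "sauce", "bites"]))    # spiciness
print(guess_sense(["nothing", "special", "here"]))         # unknown
```

The brittleness is the point: the fixed cue lists stand in for exactly the open-ended, experience-laden context that Putnam argues is not “in the heads of speakers” at all.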

Putnam cites Noam Chomsky’s use of “the existence of a large number of innate conceptual capabilities that give us a propensity to form certain concepts and not others.” When these innate abilities mature in a particular environment, language too is fittingly learnt. But, AI researchers extend this insight to extreme ends by claiming that the “human use of natural language can be successfully simulated on a computer.”

This level of learning, the claim goes, is a “more or less topic-neutral heuristic for learning and that this heuristic suffices for learning one’s own language.” Putnam praises this optimistic view but feels that, as of now, there is no big idea about “how the topic-neutral learning strategy works.”

In all, AI is a promising turf testing the hypothesis: “as if…but not.” The seeming possibilities and seamless prospects can reward AI researchers. But they have to recognize that approaches to human-created problems cannot be equally and unthinkingly applied to solving and duplicating human capabilities.


Our post-human future, Francis Fukuyama laments, has its past in biotechnology, which “will cause us in some way to lose our humanity…Worse yet, we might make this change without recognizing that we had lost something of great value. We might thus emerge on the other side of great divide between human and posthuman history.” Nanotechnology, with its “molecular machines,” is a promising peril for humankind.

With its innovative energy to do “molecular manufacturing,” nanotechnology applications range from reducing the useable sizes of gadgets, manipulating the usability of materials (for example, making rubber conduct heat) to eliminating global poverty by providing affordable appliances like miniature solar panels for unlimited electricity.

Besides, expanding the human lifespan to conquer mortality is a big promise of nanotechnology. For example, medical nanorobots might postpone death for some hours by providing artificial respiration after the heart has stopped – delivering ample oxygen to the red blood cells and carrying away carbon dioxide.

When nanotechnology-based manufacturing was first proposed, a concern arose that tiny manufacturing systems might run amok and eat the biosphere, reducing it to copies of themselves. In 1986, the nanotechnology pioneer Eric Drexler wrote, "We cannot afford certain kinds of accidents with replicating assemblers." Criminals and terrorists with stronger, more powerful, and much more compact devices could do serious damage to society. Chemical and biological weapons could become much more deadly and much easier to conceal.

Many other terrifying devices are possible, including several varieties of remote assassination weapons that would be difficult to detect or avoid. The availability of cheap devices – though they may be cleaner and safer – could bring economic disruption, and thereby let imperialistic powers monopolize markets. By changing the means, portability and purpose of production, nanotechnology could escort this new era of human innovation into unbridled disaster. Regulation and cautious restraint of such useful yet harmful technology are widely suggested: both vigilant oversight of the planning and manufacture of devices and ethical constraints seem morally indispensable.

However, some physicists, like Stephen Hawking, have suggested that the cyborg – melding computer technology into the human body – is a better alternative to intelligent machines like nanorobots. For example, a paralyzed person can communicate through a silicon chip to a computer to move a wheelchair. The chip taps the brain’s pathways pertaining to the desire to move a body part and then, through wireless communication, alerts the computer to signal the wheelchair to move. This is amazing as far as helping the disabled goes; but if the same technology enhances human capabilities and makes superhumans, then it is unlikely that most of the less privileged humanity would survive.

Whatever form technological developments take – nanorobots, cyborgs, and the rest – the impulse is to enhance human capabilities and overcome human vulnerabilities. Unless technology is harnessed for a vast and inclusive life, it serves no definite purpose in creating an externalized, nonhuman environment. Whenever humanness is questioned, we are faced with dilemmas and dissatisfaction. Even if humanness is a myth, we can add insights through myth-busting as well as enrich the myth with those insights.


Capra, Fritjof. 1984. The Turning Point. London: Fontana Paperbacks.

Dawkins, Richard. 1998. Unweaving the Rainbow. London: Penguin Books.

Fernández-Armesto, Felipe. 2004. So You Think You’re Human? Oxford: Oxford University Press.

Fromm, Erich. 1996. To Have or To Be. New York: Continuum.

Mumford, Lewis. 1966. The Myth of the Machine: Technics and Human Development. New York: Harcourt.

Putnam, Hilary. 1995. Words and Life. Cambridge, MA: Harvard University Press.

Wilber, Ken. 1999. The Collected Works of Ken Wilber (Volume 2). Boston: Shambhala.

Various debates published in Third Culture.
