A Brief History of Knowledge From 3000 BC to 2001 AD

When the earliest civilizations appeared (in Mesopotamia, Egypt, India and China), they were largely constrained by their natural environment and by the climate. Religion, Science and Art were largely determined by extra-human factors, such as seasons and floods. Over the course of many centuries, humans have managed to change the equation in their favor, reducing the impact of natural events on their civilization and increasing the impact of their civilization on nature (for better and for worse). How this came about is pretty much the history of knowledge. Knowledge has been, first and foremost, a tool to become the “subject” of change, as opposed to being the “object” of change.

One could claim that the most important inventions date from prehistory, and that “history” has been nothing more than an application of those inventions. Here is a quick rundown (in parentheses, the date of the earliest specimen found so far and the place where it was found): tools (2 million years ago, Africa), fire (1.9 million years ago, Africa), buildings (400,000 BC, France), burial (70,000 BC, Germany), art (28,000 BC), farming (14,000 BC, Mesopotamia), animal domestication (12,000 BC), boat (8,000 BC, Holland), weapons (8,000 BC), pottery (7,900 BC, China), weaving (6,500 BC, Palestine), money (sometime before the invention of writing, Mesopotamia), musical instruments (5,000 BC, Mesopotamia), metal (4,500 BC, Egypt), wheel (3,500 BC, Mesopotamia), writing (3,300 BC, Mesopotamia), glass (3,000 BC, Phoenicia), sundial (3,000 BC, Egypt).

Once the infrastructure was in place, knowledge increased rapidly on all fronts: agriculture, architecture (from the ziggurat of the Sumerians to the pyramids of the Egyptians to the temples of the Greeks), bureaucracy (from the city-states of the Sumerians to the kingdom of Egypt, from the empire of Persia to the economic empire of Athens), politics (from the theocracies of Mesopotamia and Egypt to the democracy of Athens), religion (from the anthropomorphic deities of Mesopotamia to the complex metaphysics of Egypt, from the tolerant pantheon of the Greeks to the one God of the Persians and the Jews), writing (from the “Gilgamesh” in Mesopotamia to the “Adventures of Sinuhe” in Egypt to the “Bible” of the Jews to Homer’s epics in Greece), economics (from the agricultural societies of Mesopotamia and Egypt to the trade-based societies of Phoenicia and Athens), transportation (from the horse-drawn chariots of Mesopotamia to the Greek trireme), art (from the funerary painting of the Egyptians to the realistic sculptures of the Greeks), etc.

For a while, Religion acted as, basically, a compendium of knowledge (about life, society and the universe). In India, the Vedas and the Upanishads painted a cyclical picture of the universe. Right and wrong actions increase, respectively, the positive and negative potential energy (“apurva”) associated with each person. Apurva is eventually released (in this or the next life) and causes good or evil to the person. Basically, misfortune is caused by prior wrongful deeds. It is not only deserved but even required. Life is a loop from the individual back to the individual. This was cosmic justice totally independent of the gods. Wisdom is the realization that everything is suffering, but the realization of suffering does not lead to pessimism: it leads to salvation. Salvation does not require any change in the world. It requires a realization that everything is part of an absolute, or Brahman. Salvation comes from the union of the individual soul (“atman”) with the universal soul (“brahman”). “Maya”, the plurality of the world, is an illusion of the senses. Salvation comes from “moksha”: liberation from maya and experience of Brahman. By experiencing the divine within the self, one reaches pure knowledge and becomes one with the eternal, infinite, and conscious being. Nothing has changed in the world: it is the individual’s state of mind that has changed. Self-knowledge is knowledge of the absolute.

Buddha focused on suffering, the ubiquitous state of living beings, but ended up denying the existence of the self: only events exist, the enduring self is an illusion (the “atman” is an illusion). Each moment is an entirely new existence, influenced by all other moments. To quote a Buddhist scripture, “only suffering exists, but no sufferer is to be found”. Suffering can be ended by overcoming ignorance and attachment to Earthly things.

From ancient times, China displayed a holistic approach to nature, man, and government. Chinese religion realized the fundamental unity of the physical, the emotional and the social. Particularly during the Zhou dynasty, Chinese religion was natural philosophy. There was no fear of damnation, no anxiety of salvation, no prophets, no dogmas. Confucius was much more interested in the fate of society than in the fate of the souls of ordinary people. He believed that the power of example was the ideal foundation of the social contract: a ruler, a father, a husband have to “deserve” the obedience that is due to them. Thus, Confucius’ philosophy was about the cultivation of the self, how to transform the ordinary individual into the ideal man. The ultimate goal of an individual’s life is self-realization through socialization. If Confucius focused on society, Lao-tzu focused on nature. He believed in a “tao”, an ultimate unity that underlies the world’s multiplicity. There is a fundamental reality in the continuous flow and change of the world: the “way” things do what they do. Understanding the “tao” means identifying with the patterns of nature, achieving harmony with nature. The ideal course of action is “action through inaction” (“wuwei”): to flow with the natural order. The “tao” is the infinite potential energy of the universe. “Qi” is vital energy/matter in constant flux that arises from the “Tao”, and “Qi” is regulated by the opposites of “Yin” and “Yang”. Everything is made of yin and yang.

Note that neither Buddhism nor Confucianism nor Taoism were “religions”, in the sense of worshipping a God. In fact, they all denied the importance of gods.

In Persia, on the other hand, Zarathustra believed in one supreme God that was similar to the Indian “absolute” of Brahman, except that it was opposed by a divine enemy, and the world was due to the titanic battle between these two supernatural beings: Ahura-Mazda, the spiritual, immaterial, creator god who is full of light and good, and Ahriman, the god of darkness and evil. Unlike previous religions, this one was eschatological: at the end of time, Ahura-Mazda shall emerge victorious, and, after the apocalyptic ending and a universal judgement that will take place on Earth, all humans (even sinners) shall resurrect.

Judaism, which grew out of a synthesis of Mesopotamian, Arabian, Persian and Egyptian religious cults, was originally only the religion of the Jews, and El was originally the nomadic god of a nomadic people (not tied to a sanctuary but “god of the father”). It was a god of punishment and wrath, and Jewish religion was conceived as, basically, obedience to El, with the reward for the Jewish people being the Promised Land. The “Old Testament” is largely silent about the rest of humanity, and largely silent about the afterlife. This was a god who spoke directly to its people (the Jews). The earliest prophets of the kingdom of Mari had been visionary mystics in charge of foretelling the future and interpreting natural events as divine messages on behalf of the royalty. The Biblical prophets, on the other hand, addressed the people (and, eventually, “all nations”) and their main mission was to promote a higher form of morality and justice. Judaism, in its expectation that a Messiah would come and deliver the Jews from their suffering, was largely indifferent towards unbelievers. In the meantime, the misadventures of the Jewish people were due to the fact that the Jews disobeyed their god. But, at some point, El and Yahweh became synonymous, and, eventually, Yahweh became the “only” god (“There is no other god besides me”). The “Old Testament”, which originally was a history of the Jews, acquired a universal meaning.

Both Mazdaism and Judaism became monotheistic religions and denounced all other gods as mere “idols” not worthy of worship.

A major step in the evolution of knowledge was the advent of Philosophy. Both in Greece and India, the explosion in Philosophy and Science was enabled by a lack of organized religion: both regions had a form of “rational superstition” rather than the theocracies of Mesopotamia and Egypt. The gods of the Greek and of the Indian pantheon were superhuman, but never tried to explain all that happens on this planet. Philosophers and scientists were able to speculate on the nature of the universe, of human life and of the afterlife without offending the state or fearing for their lives.

In India, six “darshana” (philosophical schools) tried to answer the fundamental questions: is there a God? Is the world real? Samkhya believed that there is no God and that the world is real (due to the interaction between two substances, prakriti and purusha). Yoga believed in a supreme being (Isvara) and that the world is real.

Vedanta believed in Brahman and that the world is not real (it is an emanation of Brahman, the only substance that truly exists).

In Greece, Pythagoras was perhaps the first philosopher to speculate about the immortality of the soul. Heraclitus could not believe in the immortality of anything, because he noticed that everything changes all the time (“you cannot enter the same river twice”), including us (“we are and we are not”). On the contrary, Parmenides, the most “Indian” of the Greek philosophers, believed that nothing ever changes: there is only one, infinite, eternal and indivisible reality, and we are part of this unchanging “one”, despite the illusion of a changing world that comes from our senses. Zeno even proved the impossibility of change with his famous paradoxes (for example, fast Achilles can never catch up with a slow turtle if the turtle starts ahead, because Achilles has to reach the current position of the turtle before passing it, and, when he does, the turtle has already moved ahead, a process that can be repeated forever). Democritus argued in favor of atomism and materialism: everything is made of atoms, including the soul. Socrates was a philosopher of wisdom, and noticed that wisdom is knowing what one does not know. His trial (the most famous religious trial before Jesus’) signaled the end of the dictatorship of traditional religion. Plato ruled out the senses as a reliable source of knowledge, and focused instead on “ideas”, which exist in a world of their own, are eternal and unchangeable. He too believed in an immortal soul, trapped in a mortal body. By increasing its knowledge, the soul can become one with the ultimate idea of the universe, the idea of all ideas. On the contrary, Aristotle believed that knowledge “only” comes from the senses, and a mind is physically shaped by perceptions over a lifetime. He proceeded to create different disciplines to study different kinds of knowledge.
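To make the structure of Zeno’s argument concrete (a worked equation added here for clarity, not part of the original text): if Achilles runs ten times faster than the turtle and the turtle starts 100 meters ahead, the successive gaps that Achilles must close form the geometric series

$$100 + 10 + 1 + \tfrac{1}{10} + \dots = \sum_{n=0}^{\infty} \frac{100}{10^n} = \frac{100}{1 - \tfrac{1}{10}} \approx 111.1 \text{ meters},$$

an infinite number of steps that, in modern terms, adds up to a finite distance; the paradox lies in treating that infinite sequence of steps as if it could never be completed.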

The Hellenistic age that followed Alexander’s unification of the “oikoumene” (the world that the Greeks knew) on a level never seen before fostered a new synthesis of views of the world. Hellenistic philosophy placed more emphasis on happiness of the individual, while Hellenistic religion placed more emphasis on salvation of the individual. Cynics, who thought that knowledge is impossible, saw attachment to material things as the root problem, and advocated a return to nature. Skeptics, who agreed that knowledge is impossible, thought that the search for knowledge causes angst, and therefore one should avoid having beliefs of any sort. Epicureans, who had a material view of the world (the universe is a machine and humans have no special status), claimed that superstitions and fear of death cause angst. Stoics viewed the entire universe as a manifestation of god and happiness as surrendering the self to the divine order of the cosmos, as living in harmony with nature.

From the very beginning, knowledge was also the by-product of the human quest for an answer to the fundamental questions: Why are we here? What is the meaning of our lives? What happens when we die? Is it possible that we live forever in some other form? The afterlife and immortality are not knowledge, since we don’t “know” them yet, but humans used knowledge to reach different conclusions about these themes. The civilizations of Mesopotamia were mainly interested in “this” life. The Egyptians were obsessed with the afterlife, with immortality originally granted only to the pharaoh but eventually extended to everybody (via the mysteries of Osiris, the first major ritual about the resurrection). The ancient Greeks did not care much for immortality, as Ulysses showed when he declined the goddess’ invitation to spend eternity with her and preferred to return to his home; but later, in the Hellenistic period, a number of religious cults focused on resurrection: the Eleusinian mysteries (about Demeter’s search through the underworld for her daughter Persephone), the Orphic mysteries (about Orpheus’ attempt to bring back his wife Eurydice from the underworld) and the Dionysian mysteries (about Dionysus, resurrected by his father Zeus). The Romans cared for the immortality of their empire, and were resigned to the mortality of the individual; but it was under Roman rule that a new Jewish religion, Christianity, was founded on the notion that Jesus’ death and resurrection can save all humans.

The other great theme of knowledge was (and still is) the universe: what is the structure of the world that we live in? Neither the Indian nor the Greek philosophers could provide credible answers. They could only speculate. Nonetheless, the Hellenistic age fostered progress in mathematics (Euclid’s “Elements” and Diophantus’ “Arithmetic”) and science (Eratosthenes’ calculation of the circumference of the Earth, Archimedes’ laws of mechanics and hydrostatics, Aristarchus’ heliocentric theory, Ptolemy’s geocentric theory). The Romans’ main contribution to the history of knowledge may well be engineering, which, after all, is but the practical application of science to daily life. The Romans, ever the practical people, made a quantum leap in construction: from aqueducts to public baths, from villas to amphitheaters. At the same time, they too created a new level of unification: the unification of the Mediterranean world.

The intellectual orgy of Greek philosophy opened the western mind. The Romans closed it when they adopted Christianity as “the” imperial religion and turned it into a dogma. Christianity was born a Jewish religion, but it was “relocated” to Rome and thus, automatically, turned into a universal religion. Jesus’ god was substantially different from the original El/Yahweh of the “Old Testament”: it was, first and foremost, a god of love. Jesus was the very son of God, sent to the Earth to die for humans and thus save them from the original sin. St Paul made it clear that it was love for everybody, not just for the Jews; and that the “kingdom” of the Christian faith, God’s reward for the faithful, was in heaven, not on Earth. The catch was that unbelievers were no longer immune from God’s judgement: they risked eternal damnation. The reward for the faithful was resurrection, just as Jesus had been resurrected. Christianity was the culmination of a tradition of mysteries for the salvation of the individual, of religion for the ordinary man and woman, even for the slaves. Its central theme was one of resurrection and eternal life available to everybody. Indirectly, it was also an ideology of universality and equality.

In fact, both Buddhism and Christianity, and, to some extent, Confucianism, were universal and egalitarian. They were not the exclusive preserve of a race, a gender, or a social class. This achievement in religion marks a conceptual step in which ordinary people (even slaves) were beginning to see themselves as equal to the kings, albeit powerless.

Islam, another offshoot of Judaism, was the culmination of the trend towards monotheist, eschatological, egalitarian and universal religions. Islam borrowed from the Persian philosopher Mani the idea of a succession of revelations given to different peoples by the very same God (Allah) and it borrowed from Christianity the idea of universal brotherhood and the mission to convert the unbelievers. But, unlike its predecessors, Islam was also an ideology, because it prescribed how to build a state. It made it the duty of every Muslim to struggle for the creation of a universal Islamic state. Islam’s Earthly mission was to reform society and to form a nation. Islam’s mission was inherently political. The ultimate aim of the Islamic state is to develop social justice. What had been a subtle message in Christianity became an explicit message in Islam. In fact, the entire Muslim population (not just the priestly class) is in charge of running the Islamic state. Humans are granted limited popular sovereignty under the suzerainty of God.

The Islamic philosophers felt the need to reconcile Islam and Greek philosophy. The two who exerted the strongest influence on the West, Abu Ali al-Husain ibn Abdallah ibn Sina (Avicenna) and Abu al-Walid Muhammad ibn Ahmad ibn Muhammad ibn Rushd (Averroes), achieved such a momentous unification of religion and philosophy by envisioning the universe as a series of emanations from Allah, from the first intelligence to the intelligence of humans. This allowed them to claim that there is only one truth, which appears as two truths: religion for the uneducated masses and philosophy for the educated elite. But there is no conflict between reason and revelation: ultimately, they both reach the same conclusions about the existence of Allah. The Sufis, best represented by Ibn Arabi, added an almost Buddhist element: human consciousness is a mirror of the universal, eternal, infinite consciousness of Allah. Allah reveals himself to himself through human consciousness. The Sufi wants to achieve a state of participation in the act of self-revelation. The human condition is one of longing, of both joy (for having experienced the divine) and sorrow (for having lost the divine).

The invasions of the “barbaric” peoples of the east, the Arab invasion from the south and the wars against the Persian empire led to the decadence of Roman civilization and to the “dark age” that lasted a few centuries. The obliteration of culture was such that, eventually, Europe had to re-learn its philosophy, science and mathematics from the Arabs.

The Christian dogma contributed to the decline of the Greek ideal. Rationality was replaced by superstition. Virtue was replaced by faith. Justice in this world was replaced with justice in the next world. The free exercise of reason was replaced with obedience to the Church. The Greek tolerance for foreign faiths was replaced by the intolerance of the Church. Nonetheless, Christianity emulated Islam in trying to reconcile religion and philosophy. St Augustine preached the separation (grounded in Greek philosophy) of body and soul, of bodily life and spiritual life: the pleasures of the body distract from the truth of the soul.

During the “dark ages”, the Christian conversion of the European pagans, from Russia to Scandinavia, was completed. The Church, in fact, replaced the Roman empire as the unifying element of Europe. The Church controlled education. The Church controlled the arts. The Church even controlled the language: Latin.

The Arab invasion disrupted the economic and political unity of the Mediterranean Sea, and the rise of the Frankish kingdom, soon to be renamed “Holy Roman Empire” (a mostly landlocked empire), caused a redesign of the main trade routes away from the sea. Venice alone remained a sea-trading power, and, de facto, the only economic link between the Holy and Eastern Roman Empires. This “inland” trade eventually caused a “commercial” revolution. Trade fairs appeared in Champagne, Flanders, and northern Germany, creating a new kind of wealth in those regions. The Italian communes became rich enough to be able to afford their own armies and thus become de-facto independent and develop economies entirely based on trade. In northern Europe, a new kind of town was born that did not rely on the Mediterranean Sea. Both in the north and in the south, a real bourgeois class was born. The medieval town was organized around the merchants, and then the artisans and the peasants.

As the horse became the main element in warfare, the landowner became the most powerful warrior. A new kind of nobility was created, a land-owning nobility. The collapse of central authority in western Europe led to feudalism, a system in which the nobility enjoyed ever greater power and freedom, a global “political” revolution.

Thus the “medieval synthesis”: Church, cities, kings (clergy, bourgeoisie, nobility).

But a fourth element was even more important for the history of knowledge. As Rome decayed, and Alexandria and Antioch fell to the Muslims, the capital of Christian civilization moved to Constantinople (Byzantium). Despite the Greek influence, this cosmopolitan city created great art but little or no philosophy or science. It was left to the monasteries of western Europe to preserve the speculative traditions of the Greek world, except that they were mainly used to prove the Christian dogma. Monasticism was nonetheless crucial for the development of philosophy, music, painting. The anarchy of the “dark age” helped monasteries become a sort of refuge for the intellectuals. As the choice of lay society came down to being a warrior or a peasant, being a monk became a more and more appealing alternative. Eventually, the erudite atmosphere of the monasteries inspired the creation of universities. And universities conferred degrees that allowed graduates to teach in any Christian country, thus fueling an “educational” revolution. Johannes Scotus Erigena, Peter Abelard, Thomas Aquinas, Johannes Eckhart, John Duns Scotus (the “scholastics”) were some of the beneficiaries. Western philosophy restarted with them. As their inquiries into the nature of the world became more and more “logical”, their demands on philosophy became stricter. Eventually, Roger Bacon came to advocate that Science be founded on logic and observation; and William Occam came to advocate the separation of Logic and Metaphysics, i.e. of Science and Church.

The commercial revolution of the new towns was matched by an “agricultural” revolution of the new manors. The plough (the first application of non-human power to agriculture), the three-field rotation (wheat/rye, oats/legumes, fallow) and the horseshoe caused an agricultural revolution in northern Europe that fostered rapid urbanization and higher standards of living. Improved agricultural techniques motivated the expansion of arable land via massive deforestation.

In the cities, a “technological” revolution took place. It started with the technology of the mill, which was pioneered by the monasteries. Mills became pervasive for grinding grain, fulling cloth, pressing olives and tanning. Textile manufacturing was improved by the spinning wheel (the first instance of belt transmission of power). And these were only the most popular instances of machines, because this was the first age of the machines. The mechanical clock was the first machine made entirely of metal.

There also was a military revolution, due to the arrival of gunpowder. Milan became the center of weapon and armor manufacturing. Demand for cannons and handguns created a whole new industry.

Finally, an “engineering/artistic” revolution also took place, as more and more daring cathedrals started dotting the landscape of Christianity. Each cathedral was an example of “total art”, encompassing architecture, sculpture, painting, carpentry, glasswork. The construction of a cathedral was a massive enterprise that involved masons, workers, quarrymen, smiths, carpenters, etc. Not since the Egyptian pyramids had something so spectacular been tried. Each cathedral was a veritable summa of European civilization.

The political, commercial, agricultural, educational, technological and artistic revolutions of the Middle Ages converged in the 13th century (the “golden century”) to create an economic boom such as had not been seen for almost a millennium.

Improved communications between Europe and Asia, thanks to the Mongol Empire that had made travel safe from the Middle East to China, particularly on the “silk road”, and to the decline of the Viking and Saracen pirates, led to a revival of sea trade, especially by the Italian city-states that profited from a triangular trade Byzantium-Arabs-Italy.

Florence, benefiting from the trade of wool, and Venice, benefiting from the sea trade with the East, became capitalistic empires. Venice sponsored technological innovation that enabled long-distance and winter voyages, while Florence sponsored financial innovation that made it possible to lend, borrow and invest capital worldwide. The Italian cities had a vested interest in improved education, as they needed people skilled in geography, writing, accounting, technology, etc. It is not a coincidence that the first universities were established in Italy.

The economic boom came to an abrupt stop with a plague epidemic (“the Black Death”) that decimated the European population. But the Black Death also had its beneficial effects. The dramatic decrease in population led to a higher standard of living for the survivors, as the farmers obtained more land per capita and the city dwellers could command higher wages. The higher cost of labor prompted investments in technological innovation. At the same time, wealthy people bequeathed their fortunes to the creation of national universities, which greatly increased the demand for books. The scarcity of educated people prompted the adoption of vernacular languages instead of Latin in the universities.

Throughout the Middle Ages, the national literatures had produced national epics such as “Beowulf” (900, Britain), “Edda” (1100, Scandinavia), “Cantar del Cid” (1140, Spain), Chretien de Troyes’ “Perceval” (1175, France), “Slovo o Ploku Igoreve” (1185, Russia), “Nibelungen” (1205, Germany), “Chanson de Roland” (1200, France), Wolfram Von Eschenbach’s “Parzival” (1210, Germany). Dante Alighieri’s “Divine Comedy” (1300) heralded a new age, in which the vernacular was used for the highest possible artistic aims, a veritable compendium of knowledge. After languishing for centuries, European poetry bloomed with Francesco Petrarca’s “Canti” (1374, Italy), Geoffrey Chaucer’s “Canterbury Tales” (1400, England), Inigo Santillana’s “Cancionero” (1449, Spain), Francois de Villon’s “Testament” (1462, France). And Giovanni Boccaccio’s “Decameron” (1353, Italy) laid the foundations for narrative prose.

In observance of the diktat of the Second Council of Nicaea (787), that the visual artist must work for the Church and remain faithful to the letter of the Bible, medieval art was permeated by an aesthetics of “imitation”. Christian art was almost a reversal of Greek art, because the emphasis shifted from the body (mortal, whose movement is driven by emotions) to the soul (immortal, immune to emotions), from realism and movement to spirituality and immanence. Christian art rediscovered Egyptian and Middle-eastern simplicity via Byzantine art. Nonetheless, centuries of illuminated manuscripts, mosaics, frescoes and icons eventually led to the revolution in painting best represented by Giotto’s “Scrovegni Chapel” (1305). While Italian artists were re-founding Greco-Roman art based on mathematical relationships and a sense of three-dimensional space, as in Paolo Uccello’s “Battle of San Romano” (1456), Masaccio’s “Trinity” (1427) and Piero della Francesca’s “Holy Conversation” (1474), Northern European painters became masters of a “photographic” realism as in Jan Van Eyck’s “The Virgin of the Chancellor Rolin” (1436) and “The Arnolfini Marriage” (1434).

Before Europe had time to recover from the Black Death, the unity of the Mediterranean was shattered again by the fall of Byzantium (1453) and the emergence of the Ottoman empire (a Muslim empire) as a major European power.

However, Europe was coming out of the “dark age” with a new awareness of the world. Marco Polo had brought news of the Far East. Albertus Magnus did not hesitate to state that the Earth is a sphere. Nicolas Oresme figured out that the rotation of the Earth on an axis explains the daily motion of the universe.

In China, the Han and Tang dynasties had been characterized by the emergence of a class of scholar-officials and by a cultural boom. The Sung dynasty amplified those social and cultural innovations. The scholar-officials became the dominant class in Chinese society. The state was run like an autocratic meritocracy, but nonetheless a meritocracy. As education was encouraged by the state, China experienced a rapid increase in literacy which led to a large urban literate class. The level of competence of the ruling class fostered technological and agrarian innovations that created the most advanced agriculture, industry and trade in the world. When Europe was just beginning to get out of its “dark age”, China was the most populous, prosperous and cultured nation in the world. The Mongol invasion (the Yuan dynasty) did not change the character of that society, but, in fact, added an element of peace: the “pax tatarica” guaranteed by the invincible Mongol armies.

India was the only part of the non-Chinese world that Chinese scholars were fascinated with. They absorbed Indian culture over the centuries, and particularly adopted one philosophical school of India: Buddhism. First came “Pure Land” or Jodo Buddhism (4th c), with its emphasis on devotion instead of meditation. Then came Tiantai/Tendai (6th c), Huayan/Kegon (7th c) and Chan/Zen (6th c). The latter, a fusion of Buddhism and Taoism, focused on attainment of sudden enlightenment (“satori”). According to the Northern school (Shen-hsiu), satori was to be obtained by gradual enlightenment through guided meditation, while the Southern school (Huineng) allowed for satori through individual meditation. Zen promoted spontaneous thinking, as opposed to the philosophical investigation of Confucianism, spontaneous behavior as opposed to the calculated behavior of Confucianism. Zen is the “everyday mind”.

Japan had adopted Buddhism as a state religion as early as 604, under prince Shotoku Taishi, alongside a Confucian-style constitution and the native Shinto cult. The various Buddhist schools arrived from China in the following centuries (the Tendai school in the 9th century, the Jodo school in the 12th century), until Zen Buddhism reached Japan during the 13th century. Zen became popular among the military class (the “samurai”) that embodied the noble values in an age of anarchy. In turn, the Zen monk came to behave like a spiritual samurai. From 1192 till 1333, Japan was ruled by “shogun” (military leaders) with residence in Kamakura (the “bakufu” system of government), while the emperor (or “mikado”) became a figurehead. Even the equivalent of the scholar-official of China was military: during the 17th century, the ideal man was the literate warrior who lived according to “bushido” (“way of the warrior”). Japan remained largely isolated until 1854, when the USA forced Japan to sign a treaty that opened Japan to foreign trade, a humiliation that led to the restoration of imperial power (1868) after so many centuries of military rule.

Japan’s native religion, Shinto, provides the basis for the imperial institutions. It is, in fact, a form of Japanese patriotism. It declares Japan a divine country, and the emperor a descendant of the gods. Shinto is polytheistic in the extreme, admitting in its pantheon not only thousands of spirits (“kami”), personifying the various aspects of the natural world, and ancestors, but also the emperors and the deified heroes of the Japanese nation, and even foreign deities. Shinto is non-exclusive: a Shintoist can be a Buddhist, a Catholic, etc. The reason is that there is no competition between Shinto and the metaphysics of the other religions. Shinto is a religion to deal with ordinary lives, based on the belief that humans can affect Nature by properly honoring the spirits. When Japan adopted Buddhism, the native spirits were recast as manifestations of Buddha.

The “Rinzai” school of Zen Buddhism believed in sudden enlightenment while concentrating to solve a koan (“sanzen”, or conversation with a master). The “Soto” school believed in gradual enlightenment through meditation in daily life (“zazen”, or sitting meditation). But the traditions of Japanese society surfaced also in Zen Buddhism: satori can be facilitated by martial arts, tea ceremonies, gardening, Haiku poetry, calligraphy, No drama, etc.

In marked contrast to the western civilizations, the eastern civilizations of India, China and Japan displayed little interest in the forceful spread of their religious beliefs.

Luckily for Christian Europe, in 1492 Spain opened a new front of knowledge: having freed itself of the last Arab kingdom, it sponsored the journey of Christopher Columbus to the “West Indies”, which turned out to be a new continent. That more or less accidental event marked the beginning of the “colonial” era, of “world trade”, and of the Atlantic slave trade; and, in general, of a whole new mindset.

Other factors were shaping the European mind: Gutenberg’s printing press (1456), which made it possible to satisfy the growing demand for books; Martin Luther’s Reformation (1517), which freed the northern regions from the Catholic dogma; Copernicus’ heliocentric theory (1530), which removed the Earth (and thus Man) from the center of the universe; and the advent of the nation states (France, Austria, Spain, England and, later, Prussia).

However, it was not the small European nations that ruled the world at the end of the Middle Ages. The largest empires (the “gunpowder empires”) were located outside Europe. Gunpowder was only one reason for their success. They had also mastered the skills of administering a strong, centralized bureaucracy required to support an expensive military. In general, they dwarfed Europe in one basic dimension: knowledge. While Europe was just coming out of its “dark age”, the gunpowder empires were at their cultural peak. The Ottoman Empire, whose capital Istanbul was the largest city in Europe, was a melting pot of races, languages and religions. It was a sophisticated urban society, rich in universities and libraries, devoted to mathematics, medicine and manufacturing. The Safavid Empire of Persia, which controlled the silk trade, was a homogeneous state of Muslim Persians. The Mughal Empire of India, an Islamic state in a Hindu country, was also a melting pot of races, languages and religions. Ming China was perhaps the most technologically and culturally advanced of all countries.

The small European countries could hardly match the knowledge and power of these empires. And, still, a small country like Portugal or Holland ended up controlling a larger territory (stretching across multiple continents) than any of those empires. A dis-united Europe of small and poor states caught up in an endless loop of internecine wars, speaking different languages, technologically backwards, that had to import science, philosophy and technology from the Muslims, that had fewer people and resources than the Asian empires, managed to conquer the entire world (with the notable exception of Japan). Perhaps the problem was with the large-scale bureaucracies of those Asian empires, which, in the long term, became less and less competitive, more and more obscurantist. In some cases, their multi-ethnic nature caused centrifugal forces. Or perhaps Europe benefited from its own anarchy: continuous warfare created continuous competition and a perennial arms race. Perhaps the fact that no European power decisively defeated the others provided a motivation to improve that was missing in the more stable empires of the East. After all, the long-range armed sailing ships, which opened the doors to extra-European colonization, were the product of military build-up. Soon, world trade came to be based on sea transportation, which was controlled by Europeans. The printing press, which the gunpowder empires were slow to adopt (or even banned), slowly changed the balance of knowledge. World trade was creating more demand for technological innovation (and science), while the printing press was spreading knowledge throughout the continent. And all of this was funded with the wealth generated by colonialism. While the Asian empires were busy enjoying their stability, the small European countries were fighting for supremacy, anywhere anytime; and, eventually, they even overthrew those much larger empires.

Nowhere was the apparent paradox more intriguing than in Italy, a fragmented, war-torn peninsula that, nonetheless, became the cultural center of Europe. On a smaller scale, it was the same paradox: the tiny states of Italy and the Netherlands were superior in the arts to the powerful kingdoms of Spain, France and England. In this case, though, the reason is to be found in the socio-economic transformation of the Middle Ages that had introduced a new social class: the wealthy bourgeoisie. This class was more interested in the arts than the courts (which were mainly interested in warfare). The main “customer” of the arts was still the Church, but private patronage of art became more and more common. This, in turn, led to an elite of art collectors and critics. Aesthetics led to appreciation of genius: originality, individuality, creativity. Medieval art was imitation, Renaissance art was creation.

Perhaps the greatest invention of the Renaissance was the most basic of all from the point of view of knowledge: the self. The Egyptians and the Greeks did not have a truly unified view of the self, a unique way to refer to the “I” who is the protagonist of a life and, incidentally, is also a walking body. The Greeks used different terms (pneuma, logos, nous, psyche) to refer to different aspects of the “I”. The Middle Ages were the formative stage of the self, when the “soul” came to be identified with the thinking “I”. The Renaissance simply exalted that great medieval invention, the “I”, that had long been enslaved to religion. The “I” was now free to express and affirm itself.

In a nutshell, the “Rinascimento” (Renaissance art) adapted classical antiquity to Biblical themes. This was its fundamental contradiction: a Christian art based on Pagan art. An art that was invented (by the Greeks) to please the pagan gods and (by the Romans) to exalt pagan emperors was translated into an art to pay tribute to the Christian dogma. Leonardo da Vinci’s “The Last Supper” (1497) and Michelangelo Buonarroti’s “The Universal Judgement” (1541) are possibly the supreme examples in painting, while architects such as Donato Bramante and Gianlorenzo Bernini dramatically altered the urban landscapes. But there was also an obsession with ordering space, as manifested in Sandro Botticelli’s “Allegory of Spring” (1478) and Raffaello Sanzio’s “The School of Athens” (1511). In the Netherlands, Hieronymus Bosch’s “The Garden of Delights” (1504) was perhaps the most fantastic piece of art in centuries.

The Renaissance segued into the Baroque age, whose opulence really signified the triumph of European royalty and religion. Aesthetically speaking, the baroque was a restoration of order after the creative disorder of the Renaissance. The least predictable of the visual arts remained painting, with Pieter Bruegel’s “Triumph of Death” (1562), Domenico El Greco’s “Toledo” (1599), Pieter Rubens’ “Debarquement de Marie de Medicis” (1625), Rembrandt’s “Nightwatch” (1642), Jan Vermeer’s “Malkunst” (1666). In Italy, Giovanni Palestrina, Claudio Monteverdi (1567) and Girolamo Frescobaldi (1583) laid the foundations for classical music and the opera. The national literary scenes bloomed. Masterpieces of poetry included Ludovico Ariosto’s “Orlando Furioso” (1532), Luiz Vaz de Camoes’ “Os Lusiadas” (1572), Torquato Tasso’s “Gerusalemme Liberata” (1575), Pierre de Ronsard’s “Sonnets pour Helene” (1578), John Donne’s “Holy Sonnets” (1615), John Milton’s “Paradise Lost” (1667). Even more characteristic of the era was theater: Gil Vicente’s “Auto da Barca do Inferno” (1516), Christopher Marlowe’s “Faust” (1592), William Shakespeare’s “Hamlet” (1601) and “King Lear” (1605), Lope de Vega Carpio’s “Fuente Ovejuna” (1614), Pedro Calderon’s “El Gran Teatro del Mundo” (1633), Moliere’s “Le Misanthrope” (1666) and Jean-Baptiste Racine’s “Phedre” (1677). Francois Rabelais’ “Gargantua et Pantagruel” (1552) and Miguel Cervantes’ “Don Quijote” (1615) laid the foundations of the novel.

Progress in science was as revolutionary as progress in the arts, thanks to Tycho Brahe, who discovered a new star; Johannes Kepler, who discovered the laws of planetary motion; Francis Bacon, who advocated knowledge based on objective empirical observation and inductive reasoning; and finally Galileo Galilei, who envisioned that linear uniform motion (not rest) is the natural motion of all objects and that forces cause acceleration (which is the same for all falling objects, i.e. the same force must cause all objects to fall). Suddenly, the universe did not look like the perfect, eternal, static order that humans had been used to for centuries. Instead, it looked as disordered, imperfect and dynamic as the human world.

New inventions included: the telescope (1608), the microscope (1590s), the pendulum clock (1657), the thermometer (1611), the barometer (1644).

Both the self and the world were now open again to philosophical investigation. René Descartes neatly separated matter and mind, two different substances, each governed by its set of laws (physical or mental). While the material world, including the body, is ultimately a machine, the soul is not: it cannot be “reduced” to the material world. His “dualism” was opposed by Thomas Hobbes’ “materialism”, according to which the soul is merely a feature of the body and human behavior is caused by physical laws.

Baruch Spinoza disagreed with both. He thought that only one substance exists: God. Nature is God (“pantheism”). The universe is God. This one substance is neither physical nor mental, and it is both. Things and souls are (finite) aspects (or “modes”) of that one (infinite) substance. Immortality is becoming one with God/Nature, realizing the eternity of everything.

Gottfried Leibniz went in the other direction: only minds exist, and everything has a mind. Matter is made of minds (“panpsychism”). Minds come in degrees, starting with matter (whose minds are very simple) and ending with God (whose mind is infinite). The universe is the set of all finite minds (or “monads”) that God has created. Their actions have been pre-determined by God. Monads are “clocks that strike hours together”.

Clearly, the scientific study of reality depended on perception, on the reliability of the senses. John Locke thought that all knowledge derives from experience (“empiricism”), and noticed that we only know the ideas and sensations in our mind. Those ideas and sensations are produced by perceptions, but we will never know for sure what caused those perceptions, how reality truly is out there: we only know the ideas that are created in our mind. Ideas rule our mind.

On the contrary, George Berkeley, starting from the same premises (all we know is our perceptions), reached the opposite conclusion: that matter does not even exist, that only mind exists (“idealism”). Reality is inside our mind: an object is an experience. Objects do not exist apart from a subject that thinks them. The whole universe is a set of subjective experiences. Locke thought that we can never know how the world really is, but Berkeley replied that the world is exactly how it appears: it “is” what appears, and it is inside our mind. Our mind rules ideas.

David Hume increased the dose of skepticism: if all ideas come from perception, then mind is only a theater in which perceptions play their parts in rapid succession. The self is an illusion. Mental life is a series of thoughts, feelings, sensations. A mind is a series of mental events. The mental events do exist. The self that is supposed to be thinking or feeling those mental events is a fiction.

Observation led physicists to their own view of the world. By studying gases, Robert Boyle concluded that matter must be made of innumerable elementary particles, or atoms. The features of an object are due to the features and to the motion of the particles that compose it.

Following Galileo’s intuitions and adopting Boyle’s atomistic view, Isaac Newton worked out a mathematical description of the motion of bodies in space and over time. He posited an absolute time and an absolute space, made of ordered instants and points. He assumed that forces can act at a distance, and introduced an invisible “gravitational force” as the cause of planetary motion. He thus unified terrestrial and celestial Mechanics: all acceleration is caused by forces, the force that causes free fall being the gravitational force, that force being also the same force that causes the Earth to revolve around the Sun. Forces act on masses, a mass being the quantitative property that expressed Galileo’s inertia (the property of a material object to remain at rest or in uniform motion in the absence of external forces). Philosophers had been speculating that the universe might be a machine, but Newton did not just speculate: he wrote down the formulas.
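In modern notation (a standard summary, not quoted from this text), those formulas amount to

$$F = m\,a \qquad \text{and} \qquad F = G\,\frac{m_1 m_2}{r^2},$$

the second law of motion (force equals mass times acceleration) and the law of universal gravitation, the single force that accounts both for free fall on Earth and for the orbits of the planets.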

Significant innovations were also introduced, for the first time in a long time, in Mathematics. Blaise Pascal invented the mathematical theory of probability (and built the first mechanical adding machine). Leibniz envisioned a universal language of logic (a “lingua characteristica”) that would make it possible to derive all possible knowledge simply by applying combinatorial rules of logic. Arabic numbers had been adopted in the 16th century. Signs for addition, subtraction, multiplication were introduced by Francois Vieta. John Napier invented logarithms. Descartes had developed analytical geometry, and Newton and Leibniz independently developed calculus.
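A one-line identity (standard mathematics, added here to illustrate the point) shows why Napier’s logarithms were such a practical breakthrough: they turn multiplication into addition, since

$$\log(xy) = \log x + \log y,$$

so a tedious product of large numbers could be computed by adding two values looked up in a table.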

It might not be a coincidence that a similar scientific, mathematical approach can be found in the great composers of the era: Antonio Vivaldi, George Frideric Handel and Johann Sebastian Bach.

The next big quantum leap in knowledge came with the “industrial” revolution. It is hard to pinpoint the birth date of the industrial revolution (in 1721 Thomas Lombe built perhaps the first factory in the world, in 1741 Lewis Paul opened the first cotton mill, in 1757 James Watt improved the steam engine), but it is clear where it happened: Manchester, England. That city benefited from a fortunate combination of factors: water mills, coal mines, Liverpool’s port and, last but not least, clock-making technology (the earliest factory mechanics were clock-makers). These factors were all in the hands of the middle class, so it is not surprising that the middle class (not the aristocracy or the government) ended up managing most of the enterprises.

The quantum leap in production translated into a quantum leap in transportation: in 1782 the first steamboat sailed up the Clyde, in 1787 John Wilkinson built the first iron boat, in 1812 Henry Bell started the first commercial steamboat service in Glasgow, in 1819 the “Savannah” completed the first transatlantic crossing by a steamboat, in 1820 the first iron steamship was built, etc. By 1892 Britain’s tonnage and sea-trade exceeded those of the rest of the world combined. At its peak, Britain had only 2% of the world’s population, but produced almost 20% of the world’s manufacturing output.

One of the most tangible side-effects of the industrial revolution was the British Empire. There had been “empires” before, and even larger ones (the Mongol empire). But never before had an empire stretched over so many continents: Africa, America, Oceania, Asia. The Roman empire had viewed itself as an exporter of “civilization” to the barbaric world, but the British Empire upped the ante by conceiving its imperialism as a self-appointed mission to redeem the world. Its empire was a fantastic business venture, that exported people, capital and goods, and created “world trade”, not just regional trade. This enterprise was supported by a military might that was largely due to financial responsibility at home. Despite the fact that France had a larger population and more resources, Britain managed to defeat France in the War of the Spanish Succession (1702-1713), in the Seven Years’ war (1756-1763) and in the Napoleonic wars (1795-1815).

Managing the British Empire was no easy task. One area that had to be vastly improved to manage a global empire was the area of global communications: steamships, railroads, the telegraph, the first undersea cable and a national post system unified the colonies as one nation. They created the first worldwide logistical system. Coal, a key element in a country in which wood was scarce, generated additional momentum for the improvement of shipbuilding technology and the invention of railroads (1825).

Other areas that the British Empire needed to standardize were finance and law. Thus the first economic and legal systems that were global, not only regional, were born. British economic supremacy lasted until 1869, when the first transcontinental railroad connecting the American prairies with the Pacific Coast introduced a new formidable competitor: the USA.

No wonder, thus, that Adam Smith felt a new discipline had to be created, one that studied the dynamics of a complex economy based on the production and distribution of wealth. He explained the benefits of free competition and free trade, and how competition can work for the common good (as an “invisible hand”).

Jeremy Bentham (1789) introduced “utilitarian” criteria to decide what is good and what is bad: goodness is what guarantees “the greatest happiness for the greatest number of people”. The philosophy of “utilitarianism” was later perfected by John Stuart Mill, who wrote that “pleasure and freedom from pain are the only things desirable as ends”, thus implying that good is whatever promotes pleasure and prevents pain.

France was much slower in adopting the industrial revolution, and never came even close to matching the pace of Britain’s industrialization, but the kingdom of the Bourbons went through a parallel “intellectual” revolution that was no less radical and influential: “Les Lumieres”, or the Enlightenment. It started in the salons of the aristocracy, usually run by the ladies, and then it spread throughout French society. The “philosophes” believed, first and foremost, in the power of Reason and in Knowledge, as opposed to the religious and political dogmas. They hailed progress and scorned conservative attitudes. The mood changed dramatically, as these philosophers were able to openly say things that a century earlier would have been anathema. Scientific discoveries (Copernicus, Galileo, Newton), the exploration of the world, the printing press and a religious fatigue after so many religious wars led to cultural relativism: there are no dogmas, and only facts and logic should determine opinions. So they questioned authority (Aristotle, the Bible) across the board. Charles de Montesquieu, Denis Diderot, Voltaire, Rousseau favored a purely rational religion and carried out a moral crusade against intolerance, tyranny, superstition.

Julien LaMettrie was the ultimate materialist: he thought the mind is nothing but a machine (a computer, basically) and thoughts are due to the physical processes of the brain. There is nothing special about a mind or a life. Humans are just like all other animals.

Charles Bonnet speculated that the mind may not be able to influence the body, but might simply be a side-effect of the brain (“epiphenomenalism”).

Paul-Henri Holbach believed that humankind’s miseries are mostly caused by religion and superstition, that there is no God handing out rewards or punishment, that the soul dies when the body dies, that all phenomena can be understood in terms of the features of matter.

Georges Buffon concocted the first western account of the history of life and of the Earth that was not based on the Bible.

The American revolution (1776) was, ultimately, a practical application of the Enlightenment, a feasibility study of the ideas of the Enlightenment. The French Revolution (1789-94) was a consequence of the new political discourse, but also signaled an alliance between the rising bourgeoisie, the starving peasants and the exploited workers. Its outcome was that the “nation” replaced “God” and “King”: nationalism was born. By the turn of the century, the Enlightenment had also fathered a series of utopian ideologies, from Charles Fourier’s phalanxes to Claude Saint-Simon’s proto-socialism to Pierre Proudhon’s anarchism.

In marked contrast with the British and French philosophers, the Germans developed a more “spiritual” and less “materialistic” philosophy. The Germans were less interested in economy, society and politics, and much more interested in explaining the universe and the human mind, what we are and what is the thing out there that we perceive.

Immanuel Kant single-handedly framed the problem for future generations of philosophers. Noticing that the mind cannot perceive reality as it is, he believed that phenomena exist only insofar as the mind turns perceptions into ideas. The empirical world that appears to us is only a representation that takes place inside our mind. Our mind builds that representation thanks to some a-priori knowledge in the form of categories (such as space and time). These categories allow us to organize the chaotic flow of perceptions into an ordered meaningful world. Knowledge consists in categorizing perceptions. In other words, Kant said that knowledge depends on the structure of the mind.

Other German philosophers envisioned an even more “idealistic” philosophy.

Johann Fichte thought the natural world is constructed by an infinite self as a challenge to itself and as a field in which to operate. The Self needs the non-Self in order to be.

Friedrich Schelling believed in a fundamental underlying unity of nature, which led him to view Nature as God, and to deny the distinction between subject and object.

The spiritual theory of reality reached its apex with Georg-Wilhelm-Friedrich Hegel. He too believed in the unity of nature, that only the absolute (infinite pure mind) exists, and that everything else is an illusion. He proved it by noticing that every “thesis” has an “antithesis” that can be resolved at a higher level by a “synthesis”, and each synthesis becomes, in turn, a thesis with its own antithesis, which is resolved at a higher level of synthesis, and so forth. This endless loop leads to higher and higher levels of abstraction. The limit of this process is the synthesis of all syntheses: Hegel’s absolute. Reality is the “dialectical” unfolding of the absolute. Since we are part of the absolute as we develop our dialectical knowledge, it is, in a sense, the absolute that is trying to know itself. We suffer because we are alienated from the absolute instead of being united with it. Hegel applied the same “dialectical” method to history, believing that history is due to the conflict of nations, conflicts that are resolved on a higher plane of political order.

Arthur Schopenhauer (1819) opened a new dimension to the “idealistic” discourse by arguing that a human being is both a “knower” and a “willer”. As knowers, humans experience the world “from without” (the “cognitive” view). As free-willing beings, humans are also provided with a “view from within” (the “conative” view). The knowing intellect can only scratch the surface of reality, while the will is able to grasp its essence. Unfortunately, the will’s constant urge for ever more knowledge and action causes human unhappiness: we are victims of our insatiable will. In Buddhist-like fashion, Schopenhauer reasoned that the will is the origin of human suffering: the less one “wills”, the less one suffers. Salvation can come through a “euthanasia of the will”.

Ludwig Feuerbach inverted Hegel’s relationship between the individual and the Absolute and saw religion as a way to project the human experience (“species being”) into the concept of God.

Soren Kierkegaard (1846) saw philosophy and science as vain and pointless, because the thinker can never be a detached, objective, external observer: the thinker is someone who exists and is part of what is observed. Existence is both the thinker’s object and condition. He thought that philosophers and scientists missed the point. What truly matters is the pathos of existing, not the truth of Logic. Logic is defined by necessity, but existence is dominated by possibility. Necessity is a feature of being, possibility is a feature of becoming. He focused on the fact that existence is possibility, possibility means choice, and choice causes angst. We are trapped in an “aut-aut”, between the aesthetic being (whose life is paralyzed by multiple possibilities) and the ethic being (whose life is committed to one choice). The only way out of the impasse is faith in God.

Inventions and discoveries of this age include Alessandro Volta’s battery, a device that converts chemical energy into electricity, and John Dalton’s theory that matter is made of atoms of differing weights. By taking Newton literally, Pierre-Simon Laplace argued that the future is fully determined: given the initial conditions, every future event in the universe can be calculated. The primacy of empirical science (“positivism”) was championed by Auguste Comte, who described the evolution of human civilization as three stages, corresponding to three stages of the human mind: the theological stage (in which events are explained by gods, and kings rule); the abstract stage (in which events are explained by philosophy, and democracy rules); and the scientific (“positive”) stage (in which there is no absolute truth, but science provides generalizations that can be applied to the real world).

Hermann von Helmholtz offered a detailed picture of how perception works, one that emphasized how an unconscious process in the brain was responsible for turning sense data into thought and for mediating between perception and action.

In Mathematics, George Boole resuscitated Leibniz’s program of a “lingua characteristica” by applying algebraic methods to a variety of fields. His idea was that the systematic use of symbols eliminated the ambiguities of natural language. A number of mathematicians realized that the traditional (Euclidean) geometry was not the only possible geometry. Non-Euclidean geometries were developed by Carl-Friedrich Gauss, Nikolaj Lobachevsky (1826), Janos Bolyai (1829) and Bernhard Riemann (1854). The latter realized that the flat space of Euclidean geometry (the flat space used by Newton) was not necessarily the only possible kind of space: space could be curved, and he developed a geometry for curved space (in which even a straight line is curved, by definition). Each point of that space can be more or less curved, according to a “curvature tensor”.

Somehow, the convergence of utopianism, idealism and positivism yielded Karl Marx’s historical materialism. Marx was fully aware that humans are natural beings who have to interact with nature (work) in order to survive. Labor converts the raw materials of nature into the products that help humans survive. But in industrial society the difference between the cost of manufacturing a product and the price that people are willing to pay for it had created a “surplus value” that was making the capitalist class richer and richer, while hardly benefiting the working class at all. Marx set out to analyze the “alienation” caused to the working class by the fact that producer and product had been separated. He envisioned the society of his time as divided into two antagonistic classes: the proletariat and the bourgeoisie. And he envisioned the whole of human history as a conflict not of nations but of classes. His remedy was socialism: all citizens should own the tools of production. After socialism, the final stage of human history was to be communism: the full equality of a classless society.

While human knowledge was expanding so rapidly, literature was entering the “romantic” age. The great poets of the age were William Blake and William Wordsworth in England, Friedrich Hoelderlin and Johann-Wolfgang Goethe in Germany, and Giacomo Leopardi in Italy. With the exception of Carlo Goldoni’s comedies in Italy, theater was dominated by German drama: Gotthold-Ephraim Lessing, Friedrich von Schiller, Georg Buchner. The novel became a genre of equal standing with poetry and theater via Goethe’s “Wilhelm Meister” (1796), Stendhal’s “Le Rouge et Le Noir” (1830), Honore’ de Balzac’s “Le Pere Goriot” (1834), Nikolaj Gogol’s “Dead Souls” (1842), Emily Bronte’s “Wuthering Heights” (1847), Herman Melville’s “Moby Dick” (1851), Gustave Flaubert’s “Madame Bovary” (1857) and Victor Hugo’s “Les Miserables” (1862).

While painting was relatively uneventful compared with the previous age, despite the originality of works such as Francisco Goya’s “Aquelarre” (1821) and Jean-Francois Millet’s “The Gleaners” (1851), this was the age of classical music, which boasted the geniuses of Wolfgang-Amadeus Mozart, Franz-Peter Schubert and Ludwig Van Beethoven.

In the meantime, the world had become a European world. The partition of Africa (1885) had given Congo to Belgium, Mozambique and Angola to Portugal, Namibia and Tanzania to Germany, Somalia to Italy, Western Africa and Madagascar to France, and then Egypt, Sudan, Nigeria, Uganda, Kenya, South Africa, Zambia, Zimbabwe, Botswana to Britain. Then there were the “settler societies” created by the European immigrants who displaced the natives: Canada, USA, Australia, South Africa. In subject societies such as India’s (and, de facto, China’s), few Europeans ruled over huge masses of natives. The mixed-race societies of Latin America were actually the least “European”. There were fewer and shorter intra-European wars but many more wars of conquest elsewhere. Europeans controlled about 35% of the planet in 1800, 67% in 1878, 84% in 1914.

Japan was the notable exception. It had been the least “friendly” to the European traders, and it became the first (and only) non-European civilization to “modernize” rapidly. In a sense, it became a “nation” in the European sense of the word. It was also the first non-European nation to defeat a European power (Russia). No wonder that the Japanese came to see themselves as the saviors of Asia: they were the only ones that had resisted European colonization.

To ordinary people, the age of wars among the European powers seemed to be only a distant memory. The world was becoming more homogeneous and less dangerous. One could travel from Cairo to Cape Town, from Lisbon to Beijing, with minimal formalities. It was “globalization” on a scale never seen before and not seen again for a century. Such a sense of security had not been felt since the days of the Roman empire, although, invisible to most, this was also the age of an arms race more delirious than any the world had ever seen.

No wonder that European population increased dramatically at the end of the 19th century. In 30 years, Germany’s population grew by 43%, Austria-Hungary’s by 35%, Britain’s by 26%. A continuous flow of people emigrated to the Americas.

After the French revolution, nationalism became the main factor of war. Wars were no longer feuds between kings, they were conflicts between peoples. This also led to national aspirations by the European peoples who did not have a country yet: notably Italians and Germans, who were finally united in 1861 and 1871 (but also the Jews, who had to wait much longer for a homeland). Nationalism was fed by mass education (history, geography, literature), which included, more or less subtly, the exaltation of the national past.

France lived its “Belle Epoque” (the 40 years of peace between 1871 and 1914). It was the age in which cafes (i.e., the lower classes) replaced the salons (i.e., the higher classes) as the cultural centers. And this new kind of cultural center witnessed an unprecedented convergence of sex, art and politics. Poetry turned towards “Decadentism” and “Symbolism”, movements pioneered by Charles Baudelaire’s “Les Fleurs du Mal” (1857), Isidore de Lautreamont’s “Les Chants de Maldoror” (1868), Arthur Rimbaud’s “Le Bateau Ivre” (1871), Paul Verlaine’s “Romances sans Paroles” (1874) and Stephane Mallarme’s “L’apres-midi d’un Faune” (1876). Painters developed “Impressionism”, which peaked with Claude Monet, and then “Cubism”, which peaked with Pablo Picasso, and, in between, original styles were pursued by Pierre-Auguste Renoir, Georges Seurat, Henri Rousseau, Paul Gauguin and Henri Matisse. France had most of the influential artistic movements of the time. In the rest of Europe, painting relied on great individualities: Vincent van Gogh in Holland, Edvard Munch in Norway, Gustav Klimt in Austria and Marc Chagall in Russia. French writers founded “Dadaism” (1916) and “Surrealism” (1924), and an Italian in Paris founded “Futurism” (1909). They inherited the principle of the “Philosophes”: question authority and defy conventions, negate aesthetic and moral values. At the same time, they reacted against the ideological values of the Enlightenment itself: Dadaism exalted irrationality, Surrealism was fascinated by dreams, Futurism worshipped machines.

Berlin, in the meantime, had become not only the capital of a united Germany but also the capital of electricity. Germany’s pace of industrialization had been frantic. Werner von Siemens founded Siemens in 1847. In 1866 that company invented the first practical dynamo. In 1879 Siemens demonstrated the first electric railway and in 1881 it demonstrated the first electric tram system. In 1887 Emil Rathenau founded Siemens’ main competitor, the Allgemeine Elektrizitaets-Gesellschaft (AEG), specializing in electrical engineering, whereas Siemens was specializing in communication and information. In 1890 AEG developed the alternating-current motor (invented in the USA by Nikola Tesla) and the generator, which made it possible to build the first power plants: alternating current made it easier to transmit electricity over long distances. In 1910 Berlin was the greatest center of electrical production in the world. Germany’s industrial output had surpassed France’s (in 1875) and Britain’s (in 1900). Berlin was becoming a megalopolis, as its population grew from 1.9 million in 1890 to 3 million in 1910.

Electricity changed the daily lives of millions of people, mainly in the USA, because it enabled the advent of appliances, for example Josephine Cochrane’s dishwasher (1886), Willis Carrier’s air conditioner (1902), and General Electric’s commercial refrigerator (1911). Life in the office also changed dramatically. First (in 1868) Christopher Latham Sholes introduced a practical typewriter that changed the concept of correspondence, and then (in 1885) William Burroughs introduced an adding machine that changed the concept of accounting.

Progress in transportation continued with Friedrich von Harbou’s dirigible (1873), Daimler and Maybach’s motorcycle (1885), Karl Benz’s gasoline-powered car (1886) and Wilbur and Orville Wright’s airplane (1903). But, more importantly, the USA introduced a new kind of transportation, not physical (of people) but virtual (of information). The age of communications was born with Samuel Morse’s telegraph (1844), Alexander Bell’s telephone (1876), Thomas Edison’s phonograph (1877) and Kodak’s first consumer camera (1888). Just like Louis Daguerre had invented the “daguerreotype” in 1839, but his invention had been improved mainly in the USA, so the Lumiere brothers invented cinema (in 1895) but the new invention soon became an American phenomenon. The most dramatic of these events was perhaps Guglielmo Marconi’s transatlantic radio transmission of 1901, when the world seemed to shrink.

“Creationist” views of the world had already been attacked in France by the “philosophes”. In the age of Progress, a new, much more scientific attack came from Britain.

Herbert Spencer attempted a synthesis of human knowledge that led him to posit the formation of order as a pervasive feature of the universe. Basically, the universe is “programmed” to evolve towards more and more complex states. In particular, living matter continuously evolves. The fittest forms of life survive. Human progress (wealth, power) results from a similar survival of more advanced individuals, organizations, societies and cultures over their inferior competitors.

Charles Darwin explained how animals evolved: through the combination of two processes, one of variation (the fact that children are not identical to the parents, and are not identical to each other) and one of selection (the fact that only some of the children survive). The indirect consequence of these two processes is “adaptation”, whereby species tend to evolve towards the configuration that can best cope with the environment. The “struggle for survival” became one of the fundamental laws of life. In a sense, Darwin had merely transferred Adam Smith’s economics to biology. But he had also introduced an important new paradigm: “design without a designer”. Nature can create amazingly complex and efficient organisms without any need for a “designer” (whether human or divine). Humans are used to the idea that someone designs, and then builds, an artifact. A solution to a problem requires some planning. But Darwin showed that Nature uses a different paradigm: it lets species evolve through the combined forces of variation and selection, and the result is a very efficient solution to the problem (survival). No design and no planning are necessary. It was more than a theory of evolution: it was a new way of thinking, that was immediately applied to economics, sociology, history, etc.
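A minimal sketch of the idea in Python (a modern toy illustration in the spirit of later “evolutionary algorithms”, with an arbitrary target string standing in for the environment; it is not anything Darwin wrote): blind variation plus selection, iterated over generations, reaches a “designed-looking” result with no designer specifying the steps.

import random

TARGET = "DESIGN WITHOUT A DESIGNER"      # arbitrary goal, standing in for the environment
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # variation: each character of an offspring may change at random
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def fitness(s):
    # selection criterion: how well the organism "copes with the environment"
    return sum(a == b for a, b in zip(s, TARGET))

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(2000):
    best = max(population, key=fitness)               # selection: the fittest reproduces
    if best == TARGET:
        break
    population = [mutate(best) for _ in range(100)]   # variation among the offspring
print(generation, best)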

Ernst Haeckel argued that “ontogeny recapitulates phylogeny”: the development of the body in the individual of a species (or ontogeny) summarizes the evolutionary development of that species (phylogeny).

Far less publicized, but no less dramatic, was the discovery of Gregor Mendel. He set out to explain why children do not inherit the average of the traits of their parents (e.g., a color in between the black eyes of the mother and the blue eyes of the father) but only the trait of one or the other (black or blue eyes). He came up with a simple but, again, revolutionary explanation: there are units of transmission of traits (which today we call “genes”), and one inherits not a mathematical combination of one’s parents’ traits but either one unit or the other. Mendel introduced an important distinction: the “genotype” (the program that determines what an organism looks like) versus the “phenotype” (the way the organism actually looks).
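A toy sketch in Python (the allele names “B” and “b” are hypothetical, chosen only for this illustration): each parent passes on one of its two units at random, and each offspring shows one trait or the other, never a blend of the two.

import random
from collections import Counter

def cross(parent1, parent2, n=10000):
    offspring = Counter()
    for _ in range(n):
        genotype = random.choice(parent1) + random.choice(parent2)    # one unit from each parent
        phenotype = "black eyes" if "B" in genotype else "blue eyes"  # "B" (black) dominates "b" (blue)
        offspring[phenotype] += 1
    return offspring

print(cross("Bb", "Bb"))  # roughly 3 black : 1 blue -- discrete traits, never an "average" color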

Similar progress was going on in the study of the human mind. Paul Broca studied brain lesions to understand the structure of the brain and how it determines human behavior and personality.

The acceleration in Physics had been dramatic since Newton’s unification of terrestrial and celestial Mechanics, but the age of steam and electrical power introduced the first strains on its foundations. In 1824, Sadi Carnot had worked out a preliminary science of heat, or Thermodynamics, and in 1864 James Clerk Maxwell unified electricity and magnetism, thus founding Electromagnetism. In 1887 Heinrich Hertz discovered radio waves, and in 1895 Wilhelm-Conrad Roentgen discovered X rays. Newton’s Physics had not been designed for these phenomena.

In 1896 radioactivity was discovered and led Physicists to believe that the atom was not indivisible, that it had its own structure. In 1900 Max Planck invented Quantum Theory by positing that energy can only be transmitted in discrete “quanta”. In 1905 Albert Einstein published “The Special Theory of Relativity”. In 1911 Ernest Rutherford showed how the atom is made of a nucleus and orbiting electrons.

Newton’s Physics viewed the world as a static and reversible system that undergoes no evolution, whose information is constant in time. Newton’s Physics was the science of being. But his Physics was not very useful to understand the world of machines, a dynamic world of becoming. Thermodynamics describes an evolving world in which irreversible processes occur: something changes and can never be undone. Thermodynamics was the science of becoming. The science of being and the science of becoming describe dual aspects of nature. Thermodynamics was born to study gases: systems made of a myriad of small particles in frantic motion. Newton’s Physics would require a dynamic equation for each of them, which is just not feasible. Thermodynamics describes a macroscopic system by global properties (such as temperature, pressure, volume). Global properties are due to the motion of the system’s particles (e.g., temperature is the average kinetic energy of the molecules of a system). They are fundamentally stochastic, which implies that the same macro-state can be realized by different micro-states (e.g., a gas can have the same temperature at different points in time even if its internal state is changing all the time). Sadi Carnot realized that a “perpetual-motion” machine was not possible: it is not possible to continuously convert energy from one form to another and back. The reason is that any transformation of energy has a “cost” that came to be called “entropy”. That quantity became the real oddity of Thermodynamics. Everything else was due to viewing a complex system stochastically (as opposed to Newton’s deterministic view of a simple system), but entropy was a new concept, one that embodied a fundamental feature of our universe: things decay, and some processes are not reversible. Heat flows spontaneously from hot to cold bodies, but the opposite never occurs. You can dissolve a lump of sugar in a cup of coffee, but, once it is dissolved, you can never bring it back. You may calculate the amount of sugar, its temperature and many other properties, but you cannot bring it back. Some happenings cannot be undone. The second law of Thermodynamics states that the entropy (of an isolated system) can never decrease. It is a feature of this universe that natural processes generate “entropy”. This translates into a formula that is not an equality. Newton’s Physics was built on the equal sign (something equals something else). Thermodynamics introduced the first law of nature that was an inequality.
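In modern notation (a standard formulation, not the author’s), that inequality reads: the entropy S of an isolated system never decreases, and in any transformation the entropy change is at least the heat absorbed divided by the temperature:

\[ \Delta S_{\mathrm{isolated}} \ge 0, \qquad dS \ge \frac{\delta Q}{T} \]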

Ludwig Boltzmann interpreted entropy as a measure of disorder in a system. He offered a statistical definition of entropy: the entropy of a macrostate is the logarithm of the number of its microstates. Entropy measures the very fact that many different microscopic states of a system may result in the same macroscopic state. One can interpret this fact as “how disordered the system is”. This interpretation combined with the second law of Thermodynamics led to the fear of an “eternal doom”: the universe must evolve in the direction of higher and higher entropy, thus towards the state of maximum entropy, which is absolute disorder, or the “heat death”.
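Boltzmann’s definition, in the notation later carved on his tombstone (k is a constant of proportionality, now called Boltzmann’s constant, and W is the number of microstates compatible with the given macrostate):

\[ S = k \log W \]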

Maxwell’s Electromagnetism introduced another paradigm shift: the concept of field (pioneered by Michael Faraday). Maxwell proved that electricity and magnetism, apparently related to different phenomena, are, in reality, the same phenomenon. Depending on circumstances, one can witness only the electrical or only the magnetic side of things, but they actually coexist all the time. The electric force is created by changes in the magnetic field. The magnetic force is created by changes in the electric field. The oddity was that the mathematical expression of these relations between electric and magnetic forces turned out to be “field equations”, equations describing not the motion of particles but the behavior of fields. Changing fields generate waves that radiate through space. Gravitation was no longer the only example of action at a distance: just like there was a “gravitational field” associated with any mass, so there turned out to exist an “electromagnetic” field associated with any electric charge. Light itself was shown to be made up of electromagnetic waves.
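In modern vector notation (a later reformulation, not Maxwell’s original set of equations), the four field equations read as follows; the last two express the mutual generation of electric and magnetic fields described above:

\[ \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \]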

Ernst Mach was another influential Physicist who had a powerful intuition. He envisioned the inertia of a body (the tendency of a body at rest to remain at rest and of a body in motion to continue moving in the same direction) as the consequence of a relationship of that body with the rest of the matter in the universe. Basically, he thought that each body in the universe interacts with all the other bodies in the universe, even at gigantic distances, and its inertia is the sum of those myriad interactions.

The last of the major ideas in Physics before Relativity came from Henri Poincare, who pioneered “chaos” theory when he pointed out that a slight change in the initial conditions of some equations results in large-scale differences. Some systems live “at the edge”: a slight change in the initial conditions can have catastrophic effects on their behavior.
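A minimal sketch of this sensitivity in Python (the logistic map is a modern textbook example, not Poincare’s own three-body problem): two trajectories that start almost identically end up completely different.

# Two nearly identical initial conditions diverge under repeated iteration.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.200000, 0.200001   # a difference of one part in a million
for step in range(1, 51):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(step, abs(x - y))   # the gap grows until it is of order 1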

The intellectual leadership, though, was passing to the Mathematicians.

By inventing “set theory”, Georg Cantor emancipated Mathematics from its traditional domain (numbers). He also introduced “numbers” to deal with infinite quantities (“transfinite” numbers) because he realized that space and time are made of infinite points, and that, between any two points, there always exists an infinite number of points. Nonetheless, an infinite series of numbers can have a finite sum. These were, after all, the same notions that, centuries before Cantor, had puzzled Zeno. Cantor gave them mathematical legitimacy.
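For example (Zeno’s runner, covering half of the remaining distance at every step), infinitely many positive terms can add up to a finite total:

\[ \sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1 \]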

Gottlob Frege (1884) aimed at removing intuition from arithmetic. He thus set out, just like Leibniz and Boole before him, to replace natural language with the language of Logic, “predicate calculus”. Extending Cantor’s program, Frege turned Mathematics itself into a branch of Logic: using Cantor’s “sets”, he reconstructed the cardinal numbers by a purely logical method that did not rely on intuition.

Frege realized that Logic was about the “syntax”, not the “semantics” of propositions. An expression has a “sense” (or intension) and a “reference” (or extension): “red” is the word for the concept of redness and the word for all the things that are red. In some cases, expressions with different senses actually have the same referent: for example, “the star of the morning” and “the star of the evening” both refer to Venus. In particular, propositions of Logic can have many senses, but only have one of two referents: true or false.

Giuseppe Peano was pursuing a similar program at the same time, an “axiomatization” of the theory of natural numbers.

Charles Peirce gave a pragmatic definition of “truth”: something is true if it can be used and validated. Thus, truth is defined by consensus. Truth is not agreement with reality, it is agreement among humans. Truth is “true enough”. Truth is not eternal. Truth is a process, a process of self-verification. In general, he believed that an object is defined by the effects of its use: a definition that works well is a good definition. An object “is” its behavior. The meaning of a concept consists in its practical effects on our daily lives: if two ideas have the same practical effects on us, they have the same meaning.

Peirce was therefore more interested in “beliefs” than in “truths”. Beliefs lead to habits that get reinforced through experience. He saw that the process of habit creation is pervasive in nature: all matter can be said to acquire habits, except that the “beliefs” of inert matter have been fixed to the extent that they can’t be changed anymore. Habit is, ultimately, what makes objects what they are. It is also what makes us what we are: I am my habits. Habits are progressively removing chance from the universe. The universe is evolving from an original chaos in which chance prevailed and there were no habits towards an absolute order in which all habits have been fixed.

At the same time, Peirce realized that Frege’s theory of sense and referent was limited. Peirce introduced the first version of “semiotics” that focused on what signs are. An index is a sign that bears a causal relation with its referent (for example, cigarette smoke “means” that someone was in the room). An icon is a sign that bears a relation of similarity with its referent (for example, the image of a car refers to the car). A symbol is a sign that bears a relation with its referent that is purely conventional (for example, the word “car” refers to a car). A sign refers to an object only through the mediation of other signs (or “interpretants”). There is an infinite regression of interpretants from the signifier (the sign) to the signified (the referent). A dictionary defines a word in terms of other words, which are defined in terms of other words, which are defined in terms of other words, and so forth. Peirce believed that knowing is “semiosis” (making signs) and semiosis is an endless process.

Philosophy was less interested in Logic and more interested in the human condition, the “existentialist” direction that Schopenhauer and Kierkegaard had inaugurated.

Friedrich Nietzsche believed that humans are driven by the “will to power”, an irresistible urge to order the course of one’s experiences (an extension of Schopenhauer’s will to live). All living beings strive for a higher order of their living condition to overcome their present state’s limitations. Human limitations are exemplified by Science: Science is only an interpretation of the world. Truth and knowledge are only relative to how useful they are to our “will to power”. He viewed Christian morality as a device invented by the weak to assert their will to power over the strong, a “slave morality”. He believed that Christian values had become obsolete (“God is dead”) and advocated a new morality founded on the ideal of the “superman”, who rises above the masses and solves the problems of this world, not of the otherworld.

Henri Bergson was, instead, a very spiritual philosopher, for whom reality was merely the eternal flow of a pantheistic whole. This flow has two directions: the upward flow is life, the downward flow is inert matter. Humans are torn between Intellect and Intuition: Intellect is life observing inert matter (in space), whereas Intuition is life observing life (in time). Intellect can “understand” inert matter, but only Intuition can “grasp” life. In order to understand matter, Intellect breaks it down into objects located in space. Intuition, instead, grasps the flow of life as a whole in time.

Francis-Herbert Bradley was the last major “idealist”. He argued that all categories of science (e.g., space and time) can be proven to be contradictory, which proves that the world is a fiction, a product of the mind. The only reality has to be a unity of all things, the absolute.

Inevitably, the focus of knowledge shifted towards the psyche.

William James adapted Peirce’s “pragmatism” to the realm of the mind. He believed that the function of mind is to help the body to live in an environment, just like any other organ. The brain is an organ that evolved because of its usefulness for survival. The brain is organized as an associative network, and associations are governed by a rule of reinforcement, so that it creates “habits” out of regularities (stimulus-response patterns). A habit gets reinforced as it succeeds. The function of thinking is pragmatic: to produce habits of action. James was intrigued by the fact that the brain, in doing so, also produced “consciousness”, but thought that mental life is not a substance, it is a process (“the stream of consciousness”).

Edward Thorndike postulated the “law of effect”: animals learn based on the outcome of their actions. He envisioned the brain as a network: learning occurs when elements are connected. Behavior is due to the association of stimuli with responses that is generated through those connections. A habit is a chain of “stimulus-response” pairs.

Wilhelm-Max Wundt had founded Psychology to study the psyche via experiments and logic, not mere speculation. The classical model of Psychology was roughly this. Actions have a motive. Motives are hosted in our minds and controlled by our minds. Motives express an imbalance between desire and reality that the mind tries to remedy by changing the reality via action. An action, therefore, is meant to restore the balance between reality and our desires. But what about dreams?

Sigmund Freud was less revolutionary than he seemed to be, because, in principle, he simply applied the classical model of Psychology. He decided that dreams have a motive, that those motives are in the mind, and that they are meant to remedy an imbalance. Except that the motives of dreams are not conscious: the mind contains both conscious motives and unconscious motives. There is a repertory of motives that our mind, independent of our will, has created over the years, and they participate daily in determining our actions. Freud’s revolution was in separating motive and awareness. A dream is only apparently meaningless: it is meaningless if interpreted from the conscious motives. But the dream is perfectly logical if one considers also the unconscious motives. The meanings of dreams are hidden and reflect memories of emotionally meaningful experiences. Dreams are not prophecies, as ancient oracles believed, but hidden memories. Psychoanalysis was the discipline invented by Freud to sort out the unconscious mess.

Freud divided the self into different parts that coexist. The ego perceives, learns and acts consciously. The super-ego is the (largely unconscious) moral conscience which was created during childhood by parental guidance as an instrument of self-repression. The id is the repertory of unconscious memories created by “libido”.

Somewhat unnecessarily, Freud painted a repulsive picture of the human soul. He believed that the main motive was “libido” (sexual desires) and that a child is, first and foremost, a sexual being. As parents repress the child’s sexuality, the child undergoes oral, anal and phallic stages. Boys desire sex with their mother and are afraid their father wants to castrate them. Girls envy the penis and are attracted to their father. And so forth.

Carl Jung shifted the focus towards a different kind of unconscious, the collective unconscious. He saw motives not so much in the history of the individual as in the history of the entire human race. His unconscious is a repertory of motives created over the millennia and shared by all humankind. Its “archetypes” spontaneously emerge in all minds. All human brains are “wired” to create some myths rather than others. Thus mythology is the key to understanding the human mind, because myths are precisely the keys to unlock those motives. Dreams reflect this collective unconscious, and therefore connect the individual with the rest of humankind and its archaic past. For Jung, the goal of Psychoanalysis is a spiritual renewal through the mystical connection with our primitive ancestors.

Another discipline invented at the turn of the century was Hermeneutics. Wilhelm Dilthey argued that human knowledge can only be understood by placing the knower’s life in its historical context. Understanding a text implies understanding the relationship between the author and the author’s age. This applies in general to all cultural products, because they are all analogous to written texts.

Ferdinand Saussure was the father of “Structuralism”. The meaning of any human phenomenon (e.g., language) lies in the network of relationships that it is part of. A sign is meaningful only within the entire network of signs, and the meaning of a sign “is” its relationship to other signs. Language is a system of signs having no reference to anything outside itself. He also separated “parole” (a specific utterance in a language, or a speaker’s performance) from “langue” (the entire body of the language, or a speaker’s competence), thus laying the foundations for Linguistics.

Edmund Husserl’s aim was to found “Phenomenology”, the science of phenomena. He believed that the essence of events is not their physical description provided by science, but the way we experience them. In fact, science caused a crisis by denying humans the truth of what they experience, by moving away from phenomena as they are. He pointed out that consciousness is “consciousness of”: it correlates the act of knowing (“noesis”) of the subject and the object that is known (“noema”). The self knows a phenomenon “intuitively”. The essence (“eidos”) of a phenomenon is the sum of all possible “intuitive” ways of knowing that phenomenon. The eidos can be achieved only after “bracketing out” the physical description of the phenomenon, only after removing the pollution of science from the human experience, so that the self can experience a purely transcendental knowledge of the phenomenon. This would restore the unity of subject and object that science separated.

In Physics, a number of ideas were converging towards the same view of the world. Henri Poincare showed that the speed of light has to be the maximum speed and that mass depends on speed. Hendrik Lorentz unified Newton’s equations for the dynamics of bodies and Maxwell’s equations for the dynamics of electromagnetic waves in one set of equations, the “Lorentz transformations”. These equations, which were hard to dispute because both Newton’s and Maxwell’s theories were confirmed by countless experiments, contained a couple of odd implications: bodies seemed to contract with speed, while clocks seemed to slow down.

Albert Einstein devised an elegant unification of all these ideas that matched, in scope, the one provided two centuries earlier by Newton. He used strict logic. His axioms were that the laws of nature must be uniform, that those laws must be the same in all frames of reference that are “inertial” (at rest or moving with uniform linear motion), and that the speed of light was the same in all directions. He took the oddities of the Lorentz transformations literally: length and duration appear different to different observers, depending on their state of motion, because space and time are relative. “Now” and “here” became meaningless concepts. The implications of his axioms were even more powerful. All physical quantities were now expressed in four dimensions, a time component and a three-dimensional space component. One, in particular, represented both energy and momentum, depending on the space-time coordinate that one examined. It also yielded the equivalence between mass and energy (E=mc2). Time does not flow (no more than space does): it is just a dimension. A life is a series of points in space-time, points that have both a spatial and a temporal component.
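In modern notation (a standard textbook presentation, not Einstein’s 1905 wording), a frame moving with velocity v along the x axis sees space and time coordinates mixed by the Lorentz transformations, and the rest energy of a mass m follows:

\[ x' = \gamma (x - vt), \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad E = mc^2 \]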

Einstein’s world was still Newton’s world, though, in some fundamental ways. For example, it was deterministic: the past determines the future. There was one major limitation: because nothing can travel faster than light, there is a limit to what can happen in one’s life. Each observer’s history is constrained by a cone of light within the space-time continuum radiating from the point (space and time) where the observer “is”.

Einstein’s next step was to look for a science that was not limited to “inertial” systems. He believed that phenomena should appear the same for all systems accelerated with respect to one another. His new formulas had new startling implications. The dynamic of the universe was reduced to the interaction between masses and the geometry of space-time: masses curve space-time, and the curvature of space-time determines how masses move. Space-time is warped by all the masses that it is studded with. Every object left to itself moves along a “geodesic” of space-time (the shortest route between two points on the warped surface of space-time). It so happens that space-time is warped, and thus objects appear to be “attracted” by the objects in space-time that have warped it. But each object is simply moving on a geodesic (the equivalent of a straight line in traditional “flat” space). It is space-time that is curved, not the geodesic (the trajectory) of the body. Space-time “is” the gravitational field. Einstein thus reduced Physics to Geometry. The curvature of space-time is measured by a “curvature tensor” (as in Riemann’s geometry) such that each point in space-time is described by ten numbers (the “metric tensor”). If the metric tensor is reduced to zero curvature, one obtains traditional Physics in traditional flat space. Curvature (i.e., a gravitational field) also causes clocks to slow down and light to be deflected.
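In modern notation (the standard form of the field equations; the ten independent components of the symmetric metric tensor g_{\mu\nu} are the “ten numbers” mentioned above), the matter-energy on the right tells the space-time geometry on the left how to curve:

\[ R_{\mu\nu} - \frac{1}{2} R \, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} \]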

Surprisingly, Einstein’s Relativity, which granted a special status to the observer, re-opened the doors to Eastern spirituality. Nishida Kitaro was perhaps the most distinguished Eastern philosopher to attempt a unification of western science and Zen Buddhism. In Nishida’s system, western science is like a robot without feelings or ethics that provides the rational foundations for life, while Zen provides the feelings and the ethics. “Mu” is the immeasurable moment in space-time (“less than a moment”) that has to be “lived” in order to reach the next “mu”. The flow of “mu” creates a space-time topology. Mu’s infinitesimally brief presence creates past, present, and future. The “eternal now” contains one’s whole being and also the being of all other things. The present is merely an aspect of the eternal. The eternal generates the present at every instant. Mu also creates self-consciousness and free will. There is a fundamental unity of the universe, in particular between the self and the world. Each self and each thing are expressions of the same reality, God. The self is not a substance: it is nothingness (“to study the self is to forget the self”). Religion, not science, is the culmination of knowledge. It is also the culmination of love.

The European countries (and at least two of their former colonies, Brazil and the USA) experienced an unprecedented boom in literature. The great novels of the time expanded over the genres invented by the previous generations: Leo Tolstoj’s “War and Peace” (1869), George Eliot’s “Middlemarch” (1872), Emile Zola’s “L’Assommoir” (1877), Fyodor Dostoevsky’s “Brothers Karamazov” (1880), Joaquim-Maria Machado de Assis’ “Memorias Postumas” (1881), Joris Huysmans’ “A Rebours” (1884), Perez Galdos’ “Tristana” (1892), Jose-Maria Eca de Queiros’ “Casa de Ramires” (1897), Thomas Mann’s “Buddenbrooks” (1901), Henry James’ “The Golden Bowl” (1904), Luigi Pirandello’s “Il Fu Mattia Pascal” (1904), Joseph Conrad’s “Nostromo” (1904), Maksim Gorkij’s “The Mother” (1907) and Franz Kafka’s “Der Prozess” (1915).

Theatre was largely reinvented both as a realist and as a fantastic art through Henrik Ibsen’s “The Wild Duck” (1884), Alfred Jarry’s “Ubu Roi” (1894), August Strindberg’s “A Dream Play” (1902), Anton Chekhov’s “The Cherry Orchard” (1904), Gerhart Hauptmann’s “The Weavers” (1892), Arthur Schnitzler’s “Reigen” (1896), Frank Wedekind’s “Pandora’s Box” (1904), Bernard Shaw’s “Pygmalion” (1914).

Poetry outside of France’s “isms” ranged from Robert Browning’s “The Ring And The Book” (1869) to Gerard-Manley Hopkins’ “The Wreck Of The Deutschland” (1876), from Ruben Dario’s “Prosas Profanas” (1896) to Giovanni Pascoli’s “Canti di Castelvecchio” (1903), from Antonio Machado’s “Campos de Castilla” (1912) to Rabindranath Tagore’s “Gitanjali” (1913). In the new century, France still led the way of literary fashion with Guillaume Apollinaire’s “Alcools” (1913) and Paul Valery’s “La Jeune Parque” (1917).

Classical music reflected the nationalist spirit of the age (Richard Wagner in Germany, Hector Berlioz in France, Modest Moussorgsky in Russia, Giuseppe Verdi in Italy, Antonin Dvorak in the Czech Republic, Fryderyk Chopin in Poland, Ferencz Liszt in Hungary) and the impact of Beethoven’s symphonies on the German-speaking world (Johannes Brahms, Richard Strauss, Anton Bruckner and Gustav Mahler).

At the beginning of the new century, a number of compositions announced that the classical format was about to exhaust its mission: Aleksandr Skrjabin’s “Divine Poem” (1905), Arnold Schoenberg’s “Pierrot Lunaire” (1912), Claude Debussy’s “Jeux” (1912), Igor Stravinskij’s “Le Sacre du Printemps” (1913), Charles Ives’ “Symphony 4” (1916), Sergej Prokofiev’s “Classical Symphony” (1917) and Erik Satie’s “Socrates” (1918).

All the progress in Science, Philosophy and the Arts did not help avert a new international war, one so large that it was called a “world war”. Its immediate causes (1914) were insignificant. The real causes were the “nations” themselves. The nationalistic spirit caused the confrontation, and the confrontation caused a massive arms race, and this race turned each European nation into a formidable war machine. Soldiers were transported by battleship, submarine, zeppelin, air fighter, train, car and tank. Enemies were killed with grenades, cannons, machine guns, torpedoes, bombs. 60 million men were mobilized. 8 million died. Serbia, Russia, France, Britain, Japan, Canada, Australia, Italy (1915), China (1917) and the USA (1917) won against Austria, Germany and Turkey. Russia was allied with the winners, but had to withdraw to take care of its own revolution (1917).

The post-war age opened with three new political “isms”: Vladimir Ilic Lenin’s communism (1917), Benito Mussolini’s fascism (1922) and Adolf Hitler’s nazism (1933). Mussolini and Hitler capitalized on the nationalist spirit of the two youngest nations of Europe. The Russian revolution was two revolutions in one. The first one (in February) was caused by food shortages, and involved women, workers and soldiers. The second one (in October) was in reality a coup by Lenin’s Bolshevik Party, determined to apply Leon Trotsky’s program of “permanent revolution” (bypass the bourgeois-democratic society and aim directly for the dictatorship of the proletariat). Lenin inaugurated a collectivist economy supported by a terror apparatus. Lenin was succeeded by Joseph Stalin, under whose rule Marxism-Leninism became the euphemism for a vast, pervasive, centralized bureaucracy in charge of every aspect of life (the “nomenklatura” system). The communist goal required the mobilization of all human and material resources to generate economic power, which guaranteed political and military power.

The three “isms” had something in common, besides the totalitarian regime: they soon became ideologies of mass murder. Lenin’s was scientific, with the goal to create absolute dictatorship (of the proletariat) via absolute violence; Stalin’s was political, to safeguard and increase his own power; Hitler’s was racist, to annihilate inferior races; Mao’s was idealist, to create a just society.

But they did not invent genocide: 2.4 million Chinese died in the 1911 revolution and 2 million would die in the civil war of 1928-1937, the Ottoman empire slaughtered 1.2 million Armenians in 1915, and World War I killed 8 million soldiers. Britain had already experimented with concentration camps in the Boer war (1899-1902).

However, the numbers escalated with the new ideologies of mass murder: Lenin’s “revolution” killed 5 million; Stalin’s purges of 1936-37 killed 13 million; World War 2 killed 55 million, of which millions in Hitler’s gas chambers; Mao’s “Great Leap Forward” (1958-1961) caused the death of perhaps 30 million and his “cultural revolution” (1966-1969) caused the death of perhaps 11 million.

In the meantime, Physics was still in a fluctuating state.

Niels Bohr (1913) showed that electrons are arranged in concentric shells outside the nucleus of the atom, with the number of electrons determining the atomic number of the atom and the outermost shell of electrons determining its chemical behavior. Ernest Rutherford (1919) showed that the nucleus of the atom contains positively charged particles (protons) in equal number to the number of electrons. In 1932 James Chadwick showed that the nucleus of the atom contains electrically neutral particles (neutrons): isotopes are atoms of the same element (containing the same number of electrons/protons) but with different numbers of neutrons. Their model of the atom was another case of Nature preferring only discrete values instead of all possible values. (Max Planck had shown in 1900 that atoms can emit energy only in discrete amounts).

At this point, Physics was aware of three fundamental forces: the electromagnetic force, the gravitational force and now the nuclear force.

The theory that developed from these discoveries was labeled “Quantum Mechanics”. It was born to explain why Nature prefers some “quanta” instead of all possible values. Forces are due to exchanges of discrete amounts of energy (“quanta”).

The key intuition came in 1923, when Louis de Broglie argued that matter can be viewed both as particles and waves: they are dual aspects of the same reality. This also explained the energy-frequency equivalence discovered by Albert Einstein in 1905: the energy of a photon is proportional to the frequency of the radiation.
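In modern notation (h is Planck’s constant, ν the frequency, p the momentum, λ the wavelength), the two faces of the duality are tied together by:

\[ E = h\nu, \qquad \lambda = \frac{h}{p} \]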

Max Born realized (1926) that the “wave” corresponding to a particle was a wave of probabilities, it was a representation of the state of the particle. Unlike a point-like particle, a wave can be in several places at the same time. The implication was that the state of a particle was not a specific value, but a range of values. A “wave function” specifies the values that a certain quantity can assume, and, in a sense, states that the quantity “has” all those values (e.g., the particle “is” in all the places compatible with its wave function). The “wave function” summarizes (“superposes”) all the possible alternatives. Erwin Schroedinger’s equation describes how this wave function evolves in time, just like Newton’s equations describe how a classical physical quantity evolves in time. The difference is that, at every point in time, Schroedinger’s equation yields a range of values (the wave function) not a specific value.
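In modern notation (a standard textbook form, not the historical presentation): the wave function ψ evolves deterministically under Schroedinger’s equation, while Born’s rule reads its squared magnitude as a probability density for finding the particle at a given place:

\[ i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi, \qquad P(x) = |\psi(x)|^2 \]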

The probability associated with each of those possible values is the probability that an observation would reveal that specific value (e.g., that an observation would find the particle in one specific point). This was a dramatic departure for Physics. Determinism was gone, because the state of a quantum system cannot be determined anymore. Chance had entered the picture, because, when a Physicist performs an observation, Nature decides randomly which of the possible values to reveal. And a discontinuity had been introduced between unobserved reality and observed reality: as long as nobody measures it, a quantity has many values (e.g., a particle is in many places at the same time), but, as soon as someone measures it, the quantity assumes only one of those values (e.g, the particle is in one specific point).

The fact that the equations of different quantities were linked together (a consequence of Einstein’s energy-frequency equivalence) had another odd implication, expressed by Werner Heisenberg’s “uncertainty principle”: there is a limit to the precision with which we can measure quantities. The more precise we want to be about a certain quantity, the less precise we will be about some other quantity.
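For position and momentum the principle takes the quantitative form (ħ is the reduced Planck constant): the more sharply one is pinned down, the more the other spreads out.

\[ \Delta x \, \Delta p \ge \frac{\hbar}{2} \]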

Space-time turns out to be discrete: there is a minimum size to lengths and intervals, below which Physics ceases to operate. Thus, there is a limit to how small a physical system can be.

Later, Physicists would realize that vacuum itself is unrecognizable in Quantum Mechanics: it is not empty.

Besides randomness, which was already difficult to digest, Physicists also had to accept “non-locality”: a system can affect a distant system despite the fact that they are not communicating. If two systems get entangled in a wave, they will remain so forever, even if they move to the opposite sides of the universe, at a distance at which a signal cannot travel in time to tell one what the other one is doing.

If this were not enough, Paul Dirac (1928) realized that the equations of Quantum Mechanics allowed for “anti-matter” to exist next to usual matter, for example a positively charged electron exists that looks just like the electron but has the opposite charge. Paul Dirac’s equations for the electron in an electromagnetic field, which combined Quantum Mechanics and Special Relativity, transferred Quantum Theory outside Mechanics, into Quantum Electrodynamics.

Perhaps the most intriguing aspect of Quantum Mechanics is that a measurement causes a “collapse” of the wave function. The observer changes the course of the universe by the simple act of looking at a particle inside a microscope.

This led to different “interpretations” of Quantum Mechanics. Niels Bohr argued that maybe only phenomena are real. Werner Heisenberg, instead, thought that maybe the world “is” made of possibility waves. Paul Dirac thought that Quantum Mechanics simply represents our (imperfect) knowledge of a system. Hugh Everett took the multiple possible values of each quantity literally, and hypothesized that we live in an ever multiplying “multiverse”: at each point in time, the universe splits according to all the possible values of a measurement. In each new universe one of the possible values is observed, and life goes on.

John von Neumann asked at which point the collapse occurs. If a measurement causes Nature to choose one value, and only one, among the many that are allowed by Schroedinger’s equation, “when” does this occur? In other words, where in the measuring apparatus does this occur? The measurement is performed by having a machine interact with the quantum system and eventually deliver a visual measurement to the human brain. Somewhere in this process a range of possibilities collapses into one specific value. Somewhere in this process the quantum world of waves collapses into the classical world of objects. Measurement consists in a chain of interactions between the apparatus and the system, whereby the states of the apparatus become dependent on the states of the system. Eventually, states of the observer’s consciousness are made dependent on states of the system, and the observer “knows” what the value of the observable is. If we proceed backwards, this seems to imply that the “collapse” occurs in the conscious being, and that consciousness creates reality.

Einstein was the main critic: he believed that Quantum Mechanics was an incomplete description of the universe, and that some “hidden variables” would eventually turn it into a deterministic science just like traditional science and his own Relativity.

From the beginning, it was obvious what was going to be the biggest challenge for Quantum Mechanics: discovering the “quantum” of gravitation. Einstein had explained gravitation as the curvature of space-time, but Quantum Mechanics was founded on the premise that each force is due to the exchange of quanta: Gravity did not seem to work that way, though.

A further blow to the traditional view of the universe came from Edwin Hubble’s discovery (1929) that the universe is expanding. It is not only the Earth that is moving around the Sun, and the Sun that is moving around the center of our galaxy: all the galaxies are moving away from each other.
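Hubble’s observation is summarized by the law that now bears his name (v is the recession velocity of a galaxy, d its distance, H_0 the Hubble constant): the farther a galaxy is, the faster it recedes.

\[ v = H_0 \, d \]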

The emerging discipline was Biology. By the 1940s Darwin’s theory of evolution (variation plus selection) had been finally wed to Mendel’s theory of genetic transmission (mutation), yielding the “modern synthesis”. Basically, Mendel’s mutation explained where Darwin’s variation came from. At the same time, biologists focused on populations, not individuals, using the mathematical tool of probabilities. “Population Genetics” was born.

Erwin Schroedinger noticed an apparent paradox in the biological world: as species evolve and as organisms grow, life creates order from disorder, thus contradicting the second law of Thermodynamics. The solution to this paradox is that life is not a “closed” system: the biological world is a world of energy flux. An organism stays alive (i.e., maintains its highly organized state) by absorbing energy from the outside world and processing it to decrease its own entropy (i.e., increase its own order). “Living organisms feed upon negative entropy”. Life is “negentropic”. The effect of life’s negentropy is that entropy increases in the outside world. The survival of a living being depends on increasing the entropy of the rest of the universe.
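In the language of the second law (a standard way of phrasing Schroedinger’s point, not his own notation): the entropy of the organism may decrease, provided the total entropy of organism plus environment does not.

\[ \Delta S_{\mathrm{organism}} < 0 \ \text{is allowed, as long as} \ \Delta S_{\mathrm{organism}} + \Delta S_{\mathrm{environment}} \ge 0 \]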

However, the lives of ordinary people were probably more affected by a humbler kind of science that became pervasive: synthetic materials. In 1907 Leo Baekeland invented the first plastic (“bakelite”). In 1925 cellophane was introduced and in 1930 it was the turn of polystyrene. In 1935 Wallace Carothers invented nylon.

The influence of Einstein can also be seen on Samuel Alexander, who believed in “emergent evolution”: existence is hierarchically arranged and each stage emerges from the previous one. Matter emerges from space-time, life emerges from matter, mind emerges from life, God emerges from mind.

Arguing against idealism, materialism and dualism alike, Bertrand Russell took Einstein literally and adopted the view that there is no substance (“neutral monism”): everything in the universe is made of space-time events, and events are neither mental nor physical. Matter and mind are different ways of organizing space-time.

Elsewhere, he conceived of consciousness as a sense organ that allows us to perceive some of the processes that occur in our brain. Consciousness provides us with direct, immediate awareness of what is in the brain, whereas the senses “observe” what is in the brain. What a neurophysiologist really sees while examining someone else’s brain is part of her own (the neurologist’s) brain.

But Bertrand Russell was perhaps more influential in criticizing Frege’s program. He found a paradox that seemed to terminate the program to formalize Mathematics: the class of all the classes that are not members of themselves is both a member and not a member of itself (the barber who shaves all barbers who do not shave themselves both shaves and does not shave himself). He solved the paradox (and other similar paradoxes, such as the proposition “I am lying” which is true if it is false and false if it is true) by introducing a “theory of types”, which basically resolved logical contradictions at a higher level.
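In set-theoretic notation: if R is defined as the set of all sets that are not members of themselves, then R belongs to itself exactly when it does not, a contradiction either way.

\[ R = \{ x \mid x \notin x \} \quad \Rightarrow \quad (R \in R \iff R \notin R) \]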

Ludwig Wittgenstein erected another ambitious logical system. Believing that most philosophical problems are non-issues created by linguistic misunderstandings, he set out to investigate the nature of language. He concluded that the meaning of the world cannot be understood from inside the world, and thus metaphysics cannot be justified from inside the world (no more and no less than religion or magic). Mathematics also lost some of its appeal: it cannot be grounded in the world, therefore it is but a game played by mathematicians.

Wittgenstein saw that language has a function, that words are tools. Language is a game between people, and it involves more than a mere transcription of meaning: it involves assertions, commands, questions, etc. The meaning of a proposition can only be understood in its context, and the meaning of a word is due to the consensus of a society. To understand a word is to understand a language.

Edward Sapir argued that language and thought influence each other. Thought shapes language, but language also shapes thought. In fact, the structure of a language exerts an influence on the way its speakers understand the world. Each language contains a “hidden metaphysics”, an implicit classification of experience, a cultural model, a system of values. Language implies the categories by which its speakers not only communicate but also think.

Lev Vygotsky reached a similar conclusion from a developmental viewpoint: language mediates between society and the child. Language guides the child’s cognitive growth. Thus, cognitive faculties are merely internalized versions of social processes that we learned via language as children. Thus, one’s cognitive development (way of thinking) depends on the society in which she grew up.

Something similar to the wave/particle dualism of Physics was taking place in Psychology. Behaviorists such as John Watson, Ivan Pavlov and Burrhus Skinner believed that behavior is due to stimulus-response patterns. Animals learn how to respond to a stimulus based on reward/punishment, i.e. via selective reinforcement of random responses. All of behavior can be reduced to such “conditioned” learning. This also provided an elegant parallel with Darwinian evolution, which is also based on selection by the environment of random mutations. Behaviorists downplayed mind: thoughts have no effect on our actions.

Cognitivists such as Max Wertheimer, Wolfgang Kohler and Karl Lashley (the “gestalt” school) believed just the opposite: an individual stimulus does not cause an individual response. We perceive (and react to) “form”, as a whole, not individual stimuli. We recognize objects not by focusing on the details of each image, but by focusing on the image as a whole. We solve problems not by breaking them down into more and more minute details, but via sudden insight, often by restructuring the field of perception. Cognitivists believed that the processing (thought) between input and output was the key to human behavior, whereas Behaviorists believed that behavior was just a matter of linking outputs with inputs.

Cognitivists conceived the brain as a holistic system. Functions are not localized but distributed around the brain. If a piece of the brain stops working, the brain as a whole may still be working. They envisioned memory as an electromagnetic field, and a specific memory as a wave within that field.

Otto Selz was influenced by this school when he argued that solving a problem entails recognizing the situation and filling in its gaps: the excess information in the situation contains the solution. Thus solving a problem consists in comprehending it, and comprehending it consists in reducing the current situation to a past situation. Once we “comprehend” it, we can also anticipate what comes next: inferring is anticipating.

Last, but not least, Frederic Bartlett suggested that memory is not a kind of storage, because it obviously does not retain every single word and image. Memory “reconstructs” the past. We are perfectly capable of describing a scene, a novel or a film even though we cannot remember the vast majority of the details. Memory has “encoded” the past in an efficient format of “schemata” that bear little resemblance to the original scenes and stories, but that take little space and make it easy to reconstruct them when needed.

Kurt Goldstein’s theory of disease is also an example of cognitivist thinking. Goldstein took issue against dividing an organism into separate “organs”: it is the whole that reacts to the environment. A “disease” is the manifestation of a change in the relationship between the organism and its environment. Healing is not a “repair”, but an adaptation of the whole organism to the new state. A sick body is, in fact, a system that is undergoing global reorganization.

Jean Piaget focused entirely on the mind, and specifically on the “growth” of the mind. He realized that, during our lifetime, the mind grows, just as the body grows. For him cognition was self-regulation: organisms need to constantly maintain a state of equilibrium with their environment.

Piaget believed that humans achieve that equilibrium through a number of stages, each stage corresponding to a reorganization of our cognitive life. This was not a linear, gradual progression of learning, but a discontinuous process of sudden cognitive jumps. Overall, the growth of the mind was a transition from the stage of early childhood, in which the dominant factor is perception, which is irreversible, to the stage of adulthood, in which the dominant factor is abstract thought, which is reversible.

Charlie-Dunbar Broad was a materialist in the age of behaviorists and cognitivists. He believed that mind was an emergent property of the brain, just like electricity is an emergent property of conductors. Ultimately, all is matter.

That is not to say that the “spiritual” discourse was dead. Martin Buber argued that humans were mistaken in turning subjects into objects and had thereby lost the meaning of God. He argued that our original state was one of “I-You”, in which the “I” recognizes other “I”’s in the world, but we moved towards an “I-It” state, in which the “I” sees both objects and people merely as means to an end. This changes the way in which we engage in dialogue with each other, and thus our existence. Thus we lost God, who is the “Eternal You”.

For Martin Heidegger, the fundamental question was the question of “being”. A conceptual mistake is to think of the human being as a “what” instead of a “who”. Another conceptual mistake is to separate the “who” from the “what”: the human being is part of the world at the same time that it is the observer of the world. The human being is not simply “Dasein” (existence) but “Da-sein” (“being there”, existing “in” the world). We cannot detach ourselves from reality because we are part of it. We just “act”: we are “thrown” into action. We know what to do because the world is not a world of particles or formulas: it is a world of meaning, which the mind can understand. Technology alienates humans because it recasts the natural environment as merely a reservoir of natural resources to be exploited, when in fact that environment is what provided humans with an identity.

Vladimir Vernadsky introduced the concept of the “biosphere” to express the unity of all life.

Alfred Whitehead believed in the fundamental unity of the world, due to the continuous interaction of its constituents, and that matter and mind were simply different aspects of the one reality, due to the fact that mind is part of the bodily interaction with the world. He thought that every particle is an event having both an “objective” aspect of matter and a “subjective” aspect of experience. Some material compounds, such as the brain, create the illusion that we call “self”. But the mental is not exclusive to humans, it is ubiquitous in nature.

The relationship of the self with the external reality was also analyzed by George Herbert Mead, who saw consciousness as, ultimately, a feature in the world, located outside the organism and due to the interaction of the organism with the environment. Consciousness “is” the qualities of the objects that we perceive. Those qualities are perceived the way they are because of the acts that we performed. The world is the result of our actions. It is our acting in the environment that determines what we perceive as objects. Different organisms may perceive different objects. We are actors as well as observers (of the consequences of our actions). Consciousness is not the brain process: the brain process is only the switch that turns consciousness on or off. Consciousness is pervasive in nature. What is unique to humans, as social species, is that they can report on their conscious experiences. That “reporting” is what we call the “self”. A self always belongs to a society of selves.

Sarvepalli Radhakrishnan believed that science was proving a universal process of evolution at different levels (material, organic, biological, social) whose ultimate goal was to reveal the absolute (the spiritual level). Human consciousness is not the last step in evolution, but will be succeeded by the emergence of a super-consciousness capable of realizing the union with a super-human reality that human science cannot grasp.

Muhammad Iqbal believed that humans are imperfect egos who are striving to reach God, the absolute ego.

However, it was an economist, John Maynard Keynes, who framed the fundamental philosophical problem of the post-industrial state. As citizens no longer need to worry about survival, “man will be faced with his real, permanent problem: how to use his freedom”.

But Karl Jaspers saw existence as a contradiction in terms. In theory humans are free to choose the existence they prefer, but in practice it is impossible to transcend one’s historical and social background. Thus one is truly free only in accepting one’s destiny. Ultimately, we can only glimpse the essence of our own existence; we cannot change it.

The ambition of creating a universal language à la Leibniz to find the solution to all philosophical problems had not died either. Several philosophers, particularly the “logical positivists” (such as Rudolf Carnap and Alfred-Jules Ayer), shed new light on this program. Carnap believed that meaning could be found only in the marriage of science and Frege’s symbolic logic. He believed in the motto “the meaning of a proposition is its method of verification”, which put all the responsibility on the senses. He demoted Philosophy to a second-rate discipline whose only function would be to clarify the “syntax” of the logical-scientific discourse. The problem is that the senses provide a subjective view of the world, and therefore the “meaning” derived from verification is personal, not universal. Also, it is not clear how one can “verify” statements about history. Soon it became clear that even scientific propositions cannot quite be “verified” in an absolute way. Last, but not least, Carnap could not prove the very principle of verification using the principle of verification itself.

Karl Popper clarified that truth is always and only relative to a theory: no definition of absolute truth is possible. The issue is not what is “true”, but what is “scientific”. Popper argued that matching the facts was not enough to qualify as “scientific”: a scientific theory should also provide the means to falsify itself.

Symbolic logic had made huge progress and reached an impressive level of sophistication. The implicit premise of much work on Logic was that the laws of thought are the laws of logic, and vice versa. After Frege, contributions to “axiomatizing” Mathematics and Language had come from Russell, Whitehead and Wittgenstein. David Hilbert interpreted the spirit of his age when he advanced his program of “formal systems”, which, again, was an adaptation of Leibniz’s old dream: devising an automatic procedure such that, by applying a set of rules to a set of axioms, one could prove any possible theorem. A major setback for Hilbert’s program was Kurt Goedel’s incompleteness theorem (1931): every formal system (that contains arithmetic) also contains at least one proposition that cannot be proven true or false (an unprovable proposition). In other words, there is always an unprovable theorem in every system à la Hilbert. Thus Hilbert’s program appeared to be impossible. Nonetheless, Alan Turing (1936) gave a precise form to Hilbert’s notion of a mechanical procedure with what came to be called the “Turing Machine” (an imaginary machine, not a physical one). Such a machine is capable of a few elementary operations on symbols (reading the symbol under its head, writing a new symbol, moving along a tape) and is capable of remembering its own state. One can imagine an infinite number of Turing machines, depending on the rules used to manipulate the symbols. Turing then imagined a “universal” machine capable of simulating all possible Turing machines. Turing showed that, given infinite time and infinite memory, such a universal machine could derive any provable theorem (though not, of course, Goedel’s unprovable propositions).

Turing did more than settle the fate of Hilbert’s program: he introduced a whole new vocabulary. Reasoning had been reduced to computation, which was manipulation of symbols. Thus “thinking” had been reduced to symbol processing. Also, Turing shifted the emphasis from “formulas” to “algorithms”: the Turing Machine was a series of instructions. Today’s computer is but the physical implementation of a universal Turing machine with a finite memory.
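
A machine of this kind is surprisingly easy to sketch in a modern programming language. Here is a minimal Python simulator; the transition table, which merely flips the bits written on a tape, is a made-up example rather than any historical machine:

```python
# A minimal Turing machine simulator: a state, a tape, a head position,
# and a transition table mapping (state, symbol) -> (new symbol, move, new state).
def run_turing_machine(tape, transitions, state="start", halt="halt", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                      # write a new symbol
        head += 1 if move == "R" else -1        # move the head left or right
    return "".join(tape[i] for i in sorted(tape))

# Example: a machine that flips 0s and 1s until it reaches a blank cell.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", flipper))  # prints 0100_
```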

Alfred Tarski found the “truth” that Carnap was looking for. Tarski realized a subtle but key difference between the fact that “p” is true and the sentence “p is true”. The fact and the sentence are actually referring to two different things, or to the same thing at different levels. The latter is a “meta-sentence”, expressed in a meta-language. The sentences of the meta-language are about sentences of the language. Tarski realized that truth within a theory can be defined only relative to another theory, the meta-theory. In the meta-theory one can define (one can list) all the statements that are true in the theory. Tarski replaced the intuitive notion of “truth” with an infinite series of rules which define truth in a language relative to truth in another language.
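
Schematically (a rendering of Tarski’s own canonical example, where the sentence on the left belongs to the meta-language and the sentence it mentions belongs to the object language L):

\[
\text{“}p\text{” is true in } L \iff p
\qquad\text{e.g.}\qquad
\text{“snow is white” is true} \iff \text{snow is white}
\]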

Ernst Cassirer adopted the view that was coming from Logic: that the human mind is a symbolic system, and that “Understanding” the world is turning it into symbols. The difference between animals and humans is that animals live in the world, whereas humans live in a symbolic representation of the world. All cultural artifacts are symbolic forms that mediate between the self and the world.

Alfred Korzybski made a similar distinction between animals and humans. Animals are only hunters and gatherers, activities that are bound to the territory: they are “space-binders”. Humans, instead, developed agriculture, which is bound to a memory of the past and to a prediction of the future: they are “time-binders”. Time-binding is enabled by the manipulation of symbols, and allows knowledge to be transmitted to other humans.

The influence of Peirce offered a different take on truth and meaning. John Dewey viewed knowledge as a way to generate certainty from doubt (habits from chaos). When faced with an indeterminate situation, humans work out a scientific or common-sense theory of that situation that reduces its indeterminacy. Charles Morris developed a theory of signs (“semiotics”) based on the theories of Peirce and Saussure, and distinguished three disciplines of signs: “syntax” studies the relation between signs and signs; “semantics” studies the relation between signs and objects; “pragmatics” studies the relation between signs, objects and users.

World War 2 (1939-1945) actually started during the 1930s, when Germany (Austria 1938, Czechoslovakia 1938), Italy (Ethiopia 1936) and Japan (Manchuria 1931, China 1937, Indochina 1940) began expanding their respective territories through a policy of aggression and annexation. Eventually, after Germany (1939-1940) invaded Poland and France, the powers of the world fell into two camps: Britain, the USA and Russia (who won) against Germany, Italy and Japan (the “axis”). World War 2 made explicit the commitment to genocide: Germany slaughtered Jews by the millions in gas chambers, and the USA won the war by detonating the first nuclear weapons (the first practical application of Einstein’s Relativity).

In fact, both shame of the apocalypse (the two world wars just ended) and fear of the apocalypse (the nuclear holocaust) permeated the cultural mood of the era.

In poetry the apocalyptic spirit of the time was captured by Rainer Maria Rilke’s “Duineser Elegien” (1923), William Yeats’ “The Tower” (1928), Czeslaw Milosz’s “Poem of the Stony Time” (1933), Fernando Pessoa’s “Mensagem” (1933), Federico Garcia-Lorca’s “Llanto por Ignacio Sanchez Mejias” (1935), Eugenio Montale’s “La Bufera” (1941), Nazim Hikmet’s “In This Year” (1941), Wallace Stevens’s “Notes Toward A Supreme Fiction” (1942), Thomas-Stearns Eliot’s “Four Quartets” (1942), Juan-Ramon Jimenez’s “La Estacion Total” (1946).

In theater, new forms of expression were invented to deliver the message, as in Ernst Toller’s “Masse-Mensch” (1921), Luigi Pirandello’s “Enrico IV” (1922), Paul Claudel’s “Le Soulier de Satin” (1928), Jean Giraudoux’s “Electre” (1937) and Bertolt Brecht’s “Leben des Galilei” (1939).

But this was certainly the century of the novel. The spirit of the times was captured by James Joyce’s “Ulysses” (1922), Marcel Proust’s “A la Recherche du Temps Perdu” (1922), Italo Svevo’s “La Coscienza di Zeno” (1923), Francis-Scott Fitzgerald’s “The Great Gatsby” (1925), Andre’ Gide’s “Les Faux-Monnayeurs” (1925), Virginia Woolf’s “To The Lighthouse” (1927), Julien Green’s “Adrienne Mesurat” (1927), Stanislaw Witkiewicz’s “Insatiability” (1930), Louis-Ferdinand Celine’s “Voyage au Bout de la Nuit” (1932), William Faulkner’s “Light in August” (1932), Robert Musil’s “The Man Without Qualities” (1933), Elias Canetti’s “Auto Da Fe” (1935), Flann O’Brien’s “At Swim-Two-Birds” (1939), Jean-Paul Sartre’s “La Nausee” (1938), Joseph Roth’s “Die Legende vom heiligen Trinker” (1939), Mikhail Bulgakov’s “The Master and Margarita” (1940), Albert Camus’ “The Stranger” (1942), Jorge-Luis Borges’ “Ficciones” (1944), Julien Gracq’s “Un Beau Tenebreux” (1945), Hermann Broch’s “Der Tod des Vergil” (1945).

If literature was getting, overall, more “narrative”, painting became more abstract and symbolic with Rene Magritte’s “Faux Miroir” (1928), Salvador Dali’s “La Persistance de la Memoire” (1931), Paul Klee’s “Ad Parnassum” (1932), Pablo Picasso’s “Guernica” (1937), Max Ernst’s “Europe After the Rain II” (1942). Constantin Brancusi and Hans Arp were the giants of sculpture.

Music continued its journey away from the classical canon with Leos Janacek’s “Glagolitic Mass” (1926), Bela Bartok’s “Music for Strings, Percussion and Celesta” (1936), Edgar Varese’s “Ionisation” (1933), Alban Berg’s “Violin Concerto” (1935), Olivier Messiaen’s “Quatuor pour la Fin du Temps” (1941) and Goffredo Petrassi’s “Coro di Morti” (1941).

On the lighter side, new forms of entertainment and mass media were born, mostly in the USA. In 1914 composer Jerome Kern had staged the first “musical”. In 1926 Hollywood debuted the “talking movie” (films with synchronized voice and music). In 1927 Philo Farnsworth invented television.

Cinema was by far the most influential of new forms of art, thanks to films such as David-Wark Griffith’s “The Birth of a Nation” (1915), Victor Sjostrom’s “Phantom Chariot” (1920), Erich von Stroheim’s “Greed” (1924), Sergei Eisenstein’s “Battleship Potemkin” (1925), Fritz Lang’s “Metropolis” (1926), Josef von Sternberg’s “Der Blaue Engel” (1930), the Marx Brothers’ “Duck Soup” (1933), Charlie Chaplin’s “Modern Times” (1936), Jean Renoir’s “La Grande Illusion” (1937), Howard Hawks’s “Bringing Up Baby” (1938), Orson Welles’s “Citizen Kane” (1941), Frank Capra’s “Meet John Doe” (1941).

But the visual arts also gained a new form: the comics. The comics came to compete with the novel and the film, and reached their artistic peak with “Little Nemo” (1905), “Popeye” (1929), “Buck Rogers” (1929), “Tintin” (1929), “Mickey Mouse” (1930), “Dick Tracy” (1931), “Alley Oop” (1933), “Brick Bradford” (1933), “Flash Gordon” (1934), “Li’l Abner” (1934), “Terry and the Pirates” (1934).

America’s contribution to music included Afro-American music: the blues was born around 1912, jazz in 1917, gospel in 1932, rhythm’n’blues in 1942, bebop in 1945.

After World War 2, Stalin’s Soviet Union became an exporter of “revolutions” throughout the world, an ideological empire that had few precedents in history: Mao Tze-tung’s China in 1949, Ho Chi Min’s Vietnam in 1954, Fidel Castro’s Cuba in 1959, Julius Nyere’s Tanzania in 1961, Kenneth Kaunda’s Zambia in 1964, Siad Barre’s Somalia in 1969, Haile Mengitsu’s Ethiopia in 1974, Samora Machel’s Mozambique in 1975, Pol Pot’s Cambodia in 1975, Robert Mugabe’s Zimbabwe in 1980, Arap Moi’s Kenya in 1982, etc. The USA retaliated by supporting anti-communist regimes around the world (often as totalitarian as the ones imposed by the communist revolutions). In Latin America, for example, the Soviet Union, via its proxy of Cuba, sponsored a series of national insurrections, while the USA supported “caudillos” that were no more democratic than Hitler (Guatemala 1960, Bolivia 1965, Chile 1973, Peru 1970, Colombia 1979, El Salvador 1980).

Both the Soviet Union and the USA fought for supremacy in what was termed a “Cold War”, a war that was never fought directly but only indirectly, everywhere and all the time. Both became military superpowers by amassing thousands of nuclear weapons. Nuclear deterrence worked insofar as they never struck at each other. But the consequence was that the theater of military operations became the entire planet.

The “Cold War” resulted in a partition of the world in two spheres of influence: Soviet and American. An “iron curtain” divided Europe in two, and the Wall (1961) that divided West and East Berlin was its main symbol.

A parallel process, soon engulfed in the Cold War, was the decolonization of Africa and Asia. Mahatma Gandhi was the most celebrated of the independence leaders. The European powers granted independence to most of their colonies. New countries were born (India and Pakistan in 1947, Israel in 1948, Indonesia in 1949, Ghana in 1957, and most of Africa followed within a decade). The exceptions (Algeria and the Portuguese colonies of Angola and Mozambique) suffered decade-long wars of independence. Even where independence had been granted, internecine civil wars caused massive convulsions, again exploited by the two superpowers for their power games.

Another by-product of the post-war order was the birth of Arab nationalism with Egyptian leader Gamal Nasser.

The most visible political decline was Britain’s. While its empire was disintegrating and its economy was growing more slowly than those of Germany, Japan and France (which soon passed it in GDP terms), Britain maintained an aloof attitude, reveling in its distinctiveness: it did not join the European Community, it did not adopt the metric system, and its industrial infrastructure became outdated. It reorganized the empire as the Commonwealth, but the empire was now a cost, no longer a source of revenue. Despite being on the winning side of World War 2, Britain rapidly became irrelevant.

The western European countries, assembled around a USA-led alliance (NATO), were free and democratic (with the exception of the Iberian peninsula) but were nonetheless torn between a socialist left and a capitalist right. De facto, they all adopted different versions of the same model: a social-democratic state that guaranteed rights to workers and sheltered citizens through a generous social net.

Among armed conflicts, two were particularly significant: the Arab-Israeli conflict (1948-2004) and the USA-Vietnam war (1964-1973). They both dragged the USA into long and expensive military ventures.

Despite the political gloom, the post-war age was the age of consumerism, of the economic boom (in the USA, Japan and western Europe), of the “baby boom” and of the mass media.

The office was mechanized and electrified thanks to a deluge of calculators, photocopiers, telefax machines, telex machines, touch-tone phones, and, finally, mainframe computers (1964).

Landmarks in communications were the telephone cable across the Atlantic (1956) and the first telecommunication satellite (1962).

Landmarks in transportation were Pan Am’s first transatlantic flight (1939), the long-distance jet (1958) and the wide-body jet (1967).

Commercial television introduced cheap forms of mass entertainment.

The 33-1/3 RPM long-playing vinyl record (1948) and the transistor radio (1954) changed the way people (especially young people) listened to music.

A youth culture began to appear in the USA in the 1950s, initially blasted as a culture of “juvenile delinquents”, and evolved into the generation of the free-speech movement, of the civil rights, of the anti-war movement and of the hippies. It then migrated to Europe, where it transformed into the student riots of 1968.

Rock music was very much the soundtrack of the youth movement. Rock’n’Roll was the music of the young rebels who reacted against the repressive conventions of post-war society. Bob Dylan and the militant folk-singers interpreted young people’s distrust of the Establishment and their idealistic dreams. Psychedelic music was an integral part of the hippie movement (and dramatically changed the concept of “song” by introducing atonal and anarchic elements).

The 1960s were also the age of the sexual revolution and of feminism, announced by Simone de Beauvoir.

The single most emotional event for the collective imagination was space exploration, largely fueled by the rivalry between the USA and the Soviet Union. In 1957 the Soviet Union launched the first artificial satellite, the “Sputnik”. In 1961 Yuri Gagarin became the first human in space. In 1962 the USA launched the first telecommunication satellite, the “Telstar”. In 1969 Neil Armstrong became the first human to set foot on the Moon.

However, progress in Physics was certainly not limited to space exploration. In fact, Physics was booming from the very small to the very large.

Astronomy revealed a whole new world to the peoples of the Earth, who used to believe (just a few thousand years earlier) that the Earth was all there was. There are billions of galaxies in the universe, each made of billions of stars (about 200 billion in our own galaxy, the Milky Way), and planets orbit the stars (nine around ours, the Sun). Pluto, the outermost of the solar planets, turned out to be 5.9 billion kms from the Sun, a distance that no human could hope to cover during a lifetime. Distances were suddenly measured in “light-years”, one light-year being about 9.5 trillion kms, a distance that would have been unimaginable just a century before. The nearest star is “Alpha Centauri”, 4.3 light-years from the Earth. Sirius, the brightest star in the sky, is actually 8.7 light-years away. The center of the Milky Way is 26 thousand light-years from the Sun. Andromeda, the nearest large galaxy, is 2.2 million light-years away.
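
The trillion-kilometer figure follows directly from the definition of the light-year (a rough worked calculation, taking the speed of light as about 300,000 km per second):

\[
1 \text{ light-year} = c \times 1 \text{ year} \approx 3\times10^{5}\ \tfrac{\text{km}}{\text{s}} \times 3.15\times10^{7}\ \text{s} \approx 9.5\times10^{12}\ \text{km}
\]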

In 1965 the “microwave background radiation” was discovered, a remnant of some catastrophic event a long time back in the past of the universe. That event was named “Big Bang”: the universe was born when a massive explosion sent the original energy hurling away in all directions. Eventually, gravitation caused pieces of matter to coalesce together, thus forming the structures that we observe today (galaxies, stars, planets), leaving behind the background radiation and causing the expansion of the universe that is still going on. Depending on how much mass there is in the universe, this expansion may some day be reversed (and end in a “Big Crunch”) or continue forever. Cosmologists also realized that there are different kinds of “stars”. Some of them are very small and very heavy, and spin frantically around their axis (“pulsars”). Some of them collapsed into “black holes”, which are bodies whose gravitational field is so strong that nothing can escape them, not even light. Inside black holes, time and space sort of switch roles: an object can only proceed ahead in space (towards the center of the black hole) while being able to move around in time.

The oddities of Cosmology fueled a boom in science fiction (comics, films, novels, tv series).

As for the “very small”, the view of matter as made of three particles (electron, proton and neutron) was shattered by the discovery of a multitude of subatomic particles. The radioactive decay of atomic nuclei, first observed in 1896 by Antoine Becquerel, Pierre Curie and Marie Curie, had already signaled the existence of a fourth fundamental force (the “weak” force) besides the known three (gravitational, electromagnetic, and nuclear or “strong”). Wolfgang Pauli in 1930 inferred the existence of the neutrino to explain a particular case of radioactive decay. Another source of new particles was the study of “cosmic rays”, which Victor Franz Hess had shown to be radiation coming from space (1912). This led to the discovery of muons (1937) and pions (predicted in 1935 by Hideki Yukawa). In 1963 Murray Gell-Mann hypothesized that the nucleus of the atom was made of smaller particles. In 1967 the theory of quarks (which matured into Quantum Chromodynamics) debuted: the nucleus of the atom (neutrons and protons) is made of quarks, which are held together by gluons. Quarks differ from previously known particles because their magic number is “three”, not two: there are six “flavors” of quark, each coming in three “colors” (and each having, as usual, its anti-quark), and they combine not in pairs but in trios.

Forces are mediated by discrete packets of energy, represented as virtual particles or “quanta”. The quantum of the electromagnetic field (e.g., of light) is the photon: any electromagnetic phenomenon involves the exchange of a number of photons between the particles taking part in it. Other forces are defined by other quanta: the weak force by the W and Z particles, gravitation (supposedly) by the graviton, and the nuclear force by gluons. Particles can be divided according to a principle first formulated (in 1925) by Wolfgang Pauli: some particles (the “fermions”, named after Enrico Fermi) never occupy the same state at the same time, whereas other particles (the “bosons”, named after Satyendra Bose) do. The wave functions of two fermions can never completely overlap, whereas the wave functions of two bosons can completely overlap (the bosons basically lose their identity and become one). Fermions (such as the electron and its family, the leptons, and the quarks, which combine into “hadrons” such as protons and neutrons) make up the matter of the universe, while bosons (photons, gravitons, gluons) are the virtual particles that glue the fermions together. Bosons therefore represent the forces that act on fermions. They are the quanta of interaction. An interaction is always implemented via the exchange of bosons between fermions. (There exist particles that are bosons but do not represent interactions, the so-called “mesons”, which are made of quarks and decay very rapidly.)

There are twelve leptons: the electron, the muon, the tau, their three neutrinos and the corresponding six anti-particles. There are 36 quarks: six flavors times three colors, plus the corresponding 18 anti-quarks. Thus there are 4 forces, 36 quarks, 12 leptons and 12 bosons.
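
The bookkeeping behind those totals is simple multiplication (the breakdown of the 12 bosons into the photon, the W and Z particles and eight gluons is the standard count, added here for illustration rather than spelled out in the text above):

\[
\underbrace{6 \text{ flavors} \times 3 \text{ colors} \times 2 \text{ (quark or anti-quark)}}_{36 \text{ quarks}}, \qquad
\underbrace{(3 \text{ charged leptons} + 3 \text{ neutrinos}) \times 2}_{12 \text{ leptons}}, \qquad
\underbrace{\gamma + W^{+} + W^{-} + Z + 8 \text{ gluons}}_{12 \text{ bosons}}
\]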

Science was even applied to life itself.

Ilya Prigogine developed “Non-equilibrium Thermodynamics” to explain phenomena far from equilibrium, such as life itself. He divided nature into “conservative” systems (the ones studied by classical Physics) and “dissipative” systems (subject to fluxes of energy/matter), and noticed that the latter are ubiquitous in nature: everything that is alive is a dissipative system. They create order by feeding on external energy/matter: they are non-equilibrium systems that are sustained by a constant influx of matter/energy. He realized that such systems exhibit spontaneous development of order: they are self-organizing systems, which maintain their internal organization by trading matter/energy with the environment.

James Jerome Gibson looked at life from the point of view of a network of integrated living beings. A living being does not exist in isolation. In fact, its main purpose is to pick up information from the environment. All the information needed to survive is available in the environment. Thus information originates from the interaction between the organism and its environment. Information “is” the continuous energy flow of the environment.

The other great fascination was with computers, developed during the war to crack the Germans’ secret codes. In 1946 the first general-purpose electronic computer, the “Eniac”, was unveiled. In 1947 John Bardeen, Walter Brattain and William Shockley invented the transistor at Bell Labs. In 1951 the first commercial computer, the “Univac”, was built. In 1955 John McCarthy founded “Artificial Intelligence”, a discipline to study whether “intelligent” machines could ever be built. In 1957 John Backus (at IBM) delivered the FORTRAN programming language, the first machine-independent language. In 1958 Jack Kilby (at Texas Instruments) and in 1959 Robert Noyce (at Fairchild) invented the integrated circuit, or “microchip”, which made it possible to build smaller computers. In 1964 IBM introduced the “OS/360” operating system. In 1965 Gordon Moore predicted that the processing power of computers would double every 18 months. Also in 1965 DEC introduced the first mini-computer, the PDP-8, which used integrated circuits.
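
Moore’s prediction amounts to a simple formula: if capacity doubles every T months, after t months it has grown by a factor of 2^(t/T). A minimal sketch (the starting count of 2,300 transistors is merely an illustrative assumption):

```python
# Exponential growth under Moore's law: count(t) = count(0) * 2**(t / doubling_period)
def moores_law(initial_count, months_elapsed, doubling_period_months=18):
    return initial_count * 2 ** (months_elapsed / doubling_period_months)

# Illustrative only: starting from 2,300 transistors, project a decade ahead.
print(round(moores_law(2300, months_elapsed=120)))  # about a hundred-fold increase
```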

Genetics rapidly became the most exciting field in Biology. Each living cell contains deoxyribonucleic acid (DNA for short), identified in 1944 by Oswald Avery as the carrier of hereditary information; and in 1953 Francis Crick and James Watson figured out the double-helix structure of the DNA molecule: genetic information is encoded in a rather mathematical form, which was christened the “genetic code” because that is what it is, a code written in an alphabet of four “letters” (which are, physically, bases). Crick reached the conclusion that information flows only from the nucleic acids (written in those four letters) to the (twenty) amino acids of proteins, never the other way around. In other words: genes encoded in DNA determine the organism. An organism owes its structure to its “genome”, its repertory of genes. It took a few more years for biologists to crack the genetic code, i.e. to figure out how the four-letter language of DNA is translated into the twenty-letter language of proteins. Biologists also discovered ribonucleic acid (RNA), the single-strand molecule that partners with DNA to manufacture proteins.
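
The “translation” proceeds three letters at a time: each triplet of bases (a “codon”) selects one amino acid. A toy Python sketch (the table below contains only a handful of the 64 real codons, for illustration):

```python
# Translate a DNA fragment into amino acids, three bases (one codon) at a time.
# Only a few entries of the real 64-codon table are included, for illustration.
CODON_TABLE = {
    "ATG": "Met",  # methionine, the usual "start" codon
    "TTT": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "AAA": "Lys",  # lysine
    "TAA": "STOP",
}

def translate(dna):
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i+3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return "-".join(protein)

print(translate("ATGTTTGGCAAATAA"))  # Met-Phe-Gly-Lys
```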

Less heralded but no less powerful in changing our view of our species was the progress made by neurologists in understanding how the brain works. The neuron had been discovered in 1891 by Santiago Ramon y Cajal, and in 1898 Edward Thorndike had already proposed that the brain was a “connectionist” system (i.e. that the connections, not the units, were the key to its workings). But the picture remained fuzzy until Donald Hebb showed (1949) that those connections were dynamic, not static, and that they changed “selectively”, according to a system of punishment and reward: a connection that was used to produce useful behavior was reinforced, one that was part of a failure was weakened. As new techniques allowed neurologists to examine the electrical and chemical activity of the brain, it became clear that neurons communicate via “neurotransmitters”. A neuron is nothing more than a generator of impulses, activated when the sum of its inputs (the neurotransmitters received from other neurons, weighted according to the “strength” of the corresponding connections) exceeds a certain potential. The connections between neurons are continuously adjusted to improve the accuracy of the brain’s responses. Basically, each operation of recognition is also an operation of learning, because connections are refined every single time they are used. The structure of the brain also became clearer, in particular the fact that there are two hemispheres, the left being dominant for language and speech, the right being dominant for visual and motor tasks.
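
The threshold-and-reinforcement behavior just described can be sketched in a few lines of Python (a toy illustration of a Hebbian-style update with made-up numbers, not a model of any real neuron):

```python
# A toy threshold neuron: it fires when the weighted sum of its inputs exceeds
# a threshold, and connections that contributed to a rewarded firing are strengthened.
def fires(inputs, weights, threshold=0.8):
    return sum(x * w for x, w in zip(inputs, weights)) > threshold

def hebbian_update(inputs, weights, rewarded, rate=0.1):
    # Strengthen (or weaken) only the connections that actually carried a signal.
    return [w + rate * x * (1 if rewarded else -1) for x, w in zip(inputs, weights)]

weights = [0.5, 0.5, 0.5]
inputs = [1, 0, 1]
if fires(inputs, weights):
    weights = hebbian_update(inputs, weights, rewarded=True)
print(weights)  # the two active connections are now slightly stronger
```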

Michel Jouvet discovered that REM (“rapid eye movement”) sleep is generated in the pontine brain stem (or “pons”). The pons sends signals to eye muscles (causing the eye movement), to the midbrain (causing a low level of brain activity and inhibition of muscle movements), and to the thalamus. The thalamus then excites the cortex, which receives a valid sensory signal from the thalamus and interprets it as if it were coming from the sense organs. During REM sleep several areas of the brain are working frantically, and some of them are doing exactly the same job they do when the brain is awake. The only major difference is that the stimuli they process are now coming from an internal source rather than from the environment: during dreams the sensory input comes from the sensory cortex.

The obsession with Alan Turing’s abstract “machine” and with the first concrete computers shaped the intellectual landscape of many thinkers. Not only had Turing argued that the computer was, basically, an “intelligent” being; John Von Neumann, with his embryonic experiments on artificial life or “cellular automata” (1947), had also shown how such a machine could be made to reproduce and evolve.

It was Turing himself who framed the philosophical issue for the next generations, with what came to be known as the “Turing test” (1950): a machine can be said to be intelligent if a human being, asking all sorts of questions, cannot tell whether the answers come from a human being or from a machine. It was, ultimately, a “behaviorist” approach to defining intelligence: if a machine behaves exactly like a human being, then it is as intelligent as the human being.

Kenneth Craik viewed the mind as a particular type of machine which is capable of building internal models of the world and of processing them to produce action. Craik’s emphasis was on the internal representation and on the symbolic processing of such a representation.

Craik’s ideas formed the basis for Herbert Simon’s and Allen Newell’s theory of mind: the human mind is but a “physical symbol system”. The implication was that the computer was, indeed, intelligent: it was just a matter of programming it the way the mind is programmed. They proceeded to implement a “General Problem Solver”, a computer program that, using logic, would be able to solve any possible problem (Hilbert’s dream).

The implicit assumption behind the program of “Artificial Intelligence” was that the “function” is what matters: the “stuff” (brains or integrated circuits) is not important.

Hilary Putnam argued that the same mental state can be realized in more than one physical state: for example, pain can be realized by more than one brain (despite the fact that all brains are different). Therefore, the physical state is not all that important. It is the “function” that makes a physical state of the brain also a mental state. Mental states are not mere decorations: they have a function. The consequence of this conclusion, though, is that a mind does not necessarily require a brain. In fact, a computer does precisely what a mind does: perform functions that can be implemented by different physical states (different software). The “functionalist” approach popularized the view that the mind is the software and the brain is its hardware. The execution of that program (the mind) in a hardware (brain or computer) yields behavior.

Jerry Fodor speculated that the mind represents knowledge in terms of symbols, and then manipulates those symbols to produce thought. The manipulation of those symbols is purely syntactic (without knowing what those symbols “mean”). The mind uses a “language of thought” (or “mentalese”), common to all sentient beings, and produced through evolution, to build those mental representations.

The program of Artificial Intelligence was not as successful as its pioneers hoped because they neglected the importance of knowledge. An “intelligent” system is only capable of performing logical operations, no matter how many or how clever; but, ultimately, even the most intelligent human being in the world needs knowledge to make sensible decisions. In fact, a person with a lot of knowledge is likely to make a more sensible decision than a much more clever person with very little knowledge. Thus the primacy shifted from “intelligence” to “knowledge”: “Expert Systems” (first conceived around 1965) apply a general problem solver (an “inference engine”) to a “knowledge base”. The knowledge base is built by “cloning” a human expert (usually via a lengthy process of interviews). A knowledge base encodes facts and rules that are specific to the “domain” of knowledge of the human expert. Once the appropriate knowledge has been “elicited”, the expert system behaves like a human expert.
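
The mechanics can be conveyed by a toy sketch in Python: a handful of facts, a handful of “if-then” rules, and an inference loop that keeps firing rules until nothing new can be derived (the medical-sounding facts and rules are invented purely for illustration):

```python
# A toy forward-chaining inference engine: rules are (premises, conclusion) pairs;
# the engine keeps firing rules until no new facts can be derived.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)   # the rule "fires"
                changed = True
    return derived

print(forward_chain(facts, rules))  # includes 'flu_suspected' and 'recommend_rest'
```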

In parallel to the “knowledge-based” school, the search for machine intelligence pursued other avenues as well.

Norbert Wiener noticed that both living systems and machines are “control systems”, systems in which “feedback” is employed to maintain internal “homeostasis” (a steady state). A thermostat is a typical control system: it senses the temperature of the environment and directs the heater to switch on or off; this causes a change in the temperature, which in turn is sensed by the thermostat; and so forth. Every living system is also a control system. Both living systems and machines are “cybernetic” systems. The “feedback” that allows a system to control itself is, ultimately, an exchange of information between the parts of the system.
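
Wiener’s thermostat example is easy to simulate (a crude Python sketch; the set point, heating rate and cooling rate are arbitrary illustrative numbers):

```python
# A crude thermostat simulation: negative feedback keeps the temperature
# oscillating around the set point instead of drifting away.
def simulate(set_point=20.0, temperature=15.0, steps=30):
    for _ in range(steps):
        heater_on = temperature < set_point          # feedback: sense, then act
        temperature += 0.8 if heater_on else -0.5    # heating vs. natural cooling
    return temperature

print(round(simulate(), 1))  # ends within about half a degree of the set point
```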

Claude Shannon worked out a no less influential metaphor for machines: they are also similar to thermodynamic systems. The entropy of a thermodynamic system is a measure of disorder, i.e. a measure of the random distribution of atoms. As entropy increases, that distribution becomes more homogeneous. The more homogeneous the distribution is, the less “informative” it is. Therefore, entropy is also a measure of the lack of information.
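
In Shannon’s formulation, the information content (entropy) of a source whose symbols occur with probabilities p_i is, in bits:

\[
H = -\sum_i p_i \log_2 p_i
\qquad\text{e.g. for a fair coin:}\qquad
H = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1 \text{ bit}
\]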

Yet another version of the facts was delivered by the proponents of “Neural Networks”.

An artificial “neural network” is a piece of software or hardware that simulates the neural networks of the brain. Several simple units are connected together, with each unit connecting to any number of other units. The “strength” of each connection can vary from zero to arbitrarily large. Initially the connections are set randomly. During a “training” period, the network is made to adjust the strength of its connections using some kind of feedback: every time an input is presented, the network is told what the output should be and adjusts its connections accordingly. The network continues to learn forever, as each new input causes a readjustment of the connections.
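
A minimal sketch of such a “training period” (a toy error-correction rule for a single unit learning the logical AND of two inputs; the numbers are illustrative, and real networks use many units and more refined rules):

```python
# Train a single unit by feedback: for each input, compare the unit's output
# with the desired output and nudge the connection strengths accordingly.
import random

def output(inputs, weights, bias):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0

random.seed(0)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]   # connections start random
bias, rate = 0.0, 0.2
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND

for _ in range(20):                       # the "training period"
    for inputs, target in examples:
        error = target - output(inputs, weights, bias)
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error              # adjust strengths after every example

print([output(x, weights, bias) for x, _ in examples])  # [0, 0, 0, 1]
```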

The difference between “expert systems” and “neural networks” is actually quite ideological. Expert systems operate at the level of knowledge, whereas neural networks operate at the level of connections. In a way, they describe two different ways to look at human intelligence: as a brain that produces intelligent behavior, and as a mind that produces intelligent decisions.

Needless to say, the mind-body problem, originally introduced by Descartes, was revived by the advent of the computer.

Dualists largely extended an intuition by Charlie-Dunbar Broad, that the universe is a series of layers, and that each layer yields the following layer but cannot explain the new properties that emerge with it. For example, the layer of elementary particles yields the layer of macroscopic phenomena. Each new layer is an “emergent” phenomenon of a lower layer. Thus the mind is an emergent property of the brain, and not a separate substance. The new dualism was a dualism of properties, not a dualism of substances. Dualism was restated as “supervenience”. Biological properties “supervene” (or “are supervenient”) on physical properties, because the biological properties of a system are determined by its physical properties. By the same token, mental properties are supervenient on neural properties.

Herbert Feigl revived materialism in the age of the brain: the mind is created by the neural processes of the brain. That we have not yet explained how this happens is no objection, just as for centuries humans could not explain lightning or magnetism. Nonetheless, mental states “are” physical states of the brain. Philosophers such as Donald Davidson realized that it is implausible to assume that for every single mental state there is a unique physical state of the brain (a one-to-one correspondence). For example, a person can have the same feeling twice, despite the fact that the configuration of the brain has changed. Thus, Feigl’s “identity theory” was revised to admit that many physical states of the brain may yield the same mental state.

Another form of materialism (“eliminative” materialism) originated with Paul Feyerabend and Richard Rorty, who believed that mental states do not exist. The mental world is only a vocabulary of vague terms that don’t refer to real entities. The mind-body dualism is a false problem that leads to false problems.

By that line of reasoning, Gilbert Ryle revived behaviorism in the context of the mind-body problem: the mental vocabulary does not refer to the structure of something, but simply to the way somebody behaves or will behave. The mind “is” the behavior of the body. Descartes invented a myth: the mind inside the body (“the ghost in the machine”).

Existentialism was the big philosophical “ism” of Post-war Europe. Existentialists focused on the human experience. Theirs was a philosophy of the crisis of values. The object and the subject of Existentialism are the same: the “I”.

Jean-Paul Sartre believed that there is no God. The individual is alone. There is no predestination (no “human nature”) determining our actions. We are free to act as we will. It is our actions that determine our nature. Existence (the free I) precedes essence (the I’s nature). In the beginning, the individual is nothing. Then she defines herself by her actions. Each individual is fully responsible for what she becomes. This total freedom causes angst. It is further amplified by the fact that an individual’s choices affect the whole of humankind. Existentialism abolishes God, but emphasizes that its atheism increases (not decreases) the individual’s responsibility for her actions. It complicates, not simplifies, the moral life. “We are alone, with no excuses”.

Maurice Merleau-Ponty countered that human freedom is never total: it is limited by our body. The individual is, first and foremost, a “situated” being, a body that lives in an environment. The body is not just an object surrounded by objects: it is the very subject of experience, which interacts with the environment. The body shapes the environment, but, in turn, the environment shapes the body, whose freedom is therefore limited by the way the environment shapes it. The same conditioning exists in society: the body is a linguistic actor, but its linguistic action is constrained by the language it uses (the meaning of a linguistic action is constructed on the basis of a meaning acquired from the language). Ditto at the level of society: we are political agents, but our political actions are shaped by the historical background. At all levels, there are a “visible” and an “invisible” dimension of being, which continuously affect each other.

A number of thinkers related to the zeitgeist of Existentialism even if they did not belong to the mainstream of it.

David Bohm pursued Einstein’s hope of finding “hidden variables” that would remove randomness from Quantum Theory. Bohm hypothesized the existence of a potential that permeates the universe. This potential, which lies beyond the four-dimensional geometry of space-time, generates a field that acts upon particles the same way a classical potential does. This field can be expressed as the mother of all wave functions, a real wave that guides the particle (the “pilot-wave”). This field is, in turn, affected by all particles: everything in the universe is entangled in everything else. The universe is an undivided whole in constant flux. Similarly, at the higher dimension (the “implicate order”) there is no difference between matter and mind. That difference arises within the “explicate order” (the conventional space-time of Physics). As we travel inwards, we travel towards that higher dimension, the implicate order, in which mind and matter are the same. As we travel outwards, we travel towards the explicate order, in which subject and object are separate. Mind and matter can never be completely separated because they are entangled in the same quantum field. Thus every piece of matter has a rudimentary mind-like quality.

Aurobindo Ghose speculated that Brahman first involutes (focuses on itself), next materializes (the material universe), and then evolves into consciousness. We are part of this process, which is still going on. Human consciousness is the highest stage of consciousness so far reached by Brahman, but not the last one, as proven by the fact that social, cultural and individual life in the human world are still imperfect.

Sayyid Qutb, the philosopher of militant Islam, lived in the dream of a purified world dedicated to the worship of God alone. Human relationships should be founded on the belief in the unity of God. Pagan ignorance (for example, of Christians and Jews) is the main evil in the world, because it rebels against God’s will and establishes secular societies that violate God’s sovereignty on Earth. The separation of church and state is “the” problem.

Pierre Teilhard de Chardin saw evolution as a general law of nature: the universe’s matter-energy is progressing towards ever increased complexity. Humans mark the stage when evolution leaves the “biosphere” and enters the “noosphere” (consciousness and knowledge). The evolution of the noosphere will end in the convergence of matter and spirit into the “omega point”.

Throughout the second half of the century, Structuralism was one of the dominant paradigms of philosophy: its program was to uncover the real meaning hidden in a system of signs.

Claude Levi-Strauss extended it to social phenomena, which he considered as systems of signs just like language. Myths from different cultures (myths whose contents are very different) share similar structures. Myth is a language, made of units that are combined together according to certain rules. The “langue” is the myth’s timeless meaning, the “parole” is its historical setting. “Mytheme” is the elementary unit of myth. Mythemes can be read both diachronically (the plot that unravels, the sequence of events) and synchronically (the timeless meaning of it, the “themes”). The themes of myths are binary relationships between two opposing concepts (e.g., between selfishness and altruism). Binary logic is, in a sense, the primordial logic, and mythical thinking is, in a sense, logical thinking. Mythical thinking is inherent to the human mind: it is the human way of understanding nature and the human condition. Conversely, myths are tools that we can use to find out how the human mind works.

Roland Barthes transformed Saussure’s Structuralism into “semiology”, a science of signs to unveil the meaning hidden in the “langue” of cultural systems such as cinema, music, art.

Structuralism often reached provocative conclusions that had social and political implications.

Michel Foucault analyzed the mechanisms of western (liberal, democratic) society. Western society jails fools, who, in ancestral societies, were tolerated or even respected as visionaries. Foucault perceived as disturbing this tendency to criminalize the creative force of madness. Basically, Western societies torture the minds of criminals, whereas totalitarian societies tortured their bodies. Prisons are, de facto, an instrument of social control, a device to train minds that do not comply with the dogmas. Thus western societies are vast mechanisms of repression, no less oppressive than the totalitarian regimes they replaced. Similar arguments can be made for sexuality and crime.

Jacques Lacan analyzed the unconscious as a system of signs. Motives are signifiers which form a “signifying chain”: the unconscious “is” that chain. This chain is permanently unstable because it does not refer to anything: the self itself is a fiction of the unconscious. A baby is born with a united psyche, but later in life, as the baby separates from the mother, that unity is broken and the self is born; the rest of one’s lifetime is spent trying to reunite the self and the other. Psychic life is a permanent struggle between two “consciousnesses”.

Dilthey’s Hermeneutics was also influential. Hans-Georg Gadamer applied it to Husserl’s Phenomenology and derived a discipline of “understanding”, where “understanding” meant for him the “fusion of horizons” between a past text and a present interpreter.

Paul Ricoeur believed that the symbols of pre-rational culture (myth, religion, art, ideology) hide meaning that can be discovered by interpretation. There are always a patent and a latent meaning. A similar dichotomy affects human life, which is torn between the “voluntary” and the “involuntary” dimensions, between the “bios” (one’s spatiotemporal life) and the “logos” (one’s ability to grasp universal spacetime). He made a distinction between language and discourse: language is, indeed, only a system of signs, and therefore timeless, but discourse always occurs at some particular moment of time, i.e. it depends on the context. A language is a necessary condition for communication, but it itself does not communicate: only discourse communicates. The signs in a language system refer only to other signs in it, but discourse refers to a world. Discourse has a time dimension that is due to the merging of two different kinds of time: cosmic time (the uniform time of the universe) and lived time (the discontinuous time of our life’s events). Historical time harmonizes these two kinds of time.

The debate on language proceeded in multiple directions. Wilfrid Sellars conceived a sort of linguistic behaviorism: thoughts are to the linguistic behavior of linguistic agents what molecules are to the behavior of gases.

Roman Jakobson, the leading exponent of “formalism”, decomposed the act of communication into six elements: a message is sent by an addresser to an addressee through a physical channel, in a code shared by both, within a context. These elements reveal that communication performs many functions at once.

The speculation on language culminated with Noam Chomsky’s studies of grammar. Chomsky rephrased Saussure’s dichotomy of langue and parole as competence and performance: we understand sentences that we have never heard before, thus our linguistic competence exceeds our linguistic performance. In fact, the number of sentences that we can “use” is potentially infinite. Chomsky concluded that what we know is not the infinite set of sentences of the language, but only a finite system of rules that defines how to build sentences. We know the “grammar” of a language. Chomsky separated syntax from semantics: a sentence can be “well-formed” without being meaningful (e.g., “the apple took the train”). In doing so, Chomsky reduced the problem of speaking a language to a problem of formal logic (because a grammar is a formal system). Chomsky realized that it was not realistic to presume that one learns a grammar from the sentences that one hears (a fraction of all the sentences that are possible in a language). He concluded that human brains are designed to acquire a language: they are equipped at birth with a “universal grammar”. We speak because our brain is meant to speak. Language “happens” to a child, just like growth. Chomsky’s universal grammar is basically a “linguistic genotype” that all humans share.
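
Chomsky’s central point, that a finite set of rules generates an unbounded set of sentences, is easy to demonstrate with a toy grammar (invented for illustration; like Chomsky’s own examples, it happily produces sentences that are well-formed but meaningless):

```python
import random

# A toy context-free grammar: a finite set of rewrite rules that can generate
# an unbounded number of sentences (recursion enters through the optional PP).
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "VP": [["V", "NP"]],
    "PP": [["near", "NP"]],
    "N":  [["apple"], ["train"], ["girl"]],
    "V":  [["took"], ["saw"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:           # terminal word: return it as-is
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the apple took the girl near the train"
```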

As Sellars had already noted, Chomsky’s analysis of the structure of language was not enough, though, to explain the phenomenon of language among humans. John-Langshaw Austin argued that the function of sentences is not so much to describe the state of the world as to cause action in the world. He classified a speaker’s “performative” sentences (requests, promises, orders, etc) based not on their structure but on their “effect” on the listener. We speak for a reason. “Pragmatics” is the study of “speech acts”. A speech act is actually made up of three components: a “locutionary” act (the words employed to deliver the utterance), an “illocutionary” act (the type of action that it performs, such as commanding, promising, asking) and a “perlocutionary” act (the effect that the act has on the listener, such as believing or answering).

There is more to a sentence than its meaning: a sentence is “used” for a purpose. Paul Grice realized that speech acts work only if the listener cooperates with the speaker, and the speaker abides by some common-sense rules: the speaker wants to be understood and to cause an action, and the listener makes this assumption in trying to grasp the speaker’s purpose. Grice believed that some “maxims” help the speaker say more than the words she is actually saying: those maxims are implicit knowledge that the listener uses in order to grasp the purpose of the utterance. Language has meaning to the extent that some conventions hold within the linguistic community.

The intimidating progress of Science caused a backlash of sorts among philosophers who disputed Science’s very foundations. After all, scientific hypotheses cannot be tested in isolation from the whole theoretical network within which they are formulated.

Alexandre Koyre’ and Gaston Bachelard had already noted that scientific progress is not linear: it occurs in spurts. Thomas Kuhn formalized that intuition with the concept of “paradigm shifts”. At any point in time the scientific community agrees on a scientific paradigm. New evidence tends to be accommodated within the ruling paradigm. When the ruling paradigm collapses because of some evidence that cannot be accommodated, a paradigm shift occurs. A paradigm shift results in a different way of looking at the world, analogous to a religious conversion. Scientific revolutions are, ultimately, linguistic in character. Thus the truth of a theory does not depend exclusively on its correspondence with reality. The history of science is the history of the transformations of scientific language.

Willard Quine argued that a hypothesis can be verified true or false only relative to some background assumptions, a condition that rapidly becomes recursive: each statement in a theory partially determines the meaning of every other statement in the same theory. One builds a “web of beliefs”, and each belief in the web is based on some other beliefs of the same web. Each belief contributes to support the entire web and is supported by the entire web. The web as a whole fits the requirements of Science. But there might be several such webs that would work as well: scientific theories are “underdetermined” by experience. It is the same situation as with language: there are always many (potentially infinite) interpretations of a discourse depending on the context. A single word has no meaning: its meaning is always relative to the other words with which it is associated. The meaning of a sentence depends on the interpretation of the entire language. Its meaning can even change in time. For example, it is impossible to define what a “correct” translation of a sentence is from one language to another, because that depends on the interpretations of both entire languages. Translation from one language to another is indeed indeterminate. Translation is possible only from the totality of one language to the totality of another language.

Another strong current of thinkers was the Marxist one, which frequently spent more time criticizing capitalism than heralding socialism.

Juergen Habermas added an element that was missing from Marx’s “materialistic” treatment of society: social interaction, the human element. Societies rely both on labor (instrumental action) and socialization (communicative action). What we are witnessing is not so much alienation as a crisis of institutions that manipulate individuals. Communicative action, not the revolution of the proletariat, can transform the world and achieve a more humane and just society based on free and unconditioned debate among equal citizens.

Herbert Marcuse analyzed the operation of mass societies and concluded that they seduce the citizens with the dream of individual liberty only to enslave them in a different way. The only true revolution is emancipation from the economic loop that enslaves us. Such a revolution would bring about an ideal state in which technology is used to provide individual happiness, not surplus.

Theodor Adorno warned that reason has come to dominate not only nature, but also humanity itself, and therefore Western civilization is moving towards self-destruction. For example, mass-culture industries manipulate the masses into cultivating false needs.

Cinema was probably the most faithful interpreter of the times through its well-established genres: Akira Kurosawa’s “Rashomon” (1950), Billy Wilder’s “Sunset Boulevard” (1950), Vittorio De Sica’s “Miracle in Milan” (1951), Kenji Mizoguchi’s “Ugetsu Monogatari” (1953), Yasujiro Ozu’s “Tokyo Monogatari” (1953), Elia Kazan’s “On The Waterfront” (1954), Ingmar Bergman’s “Seventh Seal” (1956), John Ford’s “The Searchers” (1956), Don Siegel’s “Invasion of the Body Snatchers” (1956), Alfred Hitchcock’s “North By Northwest” (1959), Jean-Luc Godard’s “Breathless” (1959), Federico Fellini’s “La Dolce Vita” (1960), John Huston’s “The Misfits” (1961), Robert Aldrich’s “Hush Hush Sweet Charlotte” (1965), Michelangelo Antonioni’s “Blow-Up” (1966), Luis Bunuel’s “Belle de Jour” (1967), Roman Polanski’s “Rosemary’s Baby” (1968), Stanley Kubrick’s “2001: A Space Odyssey” (1968), Sergio Leone’s “Once Upon a Time in The West” (1968), Sam Peckinpah’s “The Wild Bunch” (1969).

Music moved further away from the tradition of consonant music with John Cage’s “Concerto for Prepared Piano” (1951), Pierre Boulez’s “Le Marteau Sans Maitre” (1954), Luigi Nono’s “Canto Sospeso” (1956), Karlheinz Stockhausen’s “Gesang der Junglinge” (1956), Iannis Xenakis’s “Orient Occident” (1960), Benjamin Britten’s “War Requiem” (1962), Krzysztof Penderecki’s “Passio Secundum Lucam” (1965), Luciano Berio’s “Sinfonia” (1968).

Poetry explored a much broader universe of forms: Pablo Neruda’s “Canto General” (1950), Carlos Drummond de Andrade’s “Claro Enigma” (1951), Paul Celan’s “Mohn und Gedaechtnis” (1952), George Seferis’s “Emerologio Katastromatos” (1955), Yannis Ritsos’s “Moonlight Sonata” (1956), Pierpaolo Pasolini’s “Le Ceneri di Gramsci” (1957), Ezra Pound’s “Cantos” (1960), Vladimir Holan’s “A Night with Hamlet” (1964), Vittorio Sereni’s “Gli Strumenti Umani” (1965), Andrea Zanzotto’s “La Belta`” (1968).

Fiction was the most prolific of the literary genres: Cesare Pavese’s “La Luna e i Falo’” (1950), Elsa Morante’s “L’Isola di Arturo” (1957), Italo Calvino’s “Il Barone Rampante” (1957), Carlo-Emilio Gadda’s “La Cognizione del Dolore” (1963), Alejo Carpentier’s “Los Pasos Perdidos” (1953), Jose Donoso’s “Coronacion” (1957), Gabriel Garcia Marquez’s “Cien Anos de Soledad” (1967), Malcolm Lowry’s “Under the Volcano” (1947), William Gaddis’ “The Recognitions” (1955), Wilson Harris’ “Palace of the Peacock” (1960), Anthony Burgess’s “Clockwork Orange” (1962), Janet Frame’s “Scented Gardens For The Blind” (1963), Saul Bellow’s “Herzog” (1964), John Barth’s “Giles Goat Boy” (1966), Yukio Mishima’s “Golden Pavilion” (1956), Boris Pasternak’s “Doctor Zivago” (1957), Witold Gombrowicz’s “Pornography” (1960), Gunther Grass’ “Die Blechtrommel” (1959), Thomas Bernhard’s “Verstoerung” (1967), Elias Canetti’s “Auto da fe” (1967), Raymond Queneau’s “Zazie dans le Metro” (1959), Julio Cortazar’s “Rayuela” (1963), Carlos Fuentes’s “Artemio Cruz” (1964), Jorge Amado’s “Dona Flor” (1966), Kobo Abe’s “Woman of Sand” (1962), Kenzaburo Oe’s “Silent Cry” (1967), Patrick White’s “Voss” (1957).

Theatre built upon the innovations of the first half of the century: Tennessee Williams’ “A Streetcar Named Desire” (1947), Arthur Miller’s “Death of a Salesman” (1949), Samuel Beckett (1906, Ireland): “En Attendant Godot” (1952), Friedrich Durrenmatt (1921, Switzerland): “The Visit of the Old Lady” (1956), Max Frisch (1911): “Herr Biedermann und die Brandstifter” (1958), Harold Pinter (1930, Britain): “The Caretaker” (1959), Eugene Ionesco (1912): “Rhinoceros” (1959), John Arden’s “Serjeant Musgrave’s Dance” (1959), Peter Weiss (1916): “Marat/Sade” (1964).

An epoch-defining moment was the landing on the Moon by USA astronauts, an event that symbolically closed the decade of the boom.

If the Moon landing had seemed to herald complete domination by the USA, the events of the following decades seemed to herald its decline. The USA was defeated militarily in Vietnam (1975), Lebanon (1983) and Somalia (1992). The oil crisis of the 1970s created a world-wide economic crisis. The USA lost one of its main allies, Iran, to an Islamic revolution (1979) that was as significant for the political mood of the Middle East as Nasser’s Arab nationalism had been for the previous generation. After 30 years of rapid growth, both Japan and Germany became economic powers that threatened the USA globally. Both countries caught up with the USA in terms of average wealth. Militarily, the Soviet Union remained a formidable global adversary, extending its political influence to large areas of the developing world.

Other problems of the age were drugs and AIDS. The culture of drugs and the holocaust of AIDS marked the depressed mood of the arts. Soon, another alarming term would surface in the apocalyptic language: global warming.

However, space exploration continued, still propelled by the desire of the USA and the Soviet Union to compete anytime anywhere. In 1970 and 1971 the Soviet Union sent spacecraft to our neighbors, Venus and Mars. In 1977 the USA launched the Voyager probes to explore the outer planets and eventually leave the solar system. In 1981 the USA launched the first space shuttle. In 1986 the Soviet Union launched the permanent space station “MIR”. In 1990 the USA launched the Hubble space telescope.

Computers staged another impressive conceptual leap by reaching the desk of ordinary folks: the micro-processor (1971) enabled the first “personal” computer (1974), which became ubiquitous from 1981 on.

As mass-media became more pervasive, they also changed format: the video-cassette recorder (1971), the cellular telephone (1973), the portable stereo (1979), the compact disc (1981), the DVD (1995). Ultimately, these innovations made entertainment, communication and culture more “personal” and more “portable”.

Classical music reflected the complex world of the crisis with Dmitrij Shostakovic’s “Symphony 15” (1971), Morton Feldman’s “Rothko Chapel” (1971), Gyorgy Ligeti’s “Double Concerto” (1972), Henryk Gorecki’s “Symphony 3” (1976), Arvo Part’s “De Profundis” (1980), Witold Lutoslawski’s “Symphony 3” (1983).

The novel continued to experiment with newer and newer formats and structures: Vladimir Nabokov’s “Ada” (1969), Michel Tournier’s “Le Roi des Aulnes” (1970), Ismail Kadare’s “Chronicle in Stone” (1971), Danilo Kis’s “Hourglass” (1972), Thomas Pynchon’s “Gravity’s Rainbow” (1973), Barbara Pym’s “Quartet in Autumn” (1977), Nadine Gordimer’s “Burger’s Daughter” (1979), Manuel Puig’s “El Beso de la Mujer Arana” (1976), Mario Vargas-Llosa’s “La Tia Julia” (1978), Salman Rushdie’s “Midnight’s Children” (1980), Elfriede Jelinek’s “Die Ausgesperrten” (1980), Toni Morrison’s “Tar Baby” (1981), Uwe Johnson’s “Jahrestage” (1983), Jose Saramago’s “Ricardo Reis” (1984), Milan Kundera’s “The Unbearable Lightness of Being” (1985), Joseph McElroy’s “Women and Men” (1987), Antonia Byatt’s “Possession” (1990), Winfried Georg Sebald’s “Die Ausgewanderten” (1992).

Poetry was becoming more philosophical through works such as Joseph Brodsky’s “Stop in the Desert” (1970), Mario Luzi’s “Su Fondamenti Invisibili” (1971), Derek Walcott’s “Another Life” (1973), Edward-Kamau Brathwaite’s “The Arrivants” (1973), Giorgio Caproni’s “Il Muro della Terra” (1975), John Ashbery’s “Self-Portrait in a Convex Mirror” (1975), James Merrill’s “The Changing Light at Sandover” (1982).

By now, cinema was even more international than literature: John Boorman’s “Zardoz” (1973), Martin Scorsese’s “Mean Streets” (1973), Francis-Ford Coppola’s “The Godfather Part II” (1974), Robert Altman’s “Nashville” (1975), Theodoros Anghelopulos’s “Traveling Players” (1975), Bernardo Bertolucci’s “1900” (1976), Terrence Malick’s “Days of Heaven” (1978), Ermanno Olmi’s “L’Albero degli Zoccoli” (1978), Woody Allen’s “Manhattan” (1979), Andrej Tarkovskij’s “Stalker” (1979), Istvan Szabo’s “Mephisto” (1981), Peter Greenaway’s “The Draughtsman’s Contract” (1982), Ridley Scott’s “Blade Runner” (1982), Terry Gilliam’s “Brazil” (1985), Zhang Yimou’s “Hong Gaoliang” (1987), Wim Wenders’s “Wings of Desire” (1988), Aki Kaurismaki’s “Leningrad Cowboys go to America” (1989), Tsui Hark’s “Wong Fei-hung” (1991), Takeshi Kitano’s “Sonatine” (1993), Krzysztof Kieslowski’s “Rouge” (1994), Bela Tarr’s “Satantango/ Satan’s Tango” (1994), Quentin Tarantino’s “Pulp Fiction” (1994), Jean-Pierre Jeunet’s “City of Lost Children” (1995), Lars Von Trier’s “The Kingdom” (1995), Emir Kusturica’s “Underground” (1995), Jan Svankmajer’s “Conspirators of Pleasure” (1996), David Lynch’s “Lost Highway” (1997), Manoel de Oliveira’s “Viagem ao Principio do Mundo” (1997), Hirokazu Kore-eda’s “The Afterlife” (1998).

Physics was striving for grand unification theories. Both the “Big Bang” model and the theory of elementary particles had been successful examples of hybrid Quantum and Relativity theories, but, in reality, the quantum world and the relativistic world had little in common. One viewed the world as discrete, the other one viewed the world as continuous. One admitted indeterminacy, the other one was rigidly deterministic. One interpreted the weak, strong and electromagnetic forces as exchanges of virtual particles, the other one interpreted the gravitational force as space-time warping. Given the high degree of success in predicting the results of experiments, the two theories were surprisingly difficult to reconcile. Attempts to merge them (such as “Superstring Theory”) generally led to odd results.

Skepticism affected philosophers. Paul Feyerabend argued that the history of science proceeds by chance: science is a hodgepodge of more or less casual theories. And it is that way because the world is that way: the world does not consist of one homogeneous substance but of countless kinds, that cannot be “reduced” to one another. Feyerabend took the Science of his time literally: there is no evidence that the world has a single, coherent and complete nature.

Richard Rorty held that any theory is inevitably conditioned by the spirit of its age. The goal of philosophy and science is not to verify if our propositions agree with reality but to create a vocabulary to express what we think is reality. Facts do not exist independently of the way we describe them with words. Thus science and philosophy are only genres of literature.

Another sign that a new era had started was the decline of Structuralism. Jacques Derrida accused Structuralism of confusing “being” and “Being”, the code and the transcendental reality. Language is also a world in which we live. In fact, there are multiple legitimate interpretations of a text, multiple layers of meaning. Language is constantly shifting. He advocated deciphering the “archi-ecriture” (“deconstruction” or “differance”).

France after World War II provided the ideal stage for a frontal critique of the rationalist tradition founded by Descartes and publicized by the Enlightenment, which views reason as the source of knowledge and knowledge as the source of progress. “Modernism” had been based on the implicit postulate that progress founded on science is good, and that reason applied to society leads to a better (e.g. egalitarian) social order. The pessimistic views of Friedrich Nietzsche, Arnold Toynbee, Oswald Spengler, Martin Heidegger and Ludwig Wittgenstein escalated as modern society revealed the dark sides of rapid economic growth, industrialization, urbanization, consumerism, of the multiplying forms of communication and information, of globalization. Technology and media on one hand democratize knowledge and culture but on the other hand introduce new forms of oppression. The earliest forms of reaction to modernism had manifested themselves in Bohemian lifestyles, subcultures such as Dadaism, anticapitalist ideologies, phenomenology and existentialism. But it was becoming more and more self-evident that perception of the object by the subject is mediated by socially-constructed discourse, that heterogeneity and fragmentation make more sense than the totalization of culture attempted by modernism, and that the distinction between high culture and low culture was artificial. The post-modernist ethos was born: science and reason were no longer viewed as morally good; multiple sources of power and oppression were identified in capitalist society; education was no longer trusted as unbiased but seen as politicized; etc. Realizing that knowledge is power, the postmodernist generation engaged in political upheavals such as student riots (Berkeley 1964, Paris 1968), adopted mottos such as “power to the imagination” and identified the bourgeoisie as the problem. For postmodernism the signifier is more important than the signified; meaning is unstable (at any point in time the signified is merely a step in a never-ending process of signification); meaning is in fact socially constructed; there are no facts, only interpretations.

Guy Debord argued that the “society of the spectacle” masks a condition of alienation and oppression. Gilles Deleuze opted for “rhizomatic” thought (dynamic, heterogeneous, chaotic) over the “arborescent thought” (hierarchical, centralized, deterministic) of modernism.

Felix Guattari speculated that there is neither a subject nor an object of desire, just desire as the primordial force that shapes society and history; a micropolitics of desire that replaces Nietzsche’s concept of the “will to power”. In his bold synthesis of Freud, Marx and Nietzsche (“schizoanalysis”) the subject is a nomadic desiring machine.

Jean-Francois Lyotard was “incredulous” towards Metaphysics (towards “metanarratives”) because he viewed the rational self (capable of analyzing the world) as a mere fiction. The self, like language, is a layer of meanings that can be contradictory. Instead of “grand narratives”, that produce knowledge for its own sake, he preferred mini-narratives that are “provisional, contingent, temporary, and relative”; in other words, a fragmentation of beliefs and values instead of the grand unification theories of Metaphysics.

Jean Baudrillard painted the picture of a meaningless society of signs in which the real and the simulation are indistinguishable. The transformation from a “metallurgic” society to a “semiurgic” society (a society saturated with artificial signs) leads to an implosion in all directions, an implosion of boundaries (e.g. politics becomes entertainment). More importantly, the boundary between the real and the simulation becomes blurred. Technology, economics and the media create a world of simulacra. The simulation can even become more real than the real (hyper-real). Post-modern society is replacing reality with a simulated reality of symbols and signs. At the same time meaning has been lost in a neutral sterile flow of information, entertainment and marketing. The postmodern person is the victim of an accelerating proliferation of signs that destroys meaning, of a global process of destruction of meaning. The postmodern world is meaningless: it is a reservoir of nihilism. In fact, the accelerating proliferation of goods has created a world in which objects rule subjects: “Things have found a way to elude the dialectic of meaning, a dialectic which bored them: they did this by infinite proliferation”. The only metaphysics that makes sense is a metaphysics of the absurd like Jarry’s pataphysics.

However, the topic that dominated intellectual life at the turn of the millennium and that fostered the first truly interdisciplinary research (involving Neurology, Psychology, Biology, Mathematics, Linguistics, Physics) was: the brain. It was Descartes’ mind-body problem recast in the age of the neuron: who are we? Where does our mind come from? Now that new techniques allowed scientists to study the minutiae of neural processes the ambition became to reconstruct how the brain produces behavior and how the brain produces consciousness. Consciousness became a veritable new frontier of science.

The fascination with consciousness could already be seen in Julian Jaynes’ theory that it was a relatively recent phenomenon, and that ancient people did not “think” the way we think today. He argued that the characters in the oldest parts of the Homeric epics and of the Old Testament were largely “non-conscious”: their mind was “bicameral”, two minds that spoke to each other, as opposed to one mind being aware of itself. Those humans were guided by “hallucinations” (such as gods) that formed in the right hemisphere of the brain and that were communicated to the left hemisphere of the brain, which received them as commands. Language did not serve conscious thought: it served as communication between the two hemispheres of the brain. The bicameral mind began “breaking down” when the hallucinated voices no longer provided “automatic” guidance for survival. As humans lost faith in gods, they “invented” consciousness.

A major revolution in the understanding of the brain was started, indirectly, by the theory of the immune system advanced by Niels Jerne, which viewed the immune system as a Darwinian system. The immune system routinely manufactures all the antibodies it will ever need. When the body is attacked by foreign antigens, the appropriate antibodies are “selected” and “rewarded” over the antibodies that are never used. Instead of an immune system that “designs” the appropriate antibody for the current invader, Jerne painted the picture of a passive repertory of antibodies from which the environment selects. The environment is the actor. Jerne speculated that a similar paradigm might apply to the mind: the mind manufactures chaotic mental events that the environment orders into thought. The mind already knows the solution to all the problems that can occur in the environment in which it evolved over millions of years. The mind knows what to do, but it is the environment that selects what it actually does.

Neurologists such as Michael Gazzaniga cast doubt on the role of consciousness. He observed that the brain seems to contain several independent brain systems working in parallel, possibly evolutionary additions to the nervous system. Basically, a mind is many minds that coexist in a confederation. A module located in the left hemisphere interprets the actions of the other modules and provides explanations for our behavior: that is what we feel as “consciousness”. If that is the case, then our “commands” do not precede action, they follow it. First our brain orders an action, then we become aware of having decided it. There are many “I”s, and there is one “I” that makes sense of what all the other “I”s are doing: we are aware only of this “verbal self”, but it is not the one in charge.

A similar picture was painted by Daniel Dennett, who believed the mind is due to competition between several parallel narrative “drafts”: at every point in time, there are many drafts active in the brain, and we are aware only of the one that is dominant at that point in time. There is, in fact, no single stream of consciousness. The continuity of consciousness is an illusion.

Jerne’s theory was further developed by Gerald Edelman, who noticed that the human genome alone cannot specify the complex structure of the brain, and that individual brains are wildly diverse. The reason, in his opinion, is that the brain develops by Darwinian competition: connections between neurons and neural groups are initially under-determined by the genetic instructions. As the brain is used to deal with the environment, connections are strengthened or weakened based on their success or failure in dealing with the world. Neural groups “compete” to respond to environmental stimuli (“Neural Darwinism”). Each brain is different because its ultimate configuration depends on the experiences that it encounters during its development. The brain is not a direct product of the information contained in the genome: it uses much more information than is available in the genome, i.e. information from the environment. As it lives, the brain continuously reorganizes itself. Thus brain processes are dynamic and stochastic. The brain is not an “instructional” system but a “selectional” system.

The scenario of many minds competing for control was further refined by William Calvin, who held that a Darwinian process in the brain finds the best thought among the many that are continuously produced. A neural pattern copies itself repeatedly around a region of the brain, in a more or less random manner. The ones that “survive” (that are adequate to act in the world) reproduce and mutate. “Thoughts” are created randomly, compete and evolve subconsciously. Our current thought is simply the dominant pattern in the copying competition. A “cerebral code” (the brain equivalent of the genetic code) drives reproduction, variation and selection of thoughts.
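
Calvin’s “copying competition” is, at bottom, a selectional loop, and the loop itself can be sketched in a few lines of code. The following toy simulation (in Python) is only an illustration of the abstract idea: the bit-string “thoughts”, the mutation rate and the fitness function are invented for the example and stand in for whatever “adequacy to the environment” would mean in a real brain; it is not a model of Calvin’s cerebral code.

import random

# Toy Darwinian copying competition among candidate "thoughts".
# A thought is a bit string; fitness is how well it matches a target
# pattern that stands in for "adequacy to the environment".
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(pattern):
    return sum(1 for a, b in zip(pattern, TARGET) if a == b)

def mutate(pattern, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in pattern]

# Start from a random population of patterns competing for "workspace".
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for generation in range(30):
    # Fitter patterns copy themselves (with occasional mutations);
    # the rest are crowded out of the workspace.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = [mutate(p) for p in survivors for _ in range(2)]

# The dominant pattern at the end is the analogue of the "current thought".
dominant = max(population, key=fitness)
print(dominant, fitness(dominant))

After a few dozen generations the dominant pattern typically converges on the target, which is all the analogy requires: no pattern is designed, yet the best-fitting one comes to occupy the workspace.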

Paul MacLean introduced the view of the human brain as three brains in one, each brain corresponding to a different stage of evolution: the “reptilian” brain for instinctive behavior (mostly the autonomic system), the “old mammalian” brain for emotions that are functional to survival (mostly the limbic system) and the “new mammalian” brain for higher cognitive functions (basically, the neo-cortex). Mechanical behavior, emotional behavior and rational behavior arose chronologically and now coexist and complement each other.

Merlin Donald viewed the development of the human mind in four stages: the “episodic” mind, which is limited to stimulus-response associations and cannot retrieve memories without environmental cues (it lives entirely in the present); the “mimetic” mind, capable of motor-based representations and of retrieving memories independently of environmental cues (it understands the world, communicates and makes tools); the “mythic” mind, which constructs narratives and creates myths; and the “theoretical” mind, capable of manipulating symbols.

Steven Mithen identified four “modules” in the brain, which evolved independently and represent four kinds of intelligence: social intelligence (the ability to deal with other humans), natural-history intelligence (the ability to deal with the environment), tool-using intelligence and linguistic intelligence. The hunter-gatherers of pre-history were experts in all these domains, but those different kinds of expertise did not mix. For thousands of years these different skillsets had been separated. Mithen speculated that the emergence of self-awareness caused the integration of these kinds of intelligence (“cognitive fluidity”), which led to the cultural explosion of art, technology, farming and religion.

The role of a cognitive system in the environment was analyzed by Humberto Maturana and Francisco Varela. They believed that living beings are units of interaction, and that cognition is embodied action (or “enaction”). Organisms survive by “autopoiesis”, the process by which an organism continuously reorganizes its own structure to maintain a stable relationship with the environment. A living being cannot be understood independently of its environment, because it is that relationship that molds its cognitive life. Conversely, the world is “enacted” from the actions of living beings. Thus living beings and environment mutually specify each other. Life is an elegant dance between the organism and the environment, and the mind is “the tune of that dance”.

Edward Osborne Wilson, the founder of “sociobiology”, applied the principles of Darwinian evolution to behavior, believing that the social behavior of animals and humans can be explained from the viewpoint of evolution.

Richard Dawkins pointed out that one can imagine a Darwinian scenario also for the evolution of ideas, which he called “memes”. A meme is something that infects a mind (a tune, a slogan, an ideology, a religion) in such a way that the mind feels the urge to communicate it to other minds, thus contributing to spreading it. As memes migrate from mind to mind, they replicate, mutate and evolve. Memes are the cultural counterpart of genes. A meme is the unit of cultural evolution, just like a gene is the unit of biological evolution. Just as genes use bodies as vehicles to spread, so memes use minds as vehicles to spread. The mind is a machine for copying memes, just like the body is a machine for copying genes. Memes have created the mind, not the other way around.

Dawkins held the view that Darwinian evolution was driven by genes, not by bodies. It is genes that want to live forever, and that use bodies for that purpose. To Dawkins, evolution is nothing but a very sophisticated strategy for genes to survive. What survives is not my body but my genes.

Dawkins also called attention to the fact that the border of a “body” (or, better, phenotype) is not so obvious: a spider would not exist without its cobweb. Dawkins’ “extended phenotype” includes the world that an organism interacts with. The organism alone is an oversimplification, and does not really have biological relevance. The control of an organism is never complete inside and null outside: there is a continuum of degrees of control, which allows partiality of control inside (e.g., parasites operate on the nervous system of their hosts) and an extension of control outside (as in the cobweb). What makes biological sense is an interactive system comprising the organism and its neighbors. The very genome of a cell can be viewed as a representation of the environment inside the cell.

Stuart Kauffman and others saw “self-organization” as a general property of the universe. Both living beings and brains are examples of self-organizing systems. Evolution is a process of self-organization. The spontaneous emergence of order, or self-organization of complex systems, is ubiquitous in nature. Kauffman argued that self-organization is the fundamental force that counteracts the universal drift towards disorder. Life was not only possible and probable, but almost inevitable.
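
One of Kauffman’s best-known illustrations of “order for free” was the random Boolean network: wire up a set of on/off elements at random, give each a random rule, and the system nevertheless settles into short, repeating cycles of activity. The sketch below is a minimal Python version, with sizes and seed chosen arbitrarily for the example; it simply iterates one such network from a random state until a state repeats, which is the kind of spontaneous order he described.

import random

# A random Boolean network in the spirit of Kauffman's models:
# N nodes, each reading K=2 randomly chosen nodes through a random
# Boolean function. Despite the random wiring, the dynamics quickly
# falls into a short repeating cycle (an "attractor").
N, K = 16, 2
random.seed(1)

inputs = [random.sample(range(N), K) for _ in range(N)]
# Each node gets a random truth table over its 2**K input combinations.
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    return tuple(
        tables[i][state[inputs[i][0]] * 2 + state[inputs[i][1]]]
        for i in range(N)
    )

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:   # iterate until some state repeats
    seen[state] = t
    state = step(state)
    t += 1

print("cycle of length", t - seen[state], "reached after", seen[state], "steps")

Out of the 2**16 possible states, such a network typically visits only a tiny fraction before locking into a short cycle: order emerges from random wiring, without any selection at all.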

Linguistics focused on metaphor as more than a poetic tool. George Lakoff argued that language is grounded in our bodily experience, that language is “embodied”. Our bodily experience creates our concepts. Syntax is created by our bodily experience. The “universal” grammar shared by all humans is due to the fact that we all share roughly the same bodily experience. The process by which we create concepts out of bodily experience is metaphor, the process of experiencing something in terms of something else. The entire human conceptual system is metaphorical, because a concept can always be understood in terms of less abstract concepts, all the way down to our bodily experience. No surprise that we understand the world through metaphors, and that we do so without any effort, automatically and unconsciously. Lakoff held that language was created to deal with physical objects, and later extended to non-physical objects by means of metaphors. Thus metaphor is biological: our brains are built for metaphorical thought.

Dreams continued to fascinate neurologists such as Allan Hobson and Jonathan Winson, as new data showed that the brain was “using” sleep to consolidate memories. Dreams are a window on the processing that goes on in the brain while we sleep. The brain is rapidly processing a huge amount of information, and our consciousness sees flashes of the bits that are being processed. The brain tries to interpret these bits as narratives, but, inevitably, they look “weird”. In reality, there is no story in a dream: it is just a parade of information that is being processed. During REM sleep the brain processes information that accumulated during the day. Dreams represent “practice sessions” in which animals refine their survival skills. Early mammals had to perform all their “reasoning” on the spot. Modern brains have invented a way to “postpone” processing sensory information.

Colin McGinn was skeptical that any of this could lead to an explanation of what consciousness is and how it is produced by the brain. He argued that we are not cognitively omnipotent: like any other organism, there may be things that we simply cannot conceive. Maybe consciousness just does not belong to the “cognitive closure” of our organism. In other words, understanding our consciousness may be beyond our cognitive capacities.

The search for consciousness inside the brain took an unexpected turn when a mysterious biorhythm of about 40 Hertz was detected inside the brain. The traditional model for consciousness was “space-based binding”: there must be a place inside the brain where perceptions, sensations, memories and so forth get integrated into the “feeling” of my consciousness.

Thus Gerald Edelman and Antonio Damasio hypothesized mechanisms by which regions of the brain could synthesize degrees of consciousness. Damasio realized that the “movie in the mind” created by the flow of sensory inputs was not enough to explain self-awareness. He believed that consciousness of the self required a topography of the body and a topography of the environment, and that ultimately the “self” originated from its juxtaposition against the “non-self”. An “owner” and “observer” of the movie is created within a second-order narrative of the self interacting with the non-self. The self is continuously reconstructed via this interaction. The “I” is not telling the story: the “I” is created by stories told in the mind.

Francis Crick launched the opposite paradigm (“time-based binding”) when he speculated that synchronized firing (the 40 Hertz biorhythm) in the region connecting the thalamus and the cortex might “be” a person’s consciousness. Instead of looking for a “place” where the integration occurs, Crick and others started looking for integration in “time”. Maybe consciousness arises from the continuous dialogue between regions of the brain.

Rodolfo Llinas noticed a possible implication of this viewpoint. It looks like neurons are active all the time. We do not control our neurons, any more than we control our blood circulation. In fact, neurons are always active, even when there are no inputs. Neurons operate at their own pace, regardless of the pace of information. A rhythmic system controls their activity, just like rhythmic systems control heartbeat or breathing. It seems that neurons are telling the body to move even when the body is not moving. Neurons generate behavior all the time, but only some behavior actually takes place. It sounds like Jerne’s model all over again: it is the environment that selects which movement the body will actually perform. Consciousness is a side-effect: the thalamus calls out all the cortex cells that are active, and the response “is” consciousness.

How consciousness was produced by evolution was a fascinating mystery in itself. Graham Cairns-Smith turned the conventional model upside down when he claimed that emotions came first. A rudimentary system of feelings was born by accident during evolution. That system proved to be useful for survival, and therefore evolved. The organism was progressively flooded with emotions until a “stream of consciousness” appeared. Language made it possible to express it in sounds and thoughts instead of mere facial expressions. Then the conscious “I” was born.

The Berlin Wall fell in 1989 and the Soviet Union was dissolved in 1991, two years after withdrawing from Afghanistan (a lengthy and debilitating war). Most of its Eastern European satellites adopted the American model (democracy and capitalism) and applied for membership in both NATO (the USA-led military alliance) and the European Union (the economic union originally sponsored by Italy, France and Germany, which now also included Britain and Spain). From the point of view of the USA, not only had the enemy (the communist world) surrendered, but most of its former allies had turned into friends of the USA. Almost overnight, the entire world was adopting the American model. The “domino” effect that the USA had feared would propagate communism took place in the opposite direction: the moment the Soviet Union fell, almost all the countries of the world abandoned communism and adopted the American economic and political model. Democratic reforms removed the dictators of Latin America, the Far East and (later) Africa. The holdouts were rare (Cuba in Latin America, Burma and North Korea in the Far East, Zimbabwe in subequatorial Africa). There were, however, two notable exceptions. Under the stewardship of Deng Xiaoping (who had seized power in 1978), China had embarked on pseudo-capitalistic economic reforms, but the one-party system remained in place and kept strict control over freedom of speech. The Arab world, from Morocco to Iraq, from Syria to Yemen, and its eastern neighbors Iran and Afghanistan, was ruled by probably the most totalitarian regimes.

Other than these exceptions, the world was largely being molded after the example of the USA. What had been a picturesque melting pot (mostly a demographic experiment) had become a highly efficient economic and military machine, now imitated throughout the planet.

The adoption of the same economic model favored the creation of several free-trade zones and the emergence of a “global village”.

The 1990s were largely a decade of economic boom and (relative) peace (Africa being the theater of most remaining wars).

The USA found itself at the helm of an odd structure. It was definitely not an “empire”, since each country maintained plenty of independence and many became fierce competitors of the USA; at the same time, the USA had assumed a revolutionary (not imperial) mission to spread liberal democracy around the world. It fought wars of liberation more than wars of expansion. Its enemies were the enemies of liberal democracy (Nazism, Fascism, Communism, Islamic fundamentalism). It was, first and foremost, an empire of Knowledge: 75% of all Nobel laureates in the sciences, economics, and medicine were doing research in the USA.

As the countries of the world were no longer forced to choose between the USA camp and the Soviet camp, some of them achieved enough independence to exert significant regional influence. The new regional powers included the European Union (which kept growing in size and ambition), China, India (the largest democracy in the world), Japan, Brazil, Nigeria and South Africa. Last, but not least, there was Russia, intent on rebuilding itself as a non-communist country.

There was a significant shift from the Atlantic Ocean to the Pacific Ocean, as Japan, China, South Korea, Indochina, Australia and India became more and more relevant, while Western Europe was becoming less and less relevant. The combined gross product of the Asian-Pacific countries increased from 7.8% of world GDP in 1960 to 16.4% in 1982 and to over 20% in 2000.

Islamic fundamentalism was not easy to define as a political entity, but, benefiting from the example of Iran’s Islamic state and the funds pouring from the oil states, it managed to hijack a country, the very Afghanistan that had contributed to the fall of the Soviet Union.

After the crisis of the 1970s, which had proven to the whole world how crucial the supply of oil was for the world’s economy, the Middle East had become a strategic area above and beyond the scope of the Cold War. With the end of the Cold War, the Middle East became an even more dangerous place because its hidden dynamics became more evident: a deadly combination of totalitarian regimes, Islamic fundamentalism, the Palestinian-Israeli conflict and huge oil reserves. A practical demonstration came with the “Gulf War”, in which a large USA-led coalition repulsed an Iraqi invasion of Kuwait.

Western society was being dominated by automation, from the sphere of the household to the sphere of science. The main innovation of the 1990s was the Internet, which, created in 1985, became a new tool to communicate and spread knowledge, thanks to electronic mail (“e-mail”) and the “World-Wide Web”. This was a completely new landscape, not too dissimilar from the landscape that explorers of the 16th century had to face. Suddenly, companies had to cope with computer viruses that spread over the Internet, and people could find virtually unlimited amounts of information about virtually any topic via the “search engines”. The effects of the Internet were also visible on the economy of the USA: it largely fueled the boom of the 1990s, including the bubble of the stock market (the “dot-com” bubble).

The other emerging technology was genetic engineering. Having explained how life works, humans proceeded to tamper with it. The first genetically-engineered animal was produced in 1988, followed in 1994 by the first genetically-engineered edible vegetable and, in 1997, by the first clone of a mammal. The Human Genome Project deciphered the human genome.

Both ordinary folks and the intellectual elite had the feeling that they lived in a special time. It was not surprising that thinkers turned increasingly to interpreting history. Ironically, this autobiographical theme started when Francis Fukuyama declared the “end of history”, meaning that the ideological debate had ended with the triumph of liberal democracy.

John Ralston Saul criticized globalization, which he viewed as caused by a geopolitical vacuum: nation states had been replaced by transnational corporations. The problem is that natural resources and consumers live in real places.

Samuel Huntington interpreted the history of the world of the last centuries as a “Western Civil War”, in which the Christian powers of Europe fought each other anytime anywhere. The fall of Communism and the triumph of Capitalism ended the Western Civil War. Now the world was turning towards a “clash of civilizations” (Western, Islamic, Confucian, Japanese, Hindu, Slavic-Orthodox, Latin American, African).

Just as the Moon landing, which had seemed a good omen for the USA, turned out to open a decade of problems, so the fall of the Soviet Union, another apparently good omen for the USA, turned out to open another category of problems. In 2001, hyper-terrorism staged its biggest success yet by crashing hijacked planes into two New York skyscrapers. The USA retaliated by invading (and democratizing) the terrorists’ home base, Afghanistan, and, for good measure, Iraq. Hyper-terrorism rapidly found new targets around the world, from Spain to the Arab countries themselves. Far from having fostered an era of peace, the fall of communism had opened a new can of worms. The second Iraqi war was also the first instance of a crack among the European allies: Britain, Italy and Poland sided with the USA, while France and Germany strongly opposed the USA-led invasion.

Into the new millennium, Science is faced with several challenges: unifying Quantum and Relativity theories; discovering the missing mass of the universe that those theories have predicted; understanding how the brain manufactures consciousness; deciphering the genome; managing an ever larger community of knowledge workers; using genetics for medical and agricultural purposes; and resuming the exploration of the universe.

Appendix: “The New Physics: The Ubiquitous Asymmetry”, a chapter of Nature of Consciousness

Piero Scaruffi, December 2004

Bibliography

William McNeill: A History of the Human Community (1987)
Charles Van Doren: A History of Knowledge (1991)
Mark Kishlansky: Civilization In The West (1995)
Roberts: Ancient History (1976)
Arthur Cotterell: Penguin Encyclopedia of Ancient Civilizations (1980)
John Keegan: A History of Warfare (1993)
Bernard Comrie: The Atlas Of Languages (1996)
Henry Hodges: Technology in the Ancient World (1970)
Alberto Siliotti: The Dwellings of Eternity (2000)
Alan Segal: Life After Death (2004)
David Cooper: World Philosophies (1996)
Ian McGreal: Great Thinkers of the Eastern World (1995)
Richard Popkin: The Columbia History of Western Philosophy (1999)
Mircea Eliade: A History of Religious Ideas (1982)
Paul Johnson: Art, A New History (2003)
Ian Sutton: Western Architecture (1999)
Donald Grout: A History of Western Music (1960)
Geoffrey Hindley: Larousse Encyclopedia of Music (1971)
Michael Roaf: Mesopotamia and the Ancient Near East (1990)
Hans Nissen: The Early History of the Ancient Near East (1988)
Annie Caubet: The Ancient Near East (1997)
Trevor Bryce: The kingdom of the Hittites (1998)
Rosalie David: Handbook to Life in Ancient Egypt (1998)
Henri Stierlin: Pharaohs Master-builders (1992)
Glenn Moore: Phoenicians (2000)
Barry Cunliffe: The Ancient Celts (1997)
David Abulafia: The Mediterranean in History (2003)
Henri Stierlin: Hindu India (2002)
Hermann Goetz: The Art of India (1959)
Heinrich Zimmer: Philosophies of India (1951)
Surendranath Dasgupta: A History of Indian Philosophy (1988)
Gordon Johnson: Cultural Atlas of India (1996)
Jadunath Sinha: “History Of Indian Philosophy” (1956)
Haridas Bhattacharyya: “The Cultural Heritage Of India” (1937)
Charles Hucker: China’s Imperial Past (1975)
Sherman Lee: A History of Far Eastern Art (1973)
Wolfgang Bauer: China and the Search for Happiness (1976)
Joseph Needham: Science and Civilisation in China (1954)
John King Fairbank & Edwin Reischauer: East Asia Tradition and Transformation (1989)
Penelope Mason: History Of Japanese Art (1993)
Paul Varley: Japanese Culture (1973)
Thomas Martin: Ancient Greece (1996)
Katerina Servi: Greek Mythology (1997)
Robin Sowerby: The Greeks (1995)
Peter Levi: The Greek World (1990)
Tomlinson: Greek And Roman Architecture (1995)
Bruno Snell: The Discovery of the Mind (1953)
Henri Stierlin: The Roman Empire (2002)
Duby & Perrot: A History of Women in the West vol 1 (1992)
Giovanni Becatti: The Art of Ancient Greece and Rome (1968)
Marvin Tameanko: Monumental Coins (1999)
John Norwich: A short history of Byzantium (1995)
Kevin Butcher: Roman Syria (2003)
Bart Ehrman: Lost Scriptures (2003)
Elaine Pagels: The Origins Of Satan (1995)
Robert Eisenman: James the Just (1997)
Timothy Freke: The Jesus Mysteries (1999)
John Dominic Crossan: The Historical Jesus (1992)
Albert Hourani: A History of the Arab peoples (1991)
Bernard Lewis: The Middle East (1995)
John Esposito: History of Islam (1999)
Michael Jordan: Islam – An Illustrated History (2002)
Edgar Knobloch: Monuments of Central Asia (2001)
Huseyin Abiva & Noura Durkee: A History of Muslim Civilization (2003)
Vernon Egger: A History of the Muslim World to 1405 (2003)
David Banks: Images of the Other – Europe and the Muslim World Before 1700 (1997)
Reza Aslan: No God but God (2005)
Majid Fakhry: A History of Islamic Philosophy (1970)
Carter-Vaughn Findley: The Turks in World History (2005)
John Richards: The Mughal Empire (1995)
Bertold Spuler: The Mongol Period – History of the Muslim World (1994)
David Christian: A History of Russia, Central Asia and Mongolia (1999)
Graham Fuller: The Future of Political Islam (2003)
Norman Cantor: Civilization of the Middle Ages (1993)
Henri Pirenne: Histoire Economique de l’Occident Medieval (1951)
Robert Lopez: “The Commercial Revolution of the Middle Ages” (1976)
Will Durant: “The Age of Faith” (1950)
James Chambers: “The Devil’s Horsemen” (1979)
Henry Bamford Parkes: The Divine Order (1968)
Fernand Braudel: The Mediterranean (1949)
Lynn White: Medieval Technology and Social Change (1962)
Gerhard Dohrn-van Rossum: “History of the Hour” (1996)
Frances & Joseph Gies: Cathedral Forge and Waterwheel (1994)
Georges Duby: The Age of the Cathedrals (1981)
Gunther Binding: High Gothic Art (2002)
Xavier Barral: Art in the Early Middle Ages (2002)
Daniel Hall: “A History of Southeast Asia” (1955)
Geoffrey Hosking: Russia and the Russians (2001)
Simon Schama: “A History of Britain” (2000)
Will Durant: The Renaissance (1953)
John Ralston Saul: “Voltaire’s Bastards” (1993)
Joel Mokyr: Lever of Riches (1990)
Hugh Thomas: The Slave Trade (1997)
Peter Watson: Ideas (2005)
John Crow: “The Epic of Latin America” (1980)
David Fromkin: “Europe’s Last Summer” (2004)
Mary Beth Norton: A People And A Nation (1986)
John Steele Gordon: “An Empire Of Wealth” (2004)
Daniel Yergin: “The Prize” (1991)
Lawrence James: Rise and Fall of the British Empire (1994)
Robert Jones Shafer: “A History of Latin America” (1978)
Paul Kennedy: The Rise and Fall of the Great Powers (1987)
Jonathan Spence: “The Search for Modern China” (1990)
Henry Kamen: Empire (2002)
Edward Kantowicz: The World In The 20th Century (1999)
Christian Delacampagne: A History of Philosophy in the 20th Century (1995)
Piero Scaruffi: Nature of Consciousness (2006)
Jacques Barzun: “From Dawn to Decadence” (2001)
Peter Hall: Cities in Civilization (1998)
Sheila Jones: The Quantum Ten (Oxford Univ Press, 2008)
Orlando Figes: “Natasha’s Dance – A Cultural History of Russia” (2003)
Roger Penrose: The Emperor’s New Mind (1989)
Gerard Piel: The Age Of Science (2001)
Paul Johnson: Modern Times (1983)
Tony Judt: Postwar – A History of Europe Since 1945 (2005)
John Lewis Gaddis: The Cold War (2005)
Stephen Kinzer: Overthrow – America’s Century of Regime Change (2007)
Piers Brendon: The Decline And Fall Of The British Empire 1781-1997
HH Arnason: History of Modern Art (1977)
Herbert Read: A Concise History of Modern Painting (1959)
Jonathan Glancey: 20th Century Architecture (1998)
MOCA: At The End of the Century (1998)
Eric Rhode: A History of the Cinema (1976)
Robert Sklar: Film (1993)
Eileen Southern: The Music of Black Americans (1971)
Ted Gioia: A History of Jazz (1997)
Mark Prendergast: The Ambient Century (2000)
Piero Scaruffi: A History of Jazz (2007)
Piero Scaruffi: History of Rock and Dance Music (2009)

The History of Language: Why We Speak

The Origin of Language

Charles Darwin observed that languages seem to evolve the same way that species evolve. However, just like with species, he failed to explain what the origin of language could be.

Languages indeed evolved just like species, through little “mistakes” that were introduced by each generation. It is not surprising that the evolutionary trees drawn by biologists (based on DNA similarity) and linguists (based on language similarity) are almost identical. Language may date back to the beginning of mankind.

What is puzzling, then, is not the evolution of modern languages from primordial languages: it is how it came to be that non-linguistic animals evolved into a linguistic animal such as the human being. It is the “evolution of language” from non-language, not the “evolution of languages” from pre-existing languages, that is puzzling.

Several biologists and anthropologists believe that language was “enabled” by accidental evolution of parts of the brain and possibly other organs.

The USA biologist Philip Lieberman views the brain as the result of evolutionary improvements that progressively enabled new faculties. Human language is a relatively recent evolutionary innovation that came about when speech and syntax were added to older communication systems. Speech allowed humans to overcome the limitations of the mammalian auditory system, and syntax allowed them to overcome the limits of memory.

The USA neurologist Frank Wilson believes that the evolution of the human hand allowed humans a broad new range of new activities that, in turn, fostered an evolution of the brain that resulted in the brain of modern humans. Anatomical changes of the hand dramatically altered the function of the hand, eventually enabling it to handle and use objects. This new range of possibilities for the hand created a new range of possibilities for thought: the brain could think new thoughts and could structure them. The human brain (and only the human brain) organizes words into sentences, i.e. does syntax, because of the hand. “The brain does not live inside the head”.

Linguistic Darwinism

According to Chomsky’s classical theory, language is an innate skill: we come pre-wired for language, and simply “tune” that skill to the language that is spoken around us. In Chomsky’s view language is biology, not culture. This implies that the language skill is a fantastic byproduct of evolution. Syntax must be regarded as any other organ acquired via natural selection. How did such a skill develop, since that skill is not present elsewhere in nature? Where did it come from? Language appears to be far too complex a skill to have been acquired via step-by-step refinement of the Darwinian kind, especially since we are not aware of any intermediary steps (e.g., species that use a grammar only to some extent).

The British linguist Derek Bickerton advanced a theory that attempted to bridge Darwin and Chomsky. Bickerton argued that language was the key to the success of the human species, the one feature that made us so much more powerful than all other species. Everything else, from memory to consciousness, seems to be secondary to it. We cannot recall any event before we learned language. We can remember thoughts only after we learned language. Language seems to be a precondition to all the other features that we rank as unique to humans.

First of all, human language cannot just be due to the evolution of primitive, emotion-laden “call systems”. We still cry, scream, laugh, swear, etc. Language has not fully replaced that system of communication. The primitive system of communication continues to thrive alongside language. Language did not replace it, and probably did not evolve from it. Language is something altogether different.

He emphasized the difference (not the similarity) between human and animal communication. Animal communication is holistic: it communicates the whole situation. Human language deals with the components of the situation. Also, animal communication is pretty much limited to what is evolutionarily relevant to the species. Humans, on the other hand, can communicate about things that have no relevance at all for our survival. In fact, we could adapt our language to describe a new world that we have never encountered before. The combinatorial power of human language is what makes it unique. Bickerton thinks that human and animal communication are completely different phenomena.

In fact, Bickerton believes that human language is not primarily a means to communicate but a means to represent the world. Human language did not evolve from animal communication but from older representation systems. First, some cells (the sensory cells) were born whose only task was to respond to the environment. As sensory cells evolved and their inputs became more complex, a new kind of cells appeared that was in charge of mediating between these cells and motor cells. These mediating cells eventually evolved categories that were relevant to their survival. Animals evolved that were equipped with such “primary” representational systems. At some point, humans evolved who were equipped with syntax and were capable of representing representations (of models of models). Human language was so advantageous that it drove a phenomenal growth in brain size (not the other way around).

Two aspects of language, in particular, set it apart from the primitive call system of most animals: the symbolic and the syntactic aspects. A word stands for something (such as an object, a concept, an action). And words can be combined to mean more than their sum (“I walk home” means more than just the concepts of “I”, “walking” and “home”). Bickerton believes that syntax is what makes our species unique: other species can also “symbolize”, but none has shown a hint of grammar.

The philosopher Nicholas Humphrey once advocated that language was born out of the need to socialize. On the contrary, Bickerton believes that Humphrey’s “social intelligence” had little to do with the birth of proto-language. Socialization as a selective pressure would not have been unique to humans, and therefore language would have developed as well in other primates. Syntax, instead, developed only in humans, which means that a selective pressure unique to humans must have caused it. Bickerton travels back to the origins of hominids, to the hostile savannas where hominids were easy targets for predators and had precious little food sources. Other primates had a much easier life in the forests. The ecology of early hominids created completely different selective pressures than the ones faced by other primates. In his quest for the very first utterances, Bickerton speculates that language was born to label things, then evolved to qualify those labels in the present situation: “leopard footprints” and “danger” somehow needed to be combined to yield the meaning “when you see leopard footprints, be careful”.

Bickerton shows how this kind of “social calculus”, coupled with Baldwin effects, could trigger and successfully lead to the emergence of syntax. Social intelligence was therefore important for the emergence of syntax, even if it was not important for the emergence of proto-language.

Bickerton points out that the emergence of language requires the ability to model other minds. I am motivated to communicate information only if I can articulate this simple scenario in my mind: I know something that you don’t know and I would gain something if you knew it. Otherwise the whole point of language disappears.

Bickerton thinks that consciousness and the self were enabled by language: language liberated humans from the constraints of animal life and enabled off-line thinking. The emergence of language even created the brain regions that are essential to conscious life. Basically, he thinks that language created the human species and the world that humans see.

Summarizing, Bickerton believes that: language is a form of representation, not just of communication, a fact that sets it apart from animal communication; language evolved from primordial representational systems; language has shaped the cognitive life of the human species.

Grooming

The psychologist Robin Dunbar believes that originally the function of language was not to communicate information, but to cement society.

All primates live in groups. The size of a primate’s neocortex, relative to its body mass, is directly proportional to the size of the average group for that primate. Humans tend to live in the largest groups among primates, and human brains are correspondingly much larger.

As humans transitioned from the forest to the savanna, they needed to band together in order to survive the increased danger of being killed by predators. Language helped keep large groups together. Thus humans who spoke had an evolutionary advantage (the group) over humans who did not develop that skill. Dunbar believes that human speech is simply a more efficient way of “grooming”: apes cement social bonds by grooming the members of their group. Humans “gossiped” instead of grooming each other. Later, and only later, humans began to use language also to communicate information.

Dunbar believes that dialects developed for a similar reason: to rapidly identify members of the same group (it is notoriously difficult to imitate a dialect).

Language and societies evolved together: society stimulated the invention of language, and language enabled larger societies, which stimulated even more sophisticated languages, which enabled even larger societies, and so on.

Co-evolution

The USA anthropologist Terrence Deacon believes that language and the brain co-evolved. They evolved together influencing each other step by step. In his opinion, language did not require the emergence of a language organ. Language originated from symbolic thinking, an innovation that occurred when humans became hunters because of the need to overcome the sexual bonding in favor of group cooperation.

Both the brain and language evolved at the same time through a series of exchanges. Languages are easy for infants to learn not because infants can use innate knowledge but because language evolved to accommodate the limits of immature brains. At the same time, brains evolved under the influence of language through the Baldwin effect. Language caused a reorganization of the brain, whose effects range from vocal control, laughter and sobbing to schizophrenia and autism.

Deacon rejects the idea of a universal grammar à la Chomsky. There is no innate linguistic knowledge. There is an innate human predisposition to language, but it is due to the co-evolution of brain and language and it is altogether different from the universal grammar envisioned by Chomsky. What is innate is a set of mental skills (ultimately, brain organs) which translate into natural tendencies, which translate into some universal structures of language.

Another way to describe this is to view language as a “meme”. Language is simply one of the many “memes” that invade our mind. And, because of the way the brain is, the meme of language can only assume such and such a structure: not because the brain is pre-wired to such a structure but because that structure is the most natural for the organs of the brain (such as short-term memory and attention) that are affected by it.

Chomsky’s universal grammar is an outcome of the evolution of language in our mind during our childhood. There is no universal grammar in our genes, or, better, there are no language genes in our genome.

The secret of language is not in the grammar, but in the semantics. Language is meaningful. Deacon envisions a hierarchy of levels of reference (of meaning) that reflects the evolution of language. At the top is the level of symbolic reference, a stable network of interconnected concepts. A symbol does not only refer to the world, but also to other symbols. The individual symbol is meaningless: what has meaning is the symbol within the vast and ever-changing semantic space of all other symbols. At lower levels, Deacon envisions less and less symbolic forms of representation, which are also less and less stable. At the lowest, most fluctuating level of the hierarchy lie purely iconic and indexical references, created by a form of learning that is not unique to language (in fact it is widespread across all cognitive tasks). The lower levels are constrained by what humans can experience and learn, which is constrained by innate abilities. The higher level, on the other hand, is an emergent system due to the interaction among linguistic agents.

Gesturing in the Mind

According to the USA neuroscientist Rhawn Joseph, one of the youngest parts of the brain, the inferior parietal lobe of the left hemisphere, enabled language, tool making and art itself. It enabled us, in other words, to create visual symbols. It also enabled us to create verbal symbols, i.e. writing.

The inferior parietal lobe allows the brain to classify and label things. This is the prerequisite to forming concepts and to “abstracting” in general. Surprisingly, this is also the same organ that enables meaningful manual gesturing (a universal language that is also shared with many animals). Thus the evolution of writing is somehow related (neurally speaking) to manual gesturing. The inferior parietal lobe was one of the last organs of the brain to evolve, and it is still one of the last organs to mature in the child (which explains why children have to wait a few years before they can write and do math).

This lobe is much more developed in humans than in other animals (and non-existent in most). The neurons of this lobe are somewhat unique in that they are “multimodal”: they are capable of simultaneously processing different kinds of inputs (visual, auditory, movement, etc). They are also massively connected to the neocortex, precisely to three key regions for visual, auditory and somatosensory processing. Their structure and location make them uniquely fit to handle and create multiple associations. It is probably this lobe that enables us to understand a word as an image, a function, a name and many other things at the same time.

Joseph claims that the emotional aspect of speaking is the original one: the motivation to speak comes from the limbic system, the archaic part of the brain that deals with emotions and that we share with other mammals. The limbic system embodies a universal language that we all understand, a primitive language made of calls and cries. Each species has its own, but within a species all members understand it. Joseph believes that at this stage the “vocal” hemisphere is the right one. Only later, after a few months, does the left hemisphere impose structure on the vocalizing and thus become dominant in language.

Language as a Sexual Organ

The USA evolutionary psychologist Geoffrey Miller believes that the human mind was largely molded by sexual selection and is therefore mainly a sexual ornament. Culture, in general, and language, in particular, are simply ways that males and females play the game of sex. When language appeared, it quickly became a key tool in sexual selection, and therefore it evolved quickly.

Darwin had already speculated that language may have evolved through sexual selection. Miller agrees, finding that the usual explanation (that language helps a group trade key information) is only a small piece of the puzzle (individuals, unless they are relatives, have no motivation to share key information since they are supposed to compete).

Even more powerful is the evidence that comes from observing the behavior of today’s humans: they compete to be heard, they compete to utter the most sensational sentences, they are dying to talk.

Miller also mentions anatomical evidence: what has evolved dramatically in the human brain is not the hearing apparatus but the speaking apparatus. Miller believes that language, whose intended or unintended effect is to deliver knowledge to competitors, must also have a selfish function, otherwise it would not have developed: individuals who simply delivered knowledge to competitors would not have survived. On the other hand, if language is a form of sexual display, then it makes sense that it evolved rapidly, just like any other organ (bull horns or peacock tails) that served that function. It is unique to humans the same way that the peacock’s tail is unique to peacocks. It is pointless to try and teach language to a chimpanzee the same way that it is pointless to expect a child to grow a colorful tail.

The Origin of Communication

Where language comes from is a question that applies not only to humans but to all species, each species having its own “language”.

One might as well ask the question “where do bee dances come from?” Bees are extremely good at providing details about the route to and the location of food. They do so not with words but with dances. The origins of bee dances are no less intriguing than the origins of human language.

The point is that most species develop a social life and the social life depends on a mechanism of communication, and in humans that mechanism is language. But language may be viewed as a particular case of a more general process of nature, the process by which several individuals become aggregated in a group.

There is a bond among the members of a species, regardless of whether they are cooperating or competing: they can communicate. A dog cannot communicate much to a cat. A lion cannot communicate with an ant. And the greatest expert in bees cannot communicate much with a bee. Communication between members of different species is close to impossible. But communication among members of a species is simple, immediate, natural and, contrary to our beliefs, does not require any advanced skills. All birds communicate; all bees communicate. There is no reason to believe that humans would not communicate if they were not taught a specific language. They might, in fact, communicate better: hand gestures and facial expressions may be a more efficient means of communication among humans than words.

Again, this efficiency is independent of the motives: whether it is for cooperation or for aggression. We can communicate with other members of our species. When we communicate for cooperation, the communication can become very complex and sophisticated. We may communicate that a herd is moving east, that clouds are bringing rain, that the plains are flooded. A bee can communicate similar information to another bee. But an ant cannot communicate this kind of information to a fish and a fish cannot communicate it to a bird. Each species has developed a species-specific form of communication.

The origin of language is but a detail in a much more complex story, the story of how intra-species communication evolved. If all species come from a common ancestor, there must have been only one form of communication at the beginning. Among the many traits that evolved over the ages, intra-species communication is one that took the wildest turns. While the genetic repertoire of bees and flies may be very similar, their system of communication is wildly different.

The fact that communication is different for each species may simply be due to the fact that each species has different kinds of senses, and communication has to be tailored to the available senses.

The reason for this social trait to exist could lie both in sexual reproduction and in altruism.

The Origin of Cellular Communication

Even before social behavior was invented, there was a fundamental language of life. Living cells communicate all the time, even in the most primitive organisms: cell communication is the very essence of being alive.

There are obvious parallels between the language of words and the language of cellular chemicals. Two cells that exchange chemicals are doing just that: “talking” to each other, using chemicals instead of words. Those chemicals are bound in molecular structures just like the words of human language are bound in grammatical structures.

The forms of communication that do not involve chemical exchange still cause some chemical reaction. A bee that changes course, or a human brain that learns something, has undergone a chemical change that has triggered changes in its cognitive state.

From this point of view, there are at least three main levels of communication: a cellular level, in which living cells transmit information via chemical agents; a bodily level, in which living beings transmit information via “gestures”; and a verbal level, in which living beings transmit information via words.

Each level might simply be an evolution of the previous one.

Who Invented Language?

Linguists, geneticists and anthropologists have explored the genealogical tree of human languages to determine where human language was invented. Was it invented in one place and then spread around the globe (why then so many languages rather than just one?) or was it invented in different places around the same time? (What a coincidence that would be).

The meta-issue with this quest is the role of free will, i.e. whether we humans have free will and decide what happens to us. We often assume that somebody “invented” something and then everybody started using it. The truth could be humbler: all we humans share pretty much the same brain, and that brain determines our behavior. We all sleep, we all care for our children, we all avoid danger. Not because one human “invented” these behaviors, but because our brains are programmed to direct us to behave that way. Our free will (if indeed we have any) is limited to deciding which woman to marry, but the reason we want a wife is sex and children, a need that is programmed in our brain (and, of course, one could claim that the choice of the specific wife is also driven by our brain’s circuits).

In fact, we consider “sick” or “abnormal” any human being who does not love her/his children, any human who does not like sex, etc.

Asking who invented language could be like asking who invented sex or parenting. It may just come with the race. We humans may be programmed to communicate using the human language. It didn’t take a genius to invent language. We started speaking, worldwide, as soon as the conditions were there (as soon as we started living in groups, more and more heterogeneous groups, more and more collaborative groups).

The mystery may not be who invented language, but why we invented so many and so different languages. There are striking differences between Finnish and Chinese, even though those two peoples share pretty much the same brain. The effect of the environment on the specific language we start speaking must indeed be phenomenal.

What Are Jokes And Why Do We Make Them

Language developed because it had an evolutionary function. In other words, it helped us survive. For example, language enabled humans to exchange information about the environment. A member of a group can warn the member of another group about an impending danger or the source of water or the route taken by a predator.

This may be true, but it hardly explains the way we use language every day. When we write an essay, we may be matter-of-fact, but most of the day we are not. For example, we make jokes all the time. A human being who does not make jokes, or does not laugh at jokes made by others, is considered a case for a psychoanalyst. Jokes are an essential part of the use of language.

Nonetheless, jokes are a peculiar way to use language. We use words to express something that is not true, but could be true, and the brain somehow relates to this inconsistency and… we laugh.

There must be a reason why humans make jokes. There must be a reason why we use language to make jokes.

Upon closer inspection, we may not be so sure that the main function of language is communicating information about the environment.

If a tiger attacks you, I will not read you an essay on survival of the fittest: I will just scream “run!” We don’t need the complex, sophisticated structure of language to “communicate” about us and the environment. If you are starving, I may just point to the refrigerator. For most practical purposes, street signs communicate information about locations better than geography books. It is at least debatable whether we need language to communicate information about the environment that is relevant to survival. We can express most or all of that information in very simple formats, often with just one word or even just a gesture.

On the other hand, if we want to make a joke, we need to master the whole power of the language. Every beginner in a foreign language knows that the hardest part is to understand jokes in that language, and the second hardest is making them. Joking does require the whole complex structure of language and, on closer inspection, it is the only feature of human life that requires it.

Jokes are probably very important for our survival. A joke is practice: we laugh because we realize that something terrible would happen in that circumstance: the logic of the world would be violated, or a practical disaster would occur. The situation is “funny” because it has to be avoided. Being funny helps us remember that we should avoid it.

Joking may well be an important way to learn how to move in the environment without having to do it in person, without having to pay the consequences of a mistake.

In that case, it would be more than justified that our brain evolved a very sophisticated tool to make jokes: language.

Ultimately, language may have evolved to allow us to make more and more useful (funnier and funnier) jokes.

Tools

The British psychologist Richard Gregory has shown how language is but one particular type of “tool”. The human race, in general, is capable of making and using tools, and language happens to be one of them.

Gregory claims that “tools are extensions of the limbs, the senses and mind.” The fundamental difference between humans and apes is not in the (very small) anatomical differences but in language and tools. Man is both a tool-user and a tool-maker.

Gregory shows that there are “hand” tools (such as the level, pick, axe, wheel, etc.) and “mind” tools, which help with measuring, calculating and thinking (such as language, writing, counting, computers, clocks).

Tools are extensions of the body. They help us perform actions that would be difficult for our arms and legs. Tools are also extensions of the mind. Writing extended our memory. We can make a note of something. So do photographs and recordings. This book extends my mind. It also extends your mind. Tools like books create a shared mind.

Gregory qualifies information as “potential intelligence” and behavior as “kinetic intelligence”. Tools increase intelligence as they enable a new class of behavior. A tool “confers” intelligence to a user, meaning that it turns some potential intelligence into kinetic intelligence.

A person with a tool is a person with a potential intelligence to perform an action that without the tool would not be possible (or much more difficult).

Behavior is often just using that tool to perform that action. It may appear that intelligence is in your action, but, actually, intelligence is in the tool, not in your action. Or, better, they are two different types of intelligence.

And words are just one particular type of tool.

There is also a physical connection in our body between language and tool usage: they are both controlled by the same hemisphere.

Tools as Intentionality

The USA philosopher Daniel Dennett advanced a theory of language based on his theory of “intentionality” (the ability to refer to something). Basically, his idea is that different levels of intentionality correspond to different “kinds” of minds.

The “intentional stance” is the strategy of interpreting the behavior of something (a living or non-living thing) as if it were a rational agent whose actions are determined by its beliefs and desires. This is the stance that we adopt, for example, when dealing with ourselves and other humans: we assume that we and others are rational agents whose actions are determined by our beliefs and desires. Intentional systems are those to which the intentional stance can be applied, and they include artifacts such as thermostats and computers, as well as all living beings. For example, we can say that “this computer program wants me to input my name” or that “the tree bends south because it needs more light” (both “wants” and “needs” express desire).

The intentional stance makes the assumption that an intentional system has goals that it wants to achieve; that it uses its own beliefs to achieve its own goals, and that it is smart enough to use the right ones in the appropriate way.

It seems obvious that artifacts possess only “derived” intentionality, i.e. intentionality that was bestowed on them by their creators. A thermostat measures temperature because that is what the engineer designed it for. The same argument, though, applies to us: we are artifacts of nature and nature bestows on us intentionality. (The process of evolution created our minds to survive in an environment, which means that our mind is about the environment).

Dennett speculates that brains evolved from the slow internal communication systems of “sensitive” but not “sentient” beings when they became equipped with a much swifter communication agent (the electro-chemicals of neurotransmitters) in a much swifter communication medium (nerve fibers). Control was originally distributed around the organism in order to be able to react faster to external stimuli. The advent of fast electro-chemicals allowed control to become centralized, because now signals traveled at the speed of electricity. This also allowed control to become much more complex, as many more things could be done in a second.

“Evolution embodies information in every part of every organism”. And that information is about the environment. A chameleon’s skin, a bird’s wings, and so forth, they all embody information about the medium in which their bodies live. This information does not need to be replicated in the brain as well. The organ already “knows” how to behave in the environment. Wisdom is not only in the brain, wisdom is also embodied in the rest of the body. Dennett speculates that this “distributed wisdom” was not enough: a brain can supplement the crudeness, the slowness, the limitations of the organs. A brain can analyze the environment on a broader scale, can control movement in a much faster way and can predict behavior over a longer range.

As George Miller put it, animals are “informavores”. Dennett believes in a distributed information-sucking system, each component of which is constantly fishing for information in the environment. They are all intentional systems, which get organized into a higher-level intentional system, with an “increasing power to produce future”.

This idea, both evolutionarily and conceptually, can be expressed as a number of steps of intentionality, each of which yields a different kind of mind. First there were “Darwinian creatures”, which were simply selected by trial and error on the merits of their bodies’ ability to survive (all living organisms are Darwinian creatures). Then came “Skinnerian creatures”, which were also capable of independent action and therefore could enhance their chances of survival by finding the best action (they are capable of learning from trial and error). The third kind of mind, “Popperian creatures”, was able to play out an action internally in a simulated environment before performing it in the real environment, and could therefore reduce the chances of negative outcomes (information about the environment supplemented conditioning). Popperian creatures include most mammals and birds. They feel pain, but do not suffer, because they lack the ability to reflect on their sensations.

Humans are also “Gregorian creatures”, capable of creating tools, and, in particular, of mastering the tool of language. Gregorian creatures benefit from technologies invented by other Gregorian creatures and transmitted by cultural heritage, unlike Popperian creatures that benefit only from what has been transmitted by genetic inheritance.

A key step in the evolution of “minds” was the transition from beings capable of an intentional stance towards others to beings capable of an intentional stance towards an intentional stance. A first-order intentional system is only capable of an intentional stance towards others. A second-order intentional system is also capable of an intentional stance towards an intentional stance. It has beliefs as well as desires about beliefs and desires. And so forth. Higher-order intentional systems are capable of thoughts such as “I want you to believe that I know that you desire a vacation”.

This was not yet conscious life because there are examples, both among humans and among other animals, of unaware higher-order intentionality. For example, animals cheat on each other all the time, and cheating is possible only if you are capable of dealing with the other animal’s intentional state (with the other animal’s desires and beliefs), but Dennett does not think that animals are necessarily conscious. In other words, he thinks that one can be a psychologist without being a conscious being.

Dennett claims that our greater “intelligence” is due not to a larger brain but to the ability to “off load” as much as possible of our cognitive tasks into the environment. We construct “peripheral devices” in the environment to which those tasks can be delegated. We can do this because we are intentional: we can point to those things in the environment that we left there. In this way the limitations of the brain do not matter anymore, as we have a potentially infinite area of cognitive processing. Most species rely on natural landmarks to find their way around and track food sources. But some species (at least us) have developed the skills to “create” their own landmark, and they can therefore store food for future use. They are capable of “labeling” the world that they inhabit. Individuals of those species alter the environment and then the altered environment alters their behavior. They create a loop to their advantage. They program the environment to program them.

Species that store and use signs in the environment have an evolutionary advantage because they can “off-load” processing. It is like “taking a note” that we can look up later, so we don’t forget something. If you cannot take a note, you may forget the whole thing.

Thanks to these artifacts, our mind can extend out into the environment. For example, the notepad becomes an extension of my memory.

These artifacts shape our environment. Our brains are semiotic devices that contain pointers and indices to the external world.

Semiotics: Signs and Messages

Semiotics provides a different perspective to study the nature and origin of language.

Semiotics, founded in the 1940s by the Danish linguist Louis Trolle Hjelmslev, had two important precursors in the USA philosophers Charles Peirce (whose writings were rediscovered only in the 1930s) and Charles Morris (who in 1938 had formalized a theory of signs).

Peirce reduced all human knowledge to the idea of “sign” and identified three different kinds of signs: the index (a sign which bears a causal relation with its referent); the icon (which bears a relation of similarity with its referent); and the symbol (whose relation with its referent is purely conventional). For example, the flag of a sport team is a symbol, while a photograph of the team is an icon. Movies often make use of indexes: ashes burning in an ashtray mean that someone was recently in the room, and clouds looming on the horizon mean it is about to rain. Most of the words that we use are symbols, because they are conventional signs referring to objects.

Morris defined the disciplines that study language according to the roles played by signs. Syntax studies the relation between signs and signs (as in “the” is an article, “meaning” is a noun, “of” is a preposition, etc.). Semantics studies the relation between signs and objects (“Piero is a writer” means that somebody whose name is “Piero” writes books). Finally, Pragmatics studies the relation between signs, objects and users (the sentence “Piero is a writer” may have been uttered to correct somebody who said that Piero is a carpenter).

The Swiss linguist Ferdinand DeSaussure introduced the dualism of the “signifier” (the word actually uttered) and the “signified” (the mental concept). (“Semiology” usually refers to the Saussurean tradition, whereas “semiotics” refers to the Peircean tradition. Semiotics, as opposed to Semiology, is the study of all signs.)

The Argentine semiotician Luis Prieto studied signs, in particular, as means of communication. For example, the Braille alphabet and traffic signs are signs used to communicate. A “code” is a set of symbols (the “alphabet”) and a set of rules (the “grammar”). The code relates a system of expressions to a set of contents. A “message” is a set of symbols of the alphabet that has been ordered according to the rules of the grammar. This is a powerful generalization: language turns out to be only a particular case of communication. A sentence can be reduced to a process of encoding (by the speaker) and decoding (by the listener).
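
To make this abstraction concrete, here is a minimal sketch in Python of a “code” as an alphabet plus combination rules mapped onto contents, with encoding (speaker) and decoding (listener) as the two halves of communication. The traffic-light alphabet, the contents, and all function names are illustrative assumptions of mine, not anything drawn from Prieto’s own formalism.

```python
# A toy "code" in Prieto's sense: an alphabet, a grammar (rule for
# well-formed messages), and a mapping between expressions and contents.
CODE = {
    "alphabet": {"red", "green"},                 # admissible symbols
    "grammar": lambda msg: len(msg) == 1,         # rule: a message is one symbol
    "expression_to_content": {"red": "stop", "green": "go"},
}

def encode(content: str) -> str:
    """Speaker: turn a content into a well-formed message of the code."""
    for expression, meaning in CODE["expression_to_content"].items():
        if meaning == content:
            assert CODE["grammar"]([expression]), "message violates the grammar"
            return expression
    raise ValueError(f"content {content!r} is not expressible in this code")

def decode(message: str) -> str:
    """Listener: recover the content from a received message."""
    if message not in CODE["alphabet"]:
        raise ValueError(f"symbol {message!r} is not in the alphabet")
    return CODE["expression_to_content"][message]

if __name__ == "__main__":
    sent = encode("stop")                 # the speaker encodes a content
    print(sent, "->", decode(sent))       # the listener decodes it: red -> stop
```

On this view, a natural-language sentence is just a much richer instance of the same encode/decode loop, with a vastly larger alphabet and grammar.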

The Hungarian semiotician Thomas Albert Sebeok views Semiotics as a branch of communication theory that studies messages, whether emitted by objects (such as machines) or animals or humans. In agreement with René Thom, Sebeok thinks that there is nothing special about human sign behavior that distinguishes it from animal sign behavior or even from the behavior of inanimate matter.

The cognitive psychologist Merlin Donald speculated on how the human mind developed. He argued that at the beginning there was only episodic thinking: early hominids could only remember and think about episodes. Later they learned how to communicate, and then they learned how to build narratives. Symbolic thinking came last. Based on this scenario, the Danish semiotician Jesper Hoffmeyer has drawn his own conclusions: in the beginning there were stories, and then, little by little, individual words rose out of them. This implies that language is fundamentally narrative in nature; that language is corporeal, rooted in motor-based behavior; and that the unit of communication among animals is the whole message, not the word.

Hoffmeyer has introduced the concept of “semiosphere”, the semiotic equivalent of the atmosphere and the biosphere, that incorporates all forms of communication, from smells to waves: all signs of life. Every living organism must adapt to its semiosphere or die. At all levels, life must be viewed as a network of “sign processes”. The very reason for evolution is death: since organisms cannot survive in the physical sense they must survive in the semiotic sense, i.e. by making copies of themselves. “Heredity is semiotic survival”.

Rene’ Thom, the French mathematician who invented catastrophe theory, aims at extending his method so as to “geometrize thought and language”. Thom is envisioning a Physics of meaning, of significant form, which he calls “Semiophysics”.

Following this generalization of signs, James Fetzer has even argued in favor of extending Newell and Simon’s theory to signs: the mind not as a processor of symbols, but as a processor of signs.

Collective Cognition

What is, ultimately, the function of language? To communicate? To think? To remember? All of this and more. But, most likely, not only for the sake of the individual. Language’s crucial function is to create a unit out of so many individuals. Once we learn to speak, we become part of something bigger than ourselves. We inherit other people’s memories (including the memories of people who have long been dead) and become capable of sharing our own memories with other people (even those who have not been born yet).

Thanks to language, the entire human race becomes one cognitive unit, with the ability to perceive, learn, remember, reason, and so forth. Language turns the minds of millions of individuals into gears at the service of one gigantic mind.

As the USA philosopher Paul Churchland once pointed out, language creates a collective cognition, a collective memory and intelligence.

Further Reading

Bickerton, Derek & Calvin, William: LINGUA EX MACHINA (MIT Press, 2000)

Bickerton, Derek: LANGUAGE AND SPECIES (Chicago Univ Press, 1992)

Churchland, Paul: ENGINE OF REASON (MIT Press, 1995)

Darwin, Charles: LANGUAGES AND SPECIES (1874)

Deacon, Terrence: THE SYMBOLIC SPECIES (W.W. Norton & C., 1997)

DeSaussure, Ferdinand: COURSE IN GENERAL LINGUISTICS (1916)

Dennett, Daniel: KINDS OF MINDS (Basic, 1998)

Donald, Merlin: ORIGINS OF THE MODERN MIND (Harvard Univ Press, 1991)

Dunbar, Robin: GROOMING, GOSSIP AND THE EVOLUTION OF LANGUAGE (Faber and Faber, 1996)

Gardenfors, Peter: HOW HOMO BECAME SAPIENS (Oxford Univ Press, 2003)

Gregory, Richard: MIND IN SCIENCE (Cambridge Univ Press, 1981)

Hoffmeyer, Jesper: SIGNS OF MEANING IN THE UNIVERSE (Indiana Univ. Press, 1996)

Joseph, Rhawn: NAKED NEURON (Plenum, 1993)

Lieberman, Philip: THE BIOLOGY AND EVOLUTION OF LANGUAGE (Harvard Univ Press, 1984)

Miller, Geoffrey: THE MATING MIND (Doubleday, 2000)

Morris, Charles: FOUNDATIONS OF THE THEORY OF SIGNS (University Of Chicago Press, 1938)

Niehoff, Debra: THE LANGUAGE OF LIFE: HOW CELLS COMMUNICATE IN HEALTH AND DISEASE (Joseph Henry Press, 2005)

Peirce, Charles: COLLECTED PAPERS (Harvard Univ Press, 1931)

Prieto, Luis: PRINCIPES DE NOOLOGIE (1964)

Sebeok, Thomas Albert: CONTRIBUTION TO A DOCTRINE OF SIGNS (Indiana Univ Press, 1976)

Thom, Rene’: SEMIOPHYSICS (Addison-Wesley, 1990)

Wilson, Frank: THE HAND (Pantheon Books, 1998)

Half a Century of the “Boom”

December 1, 2012. This year marks the 50th anniversary of the starting shot of the most important continental literary movement, baptized the “Boom of Latin American literature” by the Chilean critic Luis Harss in his book Los nuestros, which thereby instituted a new literary canon.

In 1962 two foundational novels were published: La ciudad y los perros, by Mario Vargas Llosa, and La muerte de Artemio Cruz, by Carlos Fuentes. In 1963 came Rayuela, by Julio Cortázar, and in 1967, Cien años de soledad, by Gabriel García Márquez.

These four writers and their countries, Peru, Mexico, Argentina and Colombia, are the pillars on which the “Boom” would rest, a movement that would revolutionize fiction worldwide and whose repercussions would launch Latin America, a continent until then regarded as a collection of banana republics, onto the front cultural and literary pages of every news outlet on the planet.

There are other guests at this literary feast, such as the Cuban Guillermo Cabrera Infante, who published his masterly Tres tristes tigres in 1968, or the Chilean José Donoso, who released his novel El obsceno pájaro de la noche in 1970.

The phrase coined by the Cuban Alejo Carpentier, characterizing Latin America as a continent of “the marvelous real”, was what inspired the so-called “magical realism” that, from Cien años de soledad onward, was enthroned as the house trademark.

Not everything, however, is the product of chance, for before these four magnificent figures of continental literature there was already a tradition laid down by founding fathers such as the Guatemalan Nobel laureate Miguel Ángel Asturias, author of Hombres de maíz; the Peruvian José María Arguedas and his novel Los ríos profundos; the Mexican Juan Rulfo and his novel Pedro Páramo; the Cuban Alejo Carpentier and his Los pasos perdidos; as well as the Uruguayan Juan Carlos Onetti, La vida breve; the Argentine Jorge Luis Borges, Historia universal de la infamia; the Peruvian Ciro Alegría, El mundo es ancho y ajeno; the Paraguayan Augusto Roa Bastos, Hijo de hombre; and the Argentine Adolfo Bioy Casares, La invención de Morel (1940).

In that same decade of the sixties, however, there are two great writers who curiously do not enter the “Boom”, because its four chairs were already taken by their owners. We refer to the Argentine Ernesto Sábato and his novel Sobre héroes y tumbas (1961), as well as to the summit of twentieth-century Latin American literature, Paradiso, by the Cuban José Lezama Lima, published in 1966.

In parallel, the “Boom” had poetic support from giants of world poetry who continued the tradition of Rubén Darío, such as Gabriela Mistral, Pablo Neruda, Octavio Paz, Nicanor Parra, Roque Dalton García and Ernesto Cardenal.

Accompanied by a heavyweight publicity machine, and boosted by the Spanish literary agent Carmen Balcells and the Catalan publisher Carlos Barral, the “Boom” was, exceptionally, an excellent business built on excellent literature, one that broke the mold, placed the Latin American periphery at the center of the Spanish-speaking metropolis, and swept across the entire planet with incredible stories told as the most natural thing in the world.

Not for nothing did they come from the continent of the seven colors, the territory of magical realism, where fiction far exceeds reality and dreams are the best way of interpreting that reality.

The Fable of the Eagle and the Hen

Globalization represents a new stage in the process of cosmogenesis and anthropogenesis. We have to enter it. Not in the way desired by the powers that control the world market, a competitive and in no way cooperative market, interested only in our material riches and reducing us to mere consumers.

We want to enter it sovereign and conscious of the ecological, multicultural and spiritual contribution we can make.

One senses an excessive enthusiasm for globalization on the part of the current government. The president speaks of it without the nuances that would place our singularity in the proper light. He has the capacity to be a voice of his own and not an echo of the voice of others.

For him and his allies, I tell a story that comes from a small country in West Africa, Ghana, told by a popular educator, James Aggrey, at the beginning of this century, when the struggles for decolonization were underway.

May it make them think.

Once upon a time there was a peasant who went to the nearby forest to catch a bird and keep it captive in his house. He managed to catch an eaglet. He put it in the henhouse with the hens. It grew up like a hen.

After five years, the man received a visit from a naturalist. Walking through the garden, the naturalist said: “That bird over there is not a hen. It is an eagle.”

“Indeed,” said the man. “It is an eagle. But I raised it as a hen. It is no longer an eagle. It is a hen like the others.”

“No,” replied the naturalist. “It is and always will be an eagle. For it has the heart of an eagle. That heart will one day make it fly to the heights.”

“No,” insisted the peasant. “It has become a hen and will never fly like an eagle.”

Then they decided to put it to the test. The naturalist took the eagle, raised it up high and, challenging it, said: “Since you really are an eagle, since you belong to the sky and not to the earth, open your wings and fly!”

The eagle remained still on the naturalist’s outstretched arm. It looked around absent-mindedly. It saw the hens down below, eating grain. And it jumped down to join them.

The peasant commented: “I told you, it has turned into a simple hen.”

“No,” the naturalist insisted again. “It is an eagle. And an eagle will always be an eagle. We will try again tomorrow.”

The next day, the naturalist climbed with the eagle onto the roof of the house. He whispered to it: “Eagle, since you are an eagle, open your wings and fly!”

But when the eagle saw the hens below, pecking at the ground, it jumped down and landed beside them.

The peasant smiled and returned to the charge: “I told you, it has become a hen.”

“No,” the naturalist answered firmly. “It is an eagle and it will always have the heart of an eagle. We will try one last time. Tomorrow I will make it fly.”

The next day, the naturalist and the peasant got up very early. They took the eagle and carried it to the top of a mountain. The sun was rising, gilding the mountain peaks.

The naturalist lifted the eagle up high and commanded it: “Eagle, since you are an eagle, since you belong to the sky and not to the earth, open your wings and fly.”

The eagle looked around. It trembled, as if experiencing new life, but it did not fly. Then the naturalist held it firmly, pointed straight toward the sun, so that its eyes could fill with light and take in the dimensions of the vast horizon.

That was when it opened its powerful wings. It stood sovereign over itself. And it began to fly, to fly upward, to fly ever higher. It flew. And it never came back.

Peoples of Africa (and of Brazil)! We were created in the image and likeness of God. But there were people who made us think like hens. And we still think we really are hens. But we are eagles.

Therefore, brothers and sisters, open your wings and fly. Fly like eagles. Never be content with the grains thrown at your feet for you to peck.

Spanish translation by Daniel Rodríguez (MCCLP), Mexico 1997

Alejo Carpentier, or The Bloody Spring of History

A surrealist premise allows Carpentier to construct the most constant dualism of his work: the opposition between Europe and America. The ossified perception of everyday reality, history turned into petty history, is European. The possibility of breaking through this routine and reaching the “sur-reality” that characterizes the surrealist vision is American. Perhaps in this way we can simplify the inflated question of the marvelous real, with the curious addendum that, for our writer, America is exclusively Afro-Caribbean culture, a culture with nothing aboriginal about it, since the depopulation of the indigenous peoples led to repopulation by whites, blacks and Asians.

America, counter-figure of history, opposes religion to secularity, archaism to modernity, myth to becoming, utopian regeneration to evolutionary continuity: history’s promise of happiness, revolution. This characterization takes different turns in Carpentier’s works.

In El reino de este mundo we face a frankly African America, if the paradox is allowed: black magic against the white man’s steel. The rebel leader Mackandal becomes an epic figure in popular hymns and a character in the animist liturgies of voodoo.

America is mythology or, as Carpentier prefers to put it, “ontology”: the transformation of discovery into revelation, fertile racial mixing, the fantastic character of the black and the indigenous. Its history is the chronicle of the marvelous real, that is, of what is surreal in surrealism.

Carpentier characterizes the rebellious black man as vital and potent, opposed to the rational, washed-out European white. Those traits seem to betray his supernatural bent. He stands above everyday reality and above nature. He is paranormal. He has a demiurgic quality.

Mackandal can come and go from the world of the dead, possesses the gift of metamorphosis, controls his faithful, makes the dead work as zombies, and flies out of the pyre on which he has just been burned. “The one-armed Mackandal, now a houngan of the Radá rite, invested with extraordinary powers by several possessions by the major gods, was the Lord of Poison. Endowed with supreme authority by the Rulers of the other shore, he had proclaimed the crusade of extermination, chosen as he was to wipe out the whites and create a great empire of free blacks in Santo Domingo.”

The white man, imprisoned by his historical condition, is destined to pass, to become the past, for history is consummation, extermination and death. The black man, by contrast, able to communicate with “the other shore”, the world of the shades, has access to a shifting eternity marked by reincarnations and returns. He is transcendent, and he need only invoke his warrior gods to storm with success the fortress of the Goddess Reason.

In Los pasos perdidos America is the springtime of time, the land where the world regenerates itself. In America are stored the energies that will give new life to an exhausted European culture, asleep in the winter of reason.

The protagonist is a musician who perceives those instinctive energies in percussion instruments. In history, European man has lost his steps, drifting away from his origin, which is sacred, and going astray. Trying to recover it through music is futile, for the composer lacks the priest’s trance.

In El siglo de las luces the superiority of black magic over white science reappears when the asthmatic Esteban is cured by the spells and potions of Ogé. While the enlightened Frenchman Victor Hugues believes in revolution as a burst of light, Ogé announces the upheavals brought by the “coming of the times” and the Apocalypse. On which side does historical change fall? Carpentier will not know how to answer.

The architect and the dancer of La consagración de la primavera, too, weary of the surrealist revolution, the Bolshevik revolution and the Spanish Civil War, set off for America in search of a spring to consecrate. She wants to bring to Europe the Stravinsky ballet that gives the novel its name, but “danced Cuban style”. Perhaps, read allegorically, the Castro revolution.

The return to the origin makes America the place of utopia. In Carpentier it takes the form of the ideal city, built from the zero degree of time, a foundation. In Los pasos perdidos it is the work of El Adelantado and is called Puerto Anunciación. It is the task of freedom, and in it the directions of time become confused, so that the future is memory.

El Adelantado does not notice, however, that his plan reproduces the model of historical cities. It is a disguised form of the failure of utopia, similar to Victor’s in El siglo de las luces, when he builds in the Amazon an ideal city destined to be devoured by the jungle.

A comparable fate awaits revolutions in Carpentier. Against the cyclical, circular background of time, revolution alters the nature of things and the established hierarchies.

Its leaders are tried and condemned as traitors before the tribunals of the revolution itself, unless they become servants of the order they tried to subvert, which reestablishes itself as something natural.

The kingdom of blacks founded by Henri Christophe reproduces the commands, cruelties and pomp of the old regime. Victor and Esteban, emissaries of Cuban freemasonry, travel to France and Spain in the days of the French Revolution and return to Cuba to spread their ideals of equality. They bring a guillotine.

In time, Victor becomes a soldier and distinguishes himself by his repression of the rebels. The blacks are freed, rebaptized with Roman names and taught the Jacobin catechism, but they remain subjected to the same exhausting labors as always. Under tulle mosquito nets, waited on by beautiful mulatto women, Victor decrees executions by guillotine.

The unrealizable utopia, once put into practice, turns into tyranny; the revolutionary, into a terrorist commissar of the State. At first the new authorities do not trade in slaves, but they end up doing so when they capture them from the enemy powers. Esteban’s conclusion is pessimistic: “Let us beware of words that are too beautiful, of the Better Worlds created by words. Our age is succumbing to an excess of words. There is no Promised Land other than the one that man can find within himself.” Outside, in history, every promise disappoints. Within the individual, it is fulfilled. The Lights of the Enlightenment become the shadow of a garden.

Carpentier announced his plan to write a novel about the Cuban revolution. He never did. There are only a few references in La consagración de la primavera: the last battles against Batista, the revolutionaries’ installation in power, the failed invasion of the Bay of Pigs.

The narrator learns of this by reading French magazines, in which the Castroists, with their beards and long hair, look to her like men of a new race, similar to the French revolutionaries of ’89. One may suppose that the fate of Victor Hugues awaits them.

The word revolution has, in Carpentier, the meaning of a completed cycle, of starting over again. Societies rest on a sacred pact, and to break it is to generate disorder and invite restoration. The blacks keep up their santería cults even as the French instill rationalist ideas in them, or the liberating caudillo the Catholic cult.

If there has been any attempt at change, its failure results in decadence and ruin, that baroque finality which harmonizes with the baroque quality of Carpentier’s descriptions. His narration tends toward descriptive immobility, accentuated by the scarcity of dialogue. History comes to a standstill in dead time. If history is cyclical like the seasons of the year, its spring demands sacrifices and turns bloody.

Carpentier exits the Age of Enlightenment romantically: unhappy consciousness, the mismatch between world and desire, the disproportion between the limitedness of man and the immeasurability of the universe. The Enlightenment tried to regulate happiness socially, establishing a code of reasonably desirable things.

But history is the anthropology of unhappiness, death and devouring, baroque darkness and romantic night. The kingdom of Man is not the world of men, who wonder what it might be. As the barber Ti Noel says, “man never knows for whom he suffers and hopes.”

At one point, Carpentier absolves man of temporal unhappiness, of mortality, through art. In his search for the original moment, an absolute present with no before or after, where there must be an incomparable sign of the origin, the novelist entrusts the task to a musician. For it is music, the art of unity, and not literature, the art of division, that can recover the instant exempt from death.

The scene has sexual connotations. If history, the realm of death, is paternal, the origin, the realm of immortality, is maternal. No longer the masculine God of the West, but the Mother of God. The fundamental identity of everything that exists is feminine. The masculine subjective principle introduces finitude, the assumption of death, irregularity and disorder: history.

Ordered, serene peace is maternal, but it is also prenatal and lacks articulate language. What makes the human being human is to break away from the origin, to be born. That is the mark, the cut that institutes time, the counted steps that little by little become lost steps.

To return to the beginning: America is history’s promise of happiness because it is the promise of a return to maternal protection and to the lost paradise where death does not exist.

America is the happy house without a father, the playroom, the utopia that is both origin and paradise, but all of this is illusory, because the steps lost in time cannot be retraced, history cannot be rolled back. If the chaotic, blissful origin prior to time is recovered, it will be only to repeat the creation of time and the refounding of history, whereby the cycle of revolution will begin again where it ended in order to end where it began.

Works by Alejo Carpentier

Écue-Yamba-O! (1933).

El reino de este mundo (1949).

Los pasos perdidos (1953).

Guerra del tiempo (1956).

El acoso (1958).

El Siglo de las Luces (1962).

El recurso del método (1974).

Concierto barroco (1974).

El arpa y la sombra (1978).

La consagración de la primavera (1978).

Visión de América.

Los advertidos (short story).

Semejante la noche (short story).

Viaje a la semilla (short story).

Los fugitivos (short story).

Deleuze and Guattari: Schizos, Nomads, Rhizomes

We live today in the age of partial objects, bricks that have been shattered to bits, and leftovers… We no longer believe in a primordial totality that once existed, or in a final totality that awaits us at some future date (Deleuze and Guattari 1983: p.42)

A theory does not totalize; it is an instrument for multiplication and it also multiplies itself… It is in the nature of power to totalize and … theory is by nature opposed to power (Deleuze 1977a: p.208)

Gilles Deleuze and Felix Guattari have embarked on postmodern adventures that attempt to create new forms of thought, writing, subjectivity, and politics. While they do not adopt the discourse of the postmodern, and Guattari (1986) even attacks it as a new wave of cynicism and conservatism, they are exemplary representatives of postmodern positions in their thoroughgoing efforts to dismantle modern beliefs in unity, hierarchy, identity, foundations, subjectivity and representation, while celebrating counter-principles of difference and multiplicity in theory, politics, and everyday life.

Their most influential book to date, Anti-Oedipus (1983; orig. 1972), is a provocative critique of modernity’s discourses and institutions, which repress desire and proliferate fascist subjectivities that haunt even revolutionary movements. Deleuze and Guattari have been political militants and perhaps the most enthusiastic proponents of a micropolitics of desire that seeks to precipitate radical change through a liberation of desire. Hence they anticipate the possibility of a new postmodern mode of existence where individuals overcome repressive modern forms of identity and stasis to become desiring nomads in a constant process of becoming and transformation.

Deleuze is a professor of philosophy who in the 1950s and 1960s gained attention for his studies of Spinoza, Hume, Kant, Nietzsche, Bergson, Proust and others. Guattari is a practicing psychoanalyst who since the 1950s has worked at the experimental psychiatric clinic La Borde. He was trained in Lacanian psychoanalysis, has been politically active from an early age, and participated in the events of May 1968. He has collaborated with the Italian theorist Antonio Negri (Guattari and Negri 1990) and has been involved in the ‘autonomy’ movement, which seeks an independent revolutionary movement outside of the structures of organized parties. Deleuze and Guattari’s separate careers first merged in 1969 when they began work on Anti-Oedipus. This was followed by Kafka: Toward a Minor Literature (1986; orig. 1975), A Thousand Plateaus (1987; orig. 1980), and numerous independent works by each author.

There are many interesting similarities and differences between their work and Foucault’s. Like Foucault, Deleuze was trained in philosophy and Guattari has worked in a psychiatric hospital, becoming interested in medical knowledge as an important form of social control. Deleuze and Guattari follow the general tenor of Foucault’s critique of modernity. Like Foucault, their central concern is with modernity as an unparalleled historical stage of domination based on the proliferation of normalizing discourses and institutions that pervade all aspects of social existence and everyday life.

Their perspectives on modernity are somewhat different, however. Most conspicuously, where Foucault tended toward a totalizing critique of modernity, Deleuze and Guattari seek to theorize and appropriate its positive and liberating aspects, the decoding of libidinal flows initiated by the dynamics of the capitalist economy. Unlike Foucault’s, Deleuze and Guattari’s work is less a critique of knowledge and rationality than of capitalist society; consequently, their analyses rely on traditional Marxist categories more than Foucault’s. Like Foucault, however, they by no means identify themselves as Marxists, and they reject dialectical methodology for a postmodern logic of difference, perspectives, and fragments. Also, while all three foreground the importance of theorizing microstructures of domination, Deleuze and Guattari more clearly address the importance of macrostructures as well and develop a detailed critique of the state.

Further where Foucault’s emphasis is on the disciplinary technologies of modernity and the targeting of the body within regimes of power/knowledge. Deleuze and Guattari focus on the colonization of desire by various modern discourse and institutions. While desire is a sub-theme in Foucault’s later genealogy of the subject, it is of primary importance for Deleuze and Guattari. Consequently, psychoanalysis, the concept of psychic repression, engagements with Freudo-Marxism, and the analysis of the family and fascism play a far greater role in the work of Deleuze and Guattari than Foucault, although their critique of psychoanalysis builds on Foucault’s critique of Freud, psychiatry, and the human sciences.

In contrast to Foucault, who emphasizes the productive nature of power and rejects the ‘repressive hypothesis’, Deleuze and Guattari readily speak of the ‘repression’ of desire, and they do so, as we shall argue, because they construct an essentialist concept of desire. In addition, Deleuze and Guattari’s willingness to champion the liberation of bodies and desire stands in sharp contrast to Foucault’s sympathies for the Greco-Roman project of mastering the self. All three theorists, however, attempt to decenter and liquidate the bourgeois, humanist subject. Foucault pursues this through a critical archaeology and genealogy that reduces the subject to an effect of discourse and disciplinary practices, while Deleuze and Guattari pursue a ‘schizophrenic’ destruction of the ego and superego in favor of a dynamic unconscious. Although Foucault later qualified his views on the subject, all three theorists reject the modernist notion of a unified, rational, and expressive subject and attempt to make possible the emergence of new types of decentered subjects, liberated from what they see to be the terror of fixed and unified identities, and free to become dispersed and multiple, reconstituted as new types of subjectivities and bodies.

All three writers have shown high regard for each other’s work. In his book Foucault (1988; orig. 1986, p.14), Deleuze hails Foucault as a radically new thinker whose work represents ‘the most decisive step yet taken in the theory-practice of multiplicities’. For his part, Foucault (1977: p.213) claims that Deleuze and Guattari’s work was an important influence on his theory of power, and he wrote a laudatory introduction to Anti-Oedipus. In his review of Deleuze’s work in “Theatrum Philosophicum” (1977: pp.165-96), Foucault praises him for contributing to a critique of Western philosophical categories and to a positive knowledge of the ‘historical event’. Modestly downplaying his own place in history, Foucault even claims (1977: p.165) that ‘perhaps one day, this century will be known as Deleuzian’. In the dialogue “Intellectuals and Power” (Foucault 1977: pp.205-17), Foucault and Deleuze’s voices freely interweave in a shared project of constructing a new definition of theory, one which is always already practice and ‘local and regional’ in character.

Foucault and the Critique of Modernity

Is it not necessary to draw a line between those who believe that we can continue to situate our present discontinuities within the historical and transcendental tradition of the nineteenth century and those who are making a great effort to liberate themselves, once and for all, from this conceptual framework? (Foucault 1977: p.120)

What’s going on just now? What’s happening to us? What is this world, this period, this precise moment in which we are living? (Foucault 1982a: p. 216)

[T]he impression of fulfillment and of end, the muffled feeling that carries and animates our thought, and perhaps lulls it to sleep with the facility of its promises… and makes us believe that something new is about to begin, something that we glimpse only as a thin line of light low on the horizon – that feeling and impression are perhaps not ill-founded (Foucault 1973b: p. 384)

Foucault’s critique of modernity and humanism, along with his proclamation of the `death of man' and development of new perspectives on society, knowledge, discourse, and power, has made him a major source of postmodern thought. Foucault draws upon an anti-Enlightenment tradition that rejects the equation of reason, emancipation, and progress, arguing that an interface between modern forms of power and knowledge has served to create new forms of domination. In a series of historico-philosophical studies, he has attempted to develop and substantiate this theme from various perspectives: psychiatry, medicine, punishment and criminology, the emergence of the human sciences, the formation of various disciplinary apparatuses, and the constitution of the subject. Foucault’s project has been to write a `critique of our historical era' (1984: p. 42) which problematizes modern forms of knowledge, rationality, social institutions, and subjectivity that seem given and natural but in fact are contingent sociohistorical constructs of power and domination.

While Foucault has decisively influenced postmodern theory, he cannot be wholly assimilated to that rubric. He is a complex and eclectic thinker who draws from multiple sources and problematics while aligning himself with no single one. If there are privileged figures in his work, they are critics of reason and Western thought such as Nietzsche and Bataille. Nietzsche provided Foucault, and nearly all French poststructuralists, with the impetus and ideas to transcend Hegelian and Marxist philosophies. In addition to initiating a postmetaphysical, posthumanist mode of thought, Nietzsche taught Foucault that one could write a `genealogical' history of unconventional topics such as reason, madness, and the subject which located their emergence within sites of domination. Nietzsche demonstrated that the will to truth and knowledge is indissociable from the will to power, and Foucault developed these claims in his critique of liberal humanism, the human sciences, and in his later work on ethics. While Foucault never wrote aphoristically in the style of Nietzsche, he did accept Nietzsche’s claims that systematizing methods produce reductive social and historical analyses, and that knowledge is perspectival in nature, requiring multiple viewpoints to interpret a heterogeneous reality.

Foucault was also deeply influenced by Bataille’s assault on Enlightenment reason and the reality principle of Western culture. Bataille (1985, 1988, 1989) championed the realm of heterogeneity, the ecstatic and explosive forces of religious fervor, secularity, and intoxicated experience that subvert and transgress the instrumental rationality and normalcy of bourgeois culture. Against the rationalist outlook of political economy and philosophy, Bataille sought a transcendence of utilitarian production and needs, while celebrating a `general economy' of consumption, waste, and expenditure as liberatory. Bataille’s fervent attack on the sovereign philosophical subject and his embrace of transgressive experiences were influential for Foucault and other postmodern theorists. Through his writings, Foucault valorizes figures such as Hölderlin, Artaud, and others for subverting the hegemony of modern reason and its norms, and he frequently empathized with the mad, criminals, aesthetes, and marginalized types of all kinds.

Recognizing the problems with attaching labels to Foucault’s work, we wish to examine the extent to which he develops certain postmodern positions. We do not read Foucault as a postmodernist tout court, but rather as a theorist who combines premodern, modern, and postmodern perspectives. We see Foucault as a profoundly conflicted thinker whose thought is torn between oppositions such as totalizing/detotalizing impulses and tensions between discursive/extra-discursive theorization, macro/microperspectives, and a dialectic of domination/resistance.

In search of the postmodern

For the past two decades, the postmodern debates have dominated the cultural and intellectual scene in many fields throughout the world. In aesthetic and cultural theory, polemics emerged over whether modernism in the arts was or was not dead and what sort of postmodern art was succeeding it. In philosophy, debates erupted concerning whether or not the tradition of modern philosophy had ended, and many began celebrating a new postmodern philosophy associated with Nietzsche, Heidegger, Derrida, Rorty, Lyotard, and others. Eventually, the postmodern assault produced new social and political theories, as well as theoretical attempts to define the multifaceted aspects of the postmodern phenomenon itself.

Advocates of the postmodern turn aggressively criticized traditional culture, theory, and politics, while defenders of the modern tradition responded either by ignoring the new challenger, by attacking it in return, or by attempting to come to terms with and appropriate the new discourses and positions. Critics of the postmodern turn argued that it was either a passing fad (Fo 1986/7; Guattari 1986), a specious invention of intellectuals in search of a new discourse and source of cultural capital (Britton 1988), or yet another conservative ideology attempting to devalue emancipatory modern theories and values (Habermas 1981 and 1987a). But the emerging postmodern discourses and problematics raise issues which resist easy dismissal or facile incorporation into already established paradigms.

In view of the wide range of postmodern disputes, we propose to explicate and sort out the differences between the most significant articulations of postmodern theory, and to identify their central positions, insights, and limitations. Yet, as we shall see, there is no unified postmodern theory, or even a coherent set of positions. Rather, one is struck by the diversities between theories often lumped together as `postmodern' and the plurality – often conflictual – of postmodern positions. One is also struck by the inadequate and undertheorized notion of the `postmodern' in the theories which adopt, or are identified in, such terms. To clarify some of the key words within the family of concepts of the postmodern, it is useful to distinguish between the discourses of the modern and the postmodern (see Featherstone 1988).

To begin, we might distinguish between `modernity' conceptualized as the modern age and `postmodernity' as an epochal term for describing the period which allegedly follows modernity. There are many discourses of modernity, as there would later be of postmodernity, and the term refers to a variety of economic, political, social, and cultural transformations. Modernity, as theorized by Marx, Weber, and others, is a historical periodizing term which refers to the epoch that follows the `Middle Ages' or feudalism. For some, modernity is opposed to traditional societies and is characterized by innovation, novelty, and dynamism (Berman 1982). The theoretical discourses of modernity from Descartes through the Enlightenment and its progeny championed reason as the source of progress in knowledge and society, as well as the privileged locus of truth and the foundation of systematic knowledge. Reason was deemed competent to discover adequate theoretical and practical norms upon which systems of thought and action could be built and society could be restructured. This Enlightenment project is also operative in the American, French, and other democratic revolutions which attempted to overturn the feudal world and to produce a just and egalitarian social order that would embody reason and social progress (Toulmin 1990).

Aesthetic modernity emerged in the new avant-garde modernist movements and bohemian subcultures, which rebelled against the alienating aspects of industrialization and rationalization, while seeking to transform culture and to find creative self-realization in art. Modernity entered everyday life through the dissemination of modern art, the products of consumer society, new technologies, and new modes of transportation and communication. The dynamics by which modernity produced a new industrial and colonial world can be described as `modernization’ – a term denoting those processes of individualization, secularization, industrialization, cultural differentiation, commodification, urbanization, bureaucratization, and rationalization which together have constituted the modern world.

Yet the construction of modernity produced untold suffering and misery for its victims, ranging from the peasantry, proletariat, and artisans oppressed by capitalist industrialization, to the exclusion of women from the public sphere, to the genocide of imperialist colonization. Modernity also produced a set of disciplinary institutions, practices, and discourses which legitimate its modes of domination and control. The `dialectic of Enlightenment' (Horkheimer and Adorno 1972) thus described a process whereby reason turned into its opposite and modernity’s promises of liberation masked forms of oppression and domination. Yet defenders of modernity (Habermas 1981, 1987a, and 1987b) claim that it has `unfulfilled potential' and the resources to overcome its limitations and destructive effects.

Postmodern theorists, however, claim that in the contemporary high tech media society, emergent processes of change and transformation are producing a new postmodern society, and its advocates claim that the era of postmodernity constitutes a novel stage of history and a novel sociocultural formation which requires new concepts and theories. Theorists of postmodernity (Baudrillard, Lyotard, Harvey, etc.) claim that technologies such as computers and media, new forms of knowledge, and changes in the socioeconomic system are producing a postmodern social formation. Baudrillard and Lyotard interpret these developments in terms of novel types of information, knowledge, and technologies, while neo-Marxist theorists like Jameson and Harvey interpret the postmodern in terms of the development of a higher stage of capitalism marked by a greater degree of capital penetration and homogenization across the globe. These processes are also producing increased cultural fragmentation, changes in the experience of space and time, and new modes of experience, subjectivity, and culture. These conditions provide the socioeconomic and cultural basis for postmodern theory, and their analysis provides the perspectives from which postmodern theory can claim to be on the cutting edge of contemporary developments.

In addition to the distinction between modernity and postmodernity in the field of social theory, the discourse of the postmodern plays an important role in the field of aesthetics and cultural theory. Here the debate revolves around distinctions between modernism and postmodernism in the arts. Within this discourse, `modernism' could be used to describe the art movements of the modern age (impressionism, l’art pour l’art, expressionism, surrealism, and other avant-garde movements), while `postmodernism' can describe those diverse aesthetic forms and practices which come after and break with modernism. These forms include the architecture of Robert Venturi and Philip Johnson, the musical experiments of John Cage, the art of Warhol and Rauschenberg, the novels of Pynchon and Ballard, and films like Blade Runner or Blue Velvet. Debates centre on whether there is or is not a sharp conceptual distinction between modernism and postmodernism and the relative merits and limitations of these movements.

The discourses of the postmodern also appear in the field of theory and focus on the critique of modern theory and arguments for a postmodern rupture in theory. Modern theory – ranging from the philosophical project of Descartes, through the Enlightenment, to the social theory of Comte, Marx, Weber, and others – is criticized for its search for a foundation of knowledge, for its universalizing and totalizing claims, for its hubris to supply apodictic truth, and for its allegedly fallacious rationalism. Defenders of modern theory, by contrast, attack postmodern relativism, irrationalism, and nihilism.

More specifically, postmodern theory provides a critique of representation and the modern belief that theory mirrors reality, taking instead `perspectivist' and `relativist' positions that theories at best provide partial perspectives on their objects, and that all cognitive representations of the world are historically and linguistically mediated. Some postmodern theory accordingly rejects the totalizing macroperspectives on society and history favoured by modern theory in favour of microtheory and micropolitics (Lyotard 1984a). Postmodern theory also rejects modern assumptions of social coherence and notions of causality in favour of multiplicity, plurality, fragmentation, and indeterminacy. In addition, postmodern theory abandons the rational and unified subject postulated by much modern theory in favour of a socially and linguistically decentered and fragmented subject.

Thus, to avoid conceptual confusion, in this book we shall use the term `postmodernity' to describe the supposed epoch that follows modernity, and `postmodernism' to describe movements and artifacts in the cultural field that can be distinguished from modernist movements, texts, and practices. We shall also distinguish between `modern theory' and `postmodern theory', as well as between `modern politics', characterized by party, parliamentary, or trade union politics, and `postmodern politics', associated with locally based micropolitics that challenge a broad array of discourses and institutionalized forms of power.

To help clarify and illuminate the confusing and variegated discourse of the postmodern, we shall first provide an archaeology of the term, specifying its history, early usages, and conflicting meanings. Next, we situate the development of contemporary postmodern theory in the context of post-1960s France, where the concept of a new postmodern condition became an important theme by the late 1970s. And in 1.3 we sketch the problematic of our interrogations of postmodern theory and the perspectives that will guide our inquiries throughout this book.

Jorge Luis Borges and the New Era of the World: Paradox, the Labyrinth, and Quantum Physics

Where does the charm of the literary work of Jorge Luis Borges (1899-1986) spring from? Why the growing interest that keeps adding congresses, conferences, and publications on his work in various parts of the world? Whence that fervor, which arrived midway through the writer’s work and life and which, since his death, has not ceased to multiply? The writer defended himself many times against these contrary winds by describing his work, with mild irony, as “my scribblings,” “irresponsible games of a shy man,” “a series of digressions, accumulations, reiterations,” and himself as a “decidedly monotonous” writer [1].

Many of his critics, trying not to yield to the charm, hastened to lash themselves to the mainmast, to plug their ears, and to accuse this work of plagiarism, of being “parasitic writing,” “repetitive,” “aestheticizing” (Lafforgue, 1999; Helft and Paúl, 2000). Appropriating a phrase from the Borges of “Magias parciales del Quijote” (1952), I could say: “I believe I have found the cause.” Borges’s work seems to constitute itself as an irradiation of the fundamental signs of the new “image of the world” [2] in which we are immersed, which begins to take shape at once as a rejection and a continuation of modernity, and which has been called “postmodernity” (Lyotard, 1983), the “era of emptiness” (Lipovetsky, 1993), the “hermeneutic age of reason” (Ricoeur, 2006), etc. Perhaps one could say that, in the same sense in which Foucault sees Don Quixote (1605 and 1615) and Las meninas (1656) as thresholds and representations of “optimistic modernity” (Rorty, 1990), Borges’s work can be seen in relation to postmodernity.

Modern reflection, from Heidegger to Gadamer, from Edgar Morin to Habermas, from Rorty to Lipovetsky, has drawn clear demarcations and qualifications of the modern age. Morin condensed into a single sentence the passage from the worldview of faith to that of reason: “The eternal law that governs the fall of apples has supplanted the Law of the Eternal who, for an apple, made Adam fall” (Morin, 1981: 50). One could trace an arc from the fifteenth century to the end of the nineteenth which, with Rorty, we might call that of optimistic modernity: the passage from teleology to “causal law,” to what with Leibniz or Schopenhauer we may call the principle of sufficient reason as the dominant note of a worldview, with Descartes and Newton as its paradigms, scientific discourse as its most finished expression, and the conquest of objective truth and the certainty of being able to unveil the enigmas of the universe. Hence Pope’s line, “God said, Let Newton be! and all was light,” or Laplace’s certainty that every enigma could be elucidated if the causes were known. Borges referred explicitly to the theory of Laplace’s “monster”: “...if there existed a mortal whose mind could embrace the general chain of causes, he would be infallible; for he who knows the causes of all future events necessarily foresees the future.” Borges adds: “Laplace played with the possibility of encoding in a single mathematical formula all the facts that make up one instant of the world, in order then to extract from that formula all of the future and all of the past” (OC 1: 282). Laplace, like Newton, like Descartes, stands as the paradigm of optimistic modernity. When Borges sees in “The Murders in the Rue Morgue” (1841), by E. A. Poe (1809-1849), the birth of the detective story, he is simply bringing to light a narrative domain in which “ratio” deploys its powers and objectively “un-conceals” the truth. The truth revealed by the rational mind of Auguste Dupin, and later, in the extraordinary expansion of the genre, by Sherlock Holmes, Hercule Poirot, Father Brown, Maigret, Pepe Carvalho... yields before the powers of reason, which are the same as those of Laplace’s “monster.”
Modern reason explores the unknown in order to conquer its enigmas and add it to the horizons of the known, to the identity of power and the real within which life unfolds. Hence modern scientific discourse, a secularized copy of the promised land of the religions, comes accompanied by a “promise of happiness.” This new vision imagined other worlds, governed no longer by God but by reason, where happiness would be possible: the worlds of utopia, which has its formal birth in the work of Thomas More and which has significantly shaped the culture and literature of the West.
Modernity is the celebration of order and of the real under the dominion of the objectivity of reason, of its logical and causal presuppositions, of the cohesive forces of identity, of its condition as limit; science is its exploring lamp. Modernity has not yet ceased: it is, as Habermas would say, an unfinished project. From of old, however, a subterranean force which, as we shall see, will have one of its twentieth-century points of confluence in the work of Borges, advances to express itself in Hume’s and Berkeley’s critique of the real; in the critique of causality, which has one of its first formulations in Kant and is later taken up, for example, by Schopenhauer; in the crisis of objectivity, of objective truth; and it converges in Nietzsche, whom Habermas will call “the turntable of postmodernity.” This new vision of the world, which does not cancel the modern, optimistic vision but sometimes runs parallel to it, sometimes prolongs it, sometimes refutes it, will elaborate an ecological consciousness that opposes, with alarm, the modern optimism of science; it will set indeterminacy against objectivity, perspective against objective truth, and so on. It will link up with quantum theory, that other mode of scientific thinking, and will imagine other worlds, not in the perfection of utopian rationality but in prefigurations of repetition and strangeness. When Borges, in counterpoint to Poe’s detective story, writes “La muerte y la brújula,” he shows us postmodern reason, in which truth is no longer truth but an interpretation: the truth revealed by “ratio” is only an indeterminate, fallacious truth.
Likewise, the continual displacements, inclusions, and revelations of other worlds, whether those of dream or of invented or imagined worlds, are not utopian projections but, we might say, worlds of a “post-utopia” that stalk, annul, or reveal the insubstantiality of the real. Paul Davies sees the years 1900 and 1930 as the signposts marking the arc of this new worldview (Davies, 1994: IX), and it is possible to see, with Spinoza and Leibniz as antecedents, its paradigmatic figures first in Einstein and then in figures such as Heisenberg and Schrödinger: a worldview that makes possible the passage from deterministic certainties to microphysical paradoxes; from the vision of a universe of clockwork perfection to another, terrifying one, of chaos, unreason, and destruction.
Borges observed: “The writer is a product of his time, but time is also a product of the writer.” Authors also create reality. Oscar Wilde went so far as to say that there had been no fog in London before the painter Whistler (Borges Oral, 1979).
Writers as thresholds and representations of new visions of the world may be cited as examples: Plato, above all in the Phaedrus and the Seventh Letter, placed at the passage from the worldview of orality to the worldview of writing (Havelock, 1994); Don Quixote and Las meninas, as we have said, and we might add Shakespeare, as thresholds and representations of modernity. Habermas has pointed to the Italo Calvino of If on a Winter’s Night a Traveller (1982) as an example of the postmodern age (Habermas, 1994: 52); he has also pointed to Joyce, to Kafka, to Beckett; and to Borges.
Within this horizon, how might we read Borges’s work? I believe in at least two registers: in the interrogation of its representations of order and the real, of limit and causality, in the play of its textual combinations; and in the interrogation of its treatment of the enigma and of the universe of paradox and the labyrinth.

THE PROLIXITY OF THE REAL
The real is there, as a prolongation of life. There is no “real” without an “order”; and man cannot live without the presuppositions of an order. That is perhaps why Pope said that order comes down from heaven; but what is order and what is the real? Order is the horizon on which every culture is constituted and which, by constituting itself, that culture legitimates; we might conceive it as a domain, a kind of “bubble” within which life becomes possible. A domain constituted by a set of presuppositions, by relations of power and hierarchy, by interdicts and laws, by the legitimation of truth, by causal networks and logical networks; by the deployment of power and the imposition of limits; by language and its communicative competence, by a regulating morality. The cohesion of order and the real is supplied by identity-forming convictions and habits; hence the condition of servitude is the first cohesive element of order, and hence the value of freedom as a disturbing element. Rudolf Carnap noted: “every real object belongs to a comprehensive system that behaves according to certain laws” (Carnap, 1988: 318). Modernity and, above all, postmodernity have seen in the genesis of order and the real less an ontology than a construction. In this sense Borges speaks of a “postulation of reality,” and remarks: “the very act of perceiving, of paying attention, is selective in nature: all attention, every fixation of our consciousness, involves a deliberate omission of what is uninteresting. We see and hear through memories, fears, anticipations” (OC 1: 218). One of Borges’s first astonishments is not so much that man cannot live except according to an order, but the discovery of that enigmatic and invincible vocation for constructing an order, and of the will, which seems to come from the depths of being, to submit to that order. In the profile he gradually creates of his personal imaginary (deployed in countless interviews and biographies of the author), Borges imagines himself in an intermediate place: “The truth is that I grew up in a garden, behind a fence of iron spears, and in a library of unlimited English books.” One can picture the writer on that kind of border strip, observing on one side, doubtless above him, the epic “world” of the elegant bravery of his forebears, and on the other, below, the world of the malevos, the world of physical courage. The writer will become obsessed with representing both worlds from the strip of his garden, tracing that peculiar topology, perhaps the first labyrinthine strokes of his imaginary.
“The Borges” are seen in their heroic grandeur rising out of the historical horizon, in counterposition to the writer’s present and within the old quarrel between arms and letters. In the poem “Un mañana” from El oro de los tigres (1972) he says: “I, who suffered the shame / of not having been that Francisco Borges who died in 1874”; in “1972,” from La rosa profunda (OC III: 104), he had already revealed:
I am not those tutelary shades
That I honored with verses time does not forget.
I am blind. I have turned seventy;
I am not the Uruguayan Francisco Borges
who died with two bullets in his chest,
Amid the agonies of men,
In the stench of a field hospital,
But the Homeland, profane today, wants
Me, with my obscure grammarian’s pen,
Learned in academic trifles
And a stranger to the labors of the sword,
To gather the great murmur of the epic
And claim my place. I am doing so.
In “The thing I am,” from Historia de la noche (1977), he had written: “I am barely the shadow cast / by those intimate, intricate shadows” (OC III: 196). Against the splendor of the forebears, a presence reiterated throughout his work, the poet is “the one who counts the syllables.” This topology will correspond to the aesthetic significance of the “I am no one” that runs, in different modulations, through Borges’s work.
At the other extreme from the world of the forebears, in a kind of fold in the representation, in a strip of illegality, there erupts from below that blind and instantaneous form of heroism which is physical courage. Foucault described that strip where an order, with its laws and its rites, establishes itself: “Beneath the peace, the order, the wealth, the authority, beneath the calm order of subordinations, beneath the State, the apparatuses of the State, the laws, and so on, must we listen for and rediscover a sort of primitive and permanent war?” (Foucault, 2000: 52).
Early on, Borges is fascinated by that “order” in which “there is no other obligation than to be brave,” a fascination that unfolds from Evaristo Carriego (1930) onward, one of his texts on the world of the malevo and the “religion of courage.” From then on the “mythology of the malevo” never abandons his work, surfacing here and there in various tones and representations. In this context the figure of the duel becomes one of the recurrences, alongside the characterization of figures from the tradition of courage such as Hormiga Negra or Juan Moreira, drawn from popular culture and above all from the popular novels of Eduardo Gutiérrez (Gutiérrez, 1999a and 1999b). Josefina Ludmer has observed: “Juan Moreira represents the continuation of the gauchesque tradition of confrontation and violence: it follows the Ida of Martín Fierro with its fight to the end... the gaucho Moreira embodies popular violence in its pure state” (Ludmer, 1999: 232-3). This peculiar fascination leads Borges, against Lugones, to privilege the Martín Fierro of the first part, of illegality, violence, and the world of the malevo, rather than that of the edifying, law-abiding second part; it leads him to narrate knife-fighter scenes and to trace, in a process of repetition and rewriting, one of the most surprising articulations of cowardice and bravery.

PERSPECTIVISM
The narration of malevos gives the emerging Borgesian poetics a series of signs that will unfold across the broad field of an aesthetic recognizable beyond its rational sources: the play of repetitions, of rewritings, of combinatorics. In the progression that runs from “Hombres pelearon” (1928) to “Hombre de la esquina rosada” (1935) and on to “Historia de Rosendo Juárez” (1970), a play of textualities unfolds which, through perspectivism, calls the certainty of the real into question and reveals the features of a poetics and a worldview: the displacement within the duel, by the art of perspective, from the men who fight to the knives (a displacement, whose groove is metonymy, that will be taken up in many other texts); the perspectivism that shifts, on an ethical horizon, the notion of cowardice toward bravery; and, finally, the outline of the paradox: refusing the challenge in order to make an ethical conviction come true (“my advice is not to get into fights over what people may say or over a woman who no longer loves you”). The displacement of perspective, from one story to the other, exposes the limits of truth and its concealments.
The fascination with order leads Borges to establish fields of play within the story in order to interrogate the different ways in which that order achieves its cultural representations, for example in the stories about the formation of sects or those about the imaginary representation of other worlds. The formation of sects, their reasons and unreasons, is absorbed into this field of narrative play, not in the construction of a new mythology but in the unfolding of a humorous perspective that may pass unnoticed by the reader; thus “La secta del Fénix” (OC 1: 521) and “El Congreso” (OC III: 20) start from a paradoxical principle that ends in humorous representation: the principle of the map the size of the world is displaced here onto the sect the size of the community of all mankind. The principle of the game lies in the following formulation that runs through Borges’s work: “...a map of the Empire that was the size of the Empire and coincided with it point for point” (OC II: 205). The creation of another world, not in the sense of another utopian world of optimistic rationality but closer to the theory of other worlds conceived by Leibniz, finds in “Tlön, Uqbar, Orbis Tertius” (1943) its fullest expression.
Paradoxes in the world, and fissures that allow us to imagine other worlds. Borges said so explicitly: “We (the undivided divinity operating within us) have dreamt the world. We have dreamt it resistant, mysterious, visible, ubiquitous in space and firm in time; but we have allowed into its architecture tenuous and eternal interstices of unreason so as to know that it is false” (OC 1: 258). Borges, like Carroll, creates a narrative field of play, a vertigo of combinations and displacements, of inversions and ruptures of limits, out of which spring the critique and refutation of the real, and in which the “fissure” of the real is revealed, the hole in the real, Lacan would say, from which other worlds, other forms of the real, will be prefigured, woven by other logical and causal principles that emerge from a background of paradoxes and labyrinths. From Carroll, Borges will learn, with the episode of the king’s dream, that we may be dreamt by others; that causality can be inverted and unmotivated causalities created; and that paradox creates worlds that are different and strange yet intimately bound to the logical presuppositions of our world. Thus, if the limit is what determines the configuration of the real (Trías, 1991: 58), many stories center on the transgression or overflowing of the limit: the domain of dream erupting into the real, as with the flower of Coleridge, repeatedly cited by the author, whose principle is present, for example, in “Las ruinas circulares”; or the irruption of objects from an imagined world, like the “hrönir” from Tlön; or memory appropriating the spaces proper to oblivion, as in “Funes el memorioso”; so too “El inmortal”; so too “El libro de arena”... One can observe in the visual art of a Magritte how limits are transgressed in heterogeneous forms drawn from distinct domains, for example the human, the animal, the mineral, and so on. One can observe in some of the “duel” stories the metonymic displacement from the men to the weapons: it is the knife or the dagger that fights (thus a poem can say “the destiny that lies in wait, tacit, in a knife”; or the short piece on the “dagger,” where the dagger “dreams its simple tiger’s dream,” the metal sensing at every contact the homicidal destiny for which men created it; in “El encuentro,” likewise, within the arc of this displacement, we are told that “the weapons, not the men, fought”), and likewise in the sword there persists the audacity and the past of crimes and duels. One can also see the kinship of these displacements in stories such as “The Hand” (Maupassant) or “The Nose” (Gogol), where the displacement within the story produces effects of horror or of humor.
This play of displacements leads, in Borges, to narrative intervention in the world created by Cervantes (as in “Pierre Menard, autor del Quijote”) (OC 1: 431) and by Shakespeare (as in “La memoria de Shakespeare”) (OC III: 391); and it leads to a spectrum of possibilities, of causal meanders and “forking paths” down which the various stories rush: thus the logic of inversion in the exchange of roles between Cain and Abel, as described in various texts (and which corresponds to the inversion of the coward and the brave man in another series of texts); thus the inversion of the direction of influence in “Kafka y sus precursores,” together with the refutation of “local color” in “El escritor argentino y la tradición,” a movement of inversion running along the rail of paradox. Worth mentioning in this play of inversions is the narrative plot of “La intrusa” (OC II: 401), where passion for the shared lover brings about, over and above the brothers’ rivalry, the surprising murder of the intruder: brotherly love placed above sexuality, to the bafflement of the psychoanalytic schema.
Thus the spectrum of versions and variations, closely tied to perspectivism, unfolds in the story from diverse perspectives and constructions; thus the “writings” that open up diverse narrative perspectives on the canonical text of the Martín Fierro.
With “El fin” Borges “intervenes” in the Martín Fierro to break the “edifying knot” established by the Vuelta. In Hernández’s poem, duel and death are expressed through their repetition in the first part (two duels and two deaths) and through the closure of repetition in the second (in the challenge from the black man, which Martín Fierro does not accept). These two phases, repetition and closure, make the poem an edifying work; hence the resignification of it as a foundational text carried out by Lugones in his lectures of May 1913. Against this closure “El fin” is written, a specular, recursive text that restores the repetitive force of violence. This rewriting refutes the edifying character of the Vuelta and the foundation of Lugones’s reading. The text gathers themes recurrent in Borges: waiting, vengeance, the imposition of a destiny, the perspective from immobility. The story resignifies the duel from the perspective of the motionless witness, Recabarren. “El fin” narrates a second duel, with the brother of the black man, not narrated by Hernández, and the text is the “end” of Martín Fierro: his death (unlike Hernández’s poem, which narrates, edifyingly, his social reintegration, his “disappearance” as a gaucho) and the resignification of the “reopening” of the duel’s repetition. One of the most fascinating meanings of the duel is its resistance to closure: before the brave man who has won every challenge there will always appear, again and again, challengers intent on beating him and taking over his prestige. Hence the “weariness” of the brave man and the ethical crisis that such weariness generates (as we have seen in “Hombre de la esquina rosada,” and as can be observed in the “gunfighter mythology” of the North American West). The distance between Hernández’s text (“the onlookers, seeing to it that no brawl should break out, stepped in between, and the matter rested there”) and Borges’s (“my destiny willed that I should kill, and now, once more, it puts the knife in my hand”) is the distance between closure and the insistence of repetition, between the edifying and the specular, between the calm of restored order and the fascination of absolute evil.
If in “El fin” the Borgesian rewriting rescues repetition from its closure so that the duel may fulfill the imperative of vengeance, in “Biografía de Tadeo Isidoro Cruz (1829-1874)” the writing of Cruz’s change of sides, narrated in the Martín Fierro (“Perhaps in his heart / a blessed saint touched / the gaucho, who let out the cry / and said: ‘Cruz does not consent / to the crime being committed / of killing a brave man this way’”), becomes the representation of the instant in which a man changes his destiny forever, an event that never ceased to fascinate Borges and that he would rewrite in different ways, in different contexts and at different moments of his writing life. The story sets out to seek, by means of narrative, an answer to that instant in which a man changes his destiny forever. The dates of Cruz’s birth and death, given in the title, merely mark the headlong fall toward the night of that unique instant; within that span the story extends in order to interrogate the enigma of that night and that instant. The enigma is another of Borges’s fascinations, and his story sets out to give the enigma an explanation. The play of displacements (from fugitive murderer to fugitive officer) converges in the fundamental displacement, that of Cruz to Martín Fierro’s side, for this play of displacements is the same as Fierro’s: each is the other. In the “rewriting” of Cruz’s destiny, the narrative unfolded from his “conception” to the night with Martín Fierro will try to explain that definitive act: the nightmare of the father, one of the montoneros harried by Lavalle, who begets a child upon Isidora Cruz before perishing, “pursued by Suárez’s cavalry,” becomes the remote enigma (“no one knows what he dreamt”) that sets in motion the headlong course of a life toward the “lucid, fundamental night.”

VERSIONS AND VARIATIONS
Versions and variations constitute one of the fundamental movements of the Borgesian weave.
Perhaps one of the textual fields most attractive to this movement is the one established between sacred discourse and the spectrum of heretical variants in the world’s religions, which historically led to persecutions and to implacably vigilant institutions such as the medieval Inquisition. In the history of the West, the shadowy figure of the inquisitor takes upon himself the divine task of persecuting heretics. Modern writers have explored, from an aesthetic perspective, the textual possibilities of heretical variants; perhaps the best known of these, because of the scandal it provoked in the Muslim world, is Salman Rushdie’s publication in 1988 of his novel The Satanic Verses. In many texts Borges explores the aesthetic possibilities of the heretical; let us mention one, “Tres versiones de Judas” (1943), which lays out a spectrum of variants of “the most precious event in the history of the world.” One of these variants is especially shattering: “God became man completely, but man to the point of infamy, man to the point of reprobation and the abyss. To save us He could have chosen any of the destinies woven by the complex web of history; He could have been Alexander or Pythagoras or Rurik or Jesus; He chose an abject destiny: He was Judas.” The heretical variant of the Christian story, which has in this text one of its most important moments, multiplies in other texts and authors: thus in Terra Nostra (1987) by Carlos Fuentes, where, in a startling variant, Joseph is Judas and it is he who, as carpenter, makes the cross, avenging on Jesus Mary’s deception.
“Automatism of repetition” is what Foucault calls one of the essential modes of happening and of living, a mode of living studied by Freud and by Nietzsche in different ways, but always oriented toward the configuration or dissolution of meaning. Foucault noted: “There are no repetitions in the strict sense, I believe, except in the order of language (...) perhaps it is in the analysis of the form of repetition that something like an ontology of language could be sketched (...). Language never ceases to repeat itself” (1994: 506). We might say, in general terms, that the repetition of the identical is the fundamental cohesive element of order (in the recognition of law and interdict; in habit, ritual, and custom), and that the repetition of alterity is the manifestation of the ominous, even of absolute evil, when that repetition has no closure, as can be seen in the repetition of crime in Shakespeare’s Macbeth. In his seminar on “The Purloined Letter,” Lacan believes he observes, in the repetition of a fundamental scene, the field in which truth is manifested, at the very moment in which the drama between paradox and meaning is displayed. Borges’s work never ceases to repeat itself. His famous statement that the history of literature is nothing but the history of the various intonations of a handful of metaphors, and his observation that the storytelling of every age comes down to a few stories that repeat themselves, reverberate in the constitution of the features of a poetics.
Borges himself pointed out the recurrence of a few elements in his work: paradox and the labyrinth, the mirror and the tiger, which appear in a rhythm of repetitions that some of his contemporaries saw as monotony; others, on the contrary, intuited there one of the fundamental movements of creation. Borges referred to this recurrence with the mild irony that accompanied him like a shield through the hundreds of interviews he gave: “At the end of each year I make myself a promise: next year I will give up labyrinths and tigers, knives, mirrors. But there is nothing to be done, it is something stronger than I am: I begin to write and, suddenly, a labyrinth appears, a tiger crosses the page” (Borges Oral). Let us mention, by way of example, some texts where repetition takes on significant modalities: in “Pierre Menard, autor del Quijote,” the central representation of a text written identically by Cervantes and by Menard, and yet different, reveals the return of the same as different, the eternal return; in “La biblioteca de Babel,” the repetition of books and shelves, corridors and galleries, creates hallucinatory representations of order and of the universe. Many texts summon the repetition of a scene: thus in “La trama” (OC II: 171), “They kill him and he does not know that he dies so that a scene may be repeated”; thus in “El evangelio según Marcos” (OC II: 444), the Gutres’ crucifixion of Espinosa answers, in a blurred way, to a vengeance (for his having taken the daughter and sister) and, in a chilling way, to the repetition of the hallucinated scene of the Gospel read aloud.
Of Borges, and of a great part of contemporary literature, one could say what he said of Chesterton (OC II: 72): “The repetition of a scheme across the years and across the books... seems to confirm that it is an essential form, not a rhetorical artifice.” An essential form of a poetics: in this rhythm of repetitions the double recurs (where the I is “another myself”), and the I recurs in others (“The things that happen to one man happen to all men”) and dissolves into everyone (“To be no one in order to be all men”): the broad field of repetition is, let us repeat it, the aesthetic play of inversion and rewriting, of variations and correspondences. In the magical object, repetitions and accumulations converge to give an idea of the visionary and the infinite: thus in “El Zahir,” thus in “El disco,” thus in one of his central texts, “El Aleph.”

THE GAME OF DICE
Alongside the games of repetition, we must consider the presuppositions of causality. In the networks of causality (efficient cause) and of final causes (teleology) the meaning and the nature of a culture seem to be at stake. We have noted how the final cause, teleology, becomes dominant in mythical and religious societies: the final cause, materialized in god or the gods, is the giver of a fullness of meaning. We have noted how modern, desacralized society seems to place meaning in the efficient cause, which engenders, as the great achievement of reason, objective truth. The “linguistic turn” of postmodernity is, however, an inflection and a crisis of causality, and hence of objective truth, and a placing at the limit of the question of final meaning. In this limit situation Leibniz declared, in the modern optimism, that “Souls act according to the laws of final causes; bodies act according to the laws of efficient causes” (Leibniz, 1992: 15); in this limit situation the central question about foundations seems not even to arise, for, as Leibniz says, nihil est sine ratione, nothing is without a ground (Heidegger, 1991: 67). Einstein, at the very moment when final causes trembled, yet in keeping with the divine project of life that comes from the depths of the religions, would utter his famous phrase at the very edge of the limit situation: “God does not play dice” (around the same time Mallarmé had experienced, from within poetic experience, the fissure of final meaninglessness: “a throw of the dice will never abolish chance”). Will quantum physics, in the most spectacular turn of the worldview it is constructing, run up against final causes and, absolutely, against meaninglessness? Will it be able to utter the “nothing is” from the “without ground”? In the aesthetic play of causality, Borges’s work seems to intersect fundamental theses of quantum physics, and seems to arrive at the same limit situation as Einstein. Like the scientist, the writer will think that at the end of that path of strayings and signposts which is the labyrinth, behind the hard resistance which is the enigma, some form of meaning can still be thought; it is possible that God does not play dice.
The logical principles of identity, non-contradiction, and the excluded middle constitute, for Axelos, “the princely forms of grammar and syntax” (Axelos, 1969: 27). Leibniz stresses that our reasoning rests on these logical principles and on the principle of sufficient reason, whereby every expression and every event is a fabric of causes and effects. Following, at the outset, the Carrollian game (in which Alice grows or shrinks as she eats or drinks, runs until she collapses without moving from the Queen’s side, and sees the grinning cat that then disappears, leaving its smile behind), Borges creates a surprising field of aesthetic play that displaces causality. One such movement is the regressus in infinitum, which serves as a principle of construction for many stories against the background arc of paradox. Thus “Las ruinas circulares,” where the situation of a man dreaming at the very moment he is being dreamt by another opens causal regression onto the infinite, just as the regression of worlds Uqbar-Tlön... opens outward and swerves metonymically into the reproduction of the “hrönir”; the regressus in infinitum appears again and again throughout Borges’s work. In “La flor de Coleridge” he notes: “Henry James creates (in The Sense of the Past) an incomparable regressus in infinitum, since his hero, Ralph Pendrel, travels to the eighteenth century because he is fascinated by an old portrait, but that portrait requires, in order to exist, that Pendrel should have traveled to the eighteenth century. The cause is posterior to the effect; the motive of the journey is one of the consequences of the journey” (OC II: 20). In “Nathaniel Hawthorne” he will imagine that the world is someone’s dream (as happened, festively, in Alice’s world); in “Un sueño” (OC III: 320): “In that circular cell, a man who resembles me is writing, in characters I do not understand, a long poem about a man who in another circular cell is writing a poem about a man who in another circular cell... The process has no end, and no one will be able to read what the prisoners write.” The poem “Ajedrez” says: “God moves the player, and he, the piece. / What god behind God begins the plot / of dust and time and dream and agonies?” The regressus in infinitum breaks the limit that makes order and the real possible; it creates a circular movement that leads to that form of specularity and self-reflexivity expressed by the text within the text, as Borges notes occurs in night 602 of the Thousand and One Nights, opening a human horizon onto the infinite. Regressus in infinitum and circularity: an overflowing of the rail of causality and a manifestation of paradox.
The assumption of causality as the “cement of the real,” as the weave of objectivity, sees its horizon of presuppositions thrown into crisis when confronted with the play of variations. Borges has explored that crisis through the story.
To make the causality of vengeance possible, characters in some of Borges’s major stories create an “unmotivated causality” parallel to that of the real event: two series which, following Enrique Pezzoni’s important reflections (1999), generate at a stellar moment of the text a “crossing of series.” In an early text on Borges, Paul de Man observed: “Infamy performs the function of an aesthetic, formal principle. The fictions literally could not have come into being were it not for the presence of villainy at their very core” (de Man, 1996: 216). We might qualify this and say that infamy matters in the composition of many Borgesian texts, above all vengeance, which offers a network of alternative possibilities that can cross the real; in Borges the theme of vengeance converges with the strategy of unmotivated causality. Let us note some of his major texts where unmotivated causality appears. In “Tlön, Uqbar, Orbis Tertius,” the world of Tlön has a causality parallel to the logical presuppositions of our world (based on Berkeley’s logic and on paradox); the presence of the “hrönir” marks the crossing of series (expressed here in the logic of the fantastic). In “Las ruinas circulares,” the crossing occurs in the passage of the dreamt man into reality. In “El jardín de senderos que se bifurcan,” the fine and complex correspondence of options makes this one of the most interesting texts of the labyrinth: the unmotivated murder of Albert, committed so that the name of the city to be bombed will be known, establishes its correspondences with the labyrinth of Ts’ui Pên and with the very road to the sinologist’s villa. In “La muerte y la brújula,” the murders (the real ones, the first and the second; the simulated one, the third) constitute the (fallacious) causality for the fourth murder, which will be that of the detective himself, Lönnrot. In “Emma Zunz,” the sought-out and terrible violation becomes the unmotivated cause for pointing to Loewenthal’s guilt, so that he may thus pay for the old guilt of being responsible for her father’s ruin (a tragedy of which he is possibly not guilty either). The crossing of “registers,” of series, of causalities produces the situation of the labyrinth and the highest efficacy of reflexivity, producing what Pezzoni has called “an unbridled exhibition of procedures.”
Within the broad spectrum of the play of causality, Borges speaks of “the secret laws of chance” (aligning his optimism with Einstein’s), and everything, even god, would be subordinated to causal laws. Leibniz had already reflected on the paradox of god’s great power being nevertheless subject to causality: “God,” he notes, “does nothing outside of order, and it is not even possible to conceive of events that are not regular”; and he adds: “God does nothing that is not subject to order. Hence what passes for extraordinary is so only with respect to some particular order established among creatures; as for the universal order, everything conforms to it” (p. 15). This paradox explains the divine reluctance (whether of god or of the saints) to produce miracles. Borges makes this reflection of Leibniz his own at certain moments of his own thinking about causality and in some of his most important stories. Let us single out “El milagro secreto” and “La otra muerte,” where god avails himself of an artifice in order to “intervene” in a causality without transgressing it. Thus “El milagro secreto,” from Ficciones (OC I: 508), set at the height of Nazi fervor (1939: “it was dawn; the armored vanguards of the Third Reich were entering Prague”), unfolds a parallel series (a year that will allow the condemned Jaromir Hladík to finish his unfinished drama) within the minute of his execution, a parallelism between distinct times, the objective and the subjective, which we already find deployed, for example, in the scene of the Cave of Montesinos in the second part of Don Quixote: while Sancho testifies that Don Quixote went down and came back up in little more than an hour, Don Quixote affirms that he was there three days, a span in which countless adventures befell him. Two times, two perspectives, corresponding roughly to objective and subjective time, and the possible mismatch between them. In “El milagro secreto,” that mismatch is maintained and allows temporal differentiation without contradiction (the two series exist simultaneously); it is the enigmatic knot of the text. This duality of series (which is doubled within the character’s subjectivity, so that the text enters into the representational possibilities of subjectivity) gives rise to variations to infinity, to repetitions without closure: “He never tired of imagining these circumstances: absurdly he tried to exhaust all the variations. He anticipated the process endlessly, from the sleepless dawn to the mysterious volley... Jaromir interminably returned to the tremulous eves of his death.” The multiplicity of the worlds of subjectivity is drawn as a labyrinth, and the mismatch between objective world and subjective world as enigma... and as divine miracle. The “secret” avoids the collision between the two series, the modification of one series by the other. God thus intervenes in causality (in subjectivity) without intervening in it (in the objective facts). Paradox thereby opens onto the simultaneity of possible worlds.
In “La otra muerte,” from El Aleph (OC I: 571), as in “El milagro secreto” from Ficciones, the projection of the series presupposes a reflection on causality: the theological discussion of whether God can modify the past becomes here enigma, paradox, and aesthetic intelligibility. In “La otra muerte,” on a horizon of deep conjectures, cowardice and bravery become two domains that exclude each other or become identical, creating the character’s paradoxical destiny. Pedro Damián, having died a coward on the “bloody day of Masoller,” would nevertheless have the magical opportunity of another battle in which to show his courage. To make the representation and intelligibility of this paradox possible, the story “forks” into “series” or versions. The first version is that of Colonel Tabares, who presents Damián as a coward in the battle of Masoller in 1904. The narrator sketches the profile of this first Damián: “In vain I repeated to myself that a man hounded by cowardice is more complex and more interesting than a man who is merely spirited.” The story presents the imperative of the cult of courage that runs, in all its complexity, through part of Borges’s work: “Damián, as a gaucho, was obliged to be Martín Fierro...” Then the second witness, Dr. Juan Francisco Amaro of Paysandú, at another of the narrator’s visits to Tabares’s house, testifies to Damián’s bravery in that same battle. Amaro’s testimony coincides with Tabares’s “strange” forgetting of his earlier judgment of cowardice, and contrasts with the narrator’s own testimony of having known Damián, who would have died not in 1904 but in 1946. A complex web of contradictions and enigmas is thus set up: how is it possible that Damián was at once a coward and a brave man at the battle of Masoller of 1904 (where he would have died) and that he also died in 1946? The principle of the Borgesian paradox here reaches a point of vertigo. Borges does not resolve the paradox (can a paradox be “resolved” beyond the vanishing point of allegory?); rather, he makes it possible by making possible a subjective reality (in the form of delirium) that impinges on the reality of the facts (like the flower of Coleridge erupting from dream into waking). The correlation is established as follows: Damián behaves like a coward on the field of Masoller in 1904. He begs God for a second chance and thus returns to Entre Ríos as a shadow (in effect as a living dead man). There he lives until 1946, apart from everyone; there the narrator meets him. During those more than forty years he also awaits the chance of “another battle” in order to “deserve it.” At the hour of his death (his second death, in 1946), by means of a delirium he returns to the battle of Masoller, where he dies heroically, in an act simultaneous with the death of 1946: strictly speaking, by the logic of the story, Damián dies not twice, as the title suggests, but three times.

BORGES AND QUANTUM PHYSICS
The Borgesian text puts causality in crisis through paradox, yet without violating it: it resolves it through the reproduction of series. Borges's treatment of causality brings together the theory of the "Laplacean monster" and current theories of chance and indeterminacy, which present themselves not as the absence of causality but as the product of complex and unknown causalities. "La otra muerte" states: "the intricate concatenation of causes and effects is so vast and so intimate that perhaps no single remote fact, however insignificant, could be annulled without invalidating the present. To modify the past is not to modify a single fact; it is to annul its consequences, which tend to be infinite." Quantum physics has given an account of that causal simultaneity, of that parallelism of worlds, and the "butterfly effect" is famous as one expression of causal complexity. In the story, the unsuspected divine limitation on modifying the past is displaced onto the capacity of aesthetics to make the paradox possible in the play of the multiplicity of series. The story rests on the paradox of unmodifiable causality and the omnipotence of God (who, being "omnipotent," could modify any causal chain). Since the thesis of "unmodifiable causality" calls God's omnipotence into question (he could not modify even the slightest causal link), God, in order not to lose his "omnipotence" while still granting Damián's plea, must resort to a stratagem: bringing Damián back as a shadow (in a kind of "parallel series") so that he may die as a hero, but in delirium, even though this delirium unexpectedly impinges on reality (Dr. Amaro remembers it), endangering, if we follow the principle of the paradox, "the very structure of the universe."
Causality, the objective arc that builds the reality of our presuppositions, and the possibility of a causal fissure through which to reach other orders, other knowledges, other worlds, recur significantly in Borges. Hence, perhaps, his interest in the Kabbalah and in Buddhism as an aesthete rather than as a believer (they become possible perspectives for the representation of other worlds and other causalities); hence the paradox which, like the tigers, emerges from the Borgesian imaginary.
In an important study of Borges and quantum physics, Alberto Rojo notes that Borgesian citations can be found in scientific texts: "the Library of Babel, to illustrate the paradoxes of infinite sets and fractal geometry; references to the fantastic taxonomy of Dr. Franz Kuhn in 'El idioma analítico de John Wilkins' (a favorite of neuroscientists and linguists); invocations of Funes el memorioso to present numbering systems; and recently I was surprised by a citation of El libro de arena in an article on the segregation of granular mixtures." He also points out how "El jardín de senderos que se bifurcan" anticipates Hugh Everett III's 1957 "many-worlds interpretation" (Rojo, 1999: 188).
Borges thus presents himself as an oracle of modern times.

THE PARADOX, THE ENIGMA, THE LABYRINTH
It is possible to distinguish at least two Borgesian modes of representation: the representation of an order, of a world (besieged nonetheless by paradox); and the representation of an outside, where other worlds are possible. Hence the double reflection on causality and on paradox, in an arc that runs from the aporias of Zeno of Elea to Kant's antinomies and Leibniz's monads, all concurrent in the Borgesian imaginary. The critical vision of the real and of the world makes the world of paradoxes spring forth in Borges; its interstices allow other worlds to be seen and described, worlds in which paradoxes, as in Escher's pictures, govern representation.
The imaginary of peoples has projected anthropomorphic representations onto the enigmatic domain of the unknown, of the outside, of what lies beyond the limits of the real. It is in this context that we should understand the remark attributed to Xenophanes: if horses could imagine their gods, they would imagine them as horses. The anthropomorphic projection seems to run along two lines that sometimes coincide: the divine, the harmony and perfection of the gods; and the concurrence of the heterogeneous in the monstrous. The imaginary of the religions seems to constitute the first record of this duality. The representation of worlds through paradox seems to move away from that anthropomorphic vision and to replace religiosity with a pantheistic vision: the representation of worlds as expressions of the enigma, irreducible to its own dissolution.
The representation of worlds as enigmas makes paradox spring forth as their fundamental logical principle; one can think of it this way, for example, from Zeno of Elea to Wittgenstein, and one can see it in the logical spectrum of other worlds which, from astonishment to astonishment, quantum physics presents to us.
Leibniz had already imagined the possibility of other worlds parallel to the one we live in; for him, in the spirit of Newton and Laplace, "we live in a regulated universe," governed by a "universal harmony," in which there would be many parallel worlds, and ours is the best of all possible worlds. This notion, parodied by Voltaire in Candide, nevertheless bears a kinship with the contemporary many-worlds theory of quantum physics, with the difference that the latter conceives other logics, logics of paradox, at a distance from the Leibnizian principle of sufficient reason. Hence, at the very edge of these surprising representations, the most terrible of questions emerges: the question of meaninglessness. Leibniz's notion of "pre-established harmony," like Einstein's famous dictum that God does not play dice, testifies to the vertigo that this question entails, and to an immediate act of faith in preserving the meaning of creation and of life. When Carroll presents his paradoxical representations in the two Alice books, he does so within a space of aesthetic play and of returns to those reconstructions of meaning that are humor and allegory. "Paradox," Deleuze notes, "is first of all what destroys good sense as the only direction, but then it is what destroys common sense as the assignment of fixed identities" (Deleuze, 1989: 27). Paradox, by refuting the logical presupposition of the real, founds another representation of the real, one that is "impossible" from the standpoint of logical normality. An impossibility that is nonetheless realized: that is the paradox, and it is the most extreme mode of the critique of the real and of the vertigo of meaning.
The first manifestation of the contest between sense and nonsense is the posing of the enigma as soluble or insoluble. Paradox, Roy Sorensen observes, "is the logical expression of the enigma" (Sorensen, 2007: 21). The enigma, from the figure of the Sphinx and the oracle of Delphi in Greek culture to the witches' prophecy in Shakespeare's Macbeth, leaves its imprint by articulating enigma and destiny. The imaginary of God or of the gods in the religions is the most prodigious attempt of human cultures to resolve the enigma of life and of the universe. The causal, determinist objectivity that runs from Newton to Laplace burst in as the possibility of dissolving all enigmas, until the notion of objective truth was called into question from Schopenhauer to Nietzsche and from him to contemporary hermeneutics, and the notion of certainty came to show its limits before the infinite horizon of the unknown.
From Spinoza to Borges, the question posed by the enigma gains a new possibility: that of its irreducible presence in the universe, and its unfolding into a pantheistic vision. Pantheism and paradox will create a space of play in Borges's work.
In the space of the Borgesian game, the infinite and time become irreducible enigmas; the real can be conceived as the creation of a lesser god; and every assertion becomes conjecture. Thus he will say: "There is no classification of the universe that is not arbitrary and conjectural. The reason is very simple: we do not know what the universe is" (OC, II: 86). Poetry will allow us "to receive what we do not understand," and there are stories by Borges in which the glimpse of the enigma's solution is followed by a decision to conceal it (as in "La escritura del dios"). The pantheistic notion of the enigma departs from optimistic determinism (according to Laplace, we would understand the universe if we knew all its causes) and comes closer to the notions of uncertainty and indeterminacy of quantum physics.
If the enigma finds its most immediate expression in the paradox, the paradox finds its own in the labyrinth. The labyrinth is the Borgesian way of deploying a critical consciousness of the real and of representing the enigma and the infinite. Borges makes the labyrinth the architecture of the story, and its most immediate correspondence is, no doubt, the myth of the Minotaur, as shown expressly in "La casa de Asterión" (OC, I: 564), where the myth becomes a kind of slate for combinations and displacements: the monster awaits in Theseus his savior, not his executioner; the house-labyrinth is an interior that repeats itself endlessly and has no exterior, since to leave is to remain inside; and so on. In that glimpse of an outside we catch sight of "the temple of the Axes," returning the myth to its Dionysian sources. García Gual, in an indispensable essay (García Gual, 1999: 270-319), sees in Borges's fiction the constant presence of the labyrinth and, with it, the plural presence of Greek mythology and culture. This labyrinth is closely related to the one described by Ts'ui Pên in "El jardín de senderos que se bifurcan," signalled as a key in the very title of that story: mutually exclusive events become simultaneous. In his labyrinth games Borges will also conceive them as a straight line, like the labyrinth described by Lönnrot at the end of "La muerte y la brújula," or as a spider's web, as happens with impeccable perfection, and with a displacement of roles, in "Abenjacán el Bojarí, muerto en su laberinto" (OC, I: 600), and so on. The labyrinth in Borges, like Zeno of Elea's aporia, is the furrow of paradox and the repetition of the infinite.

BORGES AND CONTEMPORARY THOUGHT
The play of paradoxes and labyrinths; the arc that runs from the representation of courage to the interrogation of a universe in which certainties collapse and conjectures spread from the universe to the self, in which the universe radiates a weave of paradoxes and the self plunges into the other, into nothingness, into no one, shifting from "I am who I am" to the reformulated question "who am I?"; and the placing of being at that threshold parallax where quantum physics also seems to situate itself, the place, at once, of "I am and I am not," of "to be or not to be," in Hamlet's luminous phrase: this is the terrain Borges shares with contemporary thought.
A history of culture can be written that is at the same time a history of truth: the passage from "enchanted" societies to modernity is the passage from a truth with a teleological dominant, "charged with meaning," as Lukács would say, to the optimistic construction of an objective truth that would account for life, the world, and the universe. Nietzsche, who according to Habermas is the "turntable" of modernity, brings together in his reflection a profound critique of truth, a critique of long standing that reaches in him a fundamental synthesis: there are no truths, only interpretations; every truth is a truth from a perspective; the world is a fable. The complexity of quantum physics seems to stand on this latter horizon of truth, and so does Borges's work. In this sense we have distinguished Poe from Borges within that narrative of truth which is the detective story. We have observed how the detective story, in Borges's reading of Poe, is situated on the horizon of the optimistic modernity of the triumph of reason, with its paradigms in Descartes, Newton, and Laplace; hence reason is able to solve enigmas, as happens in "The Murders in the Rue Morgue" and "The Purloined Letter," where rationality is able to reveal the truth hidden in the inflections of paradox (thus, in "The Purloined Letter," what is too visible to be seen). Notable in this regard is the opening passage of "The Murders in the Rue Morgue," when Dupin, walking with the narrator through a Paris street, suddenly manages to deduce, or abduce, in Peirce's terms, what his companion is thinking at that moment. The genre Poe invented would have an extraordinary flowering in the history of Western narrative.
A story like "La muerte y la brújula," a parody of the detective story (as Don Quixote was of the novels of chivalry), carries out the rational process of unveiling the truth that is proper to the genre, only to show how fallacious that truth is: the repetition, true or false, of the crime; the rational construction of a labyrinth as a "spider's web"; the play of displacements, inversions, and repetitions that makes these texts, and all of Borges's work, those "drafts," "a whole literature," into the great interrogation of the certainties of the universe, of the real, and of the human condition.

NOTES
[1] Thus in "A quien leyere," from Fervor de Buenos Aires, he says: "If the pages of this book allow some felicitous verse, may the reader forgive me the discourtesy of having usurped it first"; and in the "Prologue" to the 1954 edition of Historia universal de la infamia: "These pages... are the irresponsible game of a shy man who did not dare to write stories and who amused himself by falsifying and distorting (without any aesthetic justification) the tales of others." In the "Epilogue" to the 1974 complete works he quotes an entry from the "Enciclopedia sudamericana," to be published in Santiago de Chile in the year 2074, which notes: "The fame Borges enjoyed during his lifetime, documented by a pile of monographs and polemics, never ceases to astonish us now. We know that the first to be astonished was he himself, and that he always feared being declared an impostor or a bungler, or a singular mixture of both. We shall inquire into the reasons, which today seem mysterious to us." Only Ariadne's thread could be a better guide than this reflexive attitude of light irony and delicate humor.
[2] Heidegger (1995) uses the expression "the age of the world picture" to refer to modernity. We might perhaps say that what has been called postmodernity constitutes a different "world picture."

Works cited
Borges, Jorge Luis, Obras completas I-IV (OC). Buenos Aires: Emecé, 1996.
Axelos, Kostas, El pensamiento planetario. Caracas: Monte Ávila, 1969 (first French edition: 1964).
Carnap, Rudolf, La construcción lógica del mundo. México: UNAM, 1988 (first German edition: 1928).
Davies, Paul, Otros mundos. El espacio y el universo cuántico. Barcelona, 1994 (first English edition: 1992).
Deleuze, Gilles, Lógica del sentido. Barcelona: Paidós, 1989 (first French edition: 1969).
Foucault, Michel, Les mots et les choses. Paris: Gallimard, 1966.
——, Defender la sociedad. México: FCE, 1997 (first French edition: 1997).
——, "Langage et littérature," in: Dits et écrits. Paris: Gallimard, 1994.
García Gual, Carlos, "Borges y los clásicos de Grecia y Roma," in: Sobre el descrédito de la literatura. Barcelona: Península, 1999, pp. 270-319.
Gutiérrez, Eduardo (1999a), Juan Moreira. Buenos Aires: Perfil Libros, 1996.
—— (1999b), La hormiga negra. Buenos Aires: Perfil Libros, 1999.
Habermas, Jürgen, El pensamiento postmoderno. Madrid: Taurus, 1990 (first German edition: 1988).
Heidegger, Martin, "La época de la imagen del mundo" (1938), in: Caminos de bosque. Madrid: Alianza, 1995.
——, La proposición del fundamento. Barcelona: Odós, 1991.
Helft, Nicolás and Alan Pauls, El factor Borges. Buenos Aires: FCE, 2000.
Lafforgue, Martín (comp.), Anti-Borges. Buenos Aires: Vergara, 1999.
Leibniz, G. W., Tres textos metafísicos. Bogotá: Norma, 1992.
Lipovetsky, Gilles, La era del vacío. Barcelona: Anagrama, 1986 (first French edition: 1983).
Ludmer, Josefina, El cuerpo del delito. Buenos Aires: Perfil Libros, 1999.
Lyotard, Jean-François, La condición postmoderna. Madrid: Teorema, 1987.
Man, Paul de, "Un maestro de nuestros días: Jorge Luis Borges" (1964), in: Escritos críticos. Madrid: Visor, 1996, pp. 215-222.
Morin, Edgar, El método. La naturaleza de la naturaleza. Madrid: Cátedra, 1988 (first French edition: 1977).
Pezzoni, Enrique, Lector de Borges. Buenos Aires: Sudamericana, 1994.
Ricoeur, Paul, Teoría de la interpretación. México: Siglo XXI, 1995 (first English edition: 1976).
——, Sí mismo como otro. México: Siglo XXI, 1996 (first French edition: 1990).
Rojo, Alberto, "El jardín de los mundos que se ramifican: Borges y la mecánica cuántica," in: VV. AA., Borges en diez miradas. Buenos Aires: Fundación El Libro, 1999, pp. 185-198.
Rorty, Richard, El giro lingüístico. Barcelona: Paidós, 1990 (first English edition: 1967).
Trías, Eugenio, Lógica del límite. Barcelona: Destino, 1991.

  • Víctor Bravo. Venezuelan writer, born in Maracaibo on September 27, 1949. Master's degree in Latin American literature and Doctorate in Letters. Retired full professor of the Universidad de los Andes. He has been a visiting professor at European and American universities and has published in peer-reviewed, indexed journals in Europe and the Americas. Lecturer, poet, and literary critic. He has received prizes, distinctions, and decorations for his teaching and his literary work.

Among his most recent published works are:
Desde lo oscuro (poetry, 2004).
El mundo es una fábula y otros ensayos (2004).
El orden y la paradoja. Jorge Luis Borges y el pensamiento de la modernidad (2003 and 2004).
El señor de los tristes y otros ensayos (2008).
El nacimiento del lector y otros ensayos (2008).
Leer el mundo (2009).

Public demonstrations and social protest: considerations from a human rights perspective

Minute approved by the Council of the Instituto Nacional de Derechos Humanos of Chile on August 27, 2012. A defining feature of democratic societies is the way in which they resolve the conflicts that arise from the simultaneous exercise of rights that may collide with one another. Respecting all of them within an adequate hierarchy is a permanent challenge, but it is especially urgent for countries which, like ours, have returned to the democratic path after a long authoritarian interruption.

This is especially relevant with regard to rights associated with the public expression of political or social demands by sectors of the citizenry which, because of their particular exclusion from public political debate, require spaces other than the traditional ones in order to demand from the State the realization of certain rights. For the same reason, States must be especially careful when developing legislation that may affect the right to public demonstration.

Within that framework, the bill "strengthening the protection of public order,"1 submitted by the Executive, must be analyzed with particular attention from the human rights perspective that is this institution's mandate.

Indeed, on September 27, 2011, the President of the Republic sent that bill to the National Congress; it consists of four articles, each amending a different body of law. Among the motivations of the bill, those offered for the new definition of the offense of public disorder stand out.

1 Mensaje No. 196-359, of September 27, 2011.

According to the Presidential Message, the new definition responds to the context of student demonstrations that have taken place during the year and to the claim that "the wording of article 269 of the Criminal Code does not respond to the social phenomena nor to the public disorder we are facing," given that "... recent events in our country have shown that the right to demonstrate peacefully has been limited and restricted by the action of persons unconnected to the causes being expressed, who act violently."

The bill arises in a context in which massive demonstrations have been carried out by diverse groups and sectors of Chilean society. In the course of those protests, criminal acts have been committed by groups of individuals, and, at the same time, complaints have been made about undue restrictions on the exercise of fundamental rights and about acts of violence committed by the police forces against demonstrators.

In the framework of the parliamentary debate on this bill, the INDH considers it essential to identify the human rights standards applicable to the regulation of social protests and public demonstrations, so as to accompany the debate in Congress with elements that make it possible to harmonize the proposal with the legal obligations undertaken by the State of Chile in the area of human rights.

To that end, this minute reviews: a) the general regulation of the right to peaceful demonstration in international human rights treaties, in the pronouncements of their monitoring bodies, and in the Political Constitution of the Republic; and b) certain questions that have required special treatment under international human rights law and that are relevant to the examination of the bill "strengthening the protection of public order."

Subsequently, some considerations on the bill in question are formulated in light of what is set out in the first part.

2. Public demonstrations and social protests: the exercise of freedom of expression and freedom of assembly
2.1. The right to demonstrate in international human rights treaties, in the pronouncements of their bodies, and in the Political Constitution

The right to demonstrate, or the right to social protest, is not expressly recognized in international human rights treaties. It has nonetheless been understood as a right that derives from other rights enshrined in those treaties, namely the right of assembly and freedom of expression.2

Both rights are set out in articles 19 and 21 of the International Covenant on Civil and Political Rights (hereinafter the "ICCPR"), in articles 10 and 11 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (hereinafter the "European Convention"), in articles 9 and 11 of the African Charter on Human and Peoples' Rights (hereinafter the "African Charter"), and in articles 13 and 15 of the American Convention on Human Rights (hereinafter the "American Convention").

With regard to freedom of expression, the ICCPR enshrines this right in the following terms:
"1. Everyone shall have the right to hold opinions without interference.
2. Everyone shall have the right to freedom of expression; this right shall include freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice.
3. The exercise of the rights provided for in paragraph 2 of this article carries with it special duties and responsibilities. It may therefore be subject to certain restrictions, but these shall only be such as are provided by law and are necessary:
a) For respect of the rights or reputations of others;
b) For the protection of national security or of public order, or of public health or morals."

2 Cf. Inter-American Commission on Human Rights, Annual Report of the Inter-American Commission on Human Rights 2005, Volume II, Report of the Office of the Special Rapporteur for Freedom of Expression, OEA/Ser.L/V/II.124 Doc. 7, February 27, 2006, p. 131, para. 8.

In the case of the American Convention, article 13 provides that:
"1. Everyone has the right to freedom of thought and expression. This right includes freedom to seek, receive, and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing, in print, in the form of art, or through any other medium of one's choice.
2. The exercise of the right provided for in the foregoing paragraph shall not be subject to prior censorship but shall be subject to subsequent imposition of liability, which shall be expressly established by law to the extent necessary to ensure: a. respect for the rights or reputations of others; or b. the protection of national security, public order, or public health or morals.
3. The right of expression may not be restricted by indirect methods or means, such as the abuse of government or private controls over newsprint, radio broadcasting frequencies, or equipment used in the dissemination of information, or by any other means tending to impede the communication and circulation of ideas and opinions.
4. Public entertainments may be subject by law to prior censorship for the sole purpose of regulating access to them for the moral protection of childhood and adolescence, notwithstanding the provisions of paragraph 2 above.
5. Any propaganda for war and any advocacy of national, racial, or religious hatred that constitute incitements to violence or to any other similar unlawful action against any person or group of persons, on any grounds including those of race, color, religion, language, or national origin, shall be prohibited by law."

Finally, our Political Constitution of the Republic refers to freedom of expression in the following terms:
"The Constitution guarantees to all persons: Article 19 No. 12: Freedom to express opinions and to impart information, without prior censorship, in any form and by any means (...)."

Thus, one of the rights on which social protest is grounded is freedom of expression. Its purpose is to serve as an instrument through which people can collectively express and manifest their agreement or disagreement on a matter that is generally of public interest.

In the words of the Inter-American Court, "freedom of expression is a means for the exchange of ideas and information among persons; it includes the right to try to communicate one's point of view to others, but it also implies everyone's right to know opinions, reports and news."3

Freedom of expression, in the specific case of demonstration or social protest, plays the role of demanding concrete responses from the State to citizens' claims. In other words, it operates as a form of accountability of the authorities to the citizenry. Social protest, peaceful and unarmed, is a legitimate means of pressure on the authorities and a legitimate form of democratic control.

The inter-American treaty bodies have said as much. The Office of the Special Rapporteur for Freedom of Expression of the IACHR has stated that social protest is important for the consolidation of democratic life and that, in general, this form of participation in public life, as an exercise of freedom of expression, is of imperative social interest.

In a similar sense, the Inter-American Court has stated: "[f]reedom of expression is part of the primary and radical public order of democracy, which is not conceivable without free debate and without dissent having the full right to manifest itself."4

3 I/A Court H.R., judgment in the case of "The Last Temptation of Christ" (Olmedo Bustos et al. v. Chile), February 5, 2001, para. 66.

As for the right of assembly, article 21 of the ICCPR provides: "The right of peaceful assembly shall be recognized. No restrictions may be placed on the exercise of this right other than those imposed in conformity with the law and which are necessary in a democratic society in the interests of national security or public safety, public order, the protection of public health or morals or the protection of the rights and freedoms of others."

Article 15 of the American Convention contains a similar regulation, establishing that:
"The right of peaceful assembly, without arms, is recognized. No restrictions may be placed on the exercise of this right other than those imposed in conformity with the law and necessary in a democratic society in the interest of national security, public safety or public order, or to protect public health or morals or the rights or freedoms of others."

The Chilean Constitution also enshrines the right of assembly, establishing that:
"The Constitution guarantees to all persons: Article 19 No. 13: The right to assemble peacefully, without prior permission and without arms. Meetings in squares, streets and other places of public use shall be governed by the general police regulations."

The right of assembly is a right closely connected to freedom of expression. According to the European Court of Human Rights, the expression of opinions is one of the objectives of the right of peaceful assembly. In this sense, the European Court has insisted that the right to demonstrate is protected both by the right to freedom of expression and by the right to freedom of assembly.5
4 I/A Court H.R., Compulsory Membership in an Association Prescribed by Law for the Practice of Journalism, Advisory Opinion OC-5/85, Series A, No. 5, November 13, 1985, para. 69.
5 See, for example, ECtHR, Vogt v. Germany, judgment of September 26, 1995, Series A, No. 323, para. 64; ECtHR, Rekvényi v. Hungary, judgment of May 20, 1999, Reports of Judgments and Decisions 1999-III, para. 58; ECtHR, Young, James and Webster v. the United Kingdom, judgment of August 13, 1981, Series A, No. 44, para. 57; ECtHR, Refah Partisi (the Welfare Party) and Others v. Turkey, judgment of July 31, 2001, para. 44, available at http://www.echr.coe.int; ECtHR, United Communist Party of Turkey and Others v. Turkey, judgment of January 30, 1998, Reports 1998-I, para. 42.
Thus, the fundamental right to freedom of assembly must be understood as the possibility for a group of people to gather in a given place, and also as the possibility of expressing opinions collectively by taking advantage of the opportunity to organize meetings. It is, consequently, also a fundamental right of political communication, because of its relation to the process of formation of public opinion.

The treaty bodies that supervise compliance with the covenants and conventions, as well as several special procedures, have affirmed the relevance of public demonstrations as an expression of the exercise of freedom of expression and of the right of assembly.

In this regard, the Office of the Special Rapporteur for Freedom of Expression of the IACHR has stated that the participation of societies through public demonstration is important for the consolidation of their democratic life.6

The Rapporteurship has also stated that in many countries of the hemisphere, protest and social mobilization have become tools for petitioning the public authorities, as well as channels for public complaints about abuses or violations of human rights.7

Moreover, the same Rapporteurship, in its 2005 Annual Report, warned of the tendency to criminalize social protest, pointing out, first, that protest involves the exercise of rights (freedom of expression and of assembly) and, second, that protest is often the only mechanism available to certain social groups for expressing their demands.8

There are, in addition, standards in international human rights law concerning certain specific issues arising from the right of assembly and freedom of expression, such as the relationship between the exercise of public demonstration and the protection of public order and citizen security; the limits on public demonstrations; and the special consideration given to the right of children and adolescents to demonstrate. These are examined below.

2.2. The right to demonstrate vs. the protection of public order

The right to demonstrate in public places, or the "right to social protest," may come into conflict with other rights, especially other people's freedom of movement, or with other constitutionally protected legal interests, such as public order.

In this regard, it should be noted that, in matters of interpretation and application of fundamental rights, the prevailing doctrine holds that, where a protected value or legal interest collides with other values or legal interests, a fair point of equilibrium must be sought between them, not necessarily sacrificing one in favor of the other.

No provision of the Constitution authorizes denying a right, or distorting it, in order to favor another. Each and every right belongs to a system and enjoys equal value in material and axiological terms. Through the technique of balancing, the value or legal interest at stake must be optimized and given the greatest possible effectiveness in view of the circumstances of the case.9

The right to demonstrate may indeed involve some impairment of public order, especially considering that social protest is frequently exercised by groups and collectives that are marginalized from public debate.

6 Cf. Inter-American Commission on Human Rights, Annual Report 2005, Volume II, Report of the Office of the Special Rapporteur for Freedom of Expression, OEA/Ser.L/V/II.124 Doc. 7, February 27, 2006, p. 149, para. 91.
7 Cf. Inter-American Commission on Human Rights, Annual Report 2005, Volume II, Report of the Office of the Special Rapporteur for Freedom of Expression, OEA/Ser.L/V/II.124 Doc. 7, February 27, 2006, p. 129, para. 1.
8 Cf. Inter-American Commission on Human Rights, Annual Report 2005, Volume II, Report of the Office of the Special Rapporteur for Freedom of Expression, OEA/Ser.L/V/II.124 Doc. 7, February 27, 2006, p. 131, para. 1.

In this regard, the IACHR has stated that governments may not simply invoke one of the legitimate restrictions on freedom of expression, such as the maintenance of "public order," as a means to suppress a "right guaranteed by the Convention or to impair or deprive it of its true content." If this occurs, "the restriction as applied is not legitimate."10

As the Constitutional Court of Colombia has held, the right of assembly and demonstration cannot be treated as synonymous with public disorder in order to restrict it per se.11

The Inter-American Commission has also pointed out the close relationship between the right of assembly and freedom of expression, stating that "(...) when weighing, for example, the right to freedom of movement against the right of assembly, it should be borne in mind that the right to freedom of expression is not just one more right but, in any case, one of the first and most important foundations of the entire democratic structure: the undermining of freedom of expression directly affects the central nerve of the democratic system."12

Along the same lines, the European Court of Human Rights has stated in a recent judgment that "in the absence of acts of violence by demonstrators, at least before the police resort to force, it is important that the public authorities show a degree of tolerance towards peaceful gatherings, so as not to deprive the freedom of assembly guaranteed by Article 11 of the European Convention on Human Rights of all substance."13 Accordingly, restrictions adopted in the name of public order must not become an obstacle to the right to protest peacefully.

9 On balancing in cases of conflict between fundamental rights, or between them and other constitutionally protected interests, see BERNAL PULIDO, C., El principio de proporcionalidad y los derechos fundamentales. Madrid: Centro de Estudios Constitucionales, 2007.
10 IACHR, Chapter V, Annual Report 1994, "Report on the Compatibility of 'Desacato' Laws with the American Convention on Human Rights," OEA/Ser.L/V/II.88, Doc. 9 rev.
11 Constitutional Court of Colombia, judgment T-456-92 of July 14, 1992, available at http://ramajudicial.gov.co.
12 Inter-American Commission on Human Rights, Annual Report 2005, Volume II, Report of the Office of the Special Rapporteur for Freedom of Expression, OEA/Ser.L/V/II.124 Doc. 7, February 27, 2006, p. 93, para. 150.
13 European Court of Human Rights, Gülizar Tuncer v. Turkey, February 8, 2011, para. 30. It is worth recalling that in Spain, faced with the so-called "15-M indignados" movement that occupied squares in various parts of the country in the days before municipal and regional elections, the government decided not to clear the squares precisely on the basis of the cited case law of the European Court of Human Rights. See [http://www.lavanguardia.com/politica/elecciones2011/20110520/54158192830/rubalcaba-no-aclara-si-ordenaradesalojar-las-acampadas-de-los-indignados.html], [http://www.abc.es/20110520/espana/abci-policia-201105201955.html], [http://www.elmundo.es/elmundo/2011/05/20/madrid/1305917126.html].

It is also important to note that international instruments and treaty bodies have laid down guidelines on how police forces must act when they have to use force. This must be especially borne in mind in cases of intervention to restore public order.

According to the case law of the Inter-American Court of Human Rights (I/A Court H.R.), the power of State agents to use force is not unlimited and is subject to strict standards of proportionality, above all because the rights most commonly affected are the right to life and to physical integrity.

In this sense, the Inter-American Court has held that "[i]t is beyond dispute that the State has the right and duty to guarantee its own security. Nor can it be disputed that every society suffers from breaches of its legal order. But however grave certain actions may be, and however culpable those convicted of particular crimes, it cannot be admitted that power may be exercised without any limit, or that the State may resort to any procedure whatsoever to achieve its objectives, without subjection to law or morality. No activity of the State may be founded on contempt for human dignity."14

The proportionality test applied to the use of public force considers the specific facts of each case, in which the dangerousness of the persons affected by a State action and the conduct they display are relevant elements in determining the lawfulness of the interference with the rights to life and physical integrity. The needs of the situation and the objective pursued are likewise relevant in determining the legality and proportionality of the measure.

However, it is important to bear in mind that facing adverse conduct or action by "supposedly dangerous" individuals does not entitle the State to use force beyond what is strictly necessary.15

On the contrary, international law contains various instruments establishing parameters to which State action must conform. Indeed, article 3 of the Code of Conduct for Law Enforcement Officials16 provides that "[l]aw enforcement officials may use force only when strictly necessary and to the extent required for the performance of their duty."

2.3. The right to demonstrate and public policies on citizen security

Public demonstrations are frequently addressed by the administrative authorities of each country from a citizen-security perspective, especially because of the possibility that, in the context of demonstrations, criminal acts may be committed that affect people's physical integrity as well as public and private property. Where rights collide, the reasons in favor of each value or legal interest must be weighed and a fair point of equilibrium sought, not necessarily sacrificing one in favor of the other.

14 I/A Court H.R., Velásquez Rodríguez case, judgment of July 29, 1988, Series C No. 4, para. 154; Godínez Cruz case, judgment of January 20, 1989, Series C No. 5, para. 162; Neira Alegría et al. case, judgment of July 19, 1995, Series C No. 20, para. 75.
15 Thus, in Neira Alegría et al., the Inter-American Court held that "the high degree of dangerousness of the detainees in the Blue Pavilion of the San Juan Bautista prison, and the fact that they were armed, do not, in the opinion of this Court, constitute sufficient grounds to justify the amount of force used."
16 Adopted by the General Assembly in its resolution 34/169 of December 17, 1979.

According to the Inter-American Court, in the relationship between the protection of the right of assembly and the need to reconcile its exercise with the prevention of situations of violence, the per se criminalization of demonstrations in public thoroughfares is in principle inadmissible when they take place within the framework of the right to freedom of expression and the right of assembly.

In other words, it must be examined whether the use of criminal sanctions is justified under the Inter-American Court's standard, which requires proof that such a limitation (criminalization) satisfies a pressing public interest necessary for the functioning of a democratic society.17

Likewise, a 2009 report of the Inter-American Commission on Human Rights on citizen security and human rights affirmed that State authorities have the obligation to prevent and, where appropriate, control any form of violent conduct that violates the rights of any person under their jurisdiction.

Nevertheless, in adopting measures to comply with that obligation it must be considered, according to the Commission, that on some occasions the exercise of the right of assembly may disrupt the routine of daily life, especially in large urban concentrations, and may even generate nuisances or affect the exercise of other rights that deserve State protection and guarantee, such as the right to free movement.

However, the Commission notes, such disruptions are part of the mechanics of a plural society, in which diverse and often contradictory interests coexist and must find the spaces and channels through which to express themselves.18

2.4. Requirements for restrictions on the right to demonstrate

The Human Rights Committee has held that restrictions may be imposed on public demonstrations provided they are aimed at protecting one of the interests listed in articles 19 (freedom of expression) and 21 (freedom of assembly) of the International Covenant on Civil and Political Rights.

Regarding the possibility of restrictions, the Committee has held that the right to freedom of expression is of paramount importance in a democratic society and that any restriction imposed on the exercise of that right must meet a strict test of justification.19

In a similar sense, within the inter-American human rights system, the Office of the Special Rapporteur for Freedom of Expression has noted that, despite the importance attached to both freedom of expression and freedom of peaceful assembly for the functioning of a democratic society, this does not make them absolute rights. Indeed, the human rights instruments establish limitations on both rights.

17 Cf. I/A Court H.R., Ricardo Canese v. Paraguay, judgment of August 31, 2004, Series C No. 111, paras. 96-98.
18 Cf. IACHR, "Report on Citizen Security and Human Rights," OEA/Ser.L/V/II., Doc. 57, December 31, 2009, p. 91, para. 198.
19 Cf. Human Rights Committee, Tae-Hoon Park v. Republic of Korea, decision of November 3, 1998, Communication No. 628/1995, CCPR/C/64/D/628/1995, para. 10.3, available at http://www.unhchr.ch/tbs/doc.nsf/.

2.4.1. Restrictions on the right of assembly must be established by law

The requirements for restrictions on the right of assembly are set out in article 15 of the American Convention on Human Rights, which demands that they be established by law and be necessary to ensure respect for the rights of others or the protection of national security, public order, or public health or morals.20

It is worth mentioning that, as to the meaning to be given to the term "law" in connection with the restriction of fundamental rights, the Inter-American Court stated in an advisory opinion that "in this perspective it is not possible to interpret the expression laws, used in Article 30, as a synonym for any legal norm, since that would be tantamount to admitting that fundamental rights may be restricted at the sole discretion of the public authorities, with no other formal limitation than that such restrictions be set out in provisions of a general nature. Such an interpretation would lead to disregarding limits that constitutional law (...)."21

The Court also stated that "the word laws, on the other hand, acquires its full logical and historical meaning if it is regarded as a requirement of the necessary limitation of the interference of public power in the sphere of the rights and freedoms of the human person. The Court concludes that the expression laws, as used in Article 30, can have no other meaning than that of formal law, that is, a legal norm passed by the legislature and promulgated by the Executive Branch, in accordance with the procedure required by the domestic law of each State."22

The regulation of the right of assembly in Chile does not satisfy the requirement of the American Convention on Human Rights that any restrictions be established by law. In fact, the rules governing the exercise of the right of assembly in places of public use are contained in Decree 1086 of September 16, 1983, whose purpose is to implement the constitutional mandate of article 19 No. 13, second paragraph, of the Constitution, which provides that "[m]eetings in squares, streets and other places of public use shall be governed by the general police regulations."

Moreover, this regulation of the right of assembly is also at odds with the principle of statutory reservation in articles 19 No. 26 and 63 No. 20 of the Political Constitution, under which the regulation and limitation of fundamental rights must be established by law.23

20 Cf. Inter-American Commission on Human Rights, Annual Report 2005, Volume II, Report of the Office of the Special Rapporteur for Freedom of Expression, OEA/Ser.L/V/II.124 Doc. 7, February 27, 2006, p. 130, para. 2.
21 I/A Court H.R., The Word "Laws" in Article 30 of the American Convention on Human Rights, Advisory Opinion OC-6/86 of May 9, 1986, para. 26.
22 I/A Court H.R., The Word "Laws" in Article 30 of the American Convention on Human Rights, Advisory Opinion OC-6/86 of May 9, 1986, para. 27.
23 It should be noted, however, that both the Constitutional Court and the Office of the Comptroller General of the Republic have held that the right of assembly constitutes an exception to the principle of statutory reservation as established in the Constitution. Neither pronouncement addresses the provisions of international human rights treaties. Cf. Constitutional Court, judgment Rol 239 of July 16, 1996; Office of the Comptroller General of the Republic, Dictamen 78.143 of December 14, 2011.

2.4.2. Notification or prior notice of public demonstrations

Another aspect linked to restrictions on the right to demonstrate, addressed by the treaty bodies, is the need for prior notification or authorization. On this point, the UN Human Rights Committee has held that a requirement to notify the police before holding a demonstration is not incompatible with article 21 of the ICCPR (right of assembly).24

However, the requirement of prior notification must not be transformed into a requirement of prior permission granted by an official with unlimited discretionary powers.

In the same vein, the Inter-American Commission on Human Rights has stated that "the purpose of regulating the right of assembly cannot be to create a basis for prohibiting the meeting or demonstration. On the contrary, regulation establishing, for example, prior notice or notification is intended to inform the authorities so that they can take measures to facilitate the exercise of the right without significantly disturbing the normal activities of the rest of the community."25

An official therefore may not deny a permit merely because he or she considers it likely that the demonstration will endanger peace, safety or public order, without taking into account whether the danger to peace or the risk of disorder could be averted by modifying the original conditions of the demonstration (time, place, etc.). As the OAS Special Rapporteur for Freedom of Expression has also noted, limitations on public demonstrations may only be aimed at averting serious and imminent threats; a merely potential danger is not sufficient.26

In Chile, article 2 of the aforementioned Decree 1086 regulates notice of, or permission for, meetings in places of public use in the following manner:
"The following provisions shall govern meetings in squares, streets and other places of public use:
a) The organizers of any public meeting or demonstration must give notice at least two working days in advance to the respective Intendant or Governor. The Forces of Order and Public Security may prevent or dissolve any demonstration that has not been notified within the prescribed period and with the requirements of letter b).
b) The notice must be in writing and signed by the organizers of the meeting, indicating their address, profession and identity card number. It must state who is organizing the meeting, its purpose, where it will begin, what its route will be, where speeches will be made, which speakers will take part, and where the demonstration will disperse.
c) The Intendant or Governor, as the case may be, may decline to authorize meetings or marches in streets of heavy traffic and in streets where they would disturb public circulation;
d) They shall have the same power with respect to meetings held in squares and promenades at times when these are habitually used for the recreation or rest of the population, and to those held in parks, squares, gardens and avenues with planted areas;
e) If a meeting is held in breach of the foregoing provisions, it may be dissolved by the Forces of Order and Public Security;
f) Meetings shall be deemed to be held with arms when those attending carry sticks, canes, iron bars, tools, metal rods, chains and, in general, any element of a similar nature. In such cases the Forces of Order and Public Security shall order the bearers to surrender those implements, and if they refuse, or if incidents occur, the meeting shall be dissolved."

24 Human Rights Committee, Kivenmaa v. Finland, decision of June 10, 1994, Communication No. 412/1990, CCPR/C/50/D/412/1990, para. 9.2, available at http://www.unhchr.ch/tbs/doc.nsf/.
25 IACHR, Report on the Situation of Human Rights Defenders in the Americas, 2006, para. 57.
26 IACHR, Chapter IV, Annual Report 2002, Vol. III, "Report of the Office of the Special Rapporteur for Freedom of Expression," OEA/Ser.L/V/II.117, Doc. 5 rev. 1, para. 34.

However, the requirements imposed by the provision transcribed above turn the submission to the authority practically into a request for authorization, with the Intendancy or Governor's Office even being empowered to "not authorize" meetings in places of public use in certain cases.

2.5. Special reference to the right of children and adolescents to demonstrate

Considering that the bill "strengthening the protection of public order" arises in the context of, and as a response to, several months of mobilizations by secondary and university students, it is also pertinent to set out some considerations on the right of children and adolescents to demonstrate.

International human rights law has understood that childhood requires special regulation and protection. In particular, the principle of the best interests of the child requires that, in deciding any measure of whatever kind that will affect a child, the best interests of the child be a primary consideration (art. 3, Convention on the Rights of the Child, CRC).

Furthermore, article 12 of the Convention on the Rights of the Child establishes the right of children and adolescents to be heard in proceedings that affect them, since it "is related to the right to express views specifically on matters which affect the child, and the right to be involved in actions and decisions that impact on his or her life."27

The Supreme Court of Justice of Chile, for its part, has directly applied several provisions of the Convention on the Rights of the Child, holding that sanctions imposed on students who take part in the student movement may amount to a violation of children's and adolescents' freedom of expression.

27 Committee on the Rights of the Child, General Comment No. 12, The right of the child to be heard, CRC/C/GC/12, July 20, 2009, para. 81.

Por su relevancia, transcribimos parte de su argumentación:
“Que, aunque es evidente que el estudiante postulaba acción política entre sus compañeros y criticaba fuertemente el régimen legal de enseñanza y a su colegio (fs. 21), el comportamiento de la recurrida contraría la libertad de expresión asegurada a todos en el numeral 12 del artículo 19 de la Constitución Política de la República, porque sanciona la legítima comunicación de ideas. Pero, además, transgrede el ordenamiento jurídico
internacional de carácter particular de los niños vigente en el país de conformidad con lo previsto en el artículo 5º del mismo texto en cuanto reconoce la existencia de los derechos humanos de los niños, y entre éstos, los derechos de carácter político. La Convención de las Naciones Unidas sobre Derechos del Niño en sus artículos 12, 13, 14, 15 y 17 previene las libertades de opinión, expresión, asociación, conciencia y de religión; y es incuestionable que se trató de impedir que el estudiante manifestara sus convicciones.

Finalmente a este respecto es necesario señalar que en la especie no se advierte ningún motivo que justifique el proceder del establecimiento educacional, puesto que es evidente que no se tuvo en cuenta razones relacionadas con el interés superior del niño, en la especie con la preservación y fortalecimiento de su desarrollo formativo, sino únicamente la negativa valoración de sus posiciones”28.

Diversos organismos internacionales manifestaron a su vez su preocupación por la excesiva violencia policial utilizada en las manifestaciones de los estudiantes, afirmando que la fuerza pública no debe utilizarse de manera desproporcionada ni de un modo que afecte el derecho de reunión de los jóvenes.

Así, UNICEF29 y la Comisión Interamericana de Derechos Humanos30 comunicaron al Gobierno de Chile su preocupación por la acción gubernamental en contra de la movilización estudiantil, pues el “uso excesivo de la fuerza” ha implicado un atentado en contra de los derechos de los jóvenes, niños y niñas que se manifiestan para rechazar el actual modelo educativo, afectando, entre otros, su derecho de reunión y de manifestación.

2.6. La protesta social y las “tomas”: la necesidad del elemento “pacífico”, la colisión con derechos de terceras personas y los pronunciamientos de los Tribunales Superiores de Justicia

Uno de los aspectos más debatidos del proyecto de ley desde una perspectiva de estándares de derechos humanos es que penaliza la invasión, ocupación o saqueo de “viviendas, oficinas, establecimientos comerciales, industriales, educacionales, religiosos o cualquier otro, sean privados, fiscales o municipales”, dentro de las que estarían comprendidas las
llamadas “tomas”, cuestión que podría afectar, tanto la libertad de reunión como la libertad de expresión.

28 Corte Suprema de Justicia: sentencia Rol Nº 1.740-2009, de 23 de abril de 2009, Considerando Jurídico 4.
29 Se encuentra la declaración en http://www.unicef.cl/unicef/index.php.
30 El día 6 de agosto la CIDH, y sus relatorías de Derechos de la Niñez y de Libertad de Expresión manifestaron su preocupación por los graves hechos de violencia ocurridos en las manifestaciones estudiantiles llevadas a cabo en Chile, el jueves 4 de agosto, que habrían significado la detención y uso desproporcionado de la fuerza en contra de centenares de manifestantes, entre ellos estudiantes secundarios y universitarios. Entre otras cosas, la Comisión instó al Estado chileno a “adoptar las medidas necesarias para asegurar el pleno respeto por los derechos a la libertad de expresión, a la reunión y a
la manifestación, imponiendo solamente aquellas restricciones que resulten estrictamente necesarias y proporcionales y que toman en cuenta la obligación especial del Estado de garantizar los derechos de los estudiantes secundarios y
universitarios”. El comunicado completo de la Comisión se encuentra en
http://www.cidh.oas.org/Comunicados/Spanish/2011/87-11sp.htm.

A continuación, revisaremos los siguientes aspectos referentes a las tomas u ocupaciones:
a) el carácter de pacífico como requisito;
b) la posible colisión con derechos de terceras personas;
c) la distinción entre la utilización de fuerza y la existencia de violencia en las tomas u ocupaciones;
d) y la jurisprudencia de los Tribunales Superiores de Justicia respecto a las tomas u ocupaciones de establecimientos educacionales.

2.6.1. El carácter de pacífico como requisito de las “tomas” para que constituyan un ejercicio del derecho a la protesta

Como principio general, todo ejercicio de los derechos a la libertad de reunión y libertad de expresión, incluidas las manifestaciones y las tomas u ocupaciones, tiene como requisito, en el ámbito de los derechos humanos, que se realice de manera pacífica. Se trataría en estos casos de acciones colectivas que buscan expresar una posición respecto de un tema sobre el cual se pretende llamar la atención general o de determinadas autoridades, y que, para estar amparadas desde el punto de vista de los derechos humanos, deben cumplir con dicho requisito.

El Relator Especial sobre el derecho a la libertad de reunión y de asociación pacíficas de las Naciones Unidas, Sr. Maina Kiai, se refirió en 2012 a lo que se debería entender como reunión y como reunión pacífica31:
“Se entiende por “reunión” la congregación intencional y temporal de personas en un espacio privado o público con un propósito concreto. Por lo tanto, el concepto abarca manifestaciones, asambleas en el interior de locales, huelgas, procesiones, concentraciones, e incluso sentadas. Las reuniones desempeñan un papel muy dinámico en la movilización de la población y la formulación de sus reclamaciones y aspiraciones, pues facilitan la celebración de eventos y, lo que es más importante, ejercen influencia en la política pública de los Estados”.

“El Relator Especial está de acuerdo en que las normas internacionales de derechos humanos amparan únicamente las reuniones pacíficas, o sea, las de carácter no violento y cuyos participantes tienen intenciones que se presumen pacíficas. De conformidad con el Tribunal Europeo de Derechos Humanos, “una persona que mantenga un comportamiento o intenciones pacíficas no perderá el derecho a la libertad de reunión pacífica como consecuencia de actos esporádicos de violencia u otros actos punibles cometidos por otras
personas durante una manifestación”.

En cuanto a una caracterización, incluso más específica, de lo que se considera pacífico, podemos resaltar lo desarrollado por la Organización para la Seguridad y la Cooperación en Europa (OSCE) en sus “Directrices sobre Libertad de Reunión Pacífica”32.

31 Relator Especial sobre el derecho a la Libertad de Reunión y Asociación Pacífica. Informe anual ante el Consejo de Derechos Humanos, 21 de mayo de 2012. Párr. 24 y 25. Disponible en: http://daccessods.un.org/TMP/3973153.82957459.html

“(…) El término “pacífico” debería ser interpretado para incluir conductas que puedan incomodar u ofender a personas opuestas a las ideas o demandas que la reunión promueve, incluso conductas que deliberadamente impiden u obstruyen las actividades de terceros. Así, por ejemplo, las reuniones que involucran solamente resistencia pacífica, o los bloqueos con sentadas (sit-down blockades) deberían ser caracterizadas como pacíficas (…)”33.

2.6.2. Las tomas y la colisión con los derechos de terceras personas

Aun considerando lo expresado anteriormente, las manifestaciones y/o las tomas u ocupaciones pueden afectar intereses o derechos de terceros: la libertad personal, la propiedad privada, la libertad de enseñanza, el derecho a la salud y al trabajo, entre otros, puesto que alteran y/o rompen con el normal desarrollo de las actividades de los particulares no involucrados en las mismas.

Será necesario que los tribunales de justicia realicen un ejercicio de ponderación de derechos para determinar en qué situaciones esas acciones pueden ser permitidas o, en cambio, pudieran ser objeto de restricciones o, incluso, ser prohibidas (no necesariamente penalizadas). En ese ejercicio de ponderación, sin duda habrá que sopesar el carácter pacífico o no de la ocupación, si se trata de un lugar público o privado, y también quiénes son los actores involucrados en la toma u ocupación, considerando que hay grupos y colectivos que presentan mayores dificultades que otros para ejercer su libertad de expresión dada su condición de mayor marginalidad o vulnerabilidad.

Así lo ha expresado la Relatoría de la CIDH al señalar que en muchos países la protesta es muchas veces el único mecanismo al cual ciertos grupos sociales pueden recurrir para expresar sus demandas 34.

La ponderación entre derechos debe considerar además el contenido mismo de las demandas y su grado de legitimidad. Desde el punto de vista de las terceras personas que puedan verse afectadas por las acciones de protesta, también hay algunas consideraciones relevantes que realizar, como, por ejemplo, si se trata de un lugar de libre acceso público, de un lugar de propiedad pública o de un lugar privado. En este último caso la protección de la propiedad cobra mayor énfasis que en los dos primeros, porque evidentemente los privados soportarían una carga excesivamente gravosa en comparación a los demás. Sin embargo, cuando el peso de la restricción lo soporta el Estado, deben hacerse otro tipo de consideraciones a fin de determinar la proporcionalidad de la restricción de los derechos. Así, en primer lugar, debe analizarse quiénes son las principales personas afectadas por acciones como las tomas.

32 OSCE-ODIHR. Guidelines on Freedom of Peaceful Assembly. 2007. Disponible en: http://www.osce.org/odihr/24523
33 “(…) The term “peaceful” should be interpreted to include conduct that may annoy or give offence to persons opposed to the ideas or claims that an assembly is promoting, and even conduct that deliberately impedes or obstructs the activities of third parties. Thus, by way of example, assemblies involving purely passive resistance, or sit-down blockades, should be characterized as peaceful. (…) ” OSCE-ODIHR. Guidelines on Freedom of Peaceful Assembly. 2007. Disponible en: http://www.osce.org/odihr/24523. Párr. 22.
34 Cfr. Comisión Interamericana de Derechos Humanos: Informe Anual de la Comisión Interamericana de Derechos Humanos 2005, Volumen II, Informe de la Relatoría para la Libertad de Expresión, OEA/Ser.L/V/II.124 Doc. 7, 27 de febrero de 2006, página 131, párr. 1.

En todo caso, los conflictos con terceros que se produzcan por el ejercicio práctico de la libertad de expresión y el derecho de reunión deben resolverse de manera casuística. La determinación a priori sobre qué derecho prevalece en caso de colisión puede conducir a situaciones injustas o violatorias de derechos humanos35, por lo que será de competencia de los tribunales de justicia hacer dicho ejercicio.

2.6.3. La distinción entre la utilización de fuerza y la existencia de violencia en las tomas u ocupaciones36

Si llevamos el concepto de fuerza a las manifestaciones pacíficas en el marco de la definición sobre reunión pacífica desarrollada por el Relator Especial para la Libertad de Reunión y Asociación de la ONU, es posible considerar que se ejerza fuerza en las manifestaciones (ya que deliberadamente obstaculizan y entorpecen actividades de terceros contra su voluntad, o incluso, ejercen poder físico sobre algo), y que aun así sean protegidas por el derecho de los derechos humanos.

En estas situaciones la fuerza no es el ingrediente central de la manifestación u ocupación y se la considera aceptable en pos del ejercicio del derecho a la reunión y libertad de expresión. Además, habrá que aplicar caso a caso el test
de ponderación en relación a los derechos de terceros que se pudieran ver afectados.

Lo que no resulta protegido por el derecho de los derechos humanos es el ejercicio de la libertad de reunión con o a través de acciones de violencia. La violencia ha sido definida por la Organización Mundial de la Salud37 como “El uso intencional de la fuerza o el poder físico, de hecho o como amenaza, contra uno mismo, otra persona o un grupo o comunidad, que cause o tenga muchas probabilidades de causar lesiones, muerte, daños psicológicos, trastornos del desarrollo o privaciones”.

Esta definición contempla la fuerza como una de las formas de ejercer violencia; sin embargo, para que esa fuerza sea “violencia” también debe concurrir una intencionalidad y, sobre todo, afectar la integridad física y psíquica de las personas. La fuerza es uno de los medios por los cuales la violencia puede ejercerse, pero no toda fuerza es necesariamente violencia.

35 Como lo señala Villaverde en relación al clásico conflicto entre libertad de expresión y derecho al honor “Cuando se pondera para resolver el conflicto, no se parte de la existencia de límites a uno y otro derecho fundamental. No se parte, por ejemplo, de que el insulto no puede ser el objeto de la libertad de expresión porque de serlo privaría al insultado de su derecho al honor y no lo sometería tan sólo a un límite –límite que bien podría derivar del interés general que cabe predicar de la opinión sujeta a examen-. Cuando se pondera en casos como éste, lo que se indaga es cuál de los dos derechos merece en el caso concreto una protección preferente, de manera que, una vez optado por uno de ellos –por ejemplo, la capital importancia que para el sistema democrático tiene un debate libre y robusto de ideas- justificaría la
prevalencia del insulto proferido por un político en campaña electoral dirigido contra su adversario, o la de un espectador presente en un mitin político que se alza para insultar a uno de los oradores, sobre el derecho al honor del injuriado. El límite ya no deriva de una norma constitucional –la que dice que el insulto no es objeto de la libertad de expresión–, sino de la resolución ponderada del caso concreto”. Villaverde, “La resolución de conflictos entre derechos fundamentales. El principio de proporcionalidad”, Carbonel (ed), El principio de proporcionalidad y la interpretación constitucional, Ministerio de Justicia y Derechos Humanos de Ecuador, Quito, Ecuador, 2008, página 177.
36 La distinción entre “fuerza” y “violencia” sirve para los efectos del análisis de estándares de derechos humanos y no responde necesariamente a las distinciones que se realizan en el ámbito penal.
37 WHO Global Consultation on Violence and Health. Violence: a public health priority. Ginebra, Organización Mundial de la Salud, 1996 (documento WHO/EHA/SPI.POA.2). Citado y explicado además en OPS. Informe mundial sobre la violencia y la salud. 2003, p. 5. Disponible en: http://www.paho.org/Spanish/AM/PUB/capitulo_1.pdf

Llevado esto al ámbito de las manifestaciones, una reunión o acción colectiva en la que se ejerza violencia sobre las personas, o se pretenda hacerlo, es ilegítima y carece de la protección de que gozan las manifestaciones pacíficas.

En el caso concreto de las ocupaciones de establecimientos, en estas manifestaciones evidentemente se ejerce cierto grado de fuerza (sobre personas y cosas), que no implica necesariamente la pérdida de la condición de pacífica, siempre y cuando la fuerza se ejerza dentro de marcos razonables que no alteren sustancialmente el objetivo pacífico que estas buscan. Muy distinto es el uso de violencia en una toma, que implica intenciones de daño hacia personas, pudiendo calificarse la manifestación como no pacífica.

Incluso, se podría entender la existencia de violencia contra cosas, cuando justamente se pretende dañarlas, sin ninguna finalidad legítima o amparada por los estándares internacionales.

El factor tiempo y los motivos que justifican la “toma” deben ser considerados a la hora de evaluar su legitimidad. Es diferente la situación cuando la acción de fuerza es realizada por un período de tiempo suficiente para llamar la atención a las autoridades respecto de situaciones de interés público que no han encontrado satisfacción dentro de los canales institucionales, que cuando la acción de fuerza se extiende en el tiempo provocando graves trastornos que pueden afectar o interferir en el legítimo ejercicio de otros derechos.

2.6.4. La jurisprudencia de los Tribunales Superiores de Justicia respecto a las tomas u ocupaciones de establecimientos educacionales

La jurisprudencia de los Tribunales Superiores de Justicia chilenos, en esta materia, ha sido variada. Frente a solicitudes de desalojo de establecimientos educacionales, hay fallos que consideran que las “tomas” son “acciones ilícitas” y sus dirigentes “instigadores”38, y fallos que plantean que se trata de manifestaciones de un conflicto político.

De acuerdo con esta última línea, y en particular en los últimos meses de 2011, varias cortes de apelaciones del país plantearon que la solución no debía ser judicial y que es la autoridad política quien debiera generar los canales y mecanismos de solución.

Así por ejemplo, en una de dichas sentencias, la Corte de Apelaciones de Antofagasta, al referirse al conflicto educacional del año 2011, afirmó que “dada la naturaleza política del conflicto del que son
parte estos estudiantes, del que estas tomas son una manifestación, conflicto político de carácter nacional, y dado que éste tiene como actores principales a los estudiantes y al Gobierno, representado por el Ministro de Educación, su solución escapa del ámbito, entre otros, de sus sostenedores, toda vez que ha sido la acción de los estudiantes de sus propios establecimientos educacionales apoyados, incluso, por sus profesores, padres y apoderados, quienes han interrumpido la continuidad del proceso de enseñanza aprendizaje, interrupción que logran con estas tomas y retomas”39.

38 Cfr. Corte de Apelaciones de Santiago: sentencia Rol Nº 2955-2007, de 22 de agosto de 2007, Considerando Jurídico 7.
39 Corte de Apelaciones de Antofagasta: sentencia Rol N° 578-2011, de 29 de septiembre de 2011.

En un sentido similar se pronunció también la Corte de Apelaciones de Arica que rechazó un recurso judicial que solicitaba el desalojo de un establecimiento educacional. La Corte consideró que “las conductas aludidas son el reflejo, a nivel de un ámbito estudiantil determinado, cual es el Colegio Cardenal Antonio Samoré, de lo que ha sucedido con el movimiento estudiantil nacional secundario y la programación de los llamados “Paros” o “Protesta”; en efecto es un hecho público y notorio que las tomas de los establecimientos educacionales se han mantenido en el tiempo, como una forma de presionar al gobierno en el marco de una movilización nacional”40.

La Corte de Apelaciones de Valdivia rechazó la solicitud de desalojo de la Municipalidad de La Unión, señalando que “es necesario precisar que la toma por los estudiantes de los liceos municipales, a que se refiere este recurso, y muchos otros interpuestos a lo largo del país, es una acción de los propios estudiantes quienes, organizados a nivel nacional, son actores principales en un conflicto que afecta a la sociedad en su conjunto y, en especial, a las instituciones políticas y que se ha expresado de variadas formas, como tomas y marchas, manifestadas tanto a nivel local y regional como nacional, lo que constituyen hechos evidentes o de público conocimiento. Así, se observa que estamos en presencia de un conflicto de naturaleza política, siendo ese el ámbito en que procede darle la debida solución y no en sede jurisdiccional como se pretende en este caso en que están involucrados estudiantes, padres, apoderados y autoridades municipales, de gobierno, tanto provincial, regional y nacional” 41.

Señala también la Corte que “cabe reflexionar, sobre el actuar de los estudiantes, movilizados a nivel nacional, ya por largos meses y la
falta de respuesta al conflicto, por quien es el llamado a responder. Claro está, pues dada la naturaleza política del conflicto del que son parte estos estudiantes, del que estas tomas son una manifestación, conflicto político de carácter nacional, y observando que éste tiene como actores principales a los estudiantes y al Gobierno, representado por el Ministro de Educación, su solución debe ser acordada por quienes intervienen en el conflicto”42.

En estos casos, la falta de un ejercicio de ponderación entre derechos por parte de los jueces sólo resulta comprensible por la magnitud del problema y por la necesidad de que su solución sea de carácter integral y general. No obstante, se encuentra pendiente un mayor desarrollo por parte de los tribunales de justicia en estas materias a futuro.

Por otra parte, recientemente ha habido varios fallos en un sentido contrario, es decir, que acogen recursos de protección presentados por las municipalidades, fundamentándose en la existencia de una violación al derecho de propiedad de la Municipalidad respecto de los bienes inmuebles municipales43.

40 Corte de Apelaciones de Arica: sentencia Rol Nº 287-2011, de 29 de septiembre de 2011, Considerando Jurídico 7.
41 Corte de Apelaciones de Valdivia: sentencia Rol Nº 421-2011, de 18 de octubre de 2011, Considerando Jurídico Séptimo.
42 Corte de Apelaciones de Valdivia: sentencia Rol Nº 421-2011, de 18 de octubre de 2011, Considerando Jurídico Octavo.
43 Cfr. Corte de Apelaciones de Valdivia: sentencia Rol Nº 583-2011, de 10 de enero de 2011; Corte de Apelaciones de Arica: Sentencia Rol N° 341-2011, de 4 de noviembre de 2011; Corte de Apelaciones de Valparaíso: Sentencia Rol N°460-2011 de 14 de septiembre de 2011 de 20 de marzo de 2012. En algunos de estos casos los fallos fueron revocados por la Corte Suprema pero ello fue porque al momento de conocerse la apelación ya las tomas habían cesado.

Otro fallo contrario a la toma, que muestra un razonamiento diferente del descrito en los párrafos anteriores, se encuentra en la sentencia Rol Nº 2955-2007 de la Corte de Apelaciones de Santiago. En él se sostiene frente a una toma que: “Que sin perjuicio de que la acción adoptada por los afectados fue ilegal en sí misma (la toma), pues no existe texto legal ni doctrina alguna que legitime una toma, resulta del texto del libelo que además tal acción fue indudablemente arbitraria, es decir caprichosa y ajena a toda razón, pues de su lectura se desprende nítidamente que la toma que originó estos autos, no fue
provocada por cargos o quejas que los afectados o el resto de los alumnos tuvieren contra los recurridos sino que se fundamentó en elementos por completo ajenos a la voluntad de éstos como son el hecho de existir algunos trabajos pendientes de la construcción del edificio y el hecho que las autoridades nacionales no hayan dado completa satisfacción al
Acuerdo Nacional, hechos que escapan por completo a la responsabilidad del Director del Establecimiento y de la Corporación Municipal de La Florida, por lo que no cabe sino concluir que la actitud de los propios afectados constituyó no sólo un acto ilegal, como ya se ha dicho, sino también arbitrario, que constituyó la causa inmediata de la sanción administrativa que, vía Reglamento, se aplicó a los líderes de tal ilícito”44.

Aunque no directamente relacionado, es importante señalar que entre julio del año 2011 y julio del año 2012 la jurisprudencia ha acogido mayoritariamente los recursos de protección interpuestos por los apoderados(as) de estudiantes expulsados(as) por participar de tomas, sobre la base de vulneraciones de diversos derechos constitucionales, especialmente el principio de igualdad y el derecho a un debido proceso, además de hacer referencia en varios fallos al Interés Superior del Niño consagrado en la Convención de Derechos del Niño45.

En estos casos, si bien no existe un análisis explícito de la legitimidad o no de las ocupaciones de los recintos educacionales, la argumentación desarrollada por los Tribunales Superiores de Justicia deja de manifiesto que la toma, por sí misma, no es causal de expulsión inmediata, sino que debe desarrollarse un proceso con todas las garantías para evaluar si el hecho de la ocupación puede o no incumplir con las normas de convivencia del recinto.

44 Corte de Apelaciones de Santiago: sentencia Rol Nº 2955-2007, Considerando Jurídico 3º.
45 Vid. al respecto, a modo ejemplar:
Corte de Apelaciones de San Miguel: sentencia Rol N° 187-2011, de 5 de septiembre de 2011, considerando 12 (confirmada por sentencia de la Corte Suprema de Justicia, Rol N° 8880-2011, de 30 de septiembre de 2011); sentencia Rol N° 244-2011, de 14 de octubre de 2011, Sentencia Rol N° 296-2011 de 19 de noviembre de 2011 (Confirmada con dos votos en contra por la Corte Suprema, Rol 11469-2011, 11 de enero de 2012); sentencia Rol Nº 219-2011, de 13 de octubre de 2011, sentencia Rol N°194-2011, de 12 de septiembre de 2011 (confirmada por unanimidad por sentencia de la
Corte Suprema de Justicia, Rol N° 9105-2011 de 19 de octubre de 2011); sentencia Rol Nº 252-2011, de 28 de octubre de 2011.Corte de Apelaciones de Valparaíso: sentencia Rol N° 508-2011, de 18 de octubre de 2011;
Corte de Apelaciones de Puerto Montt: sentencia Rol N° 343-2011, de 31 de enero de 2012 (confirmada por sentencia de la Corte Suprema de Justicia con un voto en contra, Rol N° 1772-2012, de 27 de febrero de 2012).Corte de Apelaciones de Santiago: sentencia Rol Nº 933-2012, de 4 de junio de 2012; sentencia Rol Nº 15876-2011, de 14 de noviembre de 2011, sentencia Rol 808-2012 de 12 de abril de 2012 (confirmada por unanimidad por sentencia de la Corte Suprema de Justicia, Rol N° 3279-2012 de 10 de julio de 2012), Sentencia Rol N° 2266-2012, de 4 de mayo de 2012 (confirmada por unanimidad por sentencia de la Corte Suprema de Justicia, Rol N° 4001-2012 de 8 de junio de
2012), Sentencia Rol 3533-2012 de 16 de mayo de 2012. Corte de Apelaciones de Temuco: sentencia Rol Nº 232-2011, de 3 de noviembre de 2011. Corte de Apelaciones de Iquique: sentencia Rol N° 473-2011, de 12 de diciembre de 2011. Corte de Apelaciones de Concepción: sentencia Rol Nº 1125-2011, de 3 de octubre de 2011.

3. Consideraciones sobre el Proyecto de ley que fortalece el resguardo del orden público (mensaje 196-359)

El INDH es de la idea de que no se requiere modificar los tipos penales que ya existen, puesto que éstos permiten abordar las situaciones de violencia que se producen en materia de orden público. El derecho a manifestarse, así como la realización de tomas y ocupaciones de manera pacífica, se encuentran amparados por el derecho de los derechos humanos y son expresión del ejercicio del derecho a reunión y de la libertad de expresión de acuerdo a los estándares de derechos humanos descritos en la presente minuta.

No obstante, es deseable avanzar en la regulación legal del derecho a reunión y, en ese marco, introducir causales específicas y aceptables que puedan limitar el ejercicio de dicho derecho. En efecto, tal como se ha dicho, el derecho a reunión no es absoluto y puede ser restringido en función de ciertas causales, como el orden público, a través de una norma legal y no de un decreto, como ocurre en la actualidad.

No obstante la opinión anterior, cabe analizar el proyecto de ley que fortalece el resguardo del orden público desde el punto de vista del cumplimiento de los estándares de derechos humanos46.

Este consta de cuatro artículos, cada uno con modificaciones a diferentes cuerpos legales. El artículo 1° introduce modificaciones al Código Penal y es el artículo central del proyecto, por cuanto modifica la figura penal de desórdenes públicos, creando una figura considerablemente más amplia que el tipo penal preexistente.

Se tipifican como delito los desórdenes o cualquier otro acto de fuerza o violencia que importen la realización de “tomas”, paros en servicios públicos, cortes de tránsito, entre otras hipótesis, junto con los saqueos y otras figuras delictivas como el porte de armas, y se eleva la pena para el delito de atentado contra la autoridad, eliminando la multa como posible sanción.
El artículo 2° del proyecto introduce dos modificaciones al Código Procesal Penal: una para dar atribuciones a la policía para reunir pruebas en situaciones como las planteadas por el nuevo delito de desórdenes, y otra que viene a complementar la Ley 20.253, conocida como “Ley Corta contra la Delincuencia”, y que permite la revisión por vía de apelación de ciertas decisiones judiciales en casos de delitos muy graves.

El artículo 3° amplía la facultad del Ministerio del Interior de presentar querellas en una serie de casos, entre los cuales se encuentran los relativos a desórdenes y atentados contra la autoridad, así como los delitos contra policías y gendarmes en funciones.

46 Mensaje N° 196-359, de 27 de septiembre de 2011.

Finalmente, el artículo 4° contiene una modificación a la Ley de Control de Armas, relativa a ampliar los verbos rectores en el delito de porte de armas.

A continuación, nos referiremos a algunos aspectos del proyecto de ley que merecen particular atención desde la perspectiva de los estándares internacionales de derechos humanos. En cada punto, haremos referencia a una comunicación conjunta de diversas relatorías del sistema universal de derechos humanos47 que han expresado su preocupación acerca de algunas cuestiones del proyecto de ley que fortalece el resguardo del orden público, por cuanto pudieran afectar la libertad de expresión y el derecho de reunión.

3.1. Amplitud del tipo penal de desórdenes. Penalización de actos de protesta.

El tipo penal de desórdenes que propone el proyecto, y que sustituiría al Art. 269, asimila los desórdenes a cualquier acto de fuerza o violencia. En este entendido, toda protesta o manifestación, que de suyo lleva aparejado algún grado de desorden, es equivalente a los actos de violencia que se puedan cometer. No hay por tanto una gradación en el tipo penal que permita hacer las distinciones mínimas que salvaguarden el ejercicio del derecho a la libertad de expresión y del derecho a reunión expresados en la protesta social.

Por tanto, los desórdenes “o cualquier otro acto de fuerza o violencia” serán sancionados en el caso de que “importen” una de las seis hipótesis planteadas.

Algunas de estas hipótesis también resultan muy amplias:
• El N° 1 incluye el paralizar o interrumpir algún servicio público, tales como los hospitalarios, los de emergencia y los de electricidad, combustible, agua potable, comunicaciones o transporte. Si no se exige gravedad de los desórdenes o actos de fuerza o violencia, las diferentes paralizaciones o huelgas (legales o no) de funcionarios fiscales, de la salud, de servicios de comunicación, etc., podrían constituir delito.

• El N° 2 incluye el invadir, ocupar o saquear viviendas, establecimientos comerciales, industriales, educacionales, religiosos o cualquiera otro, sean privados, fiscales o municipales. De acuerdo al proyecto, las diversas formas de expresión de la protesta social resultarían penalizadas y se mezclarían situaciones que hasta ahora no son delictivas (ni aparece justificado que lo sean, en tanto limitan peligrosamente con el ejercicio de derechos fundamentales), como la toma de un liceo o universidad, con otras situaciones que claramente constituyen delito, como los saqueos, que hasta ahora están regulados a través del tipo penal de robo con fuerza en las cosas (artículos 440 a 445 del Código Penal).

En este sentido, cabe señalar que la Comisión Interamericana se ha referido expresamente a lo problemático que puede resultar la penalización de actos realizados en un contexto de protesta social, y ha sostenido que “es necesario valorar si la imposición de sanciones penales se constituye como el medio menos lesivo para restringir la libertad de expresión practicada a través del derecho de reunión manifestado en una demostración en la vía pública o en espacios públicos.

47 Se trata del Relator Especial sobre la promoción y la protección del derecho a la libertad de opinión y de expresión, del Relator Especial sobre el derecho a la libertad de reunión y asociación pacíficas y de la Relatora Especial sobre la situación de los defensores de derechos humanos.

Es importante recordar que la criminalización podría generar en estos casos un efecto amedrentador”48.

En efecto, la penalización de la protesta social en los términos que se establece podría constituirse en un factor de inhibición, por temor, del ejercicio del derecho de reunión y de la libertad de expresión.

A lo anterior es oportuno agregar que la comunicación realizada al Estado de Chile por parte de las Relatorías Especiales sobre Libertad de Expresión, Reunión y Defensores de Derechos Humanos de Naciones Unidas ha señalado su preocupación precisamente por la excesiva amplitud que se le daría a los desórdenes públicos del artículo 269 del Código Penal, señalando que “esto podría resultar en la restricción de un gran número de protestas públicas que puedan reunir a personas en lugares públicos y, por tanto, puedan ocasionar interrupciones del transporte público y la libre circulación de personas y vehículos”49.

3.2. Penalización de actos de incitación

Por otra parte, el proyecto sanciona a quienes hubieren incitado, promovido o fomentado los desórdenes, contemplando tres verbos rectores para ampliar el sujeto activo del delito, más allá de las normas generales de autoría. Ello abre la opción de sancionar penalmente a quienes convoquen u organicen movilizaciones masivas, siempre que “la ocurrencia de los mismos (desórdenes) haya sido prevista por aquéllos”.

Lo anterior implicaría trasladar a quienes convocan a manifestaciones, o compartir con ellos, la responsabilidad respecto de la mantención del orden público, responsabilidad que corresponde a las fuerzas policiales en tanto garantes del orden público, incluidos por cierto el respeto y la garantía de los derechos de todas las personas.

Es más, se atribuye a los convocantes la responsabilidad respecto de eventuales actos de violencia en el marco de las manifestaciones, aun cuando no tengan ninguna relación o vínculo con tales actos.

En la misma comunicación antes referida de los Relatores se expresa al respecto que “una persona que convoque una protesta pacífica sin intención de promover ni incitar a actos de fuerza o violencia, podría ser criminalizada en el caso de que la protesta se volviera violenta”50.

48 CIDH, Capítulo IV, Informe Anual 2002, Vol. III “Informe de la Relatoría para la Libertad de Expresión”, OEA/Ser. L/V/II. 117, Doc. 5 rev. 1, párr. 35.
49 CONSEJO DE DERECHOS HUMANOS, Comunicación conjunta de procedimientos especiales del Relator Especial sobre la promoción y la protección del derecho a la libertad de opinión y de expresión, del Relator Especial sobre el derecho a la libertad de reunión y asociación pacíficas y de la Relatora Especial sobre la situación de los defensores de derechos humanos, en relación al proyecto de ley que fortalece el resguardo al orden público (boletín 7975 – 25), 23 de enero de 2012, pág. 2.
50 Ibíd.

3.3. Penalización de las ocupaciones o “tomas” y de los cortes de tránsito

Una de las reformas al Código Penal que se introduce con el proyecto de ley es la penalización de las tomas u ocupaciones de establecimientos públicos51.

De acuerdo al artículo 269 N° 1, 2 y 3 que se propone, cualquier tipo de toma u ocupación de los lugares señalados sería penalmente sancionado, requiriéndose para ello que éstas sean realizadas con fuerza o con violencia.
En el caso de las “tomas” u ocupaciones cabe señalar que, como se indicó en los párrafos anteriores, si bien constituyen en principio una forma de manifestarse, éstas deben desarrollarse de manera pacífica, es decir, dentro de ciertos parámetros de normalidad que habrá que evaluar caso a caso.

El proyecto de ley enuncia estas dos condiciones (violencia y fuerza), bastando solo la concurrencia de una de ellas para que se tipifique el delito. Esto quiere decir que una manifestación que utilice algún grado de fuerza, pero no violencia, puede ser una conducta típica, incluyéndose como delito manifestaciones que podrían estar amparadas por la libertad de reunión y expresión, como lo es una toma pacífica.

Llama, además, especialmente la atención la regulación conjunta de este tipo de acciones con los saqueos, estableciendo la misma pena para ambos delitos.
En cuanto a los cortes de tránsito, la penalización de la conducta sólo exige que se impida o altere “la libre circulación de las personas o vehículos” en diversos lugares públicos. En el proyecto, cualquier alteración del tránsito constituiría un delito aun cuando éste no se desarrolle con violencia por parte de los manifestantes o personas que realicen esta acción, como por ejemplo las sentadas (sittings). Cabe subrayar que, por definición, la libertad de reunión se ejerce en lugares de uso público y sin permiso previo.

Ello significa que, inevitablemente, cuando ella es masiva, irroga naturalmente la alteración o interrupción de la libre circulación de las personas. En estos términos, la construcción del tipo penal propuesto choca frontalmente con la norma del inciso primero del artículo 19 N° 13 de la Constitución.

En el año 2008, la Relatoría Especial para la Libertad de Expresión de la CIDH se pronunció en relación con actos concretos que hacen parte de la protesta y señaló que: “Las huelgas, los cortes de ruta, el copamiento del espacio público e incluso los disturbios que se pueden presentar en las protestas sociales pueden generar molestias o incluso daños que es necesario prevenir y reparar. Sin embargo, los límites desproporcionados de la protesta, en particular cuando se trata de grupos que no tienen otra forma de expresarse
públicamente, comprometen seriamente el derecho a la libertad de expresión”.

Junto con ello, la comunicación conjunta de las relatorías del sistema de la ONU, además de señalar sobre este punto que la nueva tipificación de los desórdenes públicos “podría resultar en la restricción de un gran número de protestas públicas que puedan reunir a personas en lugares públicos”, advierte que “el Proyecto de Ley podría resultar en restricciones excesivas en cuanto a las posibles ubicaciones de las protestas”52.

51 Vid. [http://www.latercera.com/noticia/nacional/2011/10/680-397308-9-hinzpeter-dice-que-ley-antitomas-representa-ala-gran-mayoria-de-chilenos.shtml].

La comunicación de las relatorías es de esta forma coherente con todo lo expresado anteriormente acerca de que las protestas en lugares públicos deben tener una especial protección.