Cazadores blancos, corazones negros

«…nuestra determinación es fortalecer más nuestra flota en el futuro, de manera que aseguraríamos que nadie disputaría con nosotros un lugar privilegiado en el mundo…»
Guillermo II de Alemania, discurso del 18 de junio de 1901 en Hamburgo, citado en McLaren, A. D. , Germanism from Within, Nueva York, E.P. Dutton and Company, 1916, pág. 31

«Para asegurar el asentamiento blanco contra la tribu nativa mala, inepta culturalmente y predadora, es posible que su erradicación pueda ser necesaria en determinadas circunstancias»
Comisionado de Asentamiento Dr. Paul Rohrbach, en Lusane, C. , Hitler’s Black Victims: The Historical Experiences of Afro-Germans, European Blacks and African Americans in the Nazi Era, New York, Routledge, 2003, pág. 43

El colonialismo, como página negra del capitalismo occidental, ha dado pie a una bibliografía abultada y polémica, donde se enfrentan los puntos de vista del viejo marxismo económico con la nueva historia que lo concibe como un método de prestigio político. Ya a inicios del siglo XX, Hobson o Hilferding interpretaron este sistema como una corrupción propia del capitalismo, mientras que Lenin fue más allá con el conocido panfleto «El imperialismo, fase superior del capitalismo», de 1916.

De manera más reciente, en los años 60 y 70 del siglo pasado, Charles-André Julien volvió a defender esta tesis de la explotación en su historia de Argelia, mientras que Brunschwig fue agudo al establecer de manera precisa cómo el sistema colonial no era exclusivamente económico –y éste, como nos recuerda Wallerstein, ya dataría del siglo XV–, sino que respondía a unos valores de prestigio político. En este sentido, Hobsbawm dejó claro cómo este sistema de subyugación económica tenía consecuencias sociales vinculadas a una ideología jingoísta:

“El sentimiento de superioridad que unía a los hombres blancos occidentales, tanto a los ricos como a los de la clase media y a los pobres, no derivaba únicamente del hecho de que todos ellos gozaban de los privilegios del dominador, especialmente cuando se hallaban en las colonias. En Dakar o Mombasa, el empleado más modesto se convertía en señor y era aceptado como un “caballero” por aquellos que no habrían advertido su existencia en París o en Londres; el trabajador blanco daba órdenes a los negros.”

Una de las potencias coloniales menores, apenas conocida para el gran relato histórico o la clásica novela colonial (Kipling, Conrad, los relatos cortos africanos de Maupassant…), es el II Reich alemán, que primero de manera incipiente bajo Bismarck y especialmente bajo Guillermo II dejará su efímera huella colonial en el mundo. Anteriormente hubo colonias alemanas en América, a inicios del siglo XVI, como la intitulada Klein-Venedig (Pequeña Venecia), situada en la actual Venezuela. Ninguna de ellas prosperó, y los alemanes se encontraron a inicios del XIX sin ningún territorio de importancia, mientras que sus rivales, unidos ya políticamente, se repartían de manera progresiva el globo.

Sedán, en 1870, supuso la consagración del poder continental alemán en Europa, y aceleró para la década posterior la creación limitada de un primer Imperio colonial alemán, que contará en África con territorios en las actuales Tanzania, Burundi, Camerún o Namibia, mientras que en Oceanía, a finales del siglo XIX, ejercerá un dominio circunstancial sobre las Islas Salomón, las Marianas y Samoa. Ninguno de los territorios, ni siquiera las ciudades chinas con intereses comerciales, tuvo un gran rendimiento económico, y este Imperio palideció respecto a sus rivales franco-británicos.

Era una consecuencia del sistema de Realpolitik instaurado por Bismarck, que desconfiaba del esfuerzo colonial y que había declarado en el Reichstag en 1884:

“…Me opongo absolutamente a la creación de colonias según un plan que considero
negativo y que consiste en adquirir un territorio y colocar en él una guarnición y funcionarios, y luego invitar a las gentes a que vayan a vivir en él.”

Hubo de ceder, claro está, ante las peticiones de los comerciantes de Hamburgo, e ir fundando pequeñas colonias –factorías, más bien– en Gabón, en Camerún bajo Woermann y en Guinea bajo la Badische Anilin. En julio de 1884 se confirmará el protectorado sobre Camerún y Togo, y en agosto será reconocido el Sudoeste Africano. Los años 90 son la época de la expansión en el Pacífico, del inicio de la Weltpolitik de Guillermo II, y se obtendrán las Marianas tras la debacle española, Samoa y las Islas Salomón.

En términos económicos –aquellos en los que los autores socialistas analizaban el colonialismo–, las colonias fueron un rotundo fracaso, y para la década de los 90 del siglo XIX las exportaciones no llegaban al 0,17% del comercio exterior del Reich. Más aún, bajo Guillermo II se pasará de una explotación comercial, a la holandesa y centrada en pioneros, a una territorial, con el ejército sobre el terreno y la implantación de población exógena alemana.

Será en 1904 cuando los alemanes se enfrenten a una gran rebelión contra su dominio hasta entonces incontestado: la de los Hereros.

Nación árida; habitantes belicosos

El territorio colonizado por Alemania en África no fue de gran riqueza, y destacaba por ello el Sudoeste Africano, explorado por el comerciante Adolf Lüderitz en abril de 1884. A finales de ese año será reconocido legalmente, y la oficialidad colonial del Reich se hará cargo poco a poco del territorio. La tropa de seguridad colonial, la Schutztruppe, llegará en 1888. Con el crecimiento hacia el norte la colonia prosperó, pero siempre bajo inferioridad numérica de los colonos respecto a los autóctonos.

La colonia, de difícil acceso a través de la Bahía de Walvis –posesión británica–, estaba mal comunicada también en el interior, aunque para 1902 se construyó un ferrocarril de Swakopmund a Windhoek, que sería clave en la revuelta de los Hereros por atravesar sus territorios. El territorio era árido, malo para las granjas, y las tribus se encontraban en zonas fronterizas. Una fuente de la época afirmaba sobre el territorio:

“Todo parece muerto, sombrío, desierto (…) no había palmeras, ni madera, ni árboles, ni arbustos: sólo piedras, rocas y arenas…”

Sin ríos y con el interior apenas cartografiado, el dominio era todavía relativo, rodeado además de tribus indígenas en ocasiones belicosas. Los Ovambos, en el norte de la colonia, eran agricultores con animales domésticos que practicaban también la recolección. Tenían fama de belicosos, y los alemanes no intervinieron en su zona de Etosha Pan, aunque mantuvieron un fuerte en Namutoni. Al sur, los Khoikhoi –más conocidos como Hotentotes o Namas–, dirigidos por Hendrik Witbooi, se rebelaron de 1893 a 1895, año en el que se pacificó el territorio con una rendición pactada. Ganaderos y nómadas, solían estar peleados con los Hereros, aunque contribuyeron a su rebelión de 1904.

Estos últimos fueron, sin duda, el verdadero dolor de cabeza de los alemanes en esta colonia. Los Hotentotes los conocían como los Damara; pertenecían al grupo etnolingüístico bantú, y se les acusaba de crueles y mentirosos. Se recoge un testimonio de los propios Herero sobre una razzia contra unos Hotentotes ladrones:

“Volviendo de Hornkranz nos cruzamos con unos pocos Hotentotes, a los cuales, por supuesto, matamos. Yo ayudé personalmente a matar a uno de ellos. Primero le cortamos las orejas, diciéndole: “Nunca volverás a oír mugir el ganado de los Damara”. Luego le cortamos la nariz, diciendo: “Nunca volverás a oler a los bueyes de los Damara”. Después le cortamos los labios, diciendo: “Nunca probarás de nuevo a los bueyes de los Damara”. Y entonces, finalmente, le cortamos la garganta.”

Eran ganaderos, aunque no mataban a los animales; se alimentaban de productos lácteos y de la recolección. Hacia aquel primer encuentro de 1880 estaban virando de una sociedad tribal a una jefatura, aunque el cabecilla todavía se consideraba un igual. Afirma Marvin Harris sobre este sistema:

“Los cabecillas son líderes de aldeas o bandas autónomas. Los jefes son líderes de grupos de bandas y aldeas, aliadas más o menos permanentemente, que se denominan jefaturas. La diferencia principal entre las bandas y las aldeas autónomas, por una parte, y las jefaturas, por otra, es que estas últimas constan de varias comunidades o asentamientos. Los jefes tienen más poder que los cabecillas; sin embargo, los cabecillas que son redistribuidores prestigiosos son difíciles de distinguir de los líderes de las pequeñas jefaturas. Mientras que los jefes heredan su cargo y se mantienen en él aunque durante un tiempo sean incapaces de proporcionar a sus seguidores redistribuciones generosas.”

Todavía no existía propiedad privada, pero sí, para los observadores alemanes de la época, un sistema protofeudal de dependencias. Eran patrilineales, aunque con matrilinealidad en algunos grupos (el oruzo, su particular gens latina…), y tendían a las alianzas entre ellos. Tenían cientos de palabras para el ganado, al que vinculaban a un sistema político-religioso.

La aparición de los alemanes quebrará este sistema económico, y pronto los Herero adquirirán una estructura de clases, con la conversión a ranchos y la aparición de grandes propietarios y asalariados. Theodor Leutwein impulsó una administración pactista que condujo a una rápida expansión colonial. Estableció tres centros regionales –Windhoek, Otjimbingwe y Keetmanshoop– y mejoró la comunicación con la costa. Mantuvo la tensión entre la violencia y el pacto, pero fue sorprendido por la rebelión de los Hereros y acabó relevado.

Con la expansión colonial, los alemanes extinguieron los fuegos sagrados e integraron progresivamente a los Herero como trabajadores en las haciendas blancas. A ello se unía el problema de la deuda: antes de 1903 los Hereros habían contraído créditos con los comerciantes blancos a cambio de objetos valiosos o ganado. Ese mismo año, antes de la rebelión, los Herero se dividían en nueve tribus, siendo la más grande la de Okahandja, con 23.000 habitantes en 150 aldeas. Samuel Maharero era el líder reconocido como interlocutor por Alemania, pero en Omaruru estaba Manasse y en Otjimbingwe, Zacharias. El poder dependía del ganado, y el líder debía tomar todavía sus decisiones junto a una asamblea.

En esta progresiva descomposición social, con la amenaza del ferrocarril, los tiroteos y las violaciones a las nativas, Maharero, según cita Sarkin, escribió a su rival tribal Witbooi alentándole a la sublevación:

“Toda nuestra obediencia y paciencia con los alemanes sirve de poco, y cada día disparan a alguien sin razón alguna… Muramos luchando en lugar de morir como resultado del maltrato, el encarcelamiento o alguna otra calamidad. Di a todos los capitanes que bajen y se alcen para presentar batalla.”

El genocidio herero

La rebelión fue cuidadosamente preparada a lo largo de 1903, con reuniones informales donde se planificó el asalto en la propia Okahandja. Maharero pudo instigarla o unirse a ella una vez iniciada, pero es casi seguro que jugó a un doble juego. No atacaron directamente las ciudades, sino a los granjeros y sus propiedades, el único elemento de dominio real en un territorio con mayoría numérica absoluta de las tribus.

El 12 de enero de 1904 los Hereros dominan Okahandja y, para el 20 de enero, las víctimas llegan a la treintena, casi todas granjeros. No llegaron a caer las fortificaciones, pero la revuelta dejó de 123 a 150 víctimas. Los Herero fueron selectivos, y evitaron asesinar a bóeres –aunque murieron siete–, ingleses, mujeres y niños. La estrategia era forzar una negociación, no la aniquilación total de los alemanes. Cortaron también las comunicaciones entre Okahandja y Windhoek, y parte de la línea ferroviaria.

El 18 de enero el canciller Bülow informa al Reichstag de la situación, que provoca estupor entre los diputados, y ve entre sus causas el antiguo orgullo guerrero del pueblo Herero. Bebel, el líder socialdemócrata, culpó de la revuelta a las deudas contraídas con los comerciantes. La suerte de Leutwein estaba echada, y pronto fue sustituido por un viejo militar prusiano llamado Lothar von Trotha. La transición fue, en un inicio, complicada, ya que se pasó de una administración civil a una militar, dirigida manu militari por Von Trotha y en línea directa con la cúpula militar de Guillermo II. Era la consecuencia del sistema político de contrapesos del II Reich, que permitía una intervención de los viejos poderes fácticos ajena al peso de la asamblea, en ocasiones sólo consultiva.

Se buscará así, tras el apaciguamiento de Leutwein, la Vernichtungspolitik, la política de destrucción. Este último había combatido a los Hereros en Oviumbo, lo que les llevó a una posición defensiva en Waterberg. Con la llegada de Trotha en junio de 1904 se declara la ley marcial, y la política de no hacer prisioneros se confirma de manera total. Para noviembre Leutwein está fuera, y comienza el reinado de la violencia en la colonia alemana.

El objetivo de Trotha fue, siguiendo la doctrina prusiana heredada de Clausewitz, buscar una batalla de aniquilación que dejara a la nación Herero en una posición de debilidad absoluta. Los 14.000 nuevos soldados alemanes empleados por esta nueva administración alcanzaron la victoria en la batalla de Waterberg, a inicios de agosto de 1904, que decidió la suerte de los Herero para siempre. Tras esta derrota, Von Trotha publicará en octubre del mismo año un comunicado en el que advertirá a los Herero de que quedan fuera de la administración colonial alemana:

“El pueblo herero tendrá que abandonar el país. De lo contrario, le forzaré a hacerlo con la fuerza de las balas. Dentro de los límites alemanes, todo Herero, se encuentre armado o no, tenga o no ganado, será fusilado. No acogeré a más mujeres ni niños: los conduciré de vuelta con su gente u ordenaré que se les dispare.
Firmado: el Gran General del Kaiser Supremo, von Trotha.”

En su tiempo se discutió la veracidad del documento, pero el texto pudo encontrarse en el archivo original de Botsuana. La orden no procedió de Guillermo II ni del canciller Bülow; fue el propio Von Trotha quien actuó con libertad, planeando un exterminio étnico deliberado. Si bien el ejército alemán tuvo unas bajas notables –casi todas por enfermedad, en torno a los 1.500 hombres– y un gasto de 600 millones de marcos en una colonia deficitaria, las cifras en el caso de los Herero y los Nama (que se unirán a la revuelta en octubre de 1904) son escalofriantes: pereció en torno al 80% de los Herero, de un pueblo de entre 60.000 y 80.000 personas. Los Nama, que se rindieron bajo la presión de Trotha, perecieron en una proporción en torno al 45-50% de un total de 20.000 personas.

Al exponer a los Herero a la aniquilación total, los alemanes actuaron sin cuartel, sin tomar prisioneros, exterminando sin control a los supervivientes de la batalla. El testimonio de Jan Kubas, nativo de Grootfontein y ayudante de los alemanes, es claro al respecto:

“Los alemanes no tomaban prisioneros. Asesinaron a cientos y cientos de mujeres y niños a lo largo de los caminos. Los mataron a bayonetazos y a golpes con las culatas de sus fusiles. No pueden encontrarse palabras para relatar lo que pasó; era demasiado terrible. Yacían exhaustos e inofensivos a lo largo de los caminos, y los soldados los descuartizaban a sangre fría…”

La gran matanza, a pesar de todo, no tuvo como protagonista a la administración colonial, sino al propio desierto del Kalahari, que acabó por deshidratación y falta de comida con gran parte de lo que quedaba de los Herero después de Waterberg. Trotha estableció un cordón de seguridad de 250 km al oeste de la administración colonial, haciendo imposible escapar de este exilio y muerte segura. Los supervivientes fueron recluidos en campos de concentración, siguiendo el modelo español en Cuba o el inglés en Sudáfrica. Más del 45% de los internados pereció.

Von Estorff, que había acompañado a Trotha en la batalla contra los Herero, juzgó apropiadas las medidas, justificando de este modo la actuación:

“…uno debe estar de acuerdo con los esfuerzos del general von Trotha de aniquilar al conjunto de la nación Herero o expulsarla del país. Después de lo que ha ocurrido [la revuelta], la coexistencia de blancos y negros será muy difícil, a menos que los segundos se mantengan en un estado de trabajo forzado, un tipo de esclavitud. La guerra racial que ha surgido sólo puede concluir con la aniquilación o la esclavitud total de uno de los bandos.”

Justicia tardía

El escándalo internacional fue inmediato, y los legados de la administración colonial británica recogieron los testimonios de las matanzas. Con la llegada de las noticias al Reichstag, Guillermo II censuró a Trotha, pero para diciembre de 1904 el exterminio racial ya se había consumado. Bebel acusó al gobierno alemán y a Trotha de regirse por leyes inhumanas, propias de bárbaros. El 19 de noviembre de 1905 Trotha fue relevado de su puesto. Sería ascendido a general de infantería en 1910, y murió diez años más tarde de fiebre tifoidea sin haber sido procesado.

A inicios de los 90 del siglo XX, con la independencia de Namibia, las autoridades políticas del país pidieron a Alemania una disculpa y una indemnización por el genocidio. La disculpa llegó, aunque sin indemnización, en agosto de 2004, cuando la ministra para el Desarrollo Heidemarie Wieczorek-Zeul afirmó en el propio país:

“Nosotros, los alemanes, aceptamos nuestra responsabilidad moral e histórica en las culpas en las que incurrieron los alemanes de la época… Las atrocidades cometidas en esa época habrían sido denominadas genocidio.”

Fue una disculpa tardía, quizá cosmética; un “juicio de sus iguales”, parafraseando a Kipling, el gran poeta del colonialismo:

“Take up the White Man’s burden—
Have done with childish days—
The lightly proffered laurel,
The easy, ungrudged praise.
Comes now, to search your manhood
Through all the thankless years
Cold, edged with dear-bought wisdom,
The judgment of your peers!”

Bibliografía

ANDRÉ JULIEN, C. , Histoire de l’Algérie contemporaine, París, Presses Universitaires de France, 1979

BRIDGMAN, H. , The Revolt of the Hereros, Los Ángeles, University of California Press, 1981

BRUNSCHWIG, H. , Mythes et réalités de l’impérialisme colonial français, París, A. Colin, 1965

CURSON, P. , Border Conflicts in a German African Colony, Suffolk, Arena Books, 2012

GUILLEN, P. , Alemania: El Imperio Alemán (1871 – 1918), Barcelona, Vicens Vives, 1973

HARRIS, M. , Introducción a la Antropología General, Madrid, Alianza, 2009

HOBSBAWM, E. J. , La Era del Imperio (1875 – 1914), Barcelona, Editorial Labor, 1989

JONES, A. , Genocide: A Comprehensive Introduction, Nueva York, Routledge, 2006

KI-ZERBO, J. , Historia del África negra, Madrid, Alianza Editorial, 1980

KIPLING, R., Verse 1885 – 1926, Nueva York, Doubleday – Doran and Company, 1927

KRIPPENDORF, E. , El sistema internacional como historia, México, FCE, 1985

LUSANE, C. , Hitler’s Black Victims: The Historical Experiences of Afro-Germans, European Blacks and African Americans in the Nazi Era, Nueva York, Routledge, 2003

MEYER, F. , Wirtschaft und Recht der Herero, Berlín, 1905

MCLAREN, A. D. , Germanism from Within, Nueva York, E.P. Dutton and Company, 1916

SARKIN, J. , Germany’s Genocide of the Herero…, Ciudad del Cabo, UTC Press, 2010

VVAA, Historia General de África VII. África bajo el dominio colonial (1880 – 1935), Madrid, Tecnos, 1987

VVAA, The Specter of Genocide: Mass Murder in Historical Perspective, Cambridge, Cambridge University Press, 2003

WALLERSTEIN, I. , El moderno sistema mundial (vol I – III), Madrid, Siglo XXI de España, 1999

WEISER, M. , The Herero war – the first genocide of the 20th century?, Munich, Grin Editorial, 2001

ILPES denuncia desalojo de vendedoras de San Martín

SAN MARTIN, 8 de febrero de 2013 (SIEP) “Vamos a acompañar a nuestras hermanas vendedoras que han sido desalojadas violentamente por el alcalde de ARENA, Víctor Rivera, es nuestro deber pastoral, evangélico…” expresó el Rev. Ricardo Cornejo, pastor de la Iglesia Luterana Popular de El Salvador.

El pasado 29 de enero agentes municipales obligaron a 93 vendedoras ambulantes a abandonar sus tradicionales puestos de venta, alegando que están realizando un ordenamiento de la zona y sin ofrecer ninguna alternativa para ser reinstaladas.

“Por dos años hemos acompañado a estas valientes mujeres, la mayoría madres solteras, en el 2011 nos enfrentamos al entonces alcalde del FMLN, Mario González, quien también desalojó, y por eso perdió…y hoy lo hacemos con este alcalde de ARENA y le decimos: no siga golpeando a nuestro sufrido pueblo…” agregó el pastor luterano.

Por su parte, la Sra. Vilma Hernández, presidenta de la Unidad de Vendedoras Tinecas, denunció que “nos están quitando el sagrado derecho de ganarnos la vida honradamente para sostener a nuestros hijos y no se los vamos a permitir, vamos a luchar por que nos permitan regresar a nuestros lugares de venta.”

Informó que “la semana pasada le llevamos una carta al presidente de ARENA, al Sr. Cristiani, para que viera qué clase de alcaldes tiene en su partido y exigirle que nos dejen trabajar en paz, y vamos a seguir insistiendo, porque esta es una lucha por nuestros hijos, por nuestros derechos, por una vida digna.”

A Brief History of Knowledge From 3000 BC to 2001 AD

When the earliest civilizations appeared (in Mesopotamia, Egypt, India and China), they were largely constrained by their natural environment and by the climate. Religion, Science and Art were largely determined by extra-human factors, such as seasons and floods. Over the course of many centuries, humans have managed to change the equation in their favor, reducing the impact of natural events on their civilization and increasing the impact of their civilization on nature (for better and for worse). How this came to be is pretty much the history of knowledge. Knowledge has been, first and foremost, a tool to become the “subject” of change, as opposed to being the “object” of change.

One could claim that the most important inventions date from prehistory, and that “history” has been nothing more than an application of those inventions. Here is a quick rundown (in parentheses the earliest specimen found so far and the place where it was found): tools (2 million years ago, Africa), fire (1.9 million years ago, Africa), buildings (400,000 BC, France), burial (70,000 BC, Germany), art (28,000 BC), farming (14,000 BC, Mesopotamia), animal domestication (12,000 BC), boat (8,000 BC, Holland), weapons (8,000 BC), pottery (7,900 BC, China), weaving (6,500 BC, Palestine), money (sometime before the invention of writing, Mesopotamia), musical instruments (5,000 BC, Mesopotamia), metal (4,500 BC, Egypt), wheel (3,500 BC, Mesopotamia), writing (3,300 BC, Mesopotamia), glass (3,000 BC, Phoenicia), sundial (3,000 BC, Egypt).

Once the infrastructure was in place, knowledge increased rapidly on all fronts: agriculture, architecture (from the ziggurat of the Sumerians to the pyramids of the Egyptians to the temples of the Greeks), bureaucracy (from the city-states of the Sumerians to the kingdom of Egypt, from the empire of Persia to the economic empire of Athens), politics (from the theocracies of Mesopotamia and Egypt to the democracy of Athens), religion (from the anthropomorphic deities of Mesopotamia to the complex metaphysics of Egypt, from the tolerant pantheon of the Greeks to the one God of the Persians and the Jews), writing (from the “Gilgamesh” in Mesopotamia to the “Adventures of Sinuhe” in Egypt to the “Bible” of the Jews to Homer’s epics in Greece), economics (from the agricultural societies of Mesopotamia and Egypt to the trade-based societies of Phoenicia and Athens), transportation (from the horse-drawn chariots of Mesopotamia to the Greek trireme), art (from the funerary painting of the Egyptians to the realistic sculptures of the Greeks), etc.

For a while, Religion acted as, basically, a compendium of knowledge (about life, society and the universe). In India, the Vedas and the Upanishads painted a cyclical picture of the universe. Right and wrong actions increase the positive and negative potential energy (“apurva”) associated with each person. Apurva is eventually released (in this or the next life) and causes good or evil to the person. Basically, misfortune is caused by prior wrongful deeds. It is not only deserved but even required. Life is a loop from the individual back to the individual. This was cosmic justice totally independent of the gods. Wisdom is the realization that everything is suffering, but the realization of suffering does not lead to pessimism: it leads to salvation. Salvation does not require any change in the world. It requires a realization that everything is part of an absolute, or Brahman. Salvation comes from the union of the individual soul (“atman”) with the universal soul (“brahman”). “Maya”, the plurality of the world, is an illusion of the senses. Salvation comes from “moksha”: liberation from maya and experience of Brahman. By experiencing the divine within the self, one reaches pure knowledge and becomes one with the eternal, infinite, and conscious being. Nothing has changed in the world: it is the individual’s state of mind that has changed. Self-knowledge is knowledge of the absolute.

Buddha focused on suffering, the ubiquitous state of living beings, but ended up denying the existence of the self: only events exist, the enduring self is an illusion (the “atman” is an illusion). Each moment is an entirely new existence, influenced by all other moments. To quote a Buddhist scripture, “only suffering exists, but no sufferer is to be found”. Suffering can be ended by overcoming ignorance and attachment to Earthly things.

From ancient times, China displayed a holistic approach to nature, man, and government. Chinese religion realized the fundamental unity of the physical, the emotional and the social. Particularly during the Zhou dynasty, Chinese religion was natural philosophy. There was no fear of damnation, no anxiety of salvation, no prophets, no dogmas. Confucius was much more interested in the fate of society than in the fate of the souls of ordinary people. He believed that the power of example was the ideal foundation of the social contract: a ruler, a father, a husband have to “deserve” the obedience that is due to them. Thus, Confucius’ philosophy was about the cultivation of the self, how to transform the ordinary individual into the ideal man. The ultimate goal of an individual’s life is self-realization through socialization. If Confucius focused on society, Lao-tzu focused on nature. He believed in a “tao”, an ultimate unity that underlies the world’s multiplicity. There is a fundamental reality in the continuous flow and change of the world: the “way” things do what they do. Understanding the “tao” means identifying with the patterns of nature, achieving harmony with nature. The ideal course of action is “action through inaction” (“wuwei”): to flow with the natural order. The “tao” is the infinite potential energy of the universe. “Qi” is vital energy/matter in constant flux that arises from the “Tao”, and “Qi” is regulated by the opposites of “Yin” and “Yang”. Everything is made of yin and yang.

Note that neither Buddhism nor Confucianism nor Taoism were “religions”, in the sense of worshipping a God. In fact, they all denied the importance of gods.

In Persia, on the other hand, Zarathustra believed in one supreme God that was similar to the Indian “absolute” of Brahman, except that it was opposed by a divine enemy, and the world was due to the titanic battle between these two supernatural beings: Ahura-Mazda, the spiritual, immaterial, creator god who is full of light and good, and Ahriman, the god of darkness and evil. Unlike previous religions, this one was eschatological: at the end of time, Ahura-Mazda shall emerge victorious, and, after the apocalyptic ending and a universal judgement that will take place on Earth, all humans (even sinners) shall resurrect.

Judaism, which grew out of a synthesis of Mesopotamian, Arabian, Persian and Egyptian religious cults, was originally only the religion of the Jews, and El was originally the nomadic god of a nomadic people (not tied to a sanctuary but “god of the father”). It was a god of punishment and wrath, and Jewish religion was conceived as, basically, obedience to El, with the reward for the Jewish people being the Promised Land. The “Old Testament” is largely silent about the rest of humanity, and largely silent about the afterlife. This was a god who spoke directly to its people (the Jews). The earliest prophets of the kingdom of Mari had been visionary mystics in charge of foretelling the future and interpreting natural events as divine messages on behalf of the royalty. The Biblical prophets, on the other hand, addressed the people (and, eventually, “all nations”) and their main mission was to promote a higher form of morality and justice. Judaism, in its expectation that a Messiah would come and deliver the Jews from their suffering, was largely indifferent towards unbelievers. In the meantime, the misadventures of the Jewish people were due to the fact that the Jews disobeyed their god. But, at some point, El and Yahweh became synonymous, and, eventually, Yahweh became the “only” god (“There is no other god besides me”). The “Old Testament”, which originally was a history of the Jews, acquired a universal meaning.

Both Mazdaism and Judaism became monotheistic religions and denounced all other gods as mere “idols” not worthy of worship.

A major step in the evolution of knowledge was the advent of Philosophy. Both in Greece and India, the explosion in Philosophy and Science was enabled by a lack of organized religion: both regions had a form of “rational superstition” rather than the theocracies of Mesopotamia and Egypt. The gods of the Greek and of the Indian pantheon were superhuman, but never tried to explain all that happens on this planet. Philosophers and scientists were able to speculate on the nature of the universe, of the human life and of the afterlife without offending the state and fearing for their lives.

In India, six “darshana” (philosophical schools) tried to answer the fundamental questions: is there a God? Is the world real? Samkhya believed that there is no God and that the world is real (due to the interaction between two substances, prakriti and purusha). Yoga believed in a supreme being (Isvara) and that the world is real. Vedanta believed in Brahman and that the world is not real (it is an emanation of Brahman, the only substance that truly exists).

In Greece, Pythagoras was perhaps the first philosopher to speculate about the immortality of the soul. Heraclitus could not believe in the immortality of anything, because he noticed that everything changes all the time (“you cannot enter the same river twice”), including us (“we are and we are not”). On the contrary, Parmenides, the most “Indian” of the Greek philosophers, believed that nothing ever changes: there is only one, infinite, eternal and indivisible reality, and we are part of this unchanging “one”, despite the illusion of a changing world that comes from our senses. Zeno even proved the impossibility of change with his famous paradoxes (for example, fast Achilles can never catch up with a slow turtle if the turtle starts ahead, because Achilles has to reach the current position of the turtle before passing it, and, when he does, the turtle has already moved ahead, a process that can be repeated forever). Democritus argued in favor of atomism and materialism: everything is made of atoms, including the soul. Socrates was a philosopher of wisdom, and noticed that wisdom is knowing what one does not know. His trial (the most famous religious trial before Jesus’) signaled the end of the dictatorship of traditional religion. Plato ruled out the senses as a reliable source of knowledge, and focused instead on “ideas”, which exist in a world of their own, are eternal and are unchangeable. He too believed in an immortal soul, trapped in a mortal body. By increasing its knowledge, the soul can become one with the ultimate idea of the universe, the idea of all ideas. On the contrary, Aristotle believed that knowledge “only” comes from the senses, and a mind is physically shaped by perceptions over a lifetime. He proceeded to create different disciplines to study different kinds of knowledge.

The Hellenistic age that followed Alexander’s unification of the “oikoumene” (the world that the Greeks knew) on a level never seen before fostered a new synthesis of views of the world. Hellenistic philosophy placed more emphasis on the happiness of the individual, while Hellenistic religion placed more emphasis on the salvation of the individual. Cynics saw attachment to material things and to social conventions as the root problem, and advocated a return to nature. Skeptics, who thought that certain knowledge is impossible, argued that the search for knowledge causes angst, and that therefore one should avoid having beliefs of any sort. Epicureans, who had a material view of the world (the universe is a machine and humans have no special status), claimed that superstitions and the fear of death cause angst. Stoics viewed the entire universe as a manifestation of god, and happiness as surrendering the self to the divine order of the cosmos, as living in harmony with nature.

From the very beginning, knowledge was also the by-product of the human quest for an answer to the fundamental questions: Why are we here? What is the meaning of our lives? What happens when we die? Is it possible that we live forever in some other form? The afterlife and immortality are not knowledge, since we don’t “know” them yet, but humans used knowledge to reach different conclusions about these themes. The civilizations of Mesopotamia were mainly interested in “this” life. The Egyptians were obsessed with the afterlife, with immortality originally granted only to the pharaoh but eventually extended to everybody (via the mysteries of Osiris, the first major ritual about the resurrection). The ancient Greeks did not care much for immortality, as Ulysses showed when he declined the goddess’ invitation to spend eternity with her and preferred to return to his home; but later, in the Hellenistic period, a number of religious cults focused on resurrection: the Eleusinian mysteries (about Demeter’s search through the underworld for her daughter Persephone), the Orphic mysteries (about Orpheus’ attempt to bring back his wife Eurydice from the underworld) and the Dionysian mysteries (about Dionysus, resurrected by his father Zeus). The Romans cared for the immortality of their empire, and were resigned to the mortality of the individual; but it was under Roman rule that a new Jewish religion, Christianity, was founded on the notion that Jesus’ death and resurrection can save all humans.

The other great theme of knowledge was (and still is) the universe: what is the structure of the world that we live in? Neither the Indian nor the Greek philosophers could provide credible answers. They could only speculate. Nonetheless, the Hellenistic age fostered progress in mathematics (Euclid’s “Elements” and Diophantus’ “Arithmetica”) and science (Eratosthenes’ calculation of the circumference of the Earth, Archimedes’ laws of mechanics and hydrostatics, Aristarchus’ heliocentric theory, Ptolemy’s geocentric theory). The Romans’ main contribution to the history of knowledge may well be engineering, which, after all, is but the practical application of science to daily life. The Romans, ever the practical people, made a quantum leap in construction: from aqueducts to public baths, from villas to amphitheaters. At the same time, they too created a new level of unification: the unification of the Mediterranean world.
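Eratosthenes’ measurement is simple enough to reproduce. At noon on the solstice the Sun was directly overhead at Syene but cast a shadow of about 7.2° at Alexandria, roughly 5,000 stadia to the north (these are the traditionally reported figures; the exact values he used are debated). Since 7.2° is 1/50 of a full circle, the circumference follows by proportion:

```python
# Traditionally reported figures (the precise ancient values are uncertain)
shadow_angle_deg = 7.2   # Sun's angle from the vertical at Alexandria
distance_stadia = 5000   # Syene to Alexandria

# The arc between the two cities subtends the same angle at the Earth's
# center, so circumference / distance = 360 / angle.
circumference = distance_stadia * 360 / shadow_angle_deg
print(circumference)  # 250000.0 stadia
```

Depending on which ancient stadion one assumes, 250,000 stadia comes remarkably close to the modern value of about 40,000 km.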

The intellectual orgy of Greek philosophy opened the western mind. The Romans closed it when they adopted Christianity as “the” imperial religion and turned it into a dogma. Christianity was born a Jewish religion, but it was “relocated” to Rome and thus, automatically, turned into a universal religion. Jesus’ god was substantially different from the original El/Yahweh of the “Old Testament”: it was, first and foremost, a god of love. Jesus was the very son of God, sent to the Earth to die for humans and thus save them from the original sin. St Paul made it clear that it was love for everybody, not just for the Jews; and that the “kingdom” of the Christian faith, God’s reward for the faithful, was in heaven, not on Earth. The catch was that unbelievers were no longer immune from God’s judgement: they risked eternal damnation. The reward for the faithful was resurrection, just like Jesus had resurrected. Christianity was the culmination of a tradition of mysteries for the salvation of the individual, of religion for the ordinary man and woman, even for the slaves. Its central theme was one of resurrection and eternal life available to everybody. Indirectly, it was also an ideology of universality and equality.

In fact, both Buddhism and Christianity, and, to some extent, Confucianism, were universal and egalitarian. They were not exclusive of a race, a gender, or a social class. This achievement in religion marks a conceptual step in which ordinary people (even slaves) were beginning to see themselves as equal to the kings, albeit powerless.

Islam, another offshoot of Judaism, was the culmination of the trend towards monotheist, eschatological, egalitarian and universal religions. Islam borrowed from the Persian philosopher Mani the idea of a succession of revelations given to different peoples by the very same God (Allah), and it borrowed from Christianity the idea of universal brotherhood and the mission to convert the unbelievers. But, unlike its predecessors, Islam was also an ideology, because it prescribed how to build a state. It made it the duty of every Muslim to struggle for the creation of a universal Islamic state. Islam’s earthly mission was to reform society and to form a nation. Islam’s mission was inherently political. The ultimate aim of the Islamic state is to develop social justice. What had been a subtle message in Christianity became an explicit message in Islam. In fact, the entire Muslim population (not just the priestly class) is in charge of running the Islamic state. Humans are granted limited popular sovereignty under the suzerainty of God.

The Islamic philosophers felt the need to reconcile Islam and Greek philosophy. The two who exerted the strongest influence on the West, Abu Ali al-Husain ibn Abdallah ibn Sina (Avicenna) and Abu al-Walid Muhammad ibn Ahmad ibn Muhammad ibn Rushd (Averroes), achieved such a momentous unification of religion and philosophy by envisioning the universe as a series of emanations from Allah, from the first intelligence down to the intelligence of humans. This allowed them to claim that there is only one truth, which appears as two truths: religion for the uneducated masses and philosophy for the educated elite. But there is no conflict between reason and revelation: ultimately, they both reach the same conclusions about the existence of Allah. The Sufis, best represented by Ibn Arabi, added an almost Buddhist element: human consciousness is a mirror of the universal, eternal, infinite consciousness of Allah. Allah reveals himself to himself through human consciousness. The Sufi wants to achieve a state of participation in the act of self-revelation. The human condition is one of longing, of both joy (for having experienced the divine) and sorrow (for having lost the divine).

The invasions of the “barbaric” people of the east, the Arab invasion from the south and the wars against the Persian empire, led to the decadence of Roman civilization and to the “dark age” that lasted a few centuries. The obliteration of culture was such that, eventually, Europe had to re-learn its philosophy, science and mathematics from the Arabs.

The Christian dogma contributed to the decline of the Greek ideal. Rationality was replaced by superstition. Virtue was replaced by faith. Justice in this world was replaced with justice in the next world. The free exercise of reason was replaced with obedience to the Church. The Greek tolerance for foreign faiths was replaced by the intolerance of the Church. Nonetheless, Christianity too tried to reconcile religion and philosophy. St Augustine preached the separation (grounded in Greek philosophy) of body and soul, of bodily life and spiritual life: the pleasures of the body distract from the truth of the soul.

During the “dark ages”, the Christian conversion of the European pagans, from Russia to Scandinavia, was completed. The Church, in fact, replaced the Roman empire as the unifying element of Europe. The Church controlled education. The Church controlled the arts. The Church even controlled the language: Latin.

The Arab invasion disrupted the economic and political unity of the Mediterranean Sea, and the rise of the Frankish kingdom, soon to be renamed “Holy Roman Empire” (a mostly landlocked empire), caused a redesign of the main trade routes away from the sea. Venice alone remained a sea-trading power, and, de facto, the only economic link between the Holy and the Eastern Roman Empires. This “inland” trade eventually caused a “commercial” revolution. Trade fairs appeared in Champagne, Flanders, and northern Germany, creating a new kind of wealth in those regions. The Italian communes became rich enough to be able to afford their own armies and thus become de-facto independent and develop economies entirely based on trade. In northern Europe, a new kind of town was born, one that did not rely on the Mediterranean Sea. Both in the north and in the south, a real bourgeois class was born. The medieval town was organized around the merchants, and then the artisans and the peasants.

As the horse became the main element in warfare, the landowner became the most powerful warrior. A new kind of nobility was created, a land-owning nobility. The collapse of central authority in western Europe led to feudalism, a system in which the nobility enjoyed ever greater power and freedom, a global “political” revolution.

Thus the “medieval synthesis”: Church, cities, kings (clergy, bourgeoisie, nobility).

But a fourth element was even more important for the history of knowledge. As Rome decayed, and Alexandria and Antioch fell to the Muslims, the capital of Christian civilization moved to Constantinople (Byzantium). Despite the Greek influence, this cosmopolitan city created great art but little or no philosophy or science. It was left to the monasteries of western Europe to preserve the speculative traditions of the Greek world, except that they were mainly used to prove the Christian dogma. Monasticism was nonetheless crucial for the development of philosophy, music, painting. The anarchy of the “dark age” helped monasteries become a sort of refuge for the intellectuals. As the choice of lay society came down to being a warrior or a peasant, being a monk became a more and more appealing alternative. Eventually, the erudite atmosphere of the monasteries inspired the creation of universities. And universities conferred degrees that allowed graduates to teach in any Christian country, thus fueling an “educational” revolution. Johannes Scotus Erigena, Peter Abelard, Thomas Aquinas, Johannes Eckhart, John Duns Scotus (the “scholastics”) were some of the beneficiaries. Western philosophy restarted with them. As their inquiries into the nature of the world became more and more “logical”, their demands on philosophy became stricter. Eventually, Roger Bacon came to advocate that science be founded on logic and observation; and William Occam came to advocate the separation of Logic and Metaphysics, i.e. of Science and Church.

The commercial revolution of the new towns was matched by an “agricultural” revolution of the new manors. The plough (the first application of non-human power to agriculture), the three-field rotation (wheat/rye, oats/legumes, fallow) and the horseshoe caused an agricultural revolution in northern Europe that fostered rapid urbanization and higher standards of living. Improved agricultural techniques motivated the expansion of arable land via massive deforestation.

In the cities, a “technological” revolution took place. It started with the technology of the mill, which was pioneered by the monasteries. Mills became pervasive for grinding grain, fulling clothes, pressing olives and tanning. Textile manufacturing was improved by the spinning wheel (the first instance of belt transmission of power). And that was only the most popular instance of a machine, because this was the first age of the machines. The mechanical clock was the first machine made entirely of metal.

There also was a military revolution, due to the arrival of gunpowder. Milan became the center of weapon and armor manufacturing. Demand for cannons and handguns created a whole new industry.

Finally, an “engineering/artistic” revolution also took place, as more and more daring cathedrals started dotting the landscape of Christianity. Each cathedral was an example of “total art”, encompassing architecture, sculpture, painting, carpentry, glasswork. The construction of a cathedral was a massive enterprise that involved masons, workers, quarrymen, smiths, carpenters, etc. Not since the Egyptian pyramids had something so spectacular been tried. Each cathedral was a veritable summa of European civilization.

The political, commercial, agricultural, educational, technological and artistic revolutions of the Middle Ages converged in the 13th century (the “golden century”) to create an economic boom such as had not been seen for almost a millennium.

Improved communications between Europe and Asia, thanks to the Mongol Empire that had made travel safe from the Middle East to China, particularly on the “silk road”, and to the decline of the Viking and Saracen pirates, led to a revival of sea trade, especially by the Italian city-states that profited from a triangular trade Byzantium-Arabs-Italy.

Florence, benefiting from the trade of wool, and Venice, benefiting from the sea trade with the East, became capitalistic empires. Venice sponsored technological innovation that enabled long-distance and winter voyages, while Florence sponsored financial innovation that made it possible to lend, borrow and invest capital worldwide. The Italian cities had a vested interest in improved education, as they needed people skilled in geography, writing, accounting, technology, etc. It is not a coincidence that the first universities were established in Italy.

The economic boom was brought to an abrupt stop by a plague epidemic (“the Black Death”) that decimated the European population. But the Black Death also had its beneficial effects. The dramatic decrease in population led to a higher standard of living for the survivors, as the farmers obtained more land per capita and the city dwellers could command higher wages. The higher cost of labor prompted investments in technological innovation. At the same time, wealthy people bequeathed their fortunes to the creation of national universities, which greatly increased the demand for books. The scarcity of educated people prompted the adoption of vernacular languages instead of Latin in the universities.

Throughout the Middle Ages, the national literatures had produced national epics such as “Beowulf” (900, Britain), “Chanson de Roland” (1100, France), “Edda” (1100, Scandinavia), “Cantar del Cid” (1140, Spain), Chretien de Troyes’ “Perceval” (1175, France), “Slovo o Polku Igoreve” (1185, Russia), “Nibelungen” (1205, Germany), Wolfram Von Eschenbach’s “Parzival” (1210, Germany). Dante Alighieri’s “Divine Comedy” (1300) heralded a new age, in which the vernacular was used for the highest possible artistic aims, a veritable compendium of knowledge. After languishing for centuries, European poetry bloomed with Francesco Petrarca’s “Canzoniere” (1374, Italy), Geoffrey Chaucer’s “Canterbury Tales” (1400, England), Inigo Santillana’s “Cancionero” (1449, Spain), Francois Villon’s “Testament” (1462, France). And Giovanni Boccaccio’s “Decameron” (1353, Italy) laid the foundations for narrative prose.

In observance of the diktat of the Second Council of Nicaea (787), that the visual artist must work for the Church and remain faithful to the letter of the Bible, medieval art was permeated by an aesthetics of “imitation”. Christian art was almost a reversal of Greek art, because the emphasis shifted from the body (mortal, whose movement is driven by emotions) to the soul (immortal, immune to emotions), from realism and movement to spirituality and immanence. Christian art rediscovered Egyptian and Middle-eastern simplicity via Byzantine art. Nonetheless, centuries of illuminated manuscripts, mosaics, frescoes and icons eventually led to the revolution in painting best represented by Giotto’s “Scrovegni Chapel” (1305). While Italian artists were re-founding Greco-Roman art based on mathematical relationships and a sense of three-dimensional space, as in Paolo Uccello’s “Battle of San Romano” (1456), Masaccio’s “Trinity” (1427) and Piero della Francesca’s “Holy Conversation” (1474), Northern European painters became masters of a “photographic” realism, as in Jan Van Eyck’s “The Virgin of the Chancellor Rolin” (1436) and “The Arnolfini Marriage” (1434).

Before Europe had time to recover from the Black Death, the unity of the Mediterranean was shattered again by the fall of Byzantium (1453) and the emergence of the Ottoman empire (a Muslim empire) as a major European power.

However, Europe was coming out of the “dark age” with a new awareness of the world. Marco Polo had brought news of the Far East. Albertus Magnus did not hesitate to state that the Earth is a sphere. Nicolas Oresme figured out that the rotation of the Earth on an axis explains the daily motion of the universe.

In China, the Han and Tang dynasties had been characterized by the emergence of a class of official-scholars and by a cultural boom. The Sung dynasty amplified those social and cultural innovations. The scholar-officials became the dominant class in Chinese society. The state was run like an autocratic meritocracy, but nonetheless a meritocracy. As education was encouraged by the state, China experienced a rapid increase in literacy, which led to a large urban literate class. The level of competence of the ruling class fostered technological and agrarian innovations that created the most advanced agriculture, industry and trade in the world. When Europe was just beginning to get out of its “dark age”, China was the world’s most populous, prosperous and cultured nation. The Mongol invasion (the Yuan dynasty) did not change the character of that society, but, in fact, added an element of peace: the “pax tatarica” guaranteed by the invincible Mongol armies.

India was the only part of the non-Chinese world that Chinese scholars were fascinated with. They absorbed Indian culture over the centuries, and particularly adopted one philosophical school of India: Buddhism. First came “Pure Land” or Jodo Buddhism (4th c), with its emphasis on devotion instead of meditation. Then came Tiantai/Tendai (6th c), Huayan/Kegon (7th c) and Chan/Zen (6th c). The latter, a fusion of Buddhism and Taoism, focused on the attainment of sudden enlightenment (“satori”). According to the Northern school (Shen-hsiu), satori was to be obtained by gradual enlightenment through guided meditation, while the Southern school (Huineng) allowed for satori through individual meditation. Zen promoted spontaneous thinking, as opposed to the philosophical investigation of Confucianism, and spontaneous behavior, as opposed to the calculated behavior of Confucianism. Zen is the “everyday mind”.

Japan had adopted Buddhism as a state religion already in 604, under prince Shotoku Taishi, next to a Confucian-style constitution and the native Shinto cult. The various Buddhist schools arrived from China in the following centuries (the Tendai school in the 9th century, the Jodo school in the 12th century), until Zen Buddhism reached Japan during the 13th century. Zen became popular among the military class (the “samurai”) that embodied the noble values in an age of anarchy. In turn, the Zen monk came to behave like a spiritual samurai. From 1192 until 1333, Japan was ruled by “shoguns” (military leaders) with residence in Kamakura (the “bakufu” system of government), while the emperor (or “mikado”) became a figurehead. Even the equivalent of the scholar-official of China was military: during the 17th century, the ideal man was the literate warrior who lived according to “bushido” (the “way of the warrior”). Japan remained largely isolated until 1854, when the USA forced Japan to sign a treaty that opened Japan to foreign trade, a humiliation that led to the restoration of imperial power (1868) after so many centuries of military rule.

Japan’s native religion, Shinto, provides the basis for the imperial institutions. It is, in fact, a form of Japanese patriotism. It declares Japan a divine country, and the emperor a descendant of the gods. Shinto is polytheist to the extreme, admitting in its pantheon not only thousands of spirits (“kami”), personifying the various aspects of the natural world, and ancestors, but also the emperors and the deified heroes of the Japanese nation, and even foreign deities. Shinto is non-exclusive: a Shintoist can be a Buddhist, a Catholic, etc. The reason is that there is no competition between Shinto and the metaphysics of the other religions. Shinto is a religion to deal with ordinary lives, based on the belief that humans can affect Nature by properly honoring the spirits. When Japan adopted Buddhism, the native spirits were recast as manifestations of Buddha.

The “Rinzai” school of Zen Buddhism believed in sudden enlightenment while concentrating to solve a koan (“sanzen”, or conversation with a master). The “Soto” school believed in gradual enlightenment through meditation in daily life (“zazen”, or sitting meditation). But the traditions of Japanese society surfaced also in Zen Buddhism: satori can be facilitated by martial arts, tea ceremonies, gardening, Haiku poetry, calligraphy, No drama, etc.

In marked contrast to the western civilizations, the eastern civilizations of India, China and Japan displayed little interest in the forceful spread of their religious beliefs.

Luckily for Christian Europe, in 1492 Spain opened a new front of knowledge: having freed itself of the last Arab kingdom, it sponsored the journey of Christopher Columbus to the “West Indies”, which turned out to be a new continent. That more or less accidental event marked the beginning of the “colonial” era, of “world trade”, and of the Atlantic slave trade; and, in general, of a whole new mindset.

Other factors were shaping the European mind: Gutenberg’s printing press (1456), which made it possible to satisfy the growing demand for books; Martin Luther’s Reformation (1517), which freed the northern regions from the Catholic dogma; Copernicus’ heliocentric theory (1530), that removed the Earth (and thus Man) from the center of the universe; and the advent of the nation states (France, Austria, Spain, England and, later, Prussia).

However, it was not the small European nations that ruled the world at the end of the Middle Ages. The largest empires (the “gunpowder empires”) were located outside Europe. Gunpowder was only one reason for their success. They had also mastered the skills of administering the strong, centralized bureaucracy required to support an expensive military. In general, they dwarfed Europe in one basic dimension: knowledge. While Europe was just coming out of its “dark age”, the gunpowder empires were at their cultural peak. The Ottoman Empire, whose capital Istanbul was the largest city in Europe, was a melting pot of races, languages and religions. It was a sophisticated urban society, rich in universities and libraries, devoted to mathematics, medicine and manufacturing. The Safavid Empire of Persia, which controlled the silk trade, was a homogeneous state of Muslim Persians. The Mughal Empire of India, an Islamic state in a Hindu country, was also a melting pot of races, languages and religions. Ming China was perhaps the most technologically and culturally advanced of all countries.

The small European countries could hardly match the knowledge and power of these empires. And, still, a small country like Portugal or Holland ended up controlling a larger territory (stretching across multiple continents) than any of those empires. A dis-united Europe of small and poor states caught up in an endless loop of intestine wars, speaking different languages, technologically backward, that had to import science, philosophy and technology from the Muslims, that had fewer people and resources than the Asian empires, managed to conquer the entire world (with the only notable exception of Japan). Perhaps the problem was with the large-scale bureaucracies of those Asian empires, which, in the long term, became less and less competitive, more and more obscurantist. In some cases, their multi-ethnic nature caused centrifugal forces. Or perhaps Europe benefited from its own anarchy: continuous warfare created continuous competition and a perennial arms race. Perhaps the fact that no European power decisively defeated the others provided a motivation to improve that was missing in the more stable empires of the East. After all, the long-range armed sailing ships, which opened the doors to extra-European colonization, were the product of military build-up. Soon, world trade came to be based on sea transportation, which was controlled by Europeans. The printing press, which the gunpowder empires were slow to adopt (or even banned), slowly changed the balance of knowledge. World trade was creating more demand for technological innovation (and science), while the printing press was spreading knowledge throughout the continent. And all of this was funded with the wealth generated by colonialism. While the Asian empires were busy enjoying their stability, the small European countries were fighting for supremacy, anywhere anytime; and, eventually, they even overthrew those much larger empires.

Nowhere was the apparent oxymoron more intriguing than in Italy, a fragmented, war-torn peninsula that, nonetheless, became the cultural center of Europe. On a smaller scale, it was the same paradox: the tiny states of Italy and the Netherlands were superior in the arts to the powerful kingdoms of Spain, France and England. In this case, though, the reason is to be found in the socio-economic transformation of the Middle Ages that had introduced a new social class: the wealthy bourgeoisie. This class was more interested in the arts than the courts (which were mainly interested in warfare). The main “customer” of the arts was still the Church, but private patronage of art became more and more common. This, in turn, led to an elite of art collectors and critics. Aesthetics led to appreciation of genius: originality, individuality, creativity. Medieval art was imitation, Renaissance art was creation.

Perhaps the greatest invention of the Renaissance was the most basic of all from the point of view of knowledge: the self. The Egyptians and the Greeks did not have a truly unified view of the self, a unique way to refer to the “I” who is the protagonist of a life and, incidentally, is also a walking body. The Greeks used different terms (pneuma, logos, nous, psyche) to refer to different aspects of the “I”. The Middle Ages were the formative stage of the self, when the “soul” came to be identified with the thinking “I”. The Renaissance simply exalted that great medieval invention, the “I”, that had long been enslaved to religion. The “I” was now free to express and affirm itself.

In a nutshell, the “Rinascimento” (Renaissance art) adapted classical antiquity to Biblical themes. This was its fundamental contradiction: a Christian art based on Pagan art. An art that was invented (by the Greeks) to please the pagan gods and (by the Romans) to exalt pagan emperors was translated into an art to pay tribute to the Christian dogma. Leonardo da Vinci’s “The Last Supper” (1497) and Michelangelo Buonarroti’s “The Last Judgement” (1541) are possibly the supreme examples in painting, while architects such as Donato Bramante and Gianlorenzo Bernini dramatically altered the urban landscapes. But there was also an obsession with ordering space, as manifested in Sandro Botticelli’s “Allegory of Spring” (1478) and Raffaello Sanzio’s “The School of Athens” (1511). In the Netherlands, Hieronymus Bosch’s “The Garden of Delights” (1504) was perhaps the most fantastic piece of art in centuries.

The Renaissance segued into the Baroque age, whose opulence really signified the triumph of European royalty and religion. Aesthetically speaking, the baroque was a restoration of order after the creative disorder of the Renaissance. The least predictable of the visual arts remained painting, with Pieter Bruegel’s “Triumph of Death” (1562), El Greco’s “Toledo” (1599), Pieter Rubens’ “Debarquement de Marie de Medicis” (1625), Rembrandt’s “Nightwatch” (1642), Jan Vermeer’s “Malkunst” (1666). In Italy, Giovanni Palestrina, Claudio Monteverdi (1567) and Girolamo Frescobaldi (1583) laid the foundations for classical music and the opera. The national literary scenes bloomed. Masterpieces of poetry included Ludovico Ariosto’s “Orlando Furioso” (1532), Luiz Vaz de Camoes’ “Os Lusiadas” (1572), Torquato Tasso’s “Gerusalemme Liberata” (1575), Pierre de Ronsard’s “Sonnets pour Helene” (1578), John Donne’s “Holy Sonnets” (1615), John Milton’s “Paradise Lost” (1667). Even more characteristic of the era was theater: Gil Vicente’s “Auto da Barca do Inferno” (1516), Christopher Marlowe’s “Faust” (1592), William Shakespeare’s “Hamlet” (1601) and “King Lear” (1605), Lope de Vega Carpio’s “Fuente Ovejuna” (1614), Pedro Calderon’s “El Gran Teatro del Mundo” (1633), Moliere’s “Le Misanthrope” (1666) and Jean-Baptiste Racine’s “Phedre” (1677). Francois Rabelais’ “Gargantua et Pantagruel” (1552) and Miguel Cervantes’ “Don Quijote” (1615) laid the foundations of the novel.

Progress in science was as revolutionary as progress in the arts. Tycho Brahe discovered a new star; Johannes Kepler discovered the laws of planetary motion; Francis Bacon advocated knowledge based on objective empirical observation and inductive reasoning; and finally Galileo Galilei realized that linear uniform motion (not rest) is the natural motion of all objects, that forces cause acceleration, and that the acceleration of free fall is the same for all objects. Suddenly, the universe did not look like the perfect, eternal, static order that humans had been used to for centuries. Instead, it looked as disordered, imperfect and dynamic as the human world.

New inventions included: the telescope (1608), the microscope (1590s), the pendulum clock (1657), the thermometer (1611), the barometer (1644).

Both the self and the world were now open again to philosophical investigation. René Descartes neatly separated matter and mind, two different substances, each governed by its own set of laws (physical or mental). While the material world, including the body, is ultimately a machine, the soul is not: it cannot be “reduced” to the material world. His “dualism” was opposed by Thomas Hobbes’ “materialism”, according to which the soul is merely a feature of the body and human behavior is caused by physical laws.

Baruch Spinoza disagreed with both. He thought that only one substance exists: God. Nature is God (“pantheism”). The universe is God. This one substance is neither physical nor mental, and it is both. Things and souls are (finite) aspects (or “modes”) of that one (infinite) substance. Immortality is becoming one with God/Nature, realizing the eternity of everything.

Gottfried Leibniz went in the other direction: only minds exist, and everything has a mind. Matter is made of minds (“panpsychism”). Minds come in degrees, starting with matter (whose minds are very simple) and ending with God (whose mind is infinite). The universe is the set of all finite minds (or “monads”) that God has created. Their actions have been pre-determined by God. Monads are “clocks that strike hours together”.

Clearly, the scientific study of reality depended on perception, on the reliability of the senses. John Locke thought that all knowledge derives from experience (“empiricism”), and noticed that we only know the ideas and sensations in our mind. Those ideas and sensations are produced by perceptions, but we will never know for sure what caused those perceptions, or how reality truly is out there: we only know the ideas that are created in our mind. Ideas rule our mind.

On the contrary, George Berkeley, starting from the same premises (all we know is our perceptions), reached the opposite conclusion: that matter does not even exist, that only mind exists (“idealism”). Reality is inside our mind: an object is an experience. Objects do not exist apart from a subject that thinks them. The whole universe is a set of subjective experiences. Locke thought that we can never know how the world really is, but Berkeley replied that the world is exactly how it appears: it “is” what appears, and it is inside our mind. Our mind rules ideas.

David Hume increased the dose of skepticism: if all ideas come from perception, then mind is only a theater in which perceptions play their parts in rapid succession. The self is an illusion. Mental life is a series of thoughts, feelings, sensations. A mind is a series of mental events. The mental events do exist. The self that is supposed to be thinking or feeling those mental events is a fiction.

Observation led physicists to their own view of the world. By studying gases, Robert Boyle concluded that matter must be made of innumerable elementary particles, or atoms. The features of an object are due to the features and to the motion of the particles that compose it.

Following Galileo’s intuitions and adopting Boyle’s atomistic view, Isaac Newton worked out a mathematical description of the motion of bodies in space and over time. He posited an absolute time and an absolute space, made of ordered instants and points. He assumed that forces can act at a distance, and introduced an invisible “gravitational force” as the cause of planetary motion. He thus unified terrestrial and celestial Mechanics: all acceleration is caused by forces, the force that causes free fall being the gravitational force, that force being also the same force that causes the Earth to revolve around the Sun. Forces act on masses, a mass being the quantitative property that expresses Galileo’s inertia (the property of a material object to either remain at rest or continue in uniform motion in the absence of external forces). Philosophers had been speculating that the universe might be a machine, but Newton did not just speculate: he wrote down the formulas.
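Those formulas can be written compactly in modern notation (the symbols below are the standard textbook ones, not Newton’s own):

```latex
% Second law: forces cause acceleration
\vec{F} = m\,\vec{a}
% Universal gravitation: the same force governs free fall and planetary orbits
F = G\,\frac{m_1 m_2}{r^2}
% Free fall near the Earth's surface: the falling mass m cancels,
% so every object falls with the same acceleration g
m\,a = G\,\frac{m\,M}{R^2}
\quad\Longrightarrow\quad
a = G\,\frac{M}{R^2} = g
```

The cancellation of m in the last line is exactly Galileo’s observation that acceleration is the same for all falling objects.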

Significant innovations were also introduced, for the first time in a long time, in Mathematics. Blaise Pascal invented the mathematical theory of probability (and built the first mechanical adding machine). Leibniz envisioned a universal language of logic (a “lingua characteristica”) that would allow one to derive all possible knowledge simply by applying combinatorial rules of logic. Arabic numbers had been adopted in the 16th century. Signs for addition, subtraction and multiplication were introduced by Francois Vieta. John Napier invented logarithms. Descartes had developed analytical geometry, and Newton and Leibniz independently developed calculus.

It might not be a coincidence that a similar scientific, mathematical approach can be found in the great composers of the era: Antonio Vivaldi, George-Frideric Handel and Johann Sebastian Bach.

The next big quantum leap in knowledge came with the “industrial” revolution. It is hard to pinpoint the birth date of the industrial revolution (in 1721 Thomas Lombe built perhaps the first factory in the world, in 1741 Lewis Paul opened the first cotton mill, in 1757 James Watt improved the steam engine), but it is clear where it happened: Manchester, England. That city benefited from a fortunate combination of factors: water mills, coal mines, Liverpool’s port and, last but not least, clock-making technology (the earliest factory mechanics were clock-makers). These factors were all in the hands of the middle class, so it is not surprising that the middle class (not the aristocracy or the government) ended up managing most of the enterprises.

The quantum leap in production translated into a quantum leap in transportation: in 1782 the first steamboat sailed up the Clyde, in 1787 John Wilkinson built the first iron boat, in 1812 Henry Bell started the first commercial steamboat service in Glasgow, in 1819 the “Savannah” completed the first transatlantic crossing by a steamboat, in 1820 the first iron steamship was built, etc. By 1892 Britain’s tonnage and sea-trade exceeded those of the rest of the world combined. At its peak, Britain had only 2% of the world’s population, but produced almost 20% of the world’s manufacturing output.

One of the most tangible side-effects of the industrial revolution was the British Empire. There had been “empires” before, and even larger ones (the Mongol empire). But never before had an empire stretched over so many continents: Africa, America, Oceania, Asia. The Roman empire had viewed itself as an exporter of “civilization” to the barbaric world, but the British Empire upped the ante by conceiving its imperialism as a self-appointed mission to redeem the world. Its empire was a fantastic business venture that exported people, capital and goods, and created “world trade”, not just regional trade. This enterprise was supported by a military might that was largely due to financial responsibility at home. Despite the fact that France had a larger population and more resources, Britain managed to defeat France in the War of the Spanish Succession (1702-1713), in the Seven Years’ war (1756-1763) and in the Napoleonic wars (1795-1815).

Managing the British Empire was no easy task. One area that had to be vastly improved to manage a global empire was the area of global communications: steamships, railroads, the telegraph, the first undersea cable and a national post system unified the colonies as one nation. They created the first worldwide logistical system. Coal, a key element in a country in which wood was scarce, generated additional momentum for the improvement of shipbuilding technology and the invention of railroads (1825).

Other areas that the British Empire needed to standardize were finance and law. Thus the first economic and legal systems that were global, not only regional, were born. British economic supremacy lasted until 1869, when the first transcontinental railroad connecting the American prairies with the Atlantic Coast introduced a new formidable competitor: the USA.

No wonder, thus, that Adam Smith felt a new discipline had to be created, one that studied the dynamics of a complex economy based on the production and distribution of wealth. He explained the benefits of free competition and free trade, and how competition can work for the common good (as an “invisible hand”).

Jeremy Bentham (1789) introduced “utilitarian” criteria to decide what is good and what is bad: goodness is what guarantees “the greatest happiness for the greatest number of people”. The philosophy of “utilitarianism” was later perfected by John Stuart Mill, who wrote that “pleasure and freedom from pain are the only things desirable as ends”, thus implying that good is whatever promotes pleasure and prevents pain.

France was much slower in adopting the industrial revolution, and never came even close to matching the pace of Britain’s industrialization, but the kingdom of the Bourbons went through a parallel “intellectual” revolution that was no less radical and influential: “Les Lumieres”, or the Enlightenment. It started in the salons of the aristocracy, usually run by the ladies, and then it spread throughout French society. The “philosophes” believed, first and foremost, in the power of Reason and in Knowledge, as opposed to the religious and political dogmas. They hailed progress and scorned conservative attitudes. The mood changed dramatically, as these philosophers were able to openly say things that a century earlier would have been anathema. Scientific discoveries (Copernicus, Galileo, Newton), the exploration of the world, the printing press and a religious fatigue after so many religious wars led to cultural relativism: there are no dogmas, and only facts and logic should determine opinions. So they questioned authority (Aristotle, the Bible) across the board. Charles de Montesquieu, Denis Diderot, Voltaire and Rousseau favored a purely rational religion and carried out a moral crusade against intolerance, tyranny and superstition.

Julien LaMettrie was the ultimate materialist: he thought the mind is nothing but a machine (a computer, basically) and thoughts are due to the physical processes of the brain. There is nothing special about a mind or a life. Humans are just like all other animals.

Charles Bonnet speculated that the mind may not be able to influence the body, but might simply be a side-effect of the brain (“epiphenomenalism”).

Paul-Henri Holbach believed that humankind’s miseries are mostly caused by religion and superstition, that there is no God handing out rewards or punishment, that the soul dies when the body dies, that all phenomena can be understood in terms of the features of matter.

Georges Buffon concocted the first western account of the history of life and of the Earth that was not based on the Bible.

The American revolution (1776) was, ultimately, a practical application of the Enlightenment, a feasibility study of the ideas of the Enlightenment. The French Revolution (1789-94) was a consequence of the new political discourse, but also signaled an alliance between the rising bourgeoisie, the starving peasants and the exploited workers. Its outcome was that the “nation” replaced “God” and “King”: nationalism was born. By the turn of the century, the Enlightenment had also fathered a series of utopian ideologies, from Charles Fourier’s phalanxes to Claude Saint-Simon’s proto-socialism to Pierre Proudhon’s anarchy.

In marked contrast with the British and French philosophers, the Germans developed a more “spiritual” and less “materialistic” philosophy. The Germans were less interested in economy, society and politics, and much more interested in explaining the universe and the human mind, what we are and what is the thing out there that we perceive.

Immanuel Kant single-handedly framed the problem for future generations of philosophers. Noticing that the mind cannot perceive reality as it is, he believed that phenomena exist only insofar as the mind turns perceptions into ideas. The empirical world that appears to us is only a representation that takes place inside our mind. Our mind builds that representation thanks to some a-priori knowledge in the form of categories (such as space and time). These categories allow us to organize the chaotic flow of perceptions into an ordered meaningful world. Knowledge consists in categorizing perceptions. In other words, Kant said that knowledge depends on the structure of the mind.

Other German philosophers envisioned an even more “idealistic” philosophy.

Johann Fichte thought the natural world is constructed by an infinite self as a challenge to itself and as a field in which to operate. The Self needs the non-Self in order to be.

Friedrich Schelling believed in a fundamental underlying unity of nature, which led him to view Nature as God, and to deny the distinction between subject and object.

The spiritual theory of reality reached its apex with Georg-Wilhelm-Friedrich Hegel. He too believed in the unity of nature, that only the absolute (infinite pure mind) exists, and that everything else is an illusion. He proved it by noticing that every “thesis” has an “antithesis” that can be resolved at a higher level by a “synthesis”, and each synthesis becomes, in turn, a thesis with its own antithesis, which is resolved at a higher level of synthesis, and so forth. This endless loop leads to higher and higher levels of abstraction. The limit of this process is the synthesis of all syntheses: Hegel’s absolute. Reality is the “dialectical” unfolding of the absolute. Since we are part of the absolute as we develop our dialectical knowledge, it is, in a sense, the absolute that is trying to know itself. We suffer because we are alienated from the absolute instead of being united with it. Hegel applied the same “dialectical” method to history, believing that history is due to the conflict of nations, conflicts that are resolved on a higher plane of political order.

Arthur Schopenhauer (1819) opened a new dimension to the “idealistic” discourse by arguing that a human being is both a “knower” and a “willer”. As knowers, humans experience the world “from without” (the “cognitive” view). As free-willing beings, humans are also provided with a “view from within” (the “conative” view). The knowing intellect can only scratch the surface of reality, while the will is able to grasp its essence. Unfortunately, the will’s constant urge for ever more knowledge and action causes human unhappiness: we are victims of our insatiable will. In Buddhist-like fashion, Schopenhauer reasoned that the will is the origin of human suffering: the less one “wills”, the less one suffers. Salvation can come through a “euthanasia of the will”.

Ludwig Feuerbach inverted Hegel’s relationship between the individual and the Absolute and saw religion as a way to project the human experience (“species being”) into the concept of God.

Soren Kierkegaard (1846) saw philosophy and science as vain and pointless, because the thinker can never be a detached, objective, external observer: the thinker is someone who exists and is part of what is observed. Existence is both the thinker’s object and condition. He thought that philosophers and scientists missed the point. What truly matters is the pathos of existing, not the truth of Logic. Logic is defined by necessity, but existence is dominated by possibility. Necessity is a feature of being, possibility is a feature of becoming. He focused on the fact that existence is possibility, possibility means choice, and choice causes angst. We are trapped in an “aut-aut”, between the aesthetic being (whose life is paralyzed by multiple possibilities) and the ethic being (whose life is committed to one choice). The only way out of the impasse is faith in God.

Inventions and discoveries of this age include Alessandro Volta’s battery, a device that converts chemical energy into electricity, and John Dalton’s theory that matter is made of atoms of differing weights. By taking Newton to the letter, Pierre-Simon Laplace argued that the future is fully determined: given the initial conditions, every future event in the universe can be calculated. The primacy of empirical science (“positivism”) was championed by Auguste Comte, who described the evolution of human civilization as three stages, corresponding to three stages of the human mind: the theological stage (in which events are explained by gods, and kings rule); the abstract stage (in which events are explained by philosophy, and democracy rules); and the scientific (“positive”) stage (in which there is no absolute truth, but science provides generalizations that can be applied to the real world).

Hermann von Helmholtz offered a detailed picture of how perception works, one that emphasized how an unconscious process in the brain was responsible for turning sense data into thought and for mediating between perception and action.

In Mathematics, George Boole resuscitated Leibniz’s program of a “lingua characteristica” by applying algebraic methods to a variety of fields. His idea was that the systematic use of symbols eliminated the ambiguities of natural language. A number of mathematicians realized that the traditional (Euclidean) geometry was not the only possible geometry. Non-Euclidean geometries were developed by Carl-Friedrich Gauss, Nikolaj Lobachevsky (1826), Janos Bolyai (1829) and Georg Riemann (1854). The latter realized that the flat space of Euclidean geometry (the flat space used by Newton) was not necessarily the only possible kind of space: space could be curved, and he developed a geometry for curved space (in which even a straight line is curved, by definition). Each point of that space can be more or less curved, according to a “curvature tensor”.
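Boole’s algebraic treatment of logic can be illustrated with a short sketch in Python (the helper name `equivalent` is mine, purely illustrative): two logical expressions are “equal” precisely when they agree on every row of their truth table, so classical identities can be verified by exhaustive check.

```python
from itertools import product

def equivalent(f, g, n):
    """True if two n-variable Boolean functions agree on every truth assignment."""
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=n))

# De Morgan's law: not (a and b)  ==  (not a) or (not b)
assert equivalent(lambda a, b: not (a and b),
                  lambda a, b: (not a) or (not b), 2)

# Distributivity: a and (b or c)  ==  (a and b) or (a and c)
assert equivalent(lambda a, b, c: a and (b or c),
                  lambda a, b, c: (a and b) or (a and c), 3)
```

Replacing the ambiguous words of natural language with symbols manipulated by fixed rules, as here, is precisely what Boole had in mind.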

Somehow, the convergence of utopianism, idealism and positivism yielded Karl Marx’s historical materialism. Marx was fully aware that humans are natural beings who have to interact with nature (work) in order to survive. Labor converts the raw materials of nature into the products that help humans survive. But in the industrial society the difference between the time/cost of manufacturing a product and the price that people are willing to pay for it had created a “surplus value” that was making the capitalist class richer and richer, while hardly benefiting the working class at all. Marx set out to analyze the “alienation” caused to the working class by the fact that producer and product had been separated. He envisioned the society of his time as divided into two antagonistic classes: the proletariat and the bourgeoisie. And he envisioned the whole of human history as a conflict not of nations but of classes. His remedy was socialism: all citizens should own the tools of production. After socialism, the final stage of human history was to be communism: the full equality of a class-less society.

While human knowledge was expanding so rapidly, literature was entering the “romantic” age. The great poets of the age were William Blake and William Wordsworth in England, Friedrich Hoelderlin and Johann-Wolfgang Goethe in Germany, Giacomo Leopardi in Italy. With the exception of Carlo Goldoni’s comedies in Italy, theater was dominated by German drama: Gotthold-Ephraim Lessing, Friedrich von Schiller, Georg Buchner. The novel became a genre of equal standing with poetry and theater via Goethe’s “Wilhelm Meister” (1796), Stendhal’s “Le Rouge et Le Noir” (1830), Honore’ de Balzac’s “Le Pere Goriot” (1834), Emily Bronte’s “Wuthering Heights” (1847), Herman Melville’s “Moby Dick” (1851), Nikolaj Gogol’s “Dead Souls” (1852), Gustave Flaubert’s “Madame Bovary” (1857) and Victor Hugo’s “Les Miserables” (1862).

While painting was relatively uneventful compared with the previous age, despite the originality of works such as Francisco Goya’s “Aquelarre” (1821) and Jean-Francois Millet’s “The Gleaners” (1851), this was the age of classical music, which boasted the geniuses of Wolfgang-Amadeus Mozart, Franz-Peter Schubert and Ludwig Van Beethoven.

In the meantime, the world had become a European world. The partition of Africa (1885) had given Congo to Belgium, Mozambique and Angola to Portugal, Namibia and Tanzania to Germany, Somalia to Italy, Western Africa and Madagascar to France, and then Egypt, Sudan, Nigeria, Uganda, Kenya, South Africa, Zambia, Zimbabwe, Botswana to Britain. Then there were the “settler societies” created by the European immigrants who displaced the natives: Canada, USA, Australia, South Africa. In subject societies such as India’s (and, de facto, China’s), few Europeans ruled over huge masses of natives. The mixed-race societies of Latin America were actually the least “European”. There were fewer and shorter Intra-European wars but many more wars of conquest elsewhere. Europeans controlled about 35% of the planet in 1800, 67% in 1878, 84% in 1914.

Japan was the notable exception. It had been the least “friendly” to the European traders, and it became the first (and only) non-European civilization to “modernize” rapidly. In a sense, it became a “nation” in the European sense of the word. It was also the first non-European nation to defeat a European power (Russia). No wonder that the Japanese came to see themselves as the saviors of Asia: they were the only ones that had resisted European colonization.

To ordinary people, the age of wars among the European powers seemed to be only a distant memory. The world was becoming more homogeneous and less dangerous. One could travel from Cairo to Cape Town, from Lisbon to Beijing, with minimal formalities. It was “globalization” on a scale never seen before and not seen again for a century. Such a sense of security had not been felt since the days of the Roman empire, although, invisible to most, this was also the age of a delirious arms race the likes of which the world had never seen before.

No wonder that European population increased dramatically at the end of the 19th century. In 30 years, Germany’s population grew by 43%, Austria-Hungary’s by 35%, Britain’s by 26%. A continuous flow of people emigrated to the Americas.

After the French revolution, nationalism became the main factor of war. Wars were no longer feuds between kings, they were conflicts between peoples. This also led to national aspirations by the European peoples who did not have a country yet: notably Italians and Germans, who were finally united in 1861 and 1871 (but also the Jews, who had to wait much longer for a homeland). Nationalism was fed by mass education (history, geography, literature), which included, more or less subtly, the exaltation of the national past.

France lived its “Belle Epoque” (the 40 years of peace between 1871 and 1914). It was the age in which cafes (i.e., the lower classes) replaced the salons (i.e., the higher classes) as the cultural centers. And this new kind of cultural center witnessed an unprecedented convergence of sex, art and politics. Poetry turned towards “Decadentism” and “Symbolism”, movements pioneered by Charles Baudelaire’s “Les Fleurs du Mal” (1857), Isidore de Lautreamont’s “Les Chants de Maldoror” (1868), Arthur Rimbaud’s “Le Bateau Ivre” (1871), Paul Verlaine’s “Romances sans Paroles” (1874) and Stephane Mallarme’s “L’apres-midi d’un Faune” (1876). Painters developed “Impressionism”, which peaked with Claude Monet, and then “Cubism”, which peaked with Pablo Picasso, and, in between, original styles were pursued by Pierre Renoir, Georges Seurat, Henri Rousseau, Paul Gauguin and Henri Matisse. France had most of the influential artistic movements of the time. In the rest of Europe, painting relied on great individualities: Vincent van Gogh in Holland, Edvard Munch in Norway, Gustav Klimt in Austria and Marc Chagall in Russia. French writers founded “Dadaism” (1916) and “Surrealism” (1924), and an Italian in Paris founded “Futurism” (1909). They inherited the principle of the “Philosophes”: question authority and defy conventions, negate aesthetic and moral values. At the same time, they reacted against the ideological values of the Enlightenment itself: Dadaism exalted irrationality, Surrealism was fascinated by dreams, Futurism worshipped machines.
TM, ®, Copyright © 2003 Piero Scaruffi All rights reserved.

Berlin, in the meantime, had become not only the capital of a united Germany but also the capital of electricity. Germany’s pace of industrialization had been frantic. Werner Von Siemens founded Siemens in 1847. In 1866 that company invented the first practical dynamo. In 1879 Siemens demonstrated the first electric railway and in 1881 it demonstrated the first electric tram system. In 1887 Emil Rathenau founded Siemens’ main competitor, the Allgemeine Elektrizitäts-Gesellschaft (AEG), specializing in electrical engineering, whereas Siemens was specializing in communication and information. In 1890 AEG developed the alternating-current motor (invented in the USA by Nikola Tesla) and the generator, which made it possible to build the first power plants: alternating current made it easier to transmit electricity over long distances. In 1910 Berlin was the greatest center of electrical production in the world: Germany’s industrial output had surpassed France’s (in 1875) and Britain’s (in 1900). Berlin was becoming a megalopolis, as its population grew from 1.9 million in 1890 to 3 million in 1910.

Electricity changed the daily lives of millions of people, mainly in the USA, because it enabled the advent of appliances, for example Josephine Cochrane’s dishwasher (1886), Willis Carrier’s air conditioner (1902), and General Electric’s commercial refrigerator (1911). Life in the office also changed dramatically. First (in 1868) Christopher Latham Sholes introduced a practical typewriter that changed the concept of correspondence, and then (in 1885) William Burroughs introduced an adding machine that changed the concept of accounting.

Progress in transportation continued with Friedrich von Harbou’s dirigible (1873), Daimler and Maybach’s motorcycle (1885), Karl Benz’s gasoline-powered car (1886) and Wilbur and Orville Wright’s airplane (1903). But, more importantly, the USA introduced a new kind of transportation, not physical (of people) but virtual (of information). The age of communications was born with Samuel Morse’s telegraph (1844), Alexander Bell’s telephone (1876), Thomas Edison’s phonograph (1877) and Kodak’s first consumer camera (1886). Just as Louis Daguerre had invented the “daguerreotype” in 1839 but his invention had been improved mainly in the USA, so the Lumiere brothers invented cinema (in 1895) but the new invention soon became an American phenomenon. The most dramatic of these events was perhaps Guglielmo Marconi’s transatlantic radio transmission of 1901, when the world seemed to shrink.

“Creationist” views of the world had already been attacked in France by the “philosophes”. In the age of Progress, a new, much more scientific attack came from Britain.

Herbert Spencer attempted a synthesis of human knowledge that led him to posit the formation of order as a pervasive feature of the universe. Basically, the universe is “programmed” to evolve towards more and more complex states. In particular, living matter continuously evolves. The fittest forms of life survive. Human progress (wealth, power) results from a similar survival of more advanced individuals, organizations, societies and cultures over their inferior competitors.

Charles Darwin explained how animals evolved: through the combination of two processes, one of variation (the fact that children are not identical to the parents, and are not identical to each other) and one of selection (the fact that only some of the children survive). The indirect consequence of these two processes is “adaptation”, whereby species tend to evolve towards the configuration that can best cope with the environment. The “struggle for survival” became one of the fundamental laws of life. In a sense, Darwin had merely transferred Adam Smith’s economics to biology. But he had also introduced an important new paradigm: “design without a designer”. Nature can create amazingly complex and efficient organisms without any need for a “designer” (whether human or divine). Humans are used to the idea that someone designs, and then builds, an artifact. A solution to a problem requires some planning. But Darwin showed that Nature uses a different paradigm: it lets species evolve through the combined forces of variation and selection, and the result is a very efficient solution to the problem (survival). No design and no planning are necessary. It was more than a theory of evolution: it was a new way of thinking that was immediately applied to economics, sociology, history, etc.

Ernst Haeckel argued that “ontogeny recapitulates phylogeny”: the development of the body in the individual of a species (or ontogeny) summarizes the evolutionary development of that species (phylogeny).

Far less publicized, but no less dramatic, was the discovery of Gregor Mendel. He set out to explain why children do not inherit the average of the traits of their parents (e.g., a color in between the black eyes of the mother and the blue eyes of the father) but only the trait of one or the other (black or blue eyes). He came up with a simple but, again, revolutionary explanation: there are units of transmission of traits (which today we call “genes”), and one inherits not a mathematical combination of one’s parents’ traits but one unit or the other. Mendel introduced an important distinction: the “genotype” (the program that determines how an organism looks) versus the “phenotype” (the way the organism actually looks).
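Mendel’s explanation can be made concrete with a toy enumeration in Python (the allele letters and the helper name `cross` are illustrative, not Mendel’s notation): each parent contributes one of its two units, and all four combinations are equally likely.

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Enumerate the equally likely offspring genotypes of a cross
    (each parent passes on one of its two alleles)."""
    return [''.join(sorted(a + b)) for a, b in product(parent1, parent2)]

# Two carriers of a recessive trait: 'B' dominant (brown eyes), 'b' recessive (blue)
offspring = cross('Bb', 'Bb')
genotypes = Counter(offspring)   # the classic 1:2:1 genotype ratio
phenotypes = Counter('brown' if 'B' in g else 'blue' for g in offspring)
```

The phenotype counter comes out 3 brown to 1 blue: the offspring show one trait or the other, never a blend, which is exactly what Mendel observed in his pea plants.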

Similar progress was going on in the study of the human mind. Paul Broca studied brain lesions to understand the structure of the brain and how it determines human behavior and personality.

The acceleration in Physics had been dramatic since Newton’s unification of terrestrial and celestial Mechanics, but the age of steam and electrical power introduced the first strains on its foundations. In 1824, Sadi Carnot had worked out a preliminary science of heat, or Thermodynamics, and in 1864 James Clerk Maxwell unified electricity and magnetism, thus founding Electromagnetism. In 1887 Heinrich Hertz discovered radio waves, and in 1895 Wilhelm-Conrad Roentgen discovered X rays. Newton’s Physics had not been designed for these phenomena.

In 1896 radioactivity was discovered and led Physicists to believe that the atom was not indivisible, that it had its own structure. In 1900 Max Planck invented Quantum Theory by positing that energy can only be transmitted in discrete “quanta”. In 1905 Albert Einstein published “The Special Theory of Relativity”. In 1911 Ernest Rutherford showed how the atom is made of a nucleus and orbiting electrons.

Newton’s Physics viewed the world as a static and reversible system that undergoes no evolution, whose information is constant in time. Newton’s Physics was the science of being. But his Physics was not very useful to understand the world of machines, a dynamic world of becoming. Thermodynamics describes an evolving world in which irreversible processes occur: something changes and can never be undone. Thermodynamics was the science of becoming. The science of being and the science of becoming describe dual aspects of nature. Thermodynamics was born to study gases: systems made of a myriad of small particles in frantic motion. Newton’s Physics would require a dynamic equation for each of them, which is just not feasible. Thermodynamics describes a macroscopic system by global properties (such as temperature, pressure, volume). Global properties are due to the motion of the system’s particles (e.g., temperature is the average kinetic energy of the molecules of a system). They are fundamentally stochastic, which implies that the same macro-state can be realized by different micro-states (e.g., a gas can have the same temperature at different points in time even if its internal state is changing all the time). Sadi Carnot realized that a “perpetual-motion” machine is not possible: it is not possible to continuously convert energy from one form to another and back. The reason is that any transformation of energy has a “cost”, which came to be called “entropy”. That quantity became the real oddity of Thermodynamics. Everything else was due to a stochastic view of a complex system (as opposed to Newton’s deterministic view of a simple system), but entropy was a new concept, one that embodied a fundamental feature of our universe: things decay, and some processes are not reversible. Heat flows spontaneously from hot to cold bodies, but the opposite never occurs. You can dissolve a lump of sugar in a cup of coffee, but, once it is dissolved, you can never bring it back.
You may calculate the amount of sugar, its temperature and many other properties, but you cannot bring it back. Some happenings cannot be undone. The second law of Thermodynamics states that the entropy (of an isolated system) can never decrease. It is a feature of this universe that natural processes generate “entropy”. This translates into a formula that is not an equality. Newton’s Physics was built on the equal sign (something equals something else). Thermodynamics introduced the first law of nature that was an inequality.

Ludwig Boltzmann interpreted entropy as a measure of disorder in a system. He offered a statistical definition of entropy: the entropy of a macrostate is the logarithm of the number of its microstates. Entropy measures the very fact that many different microscopic states of a system may result in the same macroscopic state. One can interpret this fact as “how disordered the system is”. This interpretation combined with the second law of Thermodynamics led to the fear of an “eternal doom”: the universe must evolve in the direction of higher and higher entropy, thus towards the state of maximum entropy, which is absolute disorder, or the “heat death”.
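Boltzmann's statistical definition corresponds to his famous formula, where k is Boltzmann's constant and W is the number of microstates that realize the given macrostate:

```latex
S = k \ln W
```

The more microstates are compatible with a macrostate, the more "disordered" that macrostate is, and the higher its entropy.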

Maxwell’s Electromagnetism introduced another paradigm shift: the concept of field (pioneered by Michael Faraday). Maxwell proved that electricity and magnetism, apparently related to different phenomena, are, in reality, the same phenomenon. Depending on circumstances, one can witness only the electrical or only the magnetic side of things, but they actually coexist all the time. The electric force is created by changes in the magnetic field. The magnetic force is created by changes in the electric field. The oddity was that the mathematical expression of these relations between electric and magnetic forces turned out to be “field equations”, equations describing not the motion of particles but the behavior of fields. Changes in fields propagate as waves that radiate through space. Gravitation was no longer the only example of action at a distance: just as there was a “gravitational field” associated with any mass, so there turned out to exist an “electromagnetic” field associated with any electric charge. Light itself was shown to be made up of electromagnetic waves.
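Maxwell's "field equations", restated here in modern vector notation (a compact summary; Maxwell's own formulation used many more component equations):

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
\qquad
\nabla \cdot \mathbf{B} = 0
\qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
\qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

The two curl equations couple the change of each field to the other; in empty space they admit wave solutions traveling at c = 1/√(μ₀ε₀), which turns out to be the speed of light.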

Ernst Mach was another influential Physicist who had a powerful intuition. He envisioned the inertia of a body (the tendency of a body at rest to remain at rest and of a body in motion to continue moving in the same direction) as the consequence of a relationship of that body with the rest of the matter in the universe. Basically, he thought that each body in the universe interacts with all the other bodies in the universe, even at gigantic distances, and its inertia is the sum of those myriad interactions.

The last of the major ideas in Physics before Relativity came from Henri Poincare, who pioneered “chaos” theory when he pointed out that a slight change in the initial conditions of some equations results in large-scale differences. Some systems live “at the edge”: a slight change in the initial conditions can have catastrophic effects on their behavior.

The intellectual leadership, though, was passing to the Mathematicians.

By inventing “set theory”, Georg Cantor emancipated Mathematics from its traditional domain (numbers). He also introduced “numbers” to deal with infinite quantities (“transfinite” numbers) because he realized that space and time are made of infinite points, and that, between any two points, there exists always an infinite number of points. Nonetheless, an infinite series of numbers can have a finite sum. These were, after all, the same notions that, centuries before Cantor, had puzzled Zeno. Cantor gave them mathematical legitimacy.

Gottlob Frege (1884) aimed at removing intuition from arithmetic. He thus set out, just like Leibniz and Boole before him, to replace natural language with the language of Logic, “predicate calculus”. Extending Cantor’s program, Frege turned Mathematics itself into a branch of Logic: using Cantor’s “sets”, he reconstructed the cardinal numbers by a purely logical method that did not rely on intuition.

Frege realized that Logic was about the “syntax”, not the “semantics” of propositions. An expression has a “sense” (or intension) and a “reference” (or extension): “red” is the word for the concept of redness and the word for all the things that are red. In some cases, expressions with different senses actually have the same referent: for example, “the star of the morning” and “the star of the evening” both refer to Venus. In particular, propositions of Logic can have many senses, but only one of two referents: true or false.

Giuseppe Peano was pursuing a similar program at the same time, an “axiomatization” of the theory of natural numbers.

Charles Peirce gave a pragmatic definition of “truth”: something is true if it can be used and validated. Thus, truth is defined by consensus. Truth is not agreement with reality, it is agreement among humans. Truth is “true enough”. Truth is not eternal. Truth is a process, a process of self-verification. In general, he believed that an object is defined by the effects of its use: a definition that works well is a good definition. An object “is” its behavior. The meaning of a concept consists in its practical effects on our daily lives: if two ideas have the same practical effects on us, they have the same meaning.

Peirce was therefore more interested in “beliefs” than in “truths”. Beliefs lead to habits that get reinforced through experience. He saw that the process of habit creation is pervasive in nature: all matter can be said to acquire habits, except that the “beliefs” of inert matter have been fixed to the extent that they can’t be changed anymore. Habit is, ultimately, what makes objects what they are. It is also what makes us what we are: I am my habits. Habits are progressively removing chance from the universe. The universe is evolving from an original chaos in which chance prevailed and there were no habits towards an absolute order in which all habits have been fixed.

At the same time, Peirce realized that Frege’s theory of sense and referent was limited. Peirce introduced the first version of “semiotics”, which focused on what signs are. An index is a sign that bears a causal relation with its referent (for example, cigarette smoke “means” that someone was in the room). An icon is a sign that bears a relation of similarity with its referent (for example, the image of a car refers to the car). A symbol is a sign that bears a purely conventional relation with its referent (for example, the letters “car” refer to a car). A sign refers to an object only through the mediation of other signs (or “interpretants”). There is an infinite regression of interpretants from the signifier (the sign) to the signified (the referent). A dictionary defines a word in terms of other words, which are defined in terms of other words, which are defined in terms of other words, and so forth. Peirce believed that knowing is “semiosis” (making signs) and semiosis is an endless process.

Philosophy, meanwhile, was less interested in Logic and more interested in the human condition, pursuing the “existentialist” direction that Schopenhauer and Kierkegaard had inaugurated.

Friedrich Nietzsche believed that humans are driven by the “will to power”, an irresistible urge to order the course of one’s experiences (an extension of Schopenhauer’s will to live). All living beings strive for a higher order of their living condition to overcome their present state’s limitations. Human limitations are exemplified by Science: Science is only an interpretation of the world. Truth and knowledge are only relative to how useful they are to our “will to power”. He viewed Christian morality as a device invented by the weak to assert their will to power over the strong, a “slave morality”. He believed that Christian values had become obsolete (“God is dead”) and advocated a new morality founded on the ideal of the “superman”, who rises above the masses and solves the problems of this world, not of the otherworld.

Henri Bergson was, instead, a very spiritual philosopher, for whom reality was merely the eternal flow of a pantheistic whole. This flow has two directions: the upward flow is life, the downward flow is inert matter. Humans are torn between Intellect and Intuition: Intellect is life observing inert matter (in space), whereas Intuition is life observing life (in time). Intellect can “understand” inert matter, but only Intuition can “grasp” life. In order to understand matter, Intellect breaks it down into objects located in space. Intuition, instead, grasps the flow of life as a whole in time.

Francis-Herbert Bradley was the last major “idealist”. He argued that all categories of science (e.g., space and time) can be proven to be contradictory, which proves that the world is a fiction, a product of the mind. The only reality has to be a unity of all things, the absolute.

Inevitably, the focus of knowledge shifted towards the psyche.

William James adapted Peirce’s “pragmatism” to the realm of the mind. He believed that the function of mind is to help the body to live in an environment, just like any other organ. The brain is an organ that evolved because of its usefulness for survival. The brain is organized as an associative network, and associations are governed by a rule of reinforcement, so that it creates “habits” out of regularities (stimulus-response patterns). A habit gets reinforced as it succeeds. The function of thinking is pragmatic: to produce habits of action. James was intrigued by the fact that the brain, in doing so, also produced “consciousness”, but thought that mental life is not a substance, it is a process (“the stream of consciousness”).

Edward Thorndike postulated the “law of effect”: animals learn based on the outcome of their actions. He envisioned the brain as a network: learning occurs when elements are connected. Behavior is due to the association of stimuli with responses that is generated through those connections. A habit is a chain of “stimulus-response” pairs.

Wilhelm-Max Wundt had founded Psychology to study the psyche via experiments and logic, not mere speculation. The classical model of Psychology was roughly this. Actions have a motive. Motives are hosted in our minds and controlled by our minds. Motives express an imbalance between desire and reality that the mind tries to remedy by changing the reality via action. An action, therefore, is meant to restore the balance between reality and our desires. But what about dreams?

Sigmund Freud was less revolutionary than he seemed to be, because, in principle, he simply applied the classical model of Psychology. He decided that dreams have a motive, that those motives are in the mind, and that they are meant to remedy an imbalance. Except that the motives of dreams are not conscious: the mind contains both conscious motives and unconscious motives. There is a repertory of motives that our mind, independent of our will, has created over the years, and they participate daily in determining our actions. Freud’s revolution was in separating motive and awareness. A dream is only apparently meaningless: it is meaningless if interpreted from the conscious motives. But the dream is perfectly logical if one considers also the unconscious motives. The meaning of dreams is hidden and reflects memories of emotionally meaningful experiences. Dreams are not prophecies, as ancient oracles believed, but hidden memories. Psychoanalysis was the discipline invented by Freud to sort out the unconscious mess.

Freud divided the self into different parts that coexist. The ego perceives, learns and acts consciously. The super-ego is the (largely unconscious) moral conscience which was created during childhood by parental guidance as an instrument of self-repression. The id is the repertory of unconscious memories created by “libido”.

Somewhat unnecessarily, Freud painted a repulsive picture of the human soul. He believed that the main motive was “libido” (sexual desires) and that a child is, first and foremost, a sexual being. As parents repress the child’s sexuality, the child undergoes oral, anal and phallic stages. Boys desire sex with their mother and are afraid their father wants to castrate them. Girls envy the penis and are attracted to their father. And so forth.

Carl Jung shifted the focus towards a different kind of unconscious, the collective unconscious. He saw motives not so much in the history of the individual as in the history of the entire human race. His unconscious is a repertory of motives created over the millennia and shared by all humankind. Its “archetypes” spontaneously emerge in all minds. All human brains are “wired” to create some myths rather than others. Thus mythology is the key to understanding the human mind, because myths are precisely the keys to unlock those motives. Dreams reflect this collective unconscious, and therefore connect the individual with the rest of humankind and its archaic past. For Jung, the goal of Psychoanalysis is a spiritual renewal through the mystical connection with our primitive ancestors.

Another discipline invented at the turn of the century was Hermeneutics. Wilhelm Dilthey argued that human knowledge can only be understood by placing the knower’s life in its historical context. Understanding a text implies understanding the relationship between the author and the author’s age. This applies in general to all cultural products, because they are all analogous to written texts.

Ferdinand de Saussure was the father of “Structuralism”. The meaning of any human phenomenon (e.g., language) lies in the network of relationships that it is part of. A sign is meaningful only within the entire network of signs, and the meaning of a sign “is” its relationship to other signs. Language is a system of signs having no reference to anything outside itself. He also separated “parole” (a specific utterance in a language, or a speaker’s performance) from “langue” (the entire body of the language, or a speaker’s competence), thus laying the foundations for Linguistics.

Edmund Husserl’s aim was to found “Phenomenology”, the science of phenomena. He believed that the essence of events is not their physical description provided by science, but the way we experience them. In fact, science caused a crisis by denying humans the truth of what they experience, by moving away from phenomena as they are. He pointed out that consciousness is “consciousness of”: it correlates the act of knowing (“noesis”) of the subject and the object that is known (“noema”). The self knows a phenomenon “intuitively”. The essence (“eidos”) of a phenomenon is the sum of all possible “intuitive” ways of knowing that phenomenon. The eidos can be achieved only after “bracketing out” the physical description of the phenomenon, only after removing the pollution of science from the human experience, so that the self can experience a purely transcendental knowledge of the phenomenon. This would restore the unity of subject and object that science separated.

In Physics, a number of ideas were converging towards the same view of the world. Henri Poincare showed that the speed of light has to be the maximum speed and that mass depends on speed. Hendrik Lorentz unified Newton’s equations for the dynamics of bodies and Maxwell’s equations for the dynamics of electromagnetic waves in one set of equations, the “Lorentz transformations”. These equations, which were hard to dispute because both Newton’s and Maxwell’s theories were confirmed by countless experiments, contained a couple of odd implications: bodies seemed to contract with speed, while clocks seemed to slow down.

Albert Einstein devised an elegant unification of all these ideas that matched, in scope, the one provided two centuries earlier by Newton. He used strict logic. His axioms were that the laws of nature must be uniform, that those laws must be the same in all frames of reference that are “inertial” (at rest or moving in uniform linear motion), and that the speed of light is the same in all directions. He took the oddities of the Lorentz transformations literally: length and duration appear different to different observers, depending on their state of motion, because space and time are relative. “Now” and “here” became meaningless concepts. The implications of his axioms were even more powerful. All physical quantities were now expressed in four dimensions, a time component and a three-dimensional space component. One quantity, in particular, represented both energy and momentum, depending on the space-time coordinate that one examined. This also yielded the equivalence between mass and energy (E=mc2). Time does not flow (no more than space does): it is just a dimension. A life is a series of points in space-time, points that have both a spatial and a temporal component.
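The Lorentz transformations mentioned above, for a frame moving at speed v along the x axis, together with the mass-energy relation:

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

E = mc^2
```

Since γ > 1 for any v > 0, moving rods contract and moving clocks slow down, exactly the "oddities" that Lorentz had found in his equations.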

Einstein’s world was still Newton’s world, though, in some fundamental ways. For example, it was deterministic: the past determines the future. There was one major limitation: because nothing can travel faster than light, there is a limit to what can happen in one’s life. Each observer’s history is constrained by a cone of light within the space-time continuum radiating from the point (space and time) where the observer “is”.

Einstein’s next step was to look for a science that was not limited to “inertial” systems. He believed that phenomena should appear the same for all systems accelerated with respect to one another. His new formulas had startling new implications. The dynamics of the universe was reduced to the interaction between masses and the geometry of space-time: masses curve space-time, and the curvature of space-time determines how masses move. Space-time is warped by all the masses that it is studded with. Every object left to itself moves along a “geodesic” of space-time (the shortest route between two points on the warped surface of space-time). It so happens that space-time is warped, and thus objects appear to be “attracted” by the objects in space-time that have warped it. But each object is simply moving on a geodesic (the equivalent of a straight line in traditional “flat” space). It is space-time that is curved, not the geodesic (the trajectory) of the body. Space-time “is” the gravitational field. Einstein thus reduced Physics to Geometry. The curvature of space-time is measured by a “curvature tensor” (as in Riemann’s geometry) such that each point in space-time is described by ten numbers (the “metric tensor”). If the metric tensor is reduced to zero curvature, one obtains traditional Physics in traditional flat space. Curvature (i.e., a gravitational field) also causes clocks to slow down and light to be deflected.
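The interplay between matter and geometry is summarized by Einstein's field equations, where the Einstein tensor G (built from the curvature) responds to the stress-energy tensor T (the distribution of mass and energy):

```latex
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

The "ten numbers" per point mentioned in the text are the ten independent components of the symmetric metric tensor g, from which the curvature is computed.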

Surprisingly, Einstein’s Relativity, which granted a special status to the observer, re-opened the doors to Eastern spirituality. Nishida Kitaro was perhaps the most distinguished Eastern philosopher to attempt a unification of western science and Zen Buddhism. In Kitaro’s system, western science is like a robot without feelings or ethics that provides the rational foundations for life, while Zen provides the feelings and the ethics. “Mu” is the immeasurable moment in space-time (“less than a moment”) that has to be “lived” in order to reach the next “mu”. The flow of “mu” creates a space-time topology. Mu’s infinitesimally brief presence creates past, present, and future. The “eternal now” contains one’s whole being and also the being of all other things. The present is merely an aspect of the eternal. The eternal generates a present all the time. Mu also creates self-consciousness and free will. There is a fundamental unity of the universe, in particular between the self and the world. Each self and each thing are expressions of the same reality, God. The self is not a substance: it is nothingness (“to study the self is to forget the self”). Religion, not science, is the culmination of knowledge. It is also the culmination of love.

The European countries (and at least two of their former colonies, Brazil and the USA) experienced an unprecedented boom in literature. The great novels of the time expanded over the genres invented by the previous generations: Leo Tolstoj’s “War and Peace” (1869), George Eliot’s “Middlemarch” (1872), Emile Zola’s “L’Assommoir” (1877), Fyodor Dostoevsky’s “Brothers Karamazov” (1880), Joaquim-Maria Machado de Assis’ “Memorias Postumas” (1881), Joris Huysmans’ “A Rebours” (1884), Perez Galdos’ “Tristana” (1892), Jose-Maria Eca de Queiros’ “Casa de Ramires” (1897), Thomas Mann’s “Buddenbrooks” (1901), Henry James’ “The Golden Bowl” (1904), Luigi Pirandello’s “Il Fu Mattia Pascal” (1904), Joseph Conrad’s “Nostromo” (1904), Maksim Gorkij’s “The Mother” (1907) and Franz Kafka’s “Der Prozess” (1915).

Theatre was largely reinvented both as a realist and as a fantastic art through Henrik Ibsen’s “The Wild Duck” (1884), Alfred Jarry’s “Ubu Roi” (1894), August Strindberg’s “A Dream Play” (1902), Anton Chekhov’s “The Cherry Orchard” (1904), Gerhart Hauptmann’s “The Weavers” (1892), Arthur Schnitzler’s “Reigen” (1896), Frank Wedekind’s “Pandora’s Box” (1904) and Bernard Shaw’s “Pygmalion” (1914).

Poetry works outside of France’s “isms” ranged from Robert Browning’s “The Ring And The Book” (1869) to Gerard-Manley Hopkins’ “The Wreck Of The Deutschland” (1876), from Ruben Dario’s “Prosas Profanas” (1896) to Giovanni Pascoli’s “Canti di Castelvecchio” (1903), from Antonio Machado’s “Campos de Castilla” (1912) to Rabindranath Tagore’s “Gitanjali” (1913). In the new century, France still led the way of literary fashion with Guillaume Apollinaire’s “Alcools” (1913) and Paul Valery’s “La Jeune Parque” (1917).

Classical music reflected the nationalist spirit of the age (Richard Wagner in Germany, Hector Berlioz in France, Modest Moussorgsky in Russia, Giuseppe Verdi in Italy, Antonin Dvorak in Bohemia, Fryderyk Chopin in Poland, Ferencz Liszt in Hungary) and the impact of Beethoven’s symphonies on the German-speaking world (Johannes Brahms, Richard Strauss, Anton Bruckner and Gustav Mahler).

At the beginning of the new century, a number of compositions announced that the classical format was about to exhaust its mission: Aleksandr Skrjabin’s “Divine Poem” (1905), Arnold Schoenberg’s “Pierrot Lunaire” (1912), Claude Debussy’s “Jeux” (1912), Igor Stravinskij’s “Le Sacre du Printemps” (1913), Charles Ives’ “Symphony 4” (1916), Sergej Prokofev’s “Classical Symphony” (1917) and Erik Satie’s “Socrates” (1918).

All the progress in Science, Philosophy and the Arts did not help avert a new international war, one so large that it was called a “world war”. Its immediate causes (1914) were insignificant. The real causes were the “nations” themselves. The nationalistic spirit caused the confrontation, and the confrontation caused a massive arms race, and this race turned each European nation into a formidable war machine. Soldiers were transported by battleship, submarine, zeppelin, air fighter, train, car and tank. Enemies were killed with grenades, cannons, machine guns, torpedoes, bombs. 60 million men were mobilized. 8 million died. Serbia, Russia, France, Britain, Japan, Canada, Australia, Italy (1915), China (1917) and the USA (1917) won against Austria, Germany and Turkey. Russia was allied with the winners, but had to withdraw to take care of its own revolution (1917).

The post-war age opened with three new political “isms”: Vladimir Ilic Lenin’s communism (1917), Benito Mussolini’s fascism (1922) and Adolf Hitler’s nazism (1933). Mussolini and Hitler capitalized on the nationalist spirit of the two youngest nations of Europe. The Russian revolution was two revolutions in one. The first one (in February) was caused by food shortages, and involved women, workers and soldiers. The second one (in October) was in reality a coup by Lenin’s Bolshevik Party, determined to apply Leon Trotsky’s program of “permanent revolution” (bypass the bourgeois-democratic society and aim directly for the dictatorship of the proletariat). Lenin inaugurated a collectivist economy supported by a terror apparatus. Lenin was succeeded by Joseph Stalin, under whose rule Marxism-Leninism became the euphemism for a vast, pervasive, centralized bureaucracy in charge of every aspect of life (the “nomenklatura” system). The communist goal required the mobilization of all human and material resources to generate economic power which guaranteed political and military power.

The three “isms” had something in common, besides the totalitarian regime: they soon became ideologies of mass murder. Lenin’s was scientific, with the goal to create absolute dictatorship (of the proletariat) via absolute violence; Stalin’s was political, to safeguard and increase his own power; Hitler’s was racist, to annihilate inferior races; Mao’s was idealist, to create a just society.

But they did not invent genocide: 2.4 million Chinese died in the 1911 revolution and 2 million would die in the civil war of 1928-1937, the Ottoman empire slaughtered 1.2 million Armenians in 1915, and World War I killed 8 million soldiers. Britain had already experimented with concentration camps in the Boer war (1899-1902).

However, the numbers escalated with the new ideologies of mass murder: Lenin’s “revolution” killed 5 million; Stalin’s purges of 1936-37 killed 13 million; World War 2 killed 55 million, of which millions in Hitler’s gas chambers; Mao’s “Great Leap Forward” (1958-1961) caused the death of perhaps 30 million and his “cultural revolution” (1966-1969) caused the death of perhaps 11 million.

In the meantime, Physics was still in a fluctuating state.

Niels Bohr (1913) showed that electrons are arranged in concentric shells outside the nucleus of the atom, with the number of electrons determining the atomic number of the atom and the outermost shell of electrons determining its chemical behavior. Ernest Rutherford (1919) showed that the nucleus of the atom contains positively charged particles (protons) in equal number to the number of electrons. In 1932 James Chadwick showed that the nucleus of the atom contains electrically neutral particles (neutrons): isotopes are atoms of the same element (containing the same number of electrons/protons) but with different numbers of neutrons. Their model of the atom was another case of Nature preferring only discrete values instead of all possible values. (Max Planck had shown in 1900 that atoms can emit energy only in discrete amounts).

At this point, Physics was aware of three fundamental forces: the electromagnetic force, the gravitational force and now the nuclear force.

The theory that developed from these discoveries was labeled “Quantum Mechanics”. It was born to explain why Nature prefers some “quanta” instead of all possible values. Forces are due to exchanges of discrete amounts of energy (“quanta”).

The key intuition came in 1923, when Louis De Broglie argued that matter can be viewed both as particles and as waves: they are dual aspects of the same reality. This also explained the energy-frequency equivalence discovered by Albert Einstein in 1905: the energy of a photon is proportional to the frequency of the radiation.
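In symbols, Einstein's energy-frequency relation and De Broglie's matter-wave relation read (h is Planck's constant, ν the frequency, p the momentum, λ the wavelength):

```latex
E = h\nu \qquad\qquad \lambda = \frac{h}{p}
```

The first equation ties the particle picture (energy) to the wave picture (frequency); the second assigns a wavelength to any moving particle.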

Max Born realized (1926) that the “wave” corresponding to a particle was a wave of probabilities: it was a representation of the state of the particle. Unlike a point-like particle, a wave can be in several places at the same time. The implication was that the state of a particle was not a specific value, but a range of values. A “wave function” specifies the values that a certain quantity can assume, and, in a sense, states that the quantity “has” all those values (e.g., the particle “is” in all the places compatible with its wave function). The “wave function” summarizes (“superposes”) all the possible alternatives. Erwin Schroedinger’s equation describes how this wave function evolves in time, just like Newton’s equations describe how a classical physical quantity evolves in time. The difference is that, at every point in time, Schroedinger’s equation yields a range of values (the wave function) not a specific value.
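In symbols, the wave function ψ evolves deterministically according to Schroedinger's equation, while (as Born showed) its squared magnitude gives the probability of finding the particle at a given place:

```latex
i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
\qquad\qquad
P(x) = |\psi(x)|^2
```

Here Ĥ is the Hamiltonian operator (the total energy of the system) and ħ is the reduced Planck constant.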

The probability associated with each of those possible values is the probability that an observation would reveal that specific value (e.g., that an observation would find the particle in one specific point). This was a dramatic departure for Physics. Determinism was gone, because the state of a quantum system cannot be determined anymore. Chance had entered the picture, because, when a Physicist performs an observation, Nature decides randomly which of the possible values to reveal. And a discontinuity had been introduced between unobserved reality and observed reality: as long as nobody measures it, a quantity has many values (e.g., a particle is in many places at the same time), but, as soon as someone measures it, the quantity assumes only one of those values (e.g, the particle is in one specific point).

The fact that the equations of different quantities were linked together (a consequence of Einstein’s energy-frequency equivalence) had another odd implication, expressed by Werner Heisenberg’s “uncertainty principle”: there is a limit to the precision with which we can measure quantities. The more precise we want to be about a certain quantity, the less precise we will be about some other quantity.
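For the pair position-momentum, Heisenberg's principle takes the form (ħ is the reduced Planck constant):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

The more sharply the position is known (smaller Δx), the less sharply the momentum can be known (larger Δp), and vice versa.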

Space-time turns out to be discrete: there is a minimum size to lengths and intervals, below which Physics ceases to operate. Thus, there is a limit to how small a physical system can be.

Later, Physicists would realize that vacuum itself is unrecognizable in Quantum Mechanics: it is not empty.

Besides randomness, which was already difficult to digest, Physicists also had to accept “non-locality”: a system can affect a distant system despite the fact that they are not communicating. If two systems get entangled in a wave, they will remain so forever, even if they move to the opposite sides of the universe, at a distance at which a signal cannot travel in time to tell one what the other one is doing.

If this were not enough, Paul Dirac (1928) realized that the equations of Quantum Mechanics allowed for “anti-matter” to exist next to ordinary matter: for example, a positively charged particle (the “positron”) that looks just like the electron but has the opposite charge. Paul Dirac’s equations for the electron in an electromagnetic field, which combined Quantum Mechanics and Special Relativity, transferred Quantum Theory outside Mechanics, into Quantum Electrodynamics.

Perhaps the most intriguing aspect of Quantum Mechanics is that a measurement causes a “collapse” of the wave function. The observer changes the course of the universe by the simple act of looking at a particle inside a microscope.

This led to different “interpretations” of Quantum Mechanics. Niels Bohr argued that maybe only phenomena are real. Werner Heisenberg, instead, thought that maybe the world “is” made of possibility waves. Paul Dirac thought that Quantum Mechanics simply represents our (imperfect) knowledge of a system. Hugh Everett took the multiple possible values of each quantity literally, and hypothesized that we live in an ever multiplying “multiverse”: at each point in time, the universe splits according to all the possible values of a measurement. In each new universe one of the possible values is observed, and life goes on.

John Von Neumann asked at which point the collapse occurs. If a measurement causes Nature to choose one value, and only one, among the many that are allowed by Schroedinger’s equation, “when” does this occur? In other words, where in the measuring apparatus does this occur? The measurement is performed by having a machine interact with the quantum system and eventually deliver a visual measurement to the human brain. Somewhere in this process a range of possibilities collapses into one specific value. Somewhere in this process the quantum world of waves collapses into the classical world of objects. Measurement consists in a chain of interactions between the apparatus and the system, whereby the states of the apparatus become dependent on the states of the system. Eventually, states of the observer’s consciousness are made dependent on states of the system, and the observer “knows” what the value of the observable is. If we proceed backwards, this seems to imply that the “collapse” occurs in the conscious being, and that consciousness creates reality.

Einstein was the main critic: he believed that Quantum Mechanics was an incomplete description of the universe, and that some “hidden variables” would eventually turn it into a deterministic science just like traditional science and his own Relativity.

From the beginning, it was obvious what the biggest challenge for Quantum Mechanics would be: discovering the “quantum” of gravitation. Einstein had explained gravitation as the curvature of space-time, but Quantum Mechanics was founded on the premise that each force is due to the exchange of quanta: gravity did not seem to work that way, though.

A further blow to the traditional view of the universe came from Edwin Hubble’s discovery (1929) that the universe is expanding. Not only is the Earth moving around the Sun, and the Sun around the center of our galaxy: all galaxies are moving away from each other.

The emerging discipline was Biology. By the 1940s Darwin’s theory of evolution (variation plus selection) had finally been wed to Mendel’s theory of genetic transmission (mutation), yielding the “modern synthesis”. Basically, Mendel’s mutation explained where Darwin’s variation came from. At the same time, biologists focused on populations, not individuals, using the mathematical tool of probabilities. “Population Genetics” was born.

Erwin Schroedinger noticed an apparent paradox in the biological world: as species evolve and as organisms grow, life creates order from disorder, thus contradicting the second law of Thermodynamics. The solution to this paradox is that life is not a “closed” system: the biological world is a world of energy flux. An organism stays alive (i.e., maintains its highly organized state) by absorbing energy from the outside world and processing it to decrease its own entropy (i.e., increase its own order). “Living organisms feed upon negative entropy”. Life is “negentropic”. The effect of life’s negentropy is that entropy increases in the outside world. The survival of a living being depends on increasing the entropy of the rest of the universe.

However, the lives of ordinary people were probably more affected by a humbler kind of science that became pervasive: synthetic materials. In 1907 Leo Baekeland invented the first plastic (“bakelite”). In 1925 cellophane was introduced and in 1930 it was the turn of polystyrene. In 1935 Wallace Carothers invented nylon.

The influence of Einstein can also be seen on Samuel Alexander, who believed in “emergent evolution”: existence is hierarchically arranged and each stage emerges from the previous one. Matter emerges from space-time, life emerges from matter, mind emerges from life, God emerges from mind.

Arguing against idealism, materialism and dualism alike, Bertrand Russell took Einstein literally and adopted the view that there is no substance (“neutral monism”): everything in the universe is made of space-time events, and events are neither mental nor physical. Matter and mind are different ways of organizing space-time.

Elsewhere, he conceived of consciousness as a sense organ that allows us to perceive some of the processes that occur in our brain. Consciousness provides us with direct, immediate awareness of what is in the brain, whereas the senses “observe” what is outside the brain. What a neurophysiologist really sees while examining someone else’s brain is part of her own (the neurologist’s) brain.

But Bertrand Russell was perhaps more influential in criticizing Frege’s program. He found a paradox that seemed to terminate the program to formalize Mathematics: the class of all the classes that are not members of themselves is both a member and not a member of itself (just as the barber who shaves all those who do not shave themselves both shaves and does not shave himself). He solved the paradox (and other similar paradoxes, such as the proposition “I am lying”, which is true if it is false and false if it is true) by introducing a “theory of types”, which dissolves such contradictions by stratifying classes into levels, so that a class can only contain members of a lower level and can never refer to itself.
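The barber version of the paradox can even be checked mechanically: whichever answer is assigned to “does the barber shave himself?”, the village rule is violated. A minimal sketch (a hypothetical illustration, not from the source):

```python
# The village rule: the barber shaves exactly those who do not shave themselves.
# Try both possible answers to "does the barber shave himself?" and check each
# against the rule; neither is consistent -- that is Russell's paradox in miniature.
def consistent(barber_shaves_himself: bool) -> bool:
    # The rule demands: shaves(barber, barber) == not shaves(barber, barber)
    return barber_shaves_himself == (not barber_shaves_himself)

print(any(consistent(answer) for answer in (True, False)))  # False: no answer works
```

Russell’s theory of types escapes the contradiction precisely by outlawing the self-application that the function above tries to evaluate.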

Ludwig Wittgenstein erected another ambitious logical system. Believing that most philosophical problems are non-issues created by linguistic misunderstandings, he set out to investigate the nature of language. He concluded that the meaning of the world cannot be understood from inside the world, and thus metaphysics cannot be justified from inside the world (no more and no less than religion or magic). Mathematics also lost some of its appeal: it cannot be grounded in the world, therefore it is but a game played by mathematicians.

Wittgenstein saw that language has a function, that words are tools. Language is a game between people, and it involves more than a mere transcription of meaning: it involves assertions, commands, questions, etc. The meaning of a proposition can only be understood in its context, and the meaning of a word is due to the consensus of a society. To understand a word is to understand a language.

Edward Sapir argued that language and thought influence each other. Thought shapes language, but language also shapes thought. In fact, the structure of a language exerts an influence on the way its speakers understand the world. Each language contains a “hidden metaphysics”, an implicit classification of experience, a cultural model, a system of values. Language implies the categories by which its speakers not only communicate but also think.

Lev Vygotsky reached a similar conclusion from a developmental viewpoint: language mediates between society and the child. Language guides the child’s cognitive growth. Thus, cognitive faculties are merely internalized versions of social processes that we learned via language as children. Thus, one’s cognitive development (way of thinking) depends on the society in which she grew up.

Something similar to the wave/particle dualism of Physics was taking place in Psychology. Behaviorists such as John Watson, Ivan Pavlov and Burrhus Skinner believed that behavior is due to stimulus-response patterns. Animals learn how to respond to a stimulus based on reward/punishment, i.e. via selective reinforcement of random responses. All of behavior can be reduced to such “conditioned” learning. This also provided an elegant parallel with Darwinian evolution, which is also based on selection by the environment of random mutations. Behaviorists downplayed mind: thoughts have no effect on our actions.

Cognitivists such as Max Wertheimer, Wolfgang Kohler and Karl Lashley (the “gestalt” school) believed just the opposite: an individual stimulus does not cause an individual response. We perceive (and react to) “form”, as a whole, not individual stimuli. We recognize objects not by focusing on the details of each image, but by focusing on the image as a whole. We solve problems not by breaking them down into more and more minute details, but via sudden insight, often by restructuring the field of perception. Cognitivists believed that the processing (thought) between input and output was the key to human behavior, whereas Behaviorists believed that behavior was just a matter of linking outputs with inputs.

Cognitivists conceived the brain as a holistic system. Functions are not localized but distributed around the brain. If a piece of the brain stops working, the brain as a whole may still be working. They envisioned memory as an electromagnetic field, and a specific memory as a wave within that field.

Otto Selz was influenced by this school when he argued that solving a problem entails recognizing the situation and filling in the gaps: the situation carries excess information that contains the solution. Thus solving a problem consists in comprehending it, and comprehending it consists in reducing the current situation to a past situation. Once we “comprehend” it, we can also anticipate what comes next: inferring is anticipating.

Last, but not least, Frederic Bartlett suggested that memory is not a kind of storage, because it evidently does not retain the individual words and images. Memory “reconstructs” the past. We are perfectly capable of describing a scene or a novel or a film even though we cannot remember the vast majority of the details. Memory has “encoded” the past in an efficient format of “schemata” that bear little resemblance to the original scenes and stories, but that take little space and make it easy to reconstruct them when needed.

Kurt Goldstein’s theory of disease is also an example of cognitivist thinking. Goldstein took issue against dividing an organism into separate “organs”: it is the whole that reacts to the environment. A “disease” is the manifestation of a change in the relationship between the organism and its environment. Healing is not a “repair”, but an adaptation of the whole organism to the new state. A sick body is, in fact, a system that is undergoing global reorganization.

Jean Piaget focused entirely on the mind, and precisely on the “growth” of the mind. He realized that, during our lifetime, the mind grows, just like the body grows. For him cognition was self-regulation: organisms need to constantly maintain a state of equilibrium with their environment.

Piaget believed that humans achieve that equilibrium through a number of stages, each stage corresponding with a reorganization of our cognitive life. This was not a linear, gradual progress of learning, but a discontinuous process of sudden cognitive jumps. Overall, the growth of the mind was a transition from the stage of early childhood, in which the dominant factor is perception, which is irreversible, to the stage of adulthood in which the dominant factor is abstract thought, which is reversible.

Charlie-Dunbar Broad was a materialist in the age of behaviorists and cognitivists. He believed that mind was an emergent property of the brain, just like electricity is an emergent property of conductors. Ultimately, all is matter.

That is not to say that the “spiritual” discourse was dead. Martin Buber argued that humans were mistaken in turning subjects into objects and thereby losing the meaning of God. He believed that our original state was one of “I-You”, in which the “I” recognizes other “I”’s in the world, but we moved towards an “I-It” state, in which the “I” sees both objects and people merely as means to an end. This changes the way in which we engage in dialogue with each other, and thus our existence. Thus we lost God, which is the “Eternal You”.

For Martin Heidegger, the fundamental question was the question of “being”. A conceptual mistake is to think of the human being as a “what” instead of a “who”. Another conceptual mistake is to separate the “who” from the “what”: the human being is part of the world, at the same time that it is the observer of the world. The human being is not mere existence but “Da-sein”, “being in” the world. We cannot detach ourselves from reality because we are part of it. We just “act”: we are “thrown” into action. We know what to do because the world is not a world of particles or formulas: it is a world of meaning, which the mind can understand. Technology alienates humans because it recasts the natural environment, which once provided them with an identity, as merely a reservoir of resources to be exploited.

Vladimir Vernadsky introduced the concept of the “biosphere” to express the unity of all life.

Alfred Whitehead believed in the fundamental unity of the world, due to the continuous interaction of its constituents, and that matter and mind were simply different aspects of the one reality, due to the fact that mind is part of the bodily interaction with the world. He thought that every particle is an event having both an “objective” aspect of matter and a “subjective” aspect of experience. Some material compounds, such as the brain, create the illusion that we call “self”. But the mental is not exclusive to humans, it is ubiquitous in nature.

The relationship of the self with the external reality was also analyzed by George Herbert Mead, who saw consciousness as, ultimately, a feature in the world, located outside the organism and due to the interaction of the organism with the environment. Consciousness “is” the qualities of the objects that we perceive. Those qualities are perceived the way they are because of the acts that we performed. The world is the result of our actions. It is our acting in the environment that determines what we perceive as objects. Different organisms may perceive different objects. We are actors as well as observers (of the consequences of our actions). Consciousness is not the brain process: the brain process is only the switch that turns consciousness on or off. Consciousness is pervasive in nature. What is unique to humans, as social species, is that they can report on their conscious experiences. That “reporting” is what we call the “self”. A self always belongs to a society of selves.

Sarvepalli Radhakrishnan believed that science was proving a universal process of evolution at different levels (material, organic, biological, social) whose ultimate goal was to reveal the absolute (the spiritual level). Human consciousness is not the last step in evolution, but will be succeeded by the emergence of a super-consciousness capable of realizing the union with a super-human reality that human science cannot grasp.

Muhammad Iqbal believed that humans are imperfect egos who are striving to reach God, the absolute ego.

However, it was an economist, John Maynard Keynes, who framed the fundamental philosophical problem of the post-industrial state. As citizens no longer need to worry about survival, “man will be faced with his real, permanent problem: how to use his freedom”.

But Karl Jaspers saw existence as a contradiction in terms. In theory humans are free to choose the existence they prefer, but in practice it is impossible to transcend one’s historical and social background. Thus one is only truly free in accepting one’s destiny. Ultimately, we can only glimpse the essence of our own existence, but we cannot change it.
TM, ®, Copyright © 2003 Piero Scaruffi All rights reserved.

The ambition of creating a universal language à la Leibniz to find the solution to all philosophical problems had not died either. Several philosophers, particularly the “logical positivists” (such as Rudolf Carnap and Alfred-Jules Ayer), shed new light on this program. Carnap believed that meaning could be found only in the marriage of science and Frege’s symbolic logic. He believed in the motto “the meaning of a proposition is its method of verification”, which put all the responsibility on the senses. He demoted Philosophy to a second-rate discipline whose only function would be to clarify the “syntax” of the logical-scientific discourse. The problem is that the senses provide a subjective view of the world, and therefore the “meaning” derived from verification is personal, not universal. Also, it is not clear how one can “verify” statements about history. Soon it became clear that even scientific propositions cannot quite be “verified” in an absolute way. Last, but not least, Carnap could not prove the very principle of verification using the principle of verification itself.

Karl Popper clarified that truth is always and only relative to a theory: no definition of absolute truth is possible. The issue is not what is “true”, but what is “scientific”. Popper argued that matching the facts was not enough to qualify as “scientific”: a scientific theory should also provide the means to falsify itself.

Symbolic logic had made huge progress and reached an impressive level of sophistication. The implicit premise of much work on Logic was that the laws of thought are the laws of logic, and vice versa. After Frege, contributions to “axiomatizing” Mathematics and Language had come from Russell, Whitehead and Wittgenstein. David Hilbert interpreted the spirit of his age when he advanced his program of “formal systems”, which, again, was an adaptation of Leibniz’s old dream: devising an automatic procedure such that, by applying a set of rules to a set of axioms, one could prove any possible theorem. A major setback for Hilbert’s program was Kurt Goedel’s theorem of incompleteness (1931): every formal system (that contains arithmetic) also contains at least one proposition that cannot be proven true or false (an unprovable proposition). In other words, there is always an unprovable theorem in every system à la Hilbert. Thus Hilbert’s program appeared to be impossible. Nonetheless, Alan Turing (1936) gave Hilbert’s notion of a “procedure” a precise form, which came to be called the “Turing Machine” (an imaginary machine, not a physical one). Such a machine is capable of a few elementary operations on symbols (read the current symbol, process it, write a new symbol, move along the tape) and is capable of remembering its own state. One can imagine an infinite number of Turing machines, depending on the rules to manipulate the symbols. Turing then imagined a “universal” machine capable of simulating all possible Turing machines. Turing showed that, given infinite time and infinite memory, such a universal machine could carry out any mechanical procedure of proof (although, as Goedel had shown, some propositions remain unprovable, and, as Turing himself proved, no machine can decide in advance whether an arbitrary computation will ever halt).

Turing did more than settle Hilbert’s question: he introduced a whole new vocabulary. Reasoning had been reduced to computation, which was manipulation of symbols. Thus “thinking” had been reduced to symbol processing. Also, Turing shifted the emphasis from “formulas” to “algorithms”: the Turing Machine was a series of instructions. Today’s computer is but the physical implementation of a universal Turing machine with a finite memory.
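The idea of a rule table operating on a tape is simple enough to sketch in a few lines of code. The following is a minimal, hypothetical illustration (the machine, its rule names and the helper function are invented for this sketch, not taken from Turing’s paper): a machine that increments a binary number written on its tape.

```python
# A minimal Turing machine: a table of rules maps (state, symbol) to
# (symbol to write, head move, next state). Blank cells hold "_".
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    """Execute the rule table on an unbounded tape; return the final tape."""
    cells = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells[head]
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    used = [i for i, s in cells.items() if s != "_"]  # non-blank portion
    return "".join(cells[i] for i in range(min(used), max(used) + 1))

# Rules for binary increment: walk right to the end, then carry leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),   # 0 + carry = 1, stop carrying
    ("carry", "_"): ("1", "L", "done"),   # overflow: a new leading 1
    ("done", "0"):  ("0", "L", "done"),
    ("done", "1"):  ("1", "L", "done"),
    ("done", "_"):  ("_", "R", "halt"),
}

print(run_turing_machine(rules, "1011"))  # 1011 (eleven) + 1 = 1100 (twelve)
```

Each rule maps a (state, symbol) pair to a symbol to write, a direction to move, and a next state; the “universal” machine of the text would simply take such a rule table as part of its own input.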

Alfred Tarski found the “truth” that Carnap was looking for. Tarski realized a subtle but key difference between the fact that “p” is true and the sentence “p is true”. The fact and the sentence are actually referring to two different things, or to the same thing at different levels. The latter is a “meta-sentence”, expressed in a meta-language. The sentences of the meta-language are about sentences of the language. Tarski realized that truth within a theory can be defined only relative to another theory, the meta-theory. In the meta-theory one can define (one can list) all the statements that are true in the theory. Tarski replaced the intuitive notion of “truth” with an infinite series of rules which define truth in a language relative to truth in another language.

Ernst Cassirer adopted the view that was coming from Logic: that the human mind is a symbolic system, and that “understanding” the world means turning it into symbols. The difference between animals and humans is that animals live in the world, whereas humans live in a symbolic representation of the world. All cultural artifacts are symbolic forms that mediate between the self and the world.

Alfred Korzybski made a similar distinction between animals and humans. Animals are only hunters and gatherers, activities that are bound to the territory, i.e. they are “space-binders”. Humans, instead, developed agriculture, which is bound to a memory of the past and to prediction of the future, i.e. they are “time-binders”. Time-binding is enabled by the manipulation of symbols, and allows knowledge to be transmitted to other humans.

The influence of Peirce was offering a different take on truth and meaning. John Dewey viewed knowledge as a way to generate certainty from doubt (habits from chaos). When faced with an indeterminate situation, humans work out a scientific or common-sense theory of that situation that reduces its indeterminacy. Charles Morris developed a theory of signs (“semiotics”) based on the theories of Peirce and Saussure, and separated three disciplines of signs: “syntax” studies the relation between signs and signs; “semantics” studies the relation between signs and objects; “pragmatics” studies the relation between signs, objects and users.

World War 2 (1939-1945) actually started during the 1930s, when Germany (Austria 1938, Czechoslovakia 1938), Italy (Ethiopia 1936) and Japan (Manchuria 1931, China 1937, Indochina 1940) began expanding their respective territories through a policy of aggression and annexation. Eventually, after Germany (1939-1940) invaded Poland and France, the powers of the world fell into two camps: Britain, USA and Russia (who won) against Germany, Italy and Japan (the “axis”). World War 2 made explicit the commitment to genocide: Germany slaughtered Jews by the millions in gas chambers, the USA won the war by detonating the first nuclear weapons (the first practical application of Einstein’s Relativity).

In fact, both shame of the apocalypse (the two world wars just ended) and fear of the apocalypse (the nuclear holocaust) permeated the cultural mood of the era.

In poetry the apocalyptic spirit of the time was captured by Rainer Maria Rilke’s “Duineser Elegien” (1923), William Yeats’ “The Tower” (1928), Czeslaw Milosz’s “Poem of the Stony Time” (1933), Fernando Pessoa’s “Mensagem” (1933), Federico Garcia-Lorca’s “Llanto por Ignacio Sanchez Mejias” (1935), Eugenio Montale’s “La Bufera” (1941), Nazim Hikmet’s “In This Year” (1941), Wallace Stevens’s “Notes Toward A Supreme Fiction” (1942), Thomas-Stearns Eliot’s “Four Quartets” (1942), Juan-Ramon Jimenez’s “La Estacion Total” (1946).

In theater, new forms of expression were invented to deliver the message by Ernst Toller’s “Masse-Mensch” (1921), Luigi Pirandello’s “Enrico IV” (1922), Paul Claudel’s “Le Soulier de Satin” (1928), Jean Giraudoux’s “Electre” (1937), Bertolt Brecht’s “Leben Des Galilei” (1939).

But this was certainly the century of the novel. The spirit of the times was captured by James Joyce’s “Ulysses” (1922), Marcel Proust’s “A la Recherche du Temps Perdu” (1922), Italo Svevo’s “La Coscienza di Zeno” (1923), Francis-Scott Fitzgerald’s “The Great Gatsby” (1925), Andre’ Gide’s “Les Faux-Monnayeurs” (1925), Virginia Woolf’s “To The Lighthouse” (1927), Julien Green’s “Adrienne Mesurat” (1927), Stanislaw Witkiewicz’s “Insatiability” (1930), Louis-Ferdinand Celine’s “Voyage au Bout de la Nuit” (1932), William Faulkner’s “Light in August” (1932), Robert Musil’s “The Man Without Qualities” (1933), Elias Canetti’s “Auto Da Fe” (1935), Flann O’Brien’s “At Swim-two-birds” (1939), Jean-Paul Sartre’s “La Nausee” (1938), Joseph Roth’s “Die Legende vom heiligen Trinker” (1939), Mikhail Bulgakov’s “The Master and Margarita” (1940), Albert Camus’ “The Stranger” (1942), Jorge-Luis Borges’ “Ficciones” (1944), Julien Gracq’s “Un Beau Tenebreux” (1945), Hermann Broch’s “Der Tod des Vergil” (1945).

If literature was getting, overall, more “narrative”, painting became more abstract and symbolic with Rene Magritte’s “Le Faux Miroir” (1928), Salvador Dali’s “La Persistance de la Memoire” (1931), Paul Klee’s “Ad Parnassum” (1932), Pablo Picasso’s “Guernica” (1937), Max Ernst’s “Europe After the Rain II” (1942). Constantin Brancusi and Hans Arp were the giants of sculpture.

Music continued its journey away from the classical canon with Leos Janacek’s “Glagolitic Mass” (1926), Bela Bartok’s “Music for Strings, Percussion and Celesta” (1936), Edgar Varese’s “Ionisation” (1933), Alban Berg’s “Violin Concerto” (1935), Olivier Messiaen’s “Quatuor pour la Fin du Temps” (1941) and Goffredo Petrassi’s “Coro di Morti” (1941).

On the lighter side, new forms of entertainment and mass media were born, mostly in the USA. In 1914 composer Jerome Kern had staged the first “musical”. In 1926 Hollywood debuted the “talking movie” (films with synchronized voice and music). In 1927 Philo Farnsworth invented television.

Cinema was by far the most influential of the new forms of art, thanks to films such as David-Wark Griffith’s “The Birth of a Nation” (1915), Victor Sjostrom’s “Phantom Chariot” (1920), Erich von Stroheim’s “Greed” (1924), Sergei Eisenstein’s “Battleship Potemkin” (1925), Fritz Lang’s “Metropolis” (1926), Josef von Sternberg’s “Der Blaue Engel” (1930), the Marx Brothers’ “Duck Soup” (1933), Charlie Chaplin’s “Modern Times” (1936), Jean Renoir’s “La Grande Illusion” (1937), Howard Hawks’s “Bringing Up Baby” (1938), Orson Welles’s “Citizen Kane” (1941), Frank Capra’s “Meet John Doe” (1941).

But the visual arts also gained a new form: the comics. The comics came to compete with the novel and the film, and reached their artistic peak with “Little Nemo” (1905), “Popeye” (1929), “Buck Rogers” (1929), “Tintin” (1929), “Mickey Mouse” (1930), “Dick Tracy” (1931), “Alley Oop” (1933), “Brick Bradford” (1933), “Flash Gordon” (1934), “Li’l Abner” (1934), “Terry and the Pirates” (1934).

America’s contribution to music included Afro-American music: the blues was born around 1912, jazz in 1917, gospel in 1932, rhythm’n’blues in 1942, bebop in 1945.

After World War 2, Stalin’s Soviet Union became an exporter of “revolutions” throughout the world, an ideological empire that had few precedents in history: Mao Tze-tung’s China in 1949, Ho Chi Minh’s Vietnam in 1954, Fidel Castro’s Cuba in 1959, Julius Nyerere’s Tanzania in 1961, Kenneth Kaunda’s Zambia in 1964, Siad Barre’s Somalia in 1969, Mengistu Haile Mariam’s Ethiopia in 1974, Samora Machel’s Mozambique in 1975, Pol Pot’s Cambodia in 1975, Robert Mugabe’s Zimbabwe in 1980, Daniel arap Moi’s Kenya in 1982, etc. The USA retaliated by supporting anti-communist regimes around the world (often as totalitarian as the ones imposed by the communist revolutions). In Latin America, for example, the Soviet Union, via its proxy of Cuba, sponsored a series of national insurrections, while the USA supported “caudillos” that were no more democratic than Hitler (Guatemala 1960, Bolivia 1965, Chile 1973, Peru 1970, Colombia 1979, El Salvador 1980).

Both the Soviet Union and the USA fought for supremacy in what was termed a “Cold War”, a war that was never fought directly but only indirectly, everywhere and all the time. They both became military superpowers by amassing thousands of nuclear weapons. The nuclear deterrence worked insofar as they never struck at each other. But the consequence was that the theater of military operations was the entire planet.

The “Cold War” resulted in a partition of the world in two spheres of influence: Soviet and American. An “iron curtain” divided Europe in two, and the Wall (1961) that divided West and East Berlin was its main symbol.

A parallel process, soon engulfed in the Cold War, was the decolonization of Africa and Asia. Mahatma Gandhi was the most celebrated of the independence leaders. The European powers granted independence to most of their colonies. New countries were born (India and Pakistan in 1947, Israel in 1948, Indonesia in 1949, Ghana in 1957, and most of Africa followed within a decade). The exceptions (Algeria and the Portuguese colonies of Angola and Mozambique) suffered through decade-long wars of independence. Even where independence had been granted, internecine civil wars caused massive convulsions, again exploited by the two superpowers for their power games.

Another by-product of the post-war order was the birth of Arab nationalism with Egyptian leader Gamal Nasser.

The most visible political decline was Britain’s. While its empire was disintegrating and its economy was growing more slowly than those of Germany, Japan and France (which soon surpassed it in GDP), Britain maintained an aloof attitude, reveling in its distinctiveness: it did not join the European Community, it did not adopt the metric system, and its industrial infrastructure remained outdated. It reorganized the empire as the Commonwealth, but that was now a cost, no longer a source of revenues. Despite being among the victors of World War 2, Britain rapidly became irrelevant.

The western European countries, assembled around a USA-led alliance (NATO), were free and democratic (with the exception of the Iberian peninsula) but were nonetheless torn between a socialist left and a capitalist right. De facto, they all adopted different versions of the same model: a social-democratic state that guaranteed rights to workers and sheltered citizens through a generous social net.

Among armed conflicts, two were particularly significant: the Arab-Israeli conflict (1948-2004) and the USA-Vietnam war (1964-1973). They both dragged the USA into long and expensive military ventures.

Despite the political gloom, the post-war age was the age of consumerism, of the economic boom (in the USA, Japan and western Europe), of the “baby boom” and of the mass media.

The office was mechanized and electrified thanks to a deluge of calculators, photocopiers, telefax machines, telex machines, touch-tone phones, and, finally, mainframe computers (1964).

Landmarks in communications were the telephone cable across the Atlantic (1956) and the first telecommunication satellite (1962).

Landmarks in transportation were Pan Am’s first transatlantic flight (1939), the long-distance jet (1958) and the wide-body jet (1967).

Commercial television introduced cheap forms of mass entertainment.

The 33-1/3 RPM long-playing vinyl record (1948) and the transistor radio (1954) changed the way people (especially young people) listened to music.

A youth culture began to appear in the USA in the 1950s, initially blasted as a culture of “juvenile delinquents”, and evolved into the generation of the free-speech movement, of the civil rights, of the anti-war movement and of the hippies. It then migrated to Europe, where it transformed into the student riots of 1968.

Rock music was very much the soundtrack of the youth movement. Rock’n’Roll was the music of the young rebels who reacted against the repressive conventions of post-war society. Bob Dylan and the militant folk-singers interpreted young people’s distrust of the Establishment and their idealistic dreams. Psychedelic music was an integral part of the hippie movement (and dramatically changed the concept of “song” by introducing atonal and anarchic elements).

The 1960s were also the age of the sexual revolution and of feminism, announced by Simone de Beauvoir.

The single most emotional event for the collective imagination was space exploration, largely fueled by the rivalry between the USA and the Soviet Union. In 1957 the Soviet Union launched the first artificial satellite, the “Sputnik”. In 1961 Yuri Gagarin became the first human astronaut. In 1962 the USA launched the first telecommunication satellite, the “Telstar”. In 1969 Neil Armstrong became the first human to set foot on the Moon.

However, progress in Physics was certainly not limited to space exploration. In fact, Physics was booming from the very small to the very large.

Astronomy revealed a whole new world to the peoples of the Earth, who (just a few thousand years earlier) used to believe that the Earth was all there was. There are billions of galaxies in the universe, each made of billions of stars (roughly 200 billion in our own galaxy, the “Milky Way”), and planets orbit around the stars (nine around ours, the Sun). Pluto, the outermost of the solar planets, turned out to be 5.9 billion kms from the Sun, a distance that no human could hope to cover during a lifetime. Distances were suddenly measured in “light-years”, one light-year being about 9 trillion kms, a distance that would have been unimaginable just a century before. The nearest star is “Alpha Centauri”, 4.3 light-years from the Earth. Sirius, the brightest star in the sky, is actually 8.7 light-years away. The center of the Milky Way is 26 thousand light-years from the Sun. Andromeda, the nearest galaxy, is 2.2 million light-years away.
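The jump in scale is easy to appreciate with a quick back-of-the-envelope computation using only the rounded figures quoted above (about 9 trillion km per light-year, 5.9 billion km from the Sun to Pluto); the snippet below is merely illustrative arithmetic:

```python
# Convert the light-year distances quoted in the text into kilometers,
# and compare each with the Sun-Pluto distance for a sense of scale.
KM_PER_LIGHT_YEAR = 9e12   # rounded figure from the text
pluto_km = 5.9e9           # Sun-Pluto distance, in km

distances_ly = {
    "Alpha Centauri": 4.3,
    "Sirius": 8.7,
    "Milky Way center": 26_000,
    "Andromeda": 2_200_000,
}

for name, ly in distances_ly.items():
    km = ly * KM_PER_LIGHT_YEAR
    print(f"{name}: {km:.3g} km, about {km / pluto_km:,.0f}x the Sun-Pluto distance")
```

Even the nearest star turns out to be thousands of times farther than the outermost planet of the solar system.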

In 1965 the “microwave background radiation” was discovered, a remnant of some catastrophic event a long time back in the past of the universe. That event was named “Big Bang”: the universe was born when a massive explosion sent the original energy hurling away in all directions. Eventually, gravitation caused pieces of matter to coalesce together, thus forming the structures that we observe today (galaxies, stars, planets), leaving behind the background radiation and causing the expansion of the universe that is still going on. Depending on how much mass there is in the universe, this expansion may some day be reversed (and end in a “Big Crunch”) or continue forever. Cosmologists also realized that there are different kinds of “stars”. Some of them are very small and very heavy, and spin frantically around their axis (“pulsars”). Some of them collapsed into “black holes”, which are bodies whose gravitational field is so strong that nothing can escape them, not even light. Inside black holes, time and space in a sense switch roles: an object can only proceed ahead in space (towards the center of the black hole) while being able to move around in time.

The oddities of Cosmology fueled a boom in science fiction (comics, films, novels, tv series).

As for the “very small”, the view of matter as made of three particles (electron, proton and neutron) was shattered by the discovery of a multitude of subatomic particles. The radioactive decay of atomic nuclei, first observed in 1896 by Antoine-Henri Becquerel, Pierre Curie and Marie Curie, had already signaled the existence of a fourth fundamental force (the “weak” force) besides the known three (gravitational, electromagnetic, and nuclear or “strong”). Wolfgang Pauli in 1930 inferred the existence of the neutrino to explain a particular case of radioactive decay. Another source of new particles was the study of “cosmic rays”, which Victor Franz Hess traced to radiation coming from space (1912). This led to the discovery of muons (1937) and pions (predicted in 1935 by Yukawa Hideki). In 1963 Murray Gell-Mann hypothesized that the nucleus of the atom was made of smaller particles. In 1967 the theory of quarks (Quantum Chromodynamics) debuted: the nucleus of the atom (neutrons and protons) is made of quarks, held together by gluons. Quarks differ from previously known particles because their magic number is “three”, not two: there are six quark “flavors”, each coming in three “colors” (and each having, as usual, its anti-quark), and they combine not in pairs but in trios.

Forces are mediated by discrete packets of energy, represented as virtual particles or “quanta”. The quantum of the electromagnetic field (e.g., of light) is the photon: any electromagnetic phenomenon involves the exchange of a number of photons between the particles taking part in it. Other forces are defined by other quanta: the weak force by the W particle, gravitation (supposedly) by the graviton, and the nuclear force by gluons. Particles can be divided according to a principle first formulated (in 1925) by Wolfgang Pauli: some particles (the “fermions”, named after Enrico Fermi) never occupy the same state at the same time, whereas other particles (the “bosons”, named after Satyendra Bose) do. The wave functions of two fermions can never completely overlap, whereas the wave functions of two bosons can (the bosons basically lose their identity and become one). Fermions (such as the electron and its family, the “leptons”, and the quarks and the “hadrons” built from them, such as protons and neutrons) make up the matter of the universe, while bosons (photons, gravitons, gluons) are the virtual particles that glue the fermions together. Bosons therefore represent the forces that act on fermions. They are the quanta of interaction. An interaction is always implemented via the exchange of bosons between fermions. (There also exist bosons that do not represent interactions, the so-called “mesons”, which are made of quarks and decay very rapidly.)

There are twelve leptons: the electron, the muon, the tau, their three neutrinos and the six corresponding anti-particles. There are 36 quarks: six flavors times three colors (18), plus the 18 corresponding anti-quarks. Thus there are 4 forces, 36 quarks, 12 leptons, 12 bosons.
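The bookkeeping behind these totals is simple arithmetic, which a few lines can make explicit (using the standard terminology of six flavors and three colors):

```python
# Tallying the particle counts quoted above.
flavors = 6                 # up, down, strange, charm, bottom, top
colors = 3                  # each quark flavor comes in three color charges
quarks = flavors * colors * 2     # x2 for the anti-quarks
leptons = (3 + 3) * 2             # electron, muon, tau + their 3 neutrinos, x2 for anti-particles
print(quarks, leptons)            # 36 12
```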

Science was even applied to life itself.

Ilya Prigogine developed “Non-equilibrium Thermodynamics” to explain phenomena far from equilibrium, such as life itself. He divided nature into “conservative” systems (the ones studied by classical Physics) and “dissipative” systems (subject to fluxes of energy/matter), and noticed that the latter are ubiquitous in nature: everything that is alive is a dissipative system. They create order by feeding on external energy/matter: they are non-equilibrium systems that are sustained by a constant influx of matter/energy. He realized that such systems exhibit spontaneous development of order: they are self-organizing systems, which maintain their internal organization by trading matter/energy with the environment.

James Jerome Gibson looked at life from the point of view of a network of integrated living beings. A living being does not exist in isolation. In fact, its main purpose is to pick up information from the environment. All the information needed to survive is available in the environment. Thus information originates from the interaction between the organism and its environment. Information “is” the continuous energy flow of the environment.

The other great fascination was with computers, developed during the war to crack secret German codes. In 1946 the first non-military computer, the “ENIAC”, was unveiled. In 1947 John Bardeen, Walter Brattain and William Shockley invented the transistor at Bell Labs. In 1951 the first commercial computer, the “UNIVAC”, was built. In 1955 John McCarthy founded “Artificial Intelligence”, a discipline to study whether “intelligent” machines could ever be built. In 1957 John Backus (at IBM) delivered FORTRAN, the first machine-independent programming language. In 1958 Jack Kilby at Texas Instruments built the first integrated circuit, or “microchip” (Robert Noyce built an improved version at Fairchild the following year), which made it possible to build smaller computers. In 1964 IBM introduced the first “operating system” for computers (the “OS/360”). In 1965 Gordon Moore predicted that the processing power of computers would double every 18 months. Also in 1965 DEC introduced the first mini-computer, the PDP-8, which used integrated circuits.
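Moore's prediction, as quoted, implies exponential growth. A quick sketch of the arithmetic (the 18-month doubling period is the figure quoted above; the time spans are arbitrary examples):

```python
# Growth factor implied by a doubling every `doubling_months` months.
def growth(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

print(round(growth(3)))    # 4: a quadrupling in three years
print(round(growth(15)))   # 1024: roughly a thousand-fold in fifteen years
```

It is this compounding, not any single invention, that turned the room-sized ENIAC into the mini-computers of the mid-1960s.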

Genetics rapidly became the most exciting field in Biology. Each living cell contains deoxyribonucleic acid (DNA for short), which Oswald Avery identified in 1944 as the carrier of hereditary information; and in 1953 Francis Crick and James Watson figured out the double-helix structure of the DNA molecule. Genetic information is encoded in a rather mathematical form, which was christened the “genetic code” because that is what it is: a code written in an alphabet of four “letters” (physically, the four bases of the nucleotides). Crick reached the conclusion that information flows only from the four-letter language of DNA to the (twenty) amino acids of proteins, never the other way around. In other words: the genes encoded in DNA determine the organism. An organism owes its structure to its “genome”, its repertory of genes. It took a few more years for biologists to crack the genetic code, i.e. to figure out how the four-letter language of DNA is translated into the twenty-letter language of proteins. Biologists also discovered ribonucleic acid (RNA), the single-strand molecule that partners with DNA to manufacture proteins.
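The “translation” between the two languages can be sketched as a lookup over triplets of letters (“codons”). The table below shows only a handful of the 64 codon entries, and the sample RNA string is invented for illustration:

```python
# A tiny fragment of the genetic code: codons (RNA triplets) map to amino acids.
CODON_TABLE = {
    "AUG": "Met",   # methionine, also the "start" signal
    "UUU": "Phe", "UUC": "Phe",
    "GGC": "Gly",
    "UAA": "STOP",  # one of the termination codons
}

def translate(rna: str) -> list[str]:
    protein = []
    for i in range(0, len(rna) - 2, 3):          # read non-overlapping triplets
        amino = CODON_TABLE.get(rna[i:i+3], "?")
        if amino == "STOP":                      # stop codon ends the protein
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```

Note that the code is redundant (UUU and UUC both yield phenylalanine): 64 codons encode only twenty amino acids plus the stop signals.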

Less heralded, but no less powerful in changing our view of our species, was the progress made by neurologists in understanding how the brain works. The neuron had been discovered in 1891 by Santiago Ramon y Cajal, and in 1898 Edward Thorndike had already proposed that the brain was a “connectionist” system (that the connections, not the units, were the key to its working). But the picture remained fuzzy until Donald Hebb showed (1949) that those connections were dynamic, not static, and that they changed “selectively”, according to a system of punishment and reward: a connection that was used to produce useful behavior was reinforced, one that was part of a failure was weakened. As new techniques allowed neurologists to examine the electrical and chemical activity of the brain, it became clear that neurons communicate via “neurotransmitters”. A neuron is nothing more than a generator of impulses, activated when the sum of its inputs (the neurotransmitters received from other neurons, weighted according to the “strength” of the corresponding connections) exceeds a certain potential. The connections between neurons are continuously adjusted to improve the accuracy of the brain’s responses. Basically, each operation of recognition is also an operation of learning, because connections are refined every single time they are used. The structure of the brain also became clearer, in particular the fact that there are two hemispheres, the left being dominant for language and speech, the right for visual and motor tasks.
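The punishment-and-reward adjustment of a connection can be sketched as a weight update. The learning rate and the exact rule here are illustrative assumptions, not Hebb's own formulation:

```python
# A connection used in a successful response is strengthened;
# one implicated in a failure is weakened.
def update_weight(w, pre_active, post_active, success, rate=0.1):
    if pre_active and post_active:        # the connection actually fired
        return w + rate if success else w - rate
    return w                              # unused connections are untouched

w = 0.5
w = update_weight(w, True, True, success=True)    # reinforced
w = update_weight(w, True, True, success=False)   # weakened back
print(w)
```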

Michel Jouvet discovered that REM (“rapid eye movement”) sleep is generated in the pontine brain stem (or “pons”). The pons sends signals to eye muscles (causing the eye movement), to the midbrain (causing a low level of brain activity and inhibition of muscle movements), and to the thalamus. The thalamus then excites the cortex, which receives a valid sensory signal from the thalamus and interprets it as if it were coming from the sense organs. During REM sleep several areas of the brain are working frantically, and some of them are doing exactly the same job they do when the brain is awake. The only major difference is that the stimuli they process are now coming from an internal source rather than from the environment: during dreams the sensory input comes from the sensory cortex.

The obsession with Alan Turing’s abstract “machine” and with the first concrete computers shaped the intellectual landscape of many thinkers. Not only had Turing argued that the computer could be, basically, an “intelligent” being; John von Neumann, with his embryonic experiments on artificial life or “cellular automata” (1947), had also shown how such a machine could be made to reproduce and evolve.

It was Turing himself who framed the philosophical issue for the next generations, with what came to be known as the “Turing test” (1950): a machine can be said to be intelligent if a human being, asking all sorts of questions, cannot tell whether the answers come from a human being or from a machine. It was, ultimately, a “behaviorist” approach to defining intelligence: if a machine behaves exactly like a human being, then it is as intelligent as the human being.

Kenneth Craik viewed the mind as a particular type of machine which is capable of building internal models of the world and of processing them to produce action. Craik’s emphasis was on the internal representation and on the symbolic processing of such a representation.

Craik’s ideas formed the basis for Herbert Simon’s and Allen Newell’s theory of mind, that the human mind is but a “physical symbol processor”. The implication was that the computer was, indeed, intelligent: it was just a matter of programming it the way the mind is programmed. They proceeded to implement a “General Problem Solver”, a computer program that, using logic, would be able to solve any possible problem (Hilbert’s dream).

The implicit assumption behind the program of “Artificial Intelligence” was that the “function” is what matters: the “stuff” (brains or integrated circuits) is not important.

Hilary Putnam argued that the same mental state can be realized in more than one physical state; for example, pain can be realized by more than one brain (despite the fact that all brains are different). Therefore, the physical state is not all that important. It is the “function” that makes a physical state of the brain also a mental state. Mental states are not mere decorations: they have a function. The consequence of this conclusion, though, is that a mind does not necessarily require a brain. In fact, a computer does precisely what a mind does: perform functions that can be implemented by different physical states (different hardware). The “functionalist” approach popularized the view that the mind is the software and the brain is its hardware. The execution of that program (the mind) in a hardware (brain or computer) yields behavior.

Jerry Fodor speculated that the mind represents knowledge in terms of symbols, and then manipulates those symbols to produce thought. The manipulation of those symbols is purely syntactic (without knowing what those symbols “mean”). The mind uses a “language of thought” (or “mentalese”), common to all sentient beings, and produced through evolution, to build those mental representations.

The program of Artificial Intelligence was not as successful as its pioneers hoped because they neglected the importance of knowledge. An “intelligent” system is only capable of performing logical operations, no matter how many and how smart; but, ultimately, even the most intelligent human being in the world needs knowledge to make sensible decisions. In fact, a person with a lot of knowledge is likely to make a more sensible decision than a much more clever person with very little knowledge. Thus the primacy shifted from “intelligence” to “knowledge”: “Expert Systems” (first conceived around 1965) apply a “general problem solver” to a “knowledge base”. The knowledge base is built by “cloning” a human expert (usually via a lengthy process of interviews). A knowledge base encodes facts and rules that are specific to the “domain” of knowledge of the human expert. Once the appropriate knowledge has been “elicited”, the expert system behaves like a human expert.
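The essence of such a system is a knowledge base of if-then rules plus a general inference engine. A minimal sketch, with invented rules and a naive forward-chaining loop standing in for the “general problem solver”:

```python
# A toy knowledge base: each rule is (set of conditions, conclusion).
# The facts and rules are invented for illustration.
RULES = [
    ({"fever", "rash"}, "measles_suspected"),
    ({"measles_suspected"}, "refer_to_specialist"),
]

def forward_chain(facts: set) -> set:
    changed = True
    while changed:                        # keep firing rules until nothing new follows
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "rash"}))
```

The engine knows nothing about medicine; all the expertise lives in the rules, which is exactly why “eliciting” them from a human expert was the hard part.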

In parallel to the “knowledge-based” school, the search for machine intelligence pursued other avenues as well.

Norbert Wiener noticed that both living systems and machines are “control systems”, systems in which “feedback” is employed to maintain internal “homeostasis” (a steady state). A thermostat is a typical control system: it senses the temperature of the environment and directs the heater to switch on or off; this causes a change in the temperature, which in turn is sensed by the thermostat; and so forth. Every living system is also a control system. Both living systems and machines are “cybernetic” systems. The “feedback” that allows a system to control itself is, ultimately, an exchange of information between the parts of the system.
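The thermostat loop can be simulated in a few lines; the setpoint, heating and cooling rates are made-up numbers, chosen only to show the feedback cycle:

```python
# Sense -> act -> environment changes -> sense again: a feedback loop.
def simulate(temp, setpoint=20.0, steps=50):
    for _ in range(steps):
        heater_on = temp < setpoint          # the "sensing" step
        temp += 0.5 if heater_on else -0.3   # heating vs. ambient cooling
    return temp

print(simulate(temp=10.0))   # oscillates around, and settles near, the setpoint
```

Note that the system never computes the setpoint directly: the steady state emerges from the loop itself, which is Wiener's point about homeostasis.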

Claude Shannon worked out a no less influential metaphor for machines: they are also similar to thermodynamic systems. The entropy of a thermodynamic system is a measure of disorder, i.e. a measure of the random distribution of atoms. As entropy increases, that distribution becomes more homogeneous. The more homogeneous the distribution is, the less “informative” it is. Therefore, entropy is also a measure of the lack of information.
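Shannon's measure can be stated concretely: the entropy of a discrete probability distribution, in bits, is highest when the distribution is uniform and zero when the outcome is certain:

```python
import math

# Shannon entropy H = -sum(p * log2(p)) over the outcomes with p > 0.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin toss
print(entropy([1.0]))         # 0.0 bits: a certain outcome carries no information
print(entropy([0.25] * 4))    # 2.0 bits: four equally likely outcomes
```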

Yet another version of the facts was delivered by the proponents of “Neural Networks”.

An artificial “neural network” is a piece of software or hardware that simulates the neural network of the brain. Several simple units are connected together, with each unit connecting to any number of other units. The “strength” of the connections can fluctuate from zero strength to infinite strength. Initially the connections are set randomly. During a “training” period, the network is made to adjust the strength of the connections using some kind of feedback: every time an input is presented, the network is told what the output should be and asked to adjust its connections accordingly. The network continues to learn forever, as each new input causes a readjustment of the connections.
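The training procedure described above, reduced to its smallest case: a single artificial neuron learning the logical AND function. The learning rate, epoch count and random seed are arbitrary choices:

```python
import random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]   # connection strengths, set randomly
bias = random.uniform(-1, 1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # the AND function

for _ in range(20):                              # the "training" period
    for x, target in data:
        out = 1 if w[0]*x[0] + w[1]*x[1] + bias > 0 else 0
        err = target - out                       # feedback: desired minus actual output
        w = [wi + 0.1 * err * xi for wi, xi in zip(w, x)]
        bias += 0.1 * err

print([1 if w[0]*a + w[1]*b + bias > 0 else 0 for (a, b), _ in data])  # [0, 0, 0, 1]
```

The program never stores the AND table; the “knowledge” lives entirely in the adjusted connection strengths, which is the ideological contrast with expert systems discussed below.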

The difference between “expert systems” and “neural networks” is actually quite ideological. Expert systems operate at the level of knowledge, whereas neural networks operate at the level of connections. In a way, they describe two different ways to look at human intelligence: as a brain that produces intelligent behavior, and as a mind that produces intelligent decisions.

Needless to say, the mind-body problem, originally introduced by Descartes, was revived by the advent of the computer.

Dualists largely extended an intuition by Charlie-Dunbar Broad, that the universe is a series of layers, and that each layer yields the following layer but cannot explain the new properties that emerge with it. For example, the layer of elementary particles yields the layer of macroscopic phenomena. Each new layer is an “emergent” phenomenon of a lower layer. Thus the mind is an emergent property of the brain, and not a separate substance. The new dualism was a dualism of properties, not a dualism of substances. Dualism was restated as “supervenience”. Biological properties “supervene” (or “are supervenient”) on physical properties, because the biological properties of a system are determined by its physical properties. By the same token, mental properties are supervenient on neural properties.

Herbert Feigl revived materialism in the age of the brain: the mind is created by the neural processes in the brain. We have not yet explained how this happens, just as for centuries humans could not explain lightning or magnetism. Nonetheless, mental states “are” physical states of the brain. Philosophers such as Donald Davidson realized that it is implausible to assume that for every single mental state there is a unique physical state of the brain (a one-to-one correspondence). For example, a person can have the same feeling twice, despite the fact that the configuration of the brain has changed. Thus Feigl’s “identity theory” was revised to admit that many physical states of the brain may yield the same mental state.

Another form of materialism (“eliminative” materialism) originated with Paul Feyerabend and Richard Rorty, who believed that mental states do not exist. The mental world is only a vocabulary of vague terms that don’t refer to real entities. The mind-body dualism is a false problem that leads to false problems.

By that line of reasoning, Gilbert Ryle revived behaviorism in the context of the mind-body problem: the mental vocabulary does not refer to the structure of something, but simply to the way somebody behaves or will behave. The mind “is” the behavior of the body. Descartes invented a myth: the mind inside the body (“the ghost in the machine”).

Existentialism was the big philosophical “ism” of Post-war Europe. Existentialists focused on the human experience. Theirs was a philosophy of the crisis of values. The object and the subject of Existentialism are the same: the “I”.

Jean-Paul Sartre believed that there is no God. The individual is alone. There is no predestination (no “human nature”) determining our actions. We are free to act as we will. It is our actions that determine our nature. Existence (the free I) precedes essence (the I’s nature). In the beginning, the individual is nothing. Then she defines herself by her actions. Each individual is fully responsible for what she becomes. This total freedom causes angst. It is further amplified by the fact that an individual’s choices affect the whole of humankind. Existentialism abolishes God, but emphasizes that its atheism increases (not decreases) the individual’s responsibility for her actions. It complicates, not simplifies, the moral life. “We are alone, with no excuses”.

Maurice Merleau-Ponty countered that human freedom is never total: it is limited by our body. The individual is, first and foremost, a “situated” being, a body that lives in an environment. The body is not just an object surrounded by objects: it is the very subject of experience, which interacts with the environment. The body shapes the environment, but, in turn, the environment shapes the body, whose freedom is therefore limited by the way the environment shapes it. The same conditioning exists in society: the body is a linguistic actor, but its linguistic action is constrained by the language it uses (the meaning of a linguistic action is constructed on the basis of a meaning acquired from the language). Ditto at the level of society: we are political agents, but our political actions are shaped by the historical background. At all levels there are a “visible” and an “invisible” dimension of being that continuously affect each other.

A number of thinkers related to the zeitgeist of Existentialism even if they did not belong to the mainstream of it.

David Bohm pursued Einstein’s hope of finding “hidden variables” that would remove randomness from Quantum Theory. Bohm hypothesized the existence of a potential that permeates the universe. This potential, which lies beyond the four-dimensional geometry of space-time, generates a field that acts upon particles the same way a classical potential does. This field can be expressed as the mother of all wave functions, a real wave that guides the particle (the “pilot-wave”). This field is, in turn, affected by all particles: everything in the universe is entangled with everything else. The universe is an undivided whole in constant flux. Similarly, at the higher dimension (the “implicate order”) there is no difference between matter and mind. That difference arises within the “explicate order” (the conventional space-time of Physics). As we travel inwards, we travel towards that higher dimension, the implicate order, in which mind and matter are the same. As we travel outwards, we travel towards the explicate order, in which subject and object are separate. Mind and matter can never be completely separated, because they are entangled in the same quantum field. Thus every piece of matter has a rudimentary mind-like quality.

Aurobindo Ghose speculated that Brahman first involutes (focuses on itself), next materializes (the material universe), and then evolves into consciousness. We are part of this process, which is still going on. Human consciousness is the highest stage of consciousness so far reached by Brahman, but not the last one, as proven by the fact that social, cultural and individual life in the human world are still imperfect.

Sayyid Qutb, the philosopher of militant Islam, lived in the dream of a purified world dedicated to the worship of God alone. Human relationships should be founded on the belief in the unity of God. Pagan ignorance (for example, of Christians and Jews) is the main evil in the world, because it rebels against God’s will and establishes secular societies that violate God’s sovereignty on Earth. The separation of church and state is “the” problem.

Pierre Teilhard de Chardin saw evolution as a general law of nature: the universe’s matter-energy is progressing towards ever increased complexity. Humans mark the stage when evolution leaves the “biosphere” and enters the “noosphere” (consciousness and knowledge). The evolution of the noosphere will end in the convergence of matter and spirit into the “omega point”.

Throughout the second half of the century, Structuralism was one of the dominant paradigms of philosophy. Its program: to uncover the real meaning hidden in a system of signs.

Claude Levi-Strauss extended it to social phenomena, which he considered as systems of signs just like language. Myths from different cultures (myths whose contents are very different) share similar structures. Myth is a language, made of units that are combined together according to certain rules. The “langue” is the myth’s timeless meaning, the “parole” is its historical setting. The “mytheme” is the elementary unit of myth. Mythemes can be read both diachronically (the plot that unravels, the sequence of events) and synchronically (the timeless meaning of it, the “themes”). The themes of myths are binary relationships between two opposing concepts (e.g., between selfishness and altruism). Binary logic is, in a sense, the primordial logic, and mythical thinking is, in a sense, logical thinking. Mythical thinking is inherent to the human mind: it is the human way of understanding nature and the human condition. Conversely, myths are tools that we can use to find out how the human mind works.

Roland Barthes transformed Saussure’s Structuralism into “semiology”, a science of signs to unveil the meaning hidden in the “langue” of cultural systems such as cinema, music, art.

Structuralism often reached provocative conclusions that had social and political implications.

Michel Foucault analyzed the mechanisms of western (liberal, democratic) society. Western society jails fools, who, in ancestral societies, were tolerated or even respected as visionaries. Foucault perceived as disturbing the tendency to criminalize the creative force of madness. Basically, Western societies torture the minds of criminals, whereas totalitarian societies tortured their bodies. Prisons are, de facto, an instrument of social control, a device to train minds that do not comply with the dogmas. Thus western societies are vast mechanisms of repression, no less oppressive than the totalitarian regimes they replaced. Similar arguments can be made for sexuality and crime.

Jacques Lacan analyzed the unconscious as a system of signs. Motives are signifiers which form a “signifying chain”: the subconscious “is” that chain. This chain is permanently unstable because it does not refer to anything: the self itself is a fiction of the subconscious. A baby is born with a united psyche; later in life, as the baby separates from the mother, that unity is broken, and the self is born; and the rest of one’s lifetime is spent trying to reunite the self and the other. Psychic life is a permanent struggle between two “consciousnesses”.

Dilthey’s Hermeneutics was also influential. Hans-Georg Gadamer applied it to Husserl’s Phenomenology and derived a discipline of “understanding”, where to him “understanding” was the “fusion of horizons” between a past text and a present interpreter.

Paul Ricoeur believed that the symbols of pre-rational culture (myth, religion, art, ideology) hide meaning that can be discovered by interpretation. There are always a patent and a latent meaning. A similar dichotomy affects human life, which is torn between the “voluntary” and the “involuntary” dimensions, between the “bios” (one’s spatiotemporal life) and the “logos” (one’s ability to grasp universal spacetime). He made a distinction between language and discourse: language is, indeed, only a system of signs, and therefore timeless, but discourse always occurs at some particular moment of time, i.e. it depends on the context. A language is a necessary condition for communication, but it itself does not communicate: only discourse communicates. The signs in a language system refer only to other signs in it, but discourse refers to a world. Discourse has a time dimension that is due to the merging of two different kinds of time: cosmic time (the uniform time of the universe) and lived time (the discontinuous time of our life’s events). Historical time harmonizes these two kinds of time.

The debate on language proceeded in multiple directions. Wilfrid Sellars conceived a sort of linguistic behaviorism: thoughts are to the linguistic behavior of linguistic agents what molecules are to the behavior of gases.

Roman Jakobson, the leading exponent of “formalism”, decomposed the act of communication into six elements: a message is sent by an addresser to an addressee; the two share a common code, a physical channel and a context. These elements reveal that communication performs many functions in one.

The speculation on language culminated with Noam Chomsky’s studies on grammar. Chomsky rephrased Saussure’s dichotomy of langue and parole as competence and performance: we understand sentences that we have never heard before, thus our linguistic competence exceeds our linguistic performance. In fact, the number of sentences that we can “use” is potentially infinite. Chomsky concluded that what we know is not the infinite set of sentences of the language, but only a finite system of rules that defines how to build sentences. We know the “grammar” of a language. Chomsky separated syntax from semantics: a sentence can be “well-formed” without being meaningful (e.g., “the apple took the train”). In doing so, Chomsky reduced the problem of speaking a language to a problem of formal logic (because a grammar is a formal system). Chomsky realized that it was not realistic to presume that one learns a grammar from the sentences that one hears (a fraction of all the sentences that are possible in a language). He concluded that human brains are designed to acquire a language: they are equipped at birth with a “universal grammar”. We speak because our brain is meant to speak. Language “happens” to a child, just like growth. Chomsky’s universal grammar is basically a “linguistic genotype” that all humans share.
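The point that a finite system of rules defines a potentially infinite set of sentences can be illustrated with a toy context-free grammar; the vocabulary and rules below are invented, not Chomsky's:

```python
import random

# Each symbol maps to its possible expansions; words not in the table are terminals.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion: unbounded sentences
    "VP": [["V", "NP"], ["V"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["sleeps"]],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:              # a terminal word
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for sym in expansion for word in generate(sym)]

random.seed(1)
print(" ".join(generate()))
```

Because "NP" can contain a "VP" which can contain another "NP", five rules suffice to generate sentences of any length, which is the sense in which competence outruns any finite record of performance.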

As Sellars had already noted, Chomsky’s analysis of the structure of language was not enough, though, to explain the phenomenon of language among humans. John-Langshaw Austin argued that the function of sentences is not so much to describe the state of the world as to cause action in the world. He classified a speaker’s “performative” sentences (requests, promises, orders, etc) based not on their structure but on their “effect” on the listener. We speak for a reason. “Pragmatics” is the study of “speech acts”. A speech act is actually made up of three components: a “locutionary” act (the words employed to deliver the utterance), an “illocutionary” act (the type of action that it performs, such as commanding, promising, asking) and a “perlocutionary” act (the effect that the act has on the listener, such as believing or answering).

There is more to a sentence than its meaning: a sentence is “used” for a purpose. Paul Grice realized that speech acts work only if the listener cooperates with the speaker, and the speaker abides by some common-sense rules: the speaker wants to be understood and cause an action, and the listener makes this assumption in trying to understand the speaker’s purpose. Grice believed that some “maxims” help the speaker mean more than the words she is saying: those maxims are implicit knowledge that the listener uses in order to grasp the purpose of the utterance. Language has meaning to the extent that some conventions hold within the linguistic community.

The intimidating progress of Science caused a backlash of sorts among philosophers who disputed Science’s very foundations. After all, scientific hypotheses cannot be tested in isolation from the whole theoretical network within which they are formulated.

Alexandre Koyre’ and Gaston Bachelard had already noted that scientific progress is not linear: it occurs in spurts. Thomas Kuhn formalized that intuition with the concept of “paradigm shifts”. At any point in time the scientific community agrees on a scientific paradigm. New evidence tends to be accommodated in the ruling paradigm. When the ruling paradigm collapses because of some evidence that cannot be accommodated, a paradigm shift occurs. A paradigm shift results in a different way of looking at the world, analogous to a religious conversion. Scientific revolutions are, ultimately, linguistic in character. Thus the truth of a theory does not depend exclusively on its correspondence with reality. The history of science is the history of the transformations of scientific language.

Willard Quine argued that a hypothesis can be verified true or false only relative to some background assumptions, a condition that rapidly becomes recursive: each statement in a theory partially determines the meaning of every other statement in the same theory. One builds a “web of beliefs”, and each belief in the web is based on some other beliefs of the same web. Each belief contributes to support the entire web and is supported by the entire web. The web as a whole fits the requirements of Science. But there might be several such webs that would work as well: scientific theories are “underdetermined” by experience. It is the same situation as with language: there are always many (potentially infinite) interpretations of a discourse, depending on the context. A single word has no meaning: its meaning is always relative to the other words that it is associated with. The meaning of a sentence depends on the interpretation of the entire language. Its meaning can even change in time. For example, it is impossible to define what a “correct” translation of a sentence is from one language to another, because that depends on the interpretations of both entire languages. Translation from one language to another is indeed indeterminate. Translation is possible only from the totality of one language to the totality of another language.

Another strong current of thinkers was the Marxist one, which frequently spent more time criticizing capitalism than heralding socialism.

Juergen Habermas added an element that was missing from Marx’s “materialistic” treatment of society: social interaction, the human element. Societies rely both on labor (instrumental action) and socialization (communicative action). What we are witnessing is not so much alienation as a crisis of the institutions that manipulate individuals. Communicative action, not the revolution of the proletariat, can transform the world and achieve a more humane and just society, based on free and unconditioned debate among equal citizens.

Herbert Marcuse analyzed the operation of mass societies and concluded that they seduce the citizens with the dream of individual liberty only to enslave them in a different way. The only true revolution is emancipation from the economic loop that enslaves us. Such a revolution would bring about an ideal state in which technology is used to provide individual happiness, not surplus.

Theodor Adorno warned that reason has come to dominate not only nature, but also humanity itself, and therefore Western civilization is moving towards self-destruction. For example, mass-culture industries manipulate the masses into cultivating false needs.

Cinema was probably the most faithful interpreter of the times through its well-established genres: Akira Kurosawa’s “Rashomon” (1950), Billy Wilder’s “Sunset Boulevard” (1950), Vittorio De Sica’s “Miracle in Milan” (1951), Kenji Mizoguchi’s “Ugetsu Monogatari” (1953), Yasujiro Ozu’s “Tokyo Monogatari” (1953), Elia Kazan’s “On The Waterfront” (1954), Ingmar Bergman’s “The Seventh Seal” (1956), John Ford’s “The Searchers” (1956), Don Siegel’s “Invasion of the Body Snatchers” (1956), Alfred Hitchcock’s “North By Northwest” (1959), Jean-Luc Godard’s “Breathless” (1959), Federico Fellini’s “La Dolce Vita” (1960), John Huston’s “The Misfits” (1961), Robert Aldrich’s “Hush Hush Sweet Charlotte” (1965), Michelangelo Antonioni’s “Blow-Up” (1966), Luis Bunuel’s “Belle de Jour” (1967), Roman Polanski’s “Rosemary’s Baby” (1968), Stanley Kubrick’s “2001: A Space Odyssey” (1968), Sergio Leone’s “Once Upon a Time in The West” (1968), Sam Peckinpah’s “The Wild Bunch” (1969).

Music moved further away from the tradition of consonant music with John Cage’s “Concerto for Prepared Piano” (1951), Pierre Boulez’s “Le Marteau Sans Maitre” (1954), Luigi Nono’s “Canto Sospeso” (1956), Karlheinz Stockhausen’s “Gesang der Junglinge” (1956), Iannis Xenakis’s “Orient Occident” (1960), Britten’s “War Requiem” (1962), Penderecki’s “Passio Secundum Lucam” (1965), Berio’s “Sinfonia” (1968).

Poetry explored a much broader universe of forms: Pablo Neruda’s “Canto General” (1950), Carlos Drummond de Andrade’s “Claro Enigma” (1951), Paul Celan’s “Mohn und Gedaechtnis” (1952), George Seferis’s “Emerologio Katastromatos” (1955), Yannis Ritsos’s “Moonlight Sonata” (1956), Pier Paolo Pasolini’s “Le Ceneri di Gramsci” (1957), Ezra Pound’s “Cantos” (1960), Vladimir Holan’s “A Night with Hamlet” (1964), Vittorio Sereni’s “Gli Strumenti Umani” (1965), Andrea Zanzotto’s “La Belta`” (1968).

Fiction was the most prolific of the literary genres: Cesare Pavese’s “La Luna e i Falo’” (1950), Elsa Morante’s “L’Isola di Arturo” (1957), Italo Calvino’s “Il Barone Rampante” (1957), Carlo-Emilio Gadda’s “La Cognizione del Dolore” (1963), Alejo Carpentier’s “Los Pasos Perdidos” (1953), Jose Donoso’s “Coronacion” (1957), Gabriel Garcia Marquez’s “Cien Anos de Soledad” (1967), Malcolm Lowry’s “Under the Volcano” (1947), William Gaddis’ “The Recognitions” (1955), Wilson Harris’ “Palace of the Peacock” (1960), Anthony Burgess’s “A Clockwork Orange” (1962), Janet Frame’s “Scented Gardens For The Blind” (1963), Saul Bellow’s “Herzog” (1964), John Barth’s “Giles Goat Boy” (1966), Yukio Mishima’s “The Temple of the Golden Pavilion” (1956), Boris Pasternak’s “Doctor Zhivago” (1957), Witold Gombrowicz’s “Pornography” (1960), Gunther Grass’ “Die Blechtrommel” (1959), Thomas Bernhard’s “Verstoerung” (1967), Elias Canetti’s “Auto da fe” (1967), Raymond Queneau’s “Zazie dans le Metro” (1959), Julio Cortazar’s “Rayuela” (1963), Carlos Fuentes’s “Artemio Cruz” (1964), Jorge Amado’s “Dona Flor” (1966), Kobo Abe’s “The Woman in the Dunes” (1962), Kenzaburo Oe’s “Silent Cry” (1967), Patrick White’s “Voss” (1957).

Theatre built upon the innovations of the first half of the century: Tennessee Williams’ “A Streetcar Named Desire” (1947), Arthur Miller’s “Death of a Salesman” (1949), Samuel Beckett’s “En Attendant Godot” (1952), Friedrich Durrenmatt’s “The Visit of the Old Lady” (1956), Max Frisch’s “Herr Biedermann und die Brandstifter” (1958), Harold Pinter’s “The Caretaker” (1959), Eugene Ionesco’s “Rhinoceros” (1959), John Arden’s “Serjeant Musgrave’s Dance” (1959), Peter Weiss’ “Marat/Sade” (1964).

An epoch-defining moment was the landing on the Moon by USA astronauts in 1969, an event that symbolically closed the decade of the boom.

If the Moon landing had seemed to herald complete domination by the USA, the events of the following decades seemed to herald its decline. The USA was defeated militarily in Vietnam (1975), Lebanon (1983) and Somalia (1992). The oil crisis of the 1970s created a world-wide economic crisis. The USA lost one of its main allies, Iran, to an Islamic revolution (1979) that was as significant for the political mood of the Middle East as Nasser’s Arab nationalism had been for the previous generation. After 30 years of rapid growth, both Japan and Germany became economic powers that threatened the USA globally. Both countries caught up with the USA in terms of average wealth. Militarily, the Soviet Union remained a formidable global adversary, extending its political influence to large areas of the developing world.

Other problems of the age were drugs and AIDS. The culture of drugs and the holocaust of AIDS marked the depressed mood of the arts. Soon, another alarming term would surface in the apocalyptic language: global warming.

However, space exploration continued, still propelled by the desire of the USA and the Soviet Union to compete anytime, anywhere. In 1970 and 1971 the Soviet Union sent spacecraft to our neighbors, Venus and Mars. In 1977 the USA launched the Voyager probes toward the outer planets, on trajectories that would eventually carry them out of the solar system. In 1981 the USA launched the first space shuttle. In 1986 the Soviet Union launched the permanent space station “MIR”. In 1990 the USA launched the Hubble space telescope.

Computers staged another impressive conceptual leap by reaching the desk of ordinary folks: the micro-processor (1971) enabled the first “personal” computer (1974), which became ubiquitous from 1981 on.

As mass-media became more pervasive, they also changed format: the video-cassette recorder (1971), the cellular telephone (1973), the portable stereo (1979), the compact disc (1981), the DVD (1995). Ultimately, these innovations made entertainment, communication and culture more “personal” and more “portable”.

Classical music reflected the complex world of the crisis with Dmitrij Shostakovic’s “Symphony 15” (1971), Morton Feldman’s “Rothko Chapel” (1971), Gyorgy Ligeti’s “Double Concerto” (1972), Henryk Gorecki’s “Symphony 3” (1976), Arvo Part’s “De Profundis” (1980), Witold Lutoslawski’s “Symphony 3” (1983).

The novel continued to experiment with newer and newer formats and structures: Vladimir Nabokov’s “Ada” (1969), Michel Tournier’s “Le Roi des Aulnes” (1970), Ismail Kadare’s “Chronicle in Stone” (1971), Danilo Kis’s “Hourglass” (1972), Thomas Pynchon’s “Gravity’s Rainbow” (1973), Barbara Pym’s “Quartet in Autumn” (1977), Nadine Gordimer’s “Burger’s Daughter” (1979), Manuel Puig’s “El Beso de la Mujer Arana” (1976), Mario Vargas-Llosa’s “La Tia Julia” (1978), Salman Rushdie’s “Midnight’s Children” (1980), Elfriede Jelinek’s “Die Ausgesperrten” (1980), Toni Morrison’s “Tar Baby” (1981), Uwe Johnson’s “Jahrestage” (1983), Jose Saramago’s “Ricardo Reis” (1984), Milan Kundera’s “The Unbearable Lightness of Being” (1985), Joseph McElroy’s “Women and Men” (1987), Antonia Byatt’s “Possession” (1990), Winfried Georg Sebald’s “Die Ausgewanderten” (1992).

Poetry was becoming more philosophical through works such as Joseph Brodsky’s “Stop in the Desert” (1970), Mario Luzi’s “Su Fondamenti Invisibili” (1971), Derek Walcott’s “Another Life” (1973), Edward-Kamau Brathwaite’s “The Arrivants” (1973), Giorgio Caproni’s “Il Muro della Terra” (1975), John Ashbery’s “Self-Portrait in a Convex Mirror” (1975), James Merrill’s “The Changing Light at Sandover” (1982).

By now, cinema was even more international than literature: John Boorman’s “Zardoz” (1973), Martin Scorsese’s “Mean Streets” (1973), Francis-Ford Coppola’s “The Godfather Part II” (1974), Robert Altman’s “Nashville” (1975), Theodoros Angelopoulos’s “Traveling Players” (1975), Bernardo Bertolucci’s “1900” (1976), Terrence Malick’s “Days of Heaven” (1978), Ermanno Olmi’s “L’Albero degli Zoccoli” (1978), Woody Allen’s “Manhattan” (1979), Andrej Tarkovskij’s “Stalker” (1979), Istvan Szabo’s “Mephisto” (1981), Peter Greenaway’s “The Draughtsman’s Contract” (1982), Ridley Scott’s “Blade Runner” (1982), Terry Gilliam’s “Brazil” (1985), Wim Wenders’s “Wings of Desire” (1988), Zhang Yimou’s “Hong Gaoliang” (1987), Aki Kaurismaki’s “Leningrad Cowboys go to America” (1989), Tsui Hark’s “Wong Fei-hung” (1991), Takeshi Kitano’s “Sonatine” (1993), Krzysztof Kieslowski’s “Rouge” (1994), Bela Tarr’s “Satantango/ Satan’s Tango” (1994), Quentin Tarantino’s “Pulp Fiction” (1994), Jean-Pierre Jeunet’s “City of Lost Children” (1995), Lars Von Trier’s “The Kingdom” (1995), Emir Kusturica’s “Underground” (1995), Jan Svankmajer’s “Conspirators of Pleasure” (1996), David Lynch’s “Lost Highway” (1997), Manoel de Oliveira’s “Viagem ao Principio do Mundo” (1997), Hirokazu Kore-eda’s “The Afterlife” (1998).

Physics was striving for grand unification theories. Both the “Big Bang” model and the theory of elementary particles had been successful examples of hybrid Quantum and Relativity theories, but, in reality, the quantum world and the relativistic world had little in common. One viewed the world as discrete, the other one viewed the world as continuous. One admitted indeterminacy, the other one was rigidly deterministic. One interpreted the weak, strong and electromagnetic forces as exchanges of virtual particles, the other one interpreted the gravitational force as space-time warping. Given the high degree of success in predicting the results of experiments, the two theories were surprisingly difficult to reconcile. Attempts to merge them (such as “Superstring Theory”) generally led to odd results.

Skepticism affected philosophers. Paul Feyerabend argued that the history of science proceeds by chance: science is a hodgepodge of more or less casual theories. And it is that way because the world is that way: the world does not consist of one homogeneous substance but of countless kinds, that cannot be “reduced” to one another. Feyerabend took the Science of his time literally: there is no evidence that the world has a single, coherent and complete nature.

Richard Rorty held that any theory is inevitably conditioned by the spirit of its age. The goal of philosophy and science is not to verify if our propositions agree with reality but to create a vocabulary to express what we think is reality. Facts do not exist independently of the way we describe them with words. Thus science and philosophy are only genres of literature.

Another sign that a new era had started was the decline of Structuralism. Jacques Derrida accused Structuralism of confusing “being” and “Being”, the code and the transcendental reality. Language is also a world in which we live. In fact, there are multiple legitimate interpretations of a text, multiple layers of meaning. Language is constantly shifting. He advocated deciphering the “archi-ecriture” via “deconstruction”, attentive to “differance”.

France after World War II provided the ideal stage for a frontal critique of the rationalist tradition founded by Descartes and publicized by the Enlightenment that views reason as the source of knowledge and knowledge as the source of progress. “Modernism” had been based on the implicit postulate that progress founded on science is good, and that reason applied to society leads to a better (e.g. egalitarian) social order. The pessimistic views of Friedrich Nietzsche, Arnold Toynbee, Oswald Spengler, Martin Heidegger and Ludwig Wittgenstein escalated as modern society revealed the dark sides of rapid economic growth, industrialization, urbanization, consumerism, of the multiplying forms of communication and information, of globalization. Technology and media on one hand democratize knowledge and culture but on the other hand introduce new forms of oppression. The earliest forms of reaction to modernism had manifested themselves in Bohemian lifestyles, subcultures such as Dadaism, anticapitalist ideologies, phenomenology and existentialism. But it was becoming more and more self-evident that perception of the object by the subject is mediated by socially-constructed discourse, that heterogeneity and fragmentation make more sense than the totalization of culture attempted by modernism, and that the distinction of high-culture and low-culture was artificial. The post-modernist ethos was born: science and reason were no longer viewed as morally good; multiple sources of power and oppression were identified in capitalist society; education was no longer trusted as unbiased but seen as politicized; etc. Realizing that knowledge is power, the postmodernist generation engaged in political upheavals such as student riots (Berkeley 1964, Paris 1968), adopted mottos such as “power to the imagination” and identified the bourgeoisie as the problem.
For postmodernism the signifier is more important than the signified; meaning is unstable (at any point in time the signified is merely a step in a never-ending process of signification); meaning is in fact socially constructed; there are no facts, only interpretations.

Guy Debord argued that the “society of the spectacle” masks a condition of alienation and oppression. Gilles Deleuze opted for “rhizomatic” thought (dynamic, heterogeneous, chaotic) over the “arborescent thought” (hierarchical, centralized, deterministic) of modernism.

Felix Guattari speculated that there is neither a subject nor an object of desire, just desire as the primordial force that shapes society and history; a micropolitics of desire replaces Nietzsche’s concept of the “will to power”. In his bold synthesis of Freud, Marx and Nietzsche (“schizoanalysis”) the subject is a nomadic desiring machine.

Jean-Francois Lyotard was “incredulous” towards Metaphysics (towards “metanarratives”) because he viewed the rational self (capable of analyzing the world) as a mere fiction. The self, like language, is a layer of meanings that can be contradictory. Instead of “grand narratives”, that produce knowledge for its own sake, he preferred mini-narratives that are “provisional, contingent, temporary, and relative”; in other words, a fragmentation of beliefs and values instead of the grand unification theories of Metaphysics.

Jean Baudrillard painted the picture of a meaningless society of signs in which the real and the simulation are indistinguishable. The transformation from a “metallurgic” society to a “semiurgic” society (a society saturated with artificial signs) leads to an implosion in all directions, an implosion of boundaries (e.g. politics becomes entertainment). More importantly, the boundary between the real and the simulation becomes blurred. Technology, economics and the media create a world of simulacra. The simulation can even become more real than the real (hyper-real). Post-modern society is replacing reality with a simulated reality of symbols and signs. At the same time meaning has been lost in a neutral sterile flow of information, entertainment and marketing. The postmodern person is the victim of an accelerating proliferation of signs that destroys meaning; of a global process of destruction of meaning. The postmodern world is meaningless, it is a reservoir of nihilism. In fact, the accelerating proliferation of goods has created a world in which objects rule subjects: “Things have found a way to elude the dialectic of meaning, a dialectic which bored them: they did this by infinite proliferation”. The only metaphysics that makes sense is a metaphysics of the absurd like Jarry’s pataphysics.

However, the topic that dominated intellectual life at the turn of the millennium and that fostered the first truly interdisciplinary research (involving Neurology, Psychology, Biology, Mathematics, Linguistics, Physics) was: the brain. It was Descartes’ mind-body problem recast in the age of the neuron: who are we? Where does our mind come from? Now that new techniques allowed scientists to study the minutiae of neural processes the ambition became to reconstruct how the brain produces behavior and how the brain produces consciousness. Consciousness became a veritable new frontier of science.

The fascination with consciousness could already be seen in Julian Jaynes’ theory that it was a relatively recent phenomenon, that ancient people did not “think” the way we think today. He argued that the characters in the oldest parts of the Homeric epics and of the Old Testament were largely “non-conscious”: their mind was “bicameral”, two minds that spoke to each other, as opposed to one mind being aware of itself. Those humans were guided by “hallucinations” (such as gods) that formed in the right hemisphere of the brain and that communicated to the left hemisphere of the brain, which received them as commands. Language did not serve conscious thought: it served as communication between the two hemispheres of the brain. The bicameral mind began “breaking down” when the hallucinated voices no longer provided “automatic” guidance for survival. As humans lost faith in gods, they “invented” consciousness.

A major revolution in the understanding of the brain was started, indirectly, by the theory of the immune system advanced by Niels Jerne, which viewed the immune system as a Darwinian system. The immune system routinely manufactures all the antibodies it will ever need. When the body is attacked by foreign antigens the appropriate antibodies are “selected” and “rewarded” over the antibodies that are never used. Instead of an immune system that “designs” the appropriate antibody for the current invader, Jerne painted the picture of a passive repertory of antibodies that the environment selects. The environment is the actor. Jerne speculated that a similar paradigm might be applied to the mind: mind manufactures chaotic mental events that the environment orders into thought. The mind already knows the solution to all the problems that can occur in the environment in which it evolved over millions of years. The mind knows what to do, but it is the environment that selects what it actually does.

Neurologists such as Michael Gazzaniga cast doubt on the role of consciousness. He observed that the brain seems to contain several independent brain systems working in parallel, possibly evolutionary additions to the nervous system. Basically, a mind is many minds that coexist in a confederation. A module located in the left hemisphere interprets the actions of the other modules and provides explanations for our behavior: that is what we feel as “consciousness”. If that is the case, then our “commands” do not precede action, they follow it. First our brain orders an action, then we become aware of having decided it. There are many “I”’s and there is one “I” that makes sense of what all the other “I”’s are doing: we are aware only of this “verbal self”, but it is not the one in charge.

A similar picture was painted by Daniel Dennett, who believed the mind is due to competition between several parallel narrative “drafts”: at every point in time, there are many drafts active in the brain, and we are aware only of the one that is dominant at that point in time. There is, in fact, no single stream of consciousness. The continuity of consciousness is an illusion.

Jerne’s theory was further developed by Gerald Edelman, who noticed that the human genome alone cannot specify the complex structure of the brain, and that individual brains are wildly diverse. The reason, in his opinion, is that the brain develops by Darwinian competition: connections between neurons and neural groups are initially under-determined by the genetic instructions. As the brain is used to deal with the environment, connections are strengthened or weakened based on their success or failure in dealing with the world. Neural groups “compete” to respond to environmental stimuli (“Neural Darwinism”). Each brain is different because its ultimate configuration depends on the experiences that it encounters during its development. The brain is not a direct product of the information contained in the genome: it uses much more information than is available in the genome, i.e. information from the environment. As it lives, the brain continuously reorganizes itself. Thus brain processes are dynamic and stochastic. The brain is not an “instructional” system but a “selectional” system.

The scenario of many minds competing for control was further refined by William Calvin, who held that a Darwinian process in the brain finds the best thought among the many that are continuously produced. A neural pattern copies itself repeatedly around a region of the brain, in a more or less random manner. The ones that “survive” (that are adequate to act in the world) reproduce and mutate. “Thoughts” are created randomly, compete and evolve subconsciously. Our current thought is simply the dominant pattern in the copying competition. A “cerebral code” (the brain equivalent of the genetic code) drives reproduction, variation and selection of thoughts.
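Calvin’s “copying competition” lends itself to a toy simulation. The sketch below is only an illustrative analogy, not Calvin’s actual neural model: bit-string “patterns” copy themselves imperfectly, and a hypothetical TARGET pattern stands in for what the environment rewards. All names and parameter values are assumptions chosen for the example.

```python
import random

random.seed(42)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for "what works in the environment"

def fitness(pattern):
    # how many bits of the pattern match the environment
    return sum(p == t for p, t in zip(pattern, TARGET))

def mutate(pattern, rate=0.05):
    # imperfect copying: each bit may flip with a small probability
    return [1 - b if random.random() < rate else b for b in pattern]

# a "region of cortex" holding 50 competing patterns
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(50)]

for generation in range(40):
    # patterns that fit the environment get copied more often
    weights = [fitness(p) + 1 for p in population]
    population = [mutate(random.choices(population, weights)[0]) for _ in population]

dominant = max(population, key=fitness)
print(dominant, fitness(dominant))
```

After a few dozen rounds of biased copying, the population is dominated by patterns close to the target: selection, not design, produced the “best thought”.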

Paul MacLean introduced the view of the human brain as three brains in one, each brain corresponding to a different stage of evolution: the “reptilian” brain for instinctive behavior (mostly the autonomic system), the “old mammalian” brain for emotions that are functional to survival (mostly the limbic system) and the “new mammalian” brain for higher cognitive functions (basically, the neo-cortex). Mechanical behavior, emotional behavior and rational behavior arose chronologically and now coexist and complement each other.

Merlin Donald viewed the development of the human mind in four stages: the “episodic” mind, that is limited to stimulus-response associations and cannot retrieve memories without environmental cues (lives entirely in the present); the “mimetic” mind, capable of motor-based representations and of retrieving memories independently of environmental cues (understands the world, communicates and makes tools); the “mythic” mind, that constructs narratives and creates myths; and the “theoretical” mind, capable of manipulating symbols.

Steven Mithen identified four “modules” in the brain, which evolved independently and represent four kinds of intelligence: social intelligence (the ability to deal with other humans), natural-history intelligence (the ability to deal with the environment), tool-using intelligence and linguistic intelligence. The hunter-gatherers of pre-history were experts in all these domains, but those different kinds of expertise did not mix. For thousands of years these different skillsets had been separated. Mithen speculated that the emergence of self-awareness caused the integration of these kinds of intelligence (“cognitive fluidity”), which led to the cultural explosion of art, technology, farming and religion.

The role of a cognitive system in the environment was analyzed by Humberto Maturana and Francisco Varela. They believed that living beings are units of interaction, and that cognition is embodied action (or “enaction”). Organisms survive by “autopoiesis”, the process by which an organism continuously reorganizes its own structure to maintain a stable relationship with the environment. A living being cannot be understood independently of its environment, because it is that relationship that molds its cognitive life. Conversely, the world is “enacted” from the actions of living beings. Thus living beings and environment mutually specify each other. Life is an elegant dance between the organism and the environment, and the mind is “the tune of that dance”.

Edward Osborne Wilson, the founder of “sociobiology”, applied the principles of Darwinian evolution to behavior, believing that the social behavior of animals and humans can be explained from the viewpoint of evolution.

Richard Dawkins pointed out that one can imagine a Darwinian scenario also for the evolution of ideas, which he called “memes”. A meme is something that infects a mind (a tune, a slogan, an ideology, a religion) in such a way that the mind feels the urge to communicate it to other minds, thus contributing to spreading it. As memes migrate from mind to mind, they replicate, mutate and evolve. Memes are the cultural counterpart of genes. A meme is the unit of cultural evolution, just like a gene is the unit of biological evolution. Just like genes use bodies as vehicles to spread, so memes use minds as vehicles to spread. The mind is a machine for copying memes, just like the body is a machine for copying genes. Memes have created the mind, not the other way around.
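The replication of a meme across minds can be pictured as a simple contagion process. The sketch below is a toy illustration, not a model Dawkins proposed; the population size and the transmission probability are arbitrary assumptions.

```python
import random

random.seed(1)

# a hypothetical population of minds; one mind starts out "infected" by a meme
minds = [False] * 100
minds[0] = True

PASS_ON = 0.3  # assumed chance that a carrier transmits the meme in one encounter

for day in range(50):
    carriers = [i for i, has_meme in enumerate(minds) if has_meme]
    for i in carriers:
        # each carrier meets one random mind and may pass the meme on
        other = random.randrange(len(minds))
        if random.random() < PASS_ON:
            minds[other] = True  # the meme has replicated into another mind

print(sum(minds), "of", len(minds), "minds now carry the meme")
```

The growth is exponential at first and then saturates: the meme spreads not because any mind designed it to, but simply because carriers copy it onward.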

Dawkins held the view that Darwinian evolution was driven by genes, not by bodies. It is genes that want to live forever, and that use bodies for that purpose. To Dawkins, evolution is nothing but a very sophisticated strategy for genes to survive. What survives is not my body but my genes.

Dawkins also called attention to the fact that the border of a “body” (or, better, phenotype) is not so obvious: a spider would not exist without its cobweb. Dawkins’ “extended phenotype” includes the world that an organism interacts with. The organism alone is an oversimplification, and does not really have biological relevance. The control of an organism is never complete inside and null outside: there is a continuum of degrees of control, which allows partiality of control inside (e.g., parasites operate on the nervous system of their hosts) and an extension of control outside (as in the cobweb). What makes biological sense is an interactive system comprising the organism and its neighbors. The very genome of a cell can be viewed as a representation of the environment inside the cell.

Stuart Kauffman and others saw “self-organization” as a general property of the universe. Both living beings and brains are examples of self-organizing systems. Evolution is a process of self-organization. The spontaneous emergence of order, or self-organization of complex systems, is ubiquitous in nature. Kauffman argued that self-organization is the fundamental force that counteracts the universal drift towards disorder. Life was not only possible and probable, but almost inevitable.
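Kauffman’s favorite example of spontaneous order is the random Boolean network. The sketch below is a minimal version of that idea (his K=2 regime), with illustrative parameters: despite random wiring and random rules, the deterministic dynamics must eventually revisit a state and then cycle forever, settling into an “attractor”, order that nobody put in.

```python
import random

random.seed(0)

N, K = 12, 2  # 12 "genes", each regulated by 2 others

# random wiring: which two nodes feed each node
inputs = [random.sample(range(N), K) for _ in range(N)]
# random Boolean rule per node: an output bit for each input combination
rules = [{(a, b): random.randint(0, 1) for a in (0, 1) for b in (0, 1)}
         for _ in range(N)]

def step(state):
    # update every node from its two regulators, synchronously
    return tuple(rules[i][(state[inputs[i][0]], state[inputs[i][1]])]
                 for i in range(N))

# iterate from a random start until a state repeats: that loop is the attractor
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1

cycle_length = t - seen[state]
print("attractor reached after", seen[state], "steps; cycle length:", cycle_length)
```

Since the state space is finite (2^12 states) and the update rule is deterministic, an attractor is guaranteed; Kauffman’s point was that in sparsely connected networks such attractors tend to be few and short, i.e. order “for free”.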

Linguistics focused on metaphor as more than a poetic tool. George Lakoff argued that language is grounded in our bodily experience, that language is “embodied”. Our bodily experience creates our concepts. Syntax is created by our bodily experience. The “universal” grammar shared by all humans is due to the fact that we all share roughly the same bodily experience. The process by which we create concepts out of bodily experience is metaphor, the process of experiencing something in terms of something else. The entire human conceptual system is metaphorical, because a concept can always be understood in terms of less abstract concepts, all the way down to our bodily experience. No surprise that we understand the world through metaphors, and we do so without any effort, automatically and unconsciously. Lakoff held that language was created to deal with physical objects, and later extended to non-physical objects by means of metaphors. Thus metaphor is biological: our brains are built for metaphorical thought.

Dreams continued to fascinate neurologists such as Allan Hobson and Jonathan Winson, as new data showed that the brain was “using” sleep to consolidate memories. Dreams are a window on the processing that goes on in the brain while we sleep. The brain is rapidly processing a huge amount of information, and our consciousness sees flashes of the bits that are being processed. The brain tries to interpret these bits as narratives, but, inevitably, they look “weird”. In reality, there is no story in a dream: it is just a parade of information that is being processed. During REM sleep the brain processes information that accumulated during the day. Dreams represent “practice sessions” in which animals refine their survival skills. Early mammals had to perform all their “reasoning” on the spot. Modern brains have invented a way to “postpone” processing sensory information.

Colin McGinn was skeptical that any of this could lead to an explanation of what consciousness is and how it is produced by the brain. He argued that we are not omnipotent: like any other organism, there may be things that we just cannot conceive. Maybe consciousness simply does not belong to the “cognitive closure” of our organism. In other words, understanding our consciousness is beyond our cognitive capacities.

The search for consciousness inside the brain took an unexpected turn when a mysterious biorhythm of about 40 Hertz was detected inside the brain. The traditional model for consciousness was “space-based binding”: there must be a place inside the brain where perceptions, sensations, memories and so forth get integrated into the “feeling” of my consciousness.

Thus Gerald Edelman and Antonio Damasio hypothesized mechanisms by which regions of the brain could synthesize degrees of consciousness. Damasio realized that the “movie in the mind” generated by the flow of sensory inputs was not enough to explain self-awareness. He believed that “self” consciousness required a topography of the body and a topography of the environment, and that ultimately the “self” originated from its juxtaposition against the “non-self”. An “owner” and “observer” of the movie is created within a second-order narrative of the self interacting with the non-self. The self is continuously reconstructed via this interaction. The “I” is not telling the story: the “I” is created by stories told in the mind.

Francis Crick launched the opposite paradigm (“time-based binding”) when he speculated that synchronized firing (the 40 Hertz biorhythm) in the region connecting the thalamus and the cortex might “be” a person’s consciousness. Instead of looking for a “place” where the integration occurs, Crick and others started looking for integration in “time”. Maybe consciousness arises from the continuous dialogue between regions of the brain.

Rodolfo Llinas noticed a possible implication of this viewpoint. It looks like neurons are active all the time. We do not control our neurons any more than we control our blood circulation. In fact, neurons are always active, even when there are no inputs. Neurons operate at their own pace, regardless of the pace of information. A rhythmic system controls their activity, just like rhythmic systems control heartbeat or breathing. It seems that neurons are telling the body to move even when the body is not moving. Neurons generate behavior all the time, but only some behavior actually takes place. It sounds like Jerne’s model all over again: it is the environment that selects which movement the body will actually perform. Consciousness is a side-effect: the thalamus calls out all cortex cells that are active, and the response “is” consciousness.

How consciousness was produced by evolution was a fascinating mystery in itself. Graham Cairns-Smith turned the conventional model upside down when he claimed that emotions came first. A rudimentary system of feelings was born by accident during evolution. That system proved to be useful for survival, and therefore evolved. The organism was progressively flooded with emotions until a “stream of consciousness” appeared. Language made it possible to express it in sounds and thoughts instead of mere facial expressions. Then the conscious “I” was born.

The Berlin Wall fell in 1989 and the Soviet Union was dissolved in 1991, two years after withdrawing from Afghanistan (a lengthy and debilitating war). Most of its Eastern European satellites adopted the American model (democracy and capitalism) and applied for membership in both NATO (the USA-led military alliance) and the European Union (the economic union originally sponsored by Italy, France and Germany, which now also included Britain and Spain). From the point of view of the USA, not only had the enemy (the communist world) surrendered, but most of its allies had turned into friends of the USA. Almost overnight, the entire world was adopting the American model. The “domino” effect that the USA had feared would propagate communism took place in the opposite direction: the moment the Soviet Union fell, almost all the countries of the world abandoned communism and adopted the American economic and political model. Democratic reforms removed the dictators of Latin America, the Far East and (later) Africa. The exceptions were rare (Cuba in Latin America, Burma and North Korea in the Far East, Zimbabwe in subequatorial Africa). Two holdouts, however, were notable. Under the stewardship of Deng Xiaoping (who had seized power in 1978), China had embarked on pseudo-capitalistic economic reforms, but the one-party system remained in place and kept strict control over freedom of speech. And the Arab world, from Morocco to Iraq, from Syria to Yemen, together with its eastern neighbors Iran and Afghanistan, was probably ruled by the most totalitarian regimes.

Other than these exceptions, the world was largely being molded after the example of the USA. What had been a picturesque melting pot (mostly a demographic experiment) had become a highly efficient economic and military machine, now imitated throughout the planet.

The adoption of the same economic model favored the creation of several free-trade zones and the creation of a “global village”.

The 1990s were largely a decade of economic boom and (relative) peace (Africa being the theater of most remaining wars).

The USA found itself at the helm of an odd structure. It was definitely not an “empire”, since each country maintained plenty of independence and many countries became fierce competitors of the USA. At the same time, it had assumed a revolutionary (not imperial) mission: to spread liberal democracy around the world. It fought wars that were wars of liberation more than wars of expansion. Its enemies were the enemies of liberal democracy (Nazism, Fascism, Communism, Islamic fundamentalism). It was, first and foremost, an empire of knowledge: 75% of all Nobel laureates in the sciences, economics and medicine were doing research in the USA.

As the countries of the world were no longer forced to choose between the USA camp and the Soviet camp, some of them achieved enough independence to exert significant regional influence. The new regional powers included the European Union (which kept growing in size and ambition), China, India (the largest democracy in the world), Japan, Brazil, Nigeria and South Africa. Last, but not least, there was Russia, intent on rebuilding itself as a non-communist country.

There was a significant shift from the Atlantic Ocean to the Pacific Ocean, as Japan, China, South Korea, Indochina, Australia and India became more and more relevant, while Western Europe became less and less so. The combined gross product of Asian-Pacific countries increased from 7.8% of world GDP in 1960 to 16.4% in 1982 and to over 20% in 2000.

Islamic fundamentalism was not easy to define as a political entity, but, benefiting from the example of Iran’s Islamic state and the funds pouring from the oil states, it managed to hijack a country, the very Afghanistan that had contributed to the fall of the Soviet Union.

After a crisis in the 1970s that had proven to the whole world how crucial the supply of oil was to the world’s economy, the Middle East had become a strategic area above and beyond the scope of the Cold War. With the end of the Cold War, the Middle East became an even more dangerous place because its hidden dynamics became more evident: a deadly combination of totalitarian regimes, Islamic fundamentalism, the Palestinian-Israeli conflict and huge oil reserves. A practical demonstration came with the “Gulf War”, in which a large USA-led coalition repulsed an Iraqi invasion of Kuwait.

Western society was being dominated by automation, from the sphere of the household to the sphere of science. The main innovation of the 1990s was the Internet, which, created in 1985, became a new tool to communicate and spread knowledge, thanks to electronic mail (“e-mail”) and the “World-Wide Web”. This was a completely new landscape, not too dissimilar from the one that explorers of the 16th century had to face. Suddenly, companies had to cope with computer viruses that spread over the Internet, and people could find virtually unlimited amounts of information about virtually any topic via the “search engines”. The effects of the Internet were also visible on the economy of the USA: it largely fueled the boom of the 1990s, including the bubble of the stock market (the “dot-com” bubble).

The other emerging technology was genetic engineering. Having explained how life works, humans proceeded to tamper with it. The first genetically-engineered animal was produced in 1988, followed in 1994 by the first genetically-engineered edible vegetable, and, in 1997, by the first clone of a mammal. The human genome was deciphered by the Human Genome Project.

Both ordinary folks and the intellectual elite had the feeling that they lived a special time. It is not surprising that thinkers turned increasingly to interpreting history. Ironically, this autobiographical theme started when Francis Fukuyama declared the “end of history”, meaning that the ideological debate had ended with the triumph of liberal democracy.

John Ralston Saul criticized globalization, which he viewed as caused by a geopolitical vacuum: nation states had been replaced by transnational corporations. The problem, he argued, was that natural resources and consumers live in real places.

Samuel Huntington interpreted the history of the world of the last centuries as a “Western Civil War”, in which the Christian powers of Europe fought each other anytime anywhere. The fall of Communism and the triumph of Capitalism ended the Western Civil War. Now the world was turning towards a “clash of civilizations” (Western, Islamic, Confucian, Japanese, Hindu, Slavic-Orthodox, Latin American, African).

Just like the Moon landing, which had seemed a good omen for the USA but opened a decade of problems, the fall of the Soviet Union, another seemingly good omen, opened a new category of problems. In 2001, hyper-terrorism staged its biggest success yet by crashing hijacked planes into two New York skyscrapers. The USA retaliated by invading (and democratizing) its home base, Afghanistan, and, for good measure, Iraq. Hyper-terrorism rapidly found new targets around the world, from Spain to the Arab countries themselves. Far from having fostered an era of peace, the fall of communism had opened a new can of worms. The second Iraqi war was also the first instance of a crack among the European allies: Britain, Italy and Poland sided with the USA, while France and Germany strongly opposed the USA-led invasion.

Into the new millennium, Science is faced with several challenges: unifying Quantum and Relativity theories; discovering the missing mass of the universe that those theories have predicted; understanding how the brain manufactures consciousness; deciphering the genome; managing an ever larger community of knowledge workers; using genetics for medical and agricultural purposes; and resuming the exploration of the universe.

Appendix: The New Physics: The Ubiquitous Asymmetry (physics.doc), a chapter of Nature of Consciousness

Piero Scaruffi, December 2004

Bibliography

William McNeill: A History of the Human Community (1987)
Charles VanDoren: A History of Knowledge (1991)
Mark Kishlansky: Civilization In The West (1995)
Roberts: Ancient History (1976)
Arthur Cotterell: Penguin Encyclopedia of Ancient Civilizations (1980)
John Keegan: A History of Warfare (1993)
Bernard Comrie: The Atlas Of Languages (1996)
Henry Hodges: Technology in the Ancient World (1970)
Alberto Siliotti: The Dwellings of Eternity (2000)
Alan Segal: Life After Death (2004)
David Cooper: World Philosophies (1996)
Ian McGreal: Great Thinkers of the Eastern World (1995)
Richard Popkin: The Columbia History of Western Philosophy (1999)
Mircea Eliade: A History of Religious Ideas (1982)
Paul Johnson: Art, A New History (2003)
Ian Sutton: Western Architecture (1999)
Donald Grout: A History of Western Music (1960)
Geoffrey Hindley: Larousse Encyclopedia of Music (1971)
Michael Roaf: Mesopotamia and the Ancient Near East (1990)
Hans Nissen: The Early History of the Ancient Near East (1988)
Annie Caubet: The Ancient Near East (1997)
Trevor Bryce: The kingdom of the Hittites (1998)
Rosalie David: Handbook to Life in Ancient Egypt (1998)
Henri Stierlin: Pharaohs Master-builders (1992)
Glenn Moore: Phoenicians (2000)
Barry Cunliffe: The Ancient Celts (1997)
David Abulafia: The Mediterranean in History (2003)
Henri Stierlin: Hindu India (2002)
Hermann Goetz: The Art of India (1959)
Heinrich Zimmer: Philosophies of India (1951)
Surendranath Dasgupta: A History of Indian Philosophy (1988)
Gordon Johnson: Cultural Atlas of India (1996)
Jadunath Sinha: History Of Indian Philosophy (1956)
Haridas Bhattacharyya: The Cultural Heritage Of India (1937)
Charles Hucker: China’s Imperial Past (1975)
Sherman Lee: A History of Far Eastern Art (1973)
Wolfgang Bauer: China and the Search for Happiness (1976)
Joseph Needham: Science and Civilisation in China (1954)
John King Fairbank & Edwin Reischauer: East Asia Tradition and Transformation (1989)
Penelope Mason: History Of Japanese Art (1993)
Paul Varley: Japanese Culture (1973)
Thomas Martin: Ancient Greece (1996)
Katerina Servi: Greek Mythology (1997)
Robin Sowerby: The Greeks (1995)
Peter Levi: The Greek World (1990)
Tomlinson: Greek And Roman Architecture (1995)
Bruno Snell: The Discovery of the Mind (1953)
Henri Stierlin: The Roman Empire (2002)
Duby & Perrot: A History of Women in the West vol 1 (1992)
Giovanni Becatti: The Art of Ancient Greece and Rome (1968)
Marvin Tameanko: Monumental Coins (1999)
John Norwich: A short history of Byzantium (1995)
Kevin Butcher: Roman Syria (2003)
Bart Ehrman: Lost Scriptures (2003)
Elaine Pagels: The Origins Of Satan (1995)
Robert Eisenman: James the Just (1997)
Timothy Freke: The Jesus Mysteries (1999)
John Dominic Crossan: The Historical Jesus (1992)
Albert Hourani: A History of the Arab peoples (1991)
Bernard Lewis: The Middle East (1995)
John Esposito: History of Islam (1999)
Michael Jordan: Islam – An Illustrated History (2002)
Edgar Knobloch: Monuments of Central Asia (2001)
Huseyin Abiva & Noura Durkee: A History of Muslim Civilization (2003)
Vernon Egger: A History of the Muslim World to 1405 (2003)
David Banks: Images of the Other – Europe and the Muslim World Before 1700 (1997)
Reza Aslan: No God but God (2005)
Majid Fakhry: A History of Islamic Philosophy (1970)
Carter-Vaughn Findley: The Turks in World History (2005)
Richards, John: The Mughal Empire (1995)
Bertold Spuler: The Mongol Period – History of the Muslim World (1994)
David Christian: A History of Russia, Central Asia and Mongolia (1999)
Graham Fuller: The Future of Political Islam (2003)
Norman Cantor: Civilization of the Middle Ages (1993)
Henri Pirenne: Histoire Economique de l’Occident Medieval (1951)
Robert Lopez: The Commercial Revolution of the Middle Ages (1976)
Will Durant: The Age of Faith (1950)
James Chambers: The Devil’s Horsemen (1979)
Henry Bamford Parkes: The Divine Order (1968)
Fernand Braudel: The Mediterranean (1949)
Lynn White: Medieval Technology and Social Change (1962)
Gerhard Dohrn-van Rossum: “History of the Hour” (1996)
Frances & Joseph Gies: Cathedral Forge and Waterwheel (1994)
Georges Duby: The Age of the Cathedrals (1981)
Gunther Binding: High Gothic Art (2002)
Xavier Barral: Art in the Early Middle Ages (2002)
Daniel Hall: A History of Southeast Asia (1955)
Geoffrey Hosking: Russia and the Russians (2001)
Simon Schama: A History of Britain (2000)
Will Durant: The Renaissance (1953)
John Ralston Saul: Voltaire’s Bastards (1993)
Joel Mokyr: Lever of Riches (1990)
Hugh Thomas: The Slave Trade (1997)
Peter Watson: Ideas (2005)
John Crow: The Epic of Latin America (1980)
David Fromkin: Europe’s Last Summer (2004)
Mary Beth Norton: A People And A Nation (1986)
John Steele Gordon: An Empire Of Wealth (2004)
Daniel Yergin: The Prize (1991)
Lawrence James: Rise and Fall of the British Empire (1994)
Robert Jones Shafer: A History of Latin America (1978)
Paul Kennedy: The Rise and Fall of the Great Powers (1987)
Jonathan Spence: The Search for Modern China (1990)
Henry Kamen: Empire (2002)
Edward Kantowicz: The World In The 20th Century (1999)
Christian Delacampagne: A History of Philosophy in the 20th Century (1995)
Piero Scaruffi: Nature of Consciousness (2006)
Jacques Barzun: From Dawn to Decadence (2001)
Peter Hall: Cities in Civilization (1998)
Sheila Jones: The Quantum Ten (2008)
Orlando Figes: Natasha’s Dance – A Cultural History of Russia (2003)
Roger Penrose: The Emperor’s New Mind (1989)
Gerard Piel: The Age Of Science (2001)
Paul Johnson: Modern Times (1983)
Tony Judt: Postwar – A History of Europe Since 1945 (2005)
John Lewis Gaddis: The Cold War (2005)
Stephen Kinzer: Overthrow – America’s Century of Regime Change (2007)
Piers Brendon: The Decline And Fall Of The British Empire 1781-1997
HH Arnason: History of Modern Art (1977)
Herbert Read: A Concise History of Modern Painting (1959)
Jonathan Glancey: 20th Century Architecture (1998)
MOCA: At The End of the Century (1998)
Eric Rhode: A History of the Cinema (1976)
Robert Sklar: Film (1993)
Eileen Southern: The Music of Black Americans (1971)
Ted Gioia: A History of Jazz (1997)
Mark Prendergast: The Ambient Century (2000)
Piero Scaruffi: A History of Jazz (2007)
Piero Scaruffi: History of Rock and Dance Music (2009)

The History of Language: Why We Speak

The Origin of Language

Charles Darwin observed that languages seem to evolve the same way that species evolve. However, just as with species, he could not explain what the origin of language might be.

Languages indeed evolved just like species, through little “mistakes” that were introduced by each generation. It is not surprising that the evolutionary trees drawn by biologists (based on DNA similarity) and linguists (based on language similarity) are almost identical. Language may date back to the beginning of mankind.
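
The parallel can be made concrete: both biologists and linguists build their trees the same way, starting from pairwise distances (DNA differences in one case, the share of non-matching words in the other) and repeatedly merging the closest groups. A minimal sketch, using made-up toy word-list distances (the numbers and language choices are purely illustrative, not real data) and single-linkage agglomerative clustering:

```python
def leaves(tree):
    """Collect the leaf names of a nested-tuple tree."""
    return [tree] if isinstance(tree, str) else leaves(tree[0]) + leaves(tree[1])

def build_tree(items, dist):
    """Repeatedly merge the two closest clusters until one tree remains."""
    d = lambda a, b: dist[min(a, b), max(a, b)]  # symmetric lookup
    clusters = list(items)
    while len(clusters) > 1:
        # single linkage: distance between clusters = closest pair of leaves
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda p: min(d(a, b) for a in leaves(clusters[p[0]])
                                      for b in leaves(clusters[p[1]])))
        merged = (clusters[i], clusters[j])
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return clusters[0]

toy_dist = {  # fraction of non-matching words in a made-up 100-word list
    ("English", "German"): 0.35, ("English", "Italian"): 0.70,
    ("German", "Italian"): 0.72, ("English", "Hindi"): 0.90,
    ("German", "Hindi"): 0.91, ("Hindi", "Italian"): 0.88,
}
print(build_tree(["English", "German", "Hindi", "Italian"], toy_dist))
# → ('Hindi', ('Italian', ('English', 'German')))
```

Feed the same algorithm a DNA-based distance matrix instead of a word-based one and you get the biologists’ tree; the near-identity of the two outputs is the observation made above.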

What is puzzling, then, is not the evolution of modern languages from primordial languages: it is how non-linguistic animals evolved into a linguistic animal such as the human being. It is the “evolution of language” from non-language, not the “evolution of languages” from pre-existing languages, that is puzzling.

Several biologists and anthropologists believe that language was “enabled” by accidental evolution of parts of the brain and possibly other organs.

The USA biologist Philip Lieberman views the brain as the result of evolutionary improvements that progressively enabled new faculties. Human language is a relatively recent evolutionary innovation that came about when speech and syntax were added to older communication systems. Speech allowed humans to overcome the limitations of the mammalian auditory system, and syntax allowed them to overcome the limits of memory.

The USA neurologist Frank Wilson believes that the evolution of the human hand opened to humans a broad range of new activities that, in turn, fostered an evolution of the brain that resulted in the brain of modern humans. Anatomical changes dramatically altered the function of the hand, eventually enabling it to handle and use objects. This new range of possibilities for the hand created a new range of possibilities for thought: the brain could think new thoughts and could structure them. The human brain (and only the human brain) organizes words into sentences, i.e. does syntax, because of the hand. “The brain does not live inside the head”.

Linguistic Darwinism

According to Chomsky’s classical theory, language is an innate skill: we come pre-wired for language, and simply “tune” that skill to the language that is spoken around us. In Chomsky’s view language is biology, not culture. This implies that the language skill is a fantastic byproduct of evolution. Syntax must be regarded as an organ like any other, acquired via natural selection. How did such a skill develop, since it is not present elsewhere in nature? Where did it come from? Language appears to be far too complex a skill to have been acquired via step-by-step refinement of the Darwinian kind, especially since we are not aware of any intermediary steps (e.g., species that use a grammar only to some extent).

The British linguist Derek Bickerton advanced a theory that attempted to bridge Darwin and Chomsky. Bickerton argued that language was the key to the success of the human species, the one feature that made us so much more powerful than all other species. Everything else, from memory to consciousness, seems to be secondary to it. We cannot recall any event before we learned language. We can remember thoughts only after we learned language. Language seems to be a precondition to all the other features that we rank as unique to humans.

First of all, human language cannot just be due to the evolution of primitive, emotion-laden “call systems”. We still cry, scream, laugh, swear, etc. Language has not fully replaced that system of communication. The primitive system of communication continues to thrive alongside language. Language did not replace it, and probably did not evolve from it. Language is something altogether different.

He emphasized the difference (not the similarity) between human and animal communication. Animal communication is holistic: it communicates the whole situation. Human language deals with the components of the situation. Also, animal communication is pretty much limited to what is evolutionarily relevant to the species. Humans, on the other hand, can communicate about things that have no relevance at all for our survival. In fact, we could adapt our language to describe a new world that we have never encountered before. The combinatorial power of human language is what makes it unique. Bickerton thinks that human and animal communication are completely different phenomena.

In fact, Bickerton believes that human language is not primarily a means to communicate but a means to represent the world. Human language did not evolve from animal communication but from older representation systems. First, some cells (the sensory cells) were born whose only task was to respond to the environment. As sensory cells evolved and their inputs became more complex, a new kind of cells appeared that was in charge of mediating between these cells and motor cells. These mediating cells eventually evolved categories that were relevant to their survival. Animals evolved that were equipped with such “primary” representational systems. At some point, humans evolved who were equipped with syntax and were capable of representing representations (of models of models). Human language was so advantageous that it drove a phenomenal growth in brain size (not the other way around).

Two aspects of language, in particular, set it apart from the primitive call system of most animals: the symbolic and the syntactic aspects. A word stands for something (such as an object, a concept, an action). And words can be combined to mean more than their sum (“I walk home” means more than just the concepts of “I”, “walking” and “home”). Bickerton believes that syntax is what makes our species unique: other species can also “symbolize”, but none has shown a hint of grammar.

The philosopher Nicholas Humphrey once argued that language was born out of the need to socialize. On the contrary, Bickerton believes that Humphrey’s “social intelligence” had little to do with the birth of proto-language. Socialization as a selective pressure would not have been unique to humans, and therefore language would have developed as well in other primates. Syntax, instead, developed only in humans, which means that a selective pressure unique to humans must have caused it. Bickerton travels back to the origins of hominids, to the hostile savannas where hominids were easy targets for predators and had precious few food sources. Other primates had a much easier life in the forests. The ecology of early hominids created completely different selective pressures from the ones faced by other primates. In his quest for the very first utterances, Bickerton speculates that language was born to label things, then evolved to qualify those labels in the present situation: “leopard footprints” and “danger” somehow needed to be combined to yield the meaning “when you see leopard footprints, be careful”.

Bickerton shows how this kind of “social calculus”, coupled with Baldwin effects, could trigger and successfully lead to the emergence of syntax. Social intelligence was therefore important for the emergence of syntax, even if it was not important for the emergence of proto-language.

Bickerton points out that the emergence of language requires the ability to model other minds. I am motivated to communicate information only if I can articulate this simple scenario in my mind: I know something that you don’t know and I would gain something if you knew it. Otherwise the whole point of language disappears.

Bickerton thinks that consciousness and the self were enabled by language: language liberated humans from the constraints of animal life and enabled off-line thinking. The emergence of language even created the brain regions that are essential to conscious life. Basically, he thinks that language created the human species and the world that humans see.

Summarizing, Bickerton believes that: language is a form of representation, not just of communication, a fact that sets it apart from animal communication; language evolved from primordial representational systems; language has shaped the cognitive life of the human species.

Grooming

The psychologist Robin Dunbar believes that originally the function of language was not to communicate information, but to cement society.

All primates live in groups. The size of a primate’s neocortex, as compared to the body mass, is directly proportional to the size of the average group for that species. Humans live in the largest groups among primates, and human brains are correspondingly much larger.

As humans transitioned from the forest to the savanna, they needed to band together in order to survive the increased danger of being killed by predators. Language helped keep large groups together. Thus humans who spoke had an evolutionary advantage (the group) over humans who did not develop that skill. Dunbar believes that human speech is simply a more efficient way of “grooming”: apes cement social bonds by grooming the members of their group. Humans “gossiped” instead of grooming each other. Later, and only later, humans began to use language also to communicate information.

Dunbar believes that dialects developed for a similar reason: to rapidly identify members of the same group (it is notoriously difficult to imitate a dialect).

Language and societies evolved together: society stimulated the invention of language, and language enabled larger societies, that stimulated even more sophisticated languages, that enabled even larger societies, etc.

Co-evolution

The USA anthropologist Terrence Deacon believes that language and the brain co-evolved. They evolved together, influencing each other step by step. In his opinion, language did not require the emergence of a language organ. Language originated from symbolic thinking, an innovation that occurred when humans became hunters and needed to reconcile sexual pair-bonding with group cooperation.

Both the brain and language evolved at the same time through a series of exchanges. Languages are easy for infants to learn not because infants can use innate knowledge but because language evolved to accommodate the limits of immature brains. At the same time, brains evolved under the influence of language through the Baldwin effect. Language caused a reorganization of the brain, whose effects ranged from vocal control, laughter and sobbing to schizophrenia and autism.

Deacon rejects the idea of a universal grammar à la Chomsky. There is no innate linguistic knowledge. There is an innate human predisposition to language, but it is due to the co-evolution of brain and language and it is altogether different from the universal grammar envisioned by Chomsky. What is innate is a set of mental skills (ultimately, brain organs) which translate into natural tendencies, which translate into some universal structures of language.

Another way to describe this is to view language as a “meme”. Language is simply one of the many “memes” that invade our mind. And, because of the way the brain is, the meme of language can only assume such and such a structure: not because the brain is pre-wired to such a structure but because that structure is the most natural for the organs of the brain (such as short-term memory and attention) that are affected by it.

Chomsky’s universal grammar is an outcome of the evolution of language in our mind during our childhood. There is no universal grammar in our genes, or, better, there are no language genes in our genome.

The secret of language is not in the grammar, but in the semantics. Language is meaningful. Deacon envisions a hierarchy of levels of reference (of meaning), that reflects the evolution of language. At the top is the level of symbolic reference, a stable network of interconnected concepts. A symbol does not only refer to the world, but also to other symbols. The individual symbol is meaningless: what has meaning is the symbol within the vast and ever changing semantic space of all other symbols. At lower levels, Deacon envisions less and less symbolic forms of representation, which are also less and less stable. At the lowest, most fluctuating level of the hierarchy there lie references that are purely iconic and indexicals, created by a form of learning that is not unique to language (in fact it is widespread to all cognitive tasks). The lower levels are constrained by what humans can experience and learn, which is constrained by innate abilities. The higher level, on the other hand, is an emergent system due to the interaction among linguistic agents.

Gesturing in the Mind

According to USA neuroscientist Rhawn Joseph, one of the youngest parts of the brain, the inferior parietal lobe of the left hemisphere, enabled language, tool making and art alike. It enabled us, in other words, to create visual symbols. It also enabled us to create verbal symbols, i.e. writing.

The inferior parietal lobe allows the brain to classify and label things. This is the prerequisite to forming concepts and to “abstracting” in general. Surprisingly, this is also the same organ that enables meaningful manual gesturing (a universal language, one that is also shared with many animals). Thus the evolution of writing is somehow related (neurally speaking) to manual gesturing. The inferior parietal lobe was one of the last organs of the brain to evolve, and it is still one of the last organs to mature in the child (which explains why children have to wait a few years before they can write and do math).

This lobe is much more developed in humans than in other animals (and non-existent in most). The neurons of this lobe are somewhat unique in that they are “multimodal”: they are capable of simultaneously processing different kinds of inputs (visual, auditory, movement, etc). They are also massively connected to the neocortex, precisely to three key regions for visual, auditory and somatosensory processing. Their structure and location make them uniquely fit to handle and create multiple associations. It is probably this lobe that enables us to understand a word as an image, a function, a name and many other things at the same time.

Joseph claims that the emotional aspect of speaking is the original one: the motivation to speak comes from the limbic system, the archaic part of the brain that deals with emotions, and that we share with other mammals. The limbic system embodies a universal language that we all understand, a primitive language made of calls and cries. Each species has its own, but within a species all members understand it. Joseph believes that at this stage the “vocal” hemisphere is the right one. Only later, after a few months, does the left hemisphere impose structure on the vocalizing and thus become dominant in language.

Language as A Sexual Organ

The USA evolutionary psychologist Geoffrey Miller believes that the human mind was largely molded by sexual selection and is therefore mainly a sexual ornament. Culture, in general, and language, in particular, are simply ways that males and females play the game of sex. When language appeared, it quickly became a key tool in sexual selection, and therefore it evolved quickly.

Darwin had already speculated that language may have evolved through sexual selection. Miller agrees, finding that the usual explanation (that language helps a group trade key information) is only a small piece of the puzzle (individuals, unless they are relatives, have no motivation to share key information since they are supposed to compete).

Even more powerful is the evidence that comes from observing the behavior of today’s humans: they compete to be heard, they compete to utter the most sensational sentences, they are dying to talk.

Miller also mentions anatomical evidence: what has evolved dramatically in the human brain is not the hearing apparatus but the speaking apparatus. Miller believes that language, whose intended or unintended effect is to deliver knowledge to competitors, must also have a selfish function, otherwise it would not have developed: individuals who simply delivered knowledge to competitors would not have survived. On the other hand, if language is a form of sexual display, then it makes sense that it evolved rapidly, just like any other organ (bull horns or peacock tails) that served that function. It is unique to humans the same way that the peacock’s tail is unique to peacocks. It is pointless to try and teach language to a chimpanzee the same way that it is pointless to expect a child to grow a colorful tail.

The Origin of Communication

Where language comes from is a question that applies not only to humans but to all species, each species having its own “language”.

One might as well ask the question: where do bee dances come from? Bees are extremely good at providing details about the route to and the location of food. They do so not with words but with dances. The origins of bee dances are no less intriguing than the origins of human language.

The point is that most species develop a social life and the social life depends on a mechanism of communication, and in humans that mechanism is language. But language may be viewed as a particular case of a more general process of nature, the process by which several individuals become aggregated in a group.

There is a bond among the members of a species, regardless of whether they are cooperating or competing: they can communicate. A dog cannot communicate much to a cat. A lion cannot communicate with an ant. And the greatest expert in bees cannot communicate much with a bee. Communication between members of different species is close to impossible. But communication among members of a species is simple, immediate, natural, and, contrary to our beliefs, does not require any advanced skills. All birds communicate; all bees communicate. There is no reason to believe that humans would not communicate if they were not taught a specific language. They might, in fact, communicate better: hand gestures and facial expressions may be a more efficient means of communication among humans than words.

Again, this efficiency is independent of the motives: whether it is for cooperation or for aggression. We can communicate with other members of our species. When we communicate for cooperation, the communication can become very complex and sophisticated. We may communicate that a herd is moving east, that clouds are bringing rain, that the plains are flooded. A bee can communicate similar information to another bee. But an ant cannot communicate this kind of information to a fish and a fish cannot communicate it to a bird. Each species has developed a species-specific form of communication.

The origin of language is but a detail in a much more complex story, the story of how intra-species communication evolved. If all species come from a common ancestor, there must have been only one form of communication at the beginning. Among the many traits that evolved over the ages, intra-species communication is one that took the wildest turns. While the genetic repertoire of bees and flies may be very similar, their system of communication is wildly different.

The fact that communication is different for each species may simply be due to the fact that each species has different kinds of senses, and communication has to be tailored to the available senses.

Both sexual reproduction and altruism could account for the existence of this social trait.

The Origin of Cellular Communication

Even before social behavior was invented, there was a fundamental language of life. Living cells communicate all the time, even in the most primitive organisms: cell communication is the very essence of being alive.

There are obvious parallels between the language of words and the language of cellular chemicals. Two cells that exchange chemicals are doing just that: “talking” to each other, using chemicals instead of words. Those chemicals are bound in molecular structures just like the words of human language are bound in grammatical structures.

Even the forms of communication that do not involve chemical exchange still cause a chemical reaction. A bee that changes course or a human brain that learns something has undergone a chemical change, and that change has altered its cognitive state.

From this point of view, there are at least three main levels of communication: a cellular level, in which living cells transmit information via chemical agents; a bodily level, in which living beings transmit information via “gestures”; and a verbal level, in which living beings transmit information via words.

Each level might simply be an evolution of the previous one.

Who Invented Language?

Linguists, geneticists and anthropologists have explored the genealogical tree of human languages to determine where human language was invented. Was it invented in one place and then spread around the globe (why then so many languages rather than just one?) or was it invented in different places around the same time? (What a coincidence that would be).

The meta-issue with this quest is the role of free will, i.e. whether we humans have free will and decide what happens to us. We often assume that somebody “invented” something and then everybody started using it. The truth could be humbler: all we humans share pretty much the same brain, and that brain determines our behavior. We all sleep, we all care for our children, we all avoid danger. Not because one human “invented” these behaviors, but because our brains are programmed to direct us to behave that way. Our free will (if indeed we have any) is limited to deciding which woman to marry, but the reason we want a wife is sex and children, a need that is programmed in our brain (and, of course, one could claim that the choice of the specific wife is also driven by our brain’s circuits).

In fact, we consider “sick” or “abnormal” any human being who does not love her/his children, any human who does not like sex, etc.

Asking who invented language could be like asking who invented sex or parenting. It may just come with the race. We humans may be programmed to communicate using the human language. It didn’t take a genius to invent language. We started speaking, worldwide, as soon as the conditions were there (as soon as we started living in groups, more and more heterogeneous groups, more and more collaborative groups).

The mystery may not be who invented language, but why we invented so many and so different languages. There are striking differences between Finnish and Chinese, even though those two peoples share pretty much the same brain. The effect of the environment on the specific language we start speaking must indeed be phenomenal.

What Are Jokes And Why Do We Make Them

Language developed because it had an evolutionary function. In other words, it helped us survive. For example, language enabled humans to exchange information about the environment: a member of one group can warn a member of another group about an impending danger, the source of water, or the route taken by a predator.

This may be true, but it hardly explains the way we use language every day. When we write an essay we may be matter-of-fact, but for most of the day we are not. For example, we make jokes all the time. A human being who does not make jokes, or does not laugh at jokes made by others, is considered a case for a psychoanalyst. Jokes are an essential part of the use of language.

Nonetheless, jokes are a peculiar way to use language. We use words to express something that is not true, but could be true, and the brain somehow relates to this inconsistency and… we laugh.

There must be a reason why humans make jokes. There must be a reason why we use language to make jokes.

Upon closer inspection, we may not be so sure that the main function of language is communicating information about the environment.

If a tiger attacks you, I will not read you an essay on survival of the fittest: I will just scream “run!” We don’t need the complex, sophisticated structure of language to “communicate” about us and the environment. If you are starving, I may just point to the refrigerator. For most practical purposes, street signs communicate information about locations better than geography books. It is at least debatable whether we need language to communicate information about the environment that is relevant to survival. We can express most or all of that information in very simple formats, often with just one word or even just a gesture.

On the other hand, if we want to make a joke, we need to master the whole power of the language. Every beginner in a foreign language knows that the hardest part is understanding jokes in that language, and the second hardest is making them. Joking does require the whole complex structure of language and, on closer inspection, it is the only feature of human life that requires it.

Jokes are probably very important for our survival. A joke is a form of practice: we laugh because we realize that something terrible would happen in that circumstance: the logic of the world would be violated, or a practical disaster would occur. The situation is "funny" because it has to be avoided. Being funny helps us remember that it should be avoided.

Joking may well be an important way to learn how to move in the environment without having to do it firsthand, without having to pay the consequences of a mistake.

In that case, it would be more than justified that our brain evolved a very sophisticated tool to make jokes: language.

Ultimately, language may have evolved to allow us to make more and more useful (funnier and funnier) jokes.

Tools

The British psychologist Richard Gregory has shown how language is but one particular type of “tool”. The human race, in general, is capable of making and using tools, and language happens to be one of them.

Gregory claims that “tools are extensions of the limbs, the senses and mind.” The fundamental difference between humans and apes is not in the (very small) anatomical differences but in language and tools. Man is both a tool-user and a tool-maker.

Gregory shows that there are "hand" tools (such as the lever, the pick, the axe, the wheel) and "mind" tools, which help us measure, calculate and think (such as language, writing, counting, computers, clocks).

Tools are extensions of the body. They help us perform actions that would be difficult for our arms and legs. Tools are also extensions of the mind. Writing extended our memory. We can make a note of something. So do photographs and recordings. This book extends my mind. It also extends your mind. Tools like books create a shared mind.

Gregory qualifies information as “potential intelligence” and behavior as “kinetic intelligence”. Tools increase intelligence as they enable a new class of behavior. A tool “confers” intelligence to a user, meaning that it turns some potential intelligence into kinetic intelligence.

A person with a tool is a person with a potential intelligence to perform an action that without the tool would not be possible (or much more difficult).

Behavior is often just using that tool to perform that action. It may appear that intelligence is in your action, but, actually, intelligence is in the tool, not in your action. Or, better, they are two different types of intelligence.

And words are just one particular type of tool.

There is also a physical connection in our body between language and tool usage: they are both controlled by the same hemisphere.

Tools as Intentionality

The USA philosopher Daniel Dennett advanced a theory of language based on his theory of "intentionality" (the ability to refer to something). Basically, his idea is that different levels of intentionality correspond to different "kinds" of minds.

The “intentional stance” is the strategy of interpreting the behavior of something (a living or non-living thing) as if it were a rational agent whose actions are determined by its beliefs and desires. This is the stance that we adopt, for example, when dealing with ourselves and other humans: we assume that we and others are rational agents whose actions are determined by our beliefs and desires. Intentional systems are those to which the intentional stance can be applied, and they include artifacts such as thermostats and computers, as well as all living beings. For example, we can say that “this computer program wants me to input my name” or that “the tree bends south because it needs more light” (both “wants” and “needs” express desire).

The intentional stance makes the assumption that an intentional system has goals that it wants to achieve; that it uses its own beliefs to achieve its own goals, and that it is smart enough to use the right ones in the appropriate way.

It seems obvious that artifacts possess only “derived” intentionality, i.e. intentionality that was bestowed on them by their creators. A thermostat measures temperature because that is what the engineer designed it for. The same argument, though, applies to us: we are artifacts of nature and nature bestows on us intentionality. (The process of evolution created our minds to survive in an environment, which means that our mind is about the environment).

Dennett speculates that brains evolved from the slow internal communication systems of “sensitive” but not “sentient” beings when they became equipped with a much swifter communication agent (the electro-chemicals of neurotransmitters) in a much swifter communication medium (nerve fibers). Control was originally distributed around the organism in order to be able to react faster to external stimuli. The advent of fast electro-chemicals allowed control to become centralized, because now signals traveled at the speed of electricity. This also allowed control to become much more complex, as many more things could be done in a second.

“Evolution embodies information in every part of every organism”. And that information is about the environment. A chameleon’s skin, a bird’s wings, and so forth, they all embody information about the medium in which their bodies live. This information does not need to be replicated in the brain as well. The organ already “knows” how to behave in the environment. Wisdom is not only in the brain, wisdom is also embodied in the rest of the body. Dennett speculates that this “distributed wisdom” was not enough: a brain can supplement the crudeness, the slowness, the limitations of the organs. A brain can analyze the environment on a broader scale, can control movement in a much faster way and can predict behavior over a longer range.

As George Miller put it, animals are "informavores". Dennett believes in a distributed information-sucking system, each component of which is constantly fishing for information in the environment. They are all intentional systems, which get organized into a higher-level intentional system with an "increasing power to produce future".

This idea, both evolutionarily and conceptually, can be expressed as a number of steps of intentionality, each of which yields a different kind of mind. First there were "Darwinian creatures", which were simply selected by trial and error on the merits of their bodies' ability to survive (all living organisms are Darwinian creatures). Then came "Skinnerian creatures", which were also capable of independent action and therefore could enhance their chances of survival by finding the best action (they are capable of learning from trial and error). The third kind of mind, the "Popperian creatures", were able to play an action internally in a simulated environment before they performed it in the real environment, and could therefore reduce the chances of negative outcomes (information about the environment supplemented conditioning). Popperian creatures include most mammals and birds. They feel pain, but do not suffer, because they lack the ability to reflect on their sensations.

Humans are also “Gregorian creatures”, capable of creating tools, and, in particular, of mastering the tool of language. Gregorian creatures benefit from technologies invented by other Gregorian creatures and transmitted by cultural heritage, unlike Popperian creatures that benefit only from what has been transmitted by genetic inheritance.

A key step in the evolution of “minds” was the transition from beings capable of an intentional stance towards others to beings capable of an intentional stance towards an intentional stance. A first-order intentional system is only capable of an intentional stance towards others. A second-order intentional system is also capable of an intentional stance towards an intentional stance. It has beliefs as well as desires about beliefs and desires. And so forth. Higher-order intentional systems are capable of thoughts such as “I want you to believe that I know that you desire a vacation”.
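The recursive structure of these orders of intentionality can be made concrete with a small sketch (an illustrative toy model, not Dennett's own formalism; the `Attitude` type and `order` function are invented here for illustration):

```python
# Orders of intentionality modeled as nested attitudes: each attitude wraps
# an agent, a verb, and a content that may itself be another attitude.
from dataclasses import dataclass
from typing import Union

@dataclass
class Attitude:
    agent: str
    verb: str                          # e.g. "wants", "believes", "knows", "desires"
    content: Union["Attitude", str]    # either a nested attitude or a plain object

def order(a: Union[Attitude, str]) -> int:
    """The order of intentionality is the depth of nested attitudes."""
    return 0 if isinstance(a, str) else 1 + order(a.content)

# "I want you to believe that I know that you desire a vacation"
thought = Attitude("I", "wants",
            Attitude("you", "believes",
              Attitude("I", "knows",
                Attitude("you", "desires", "a vacation"))))

print(order(thought))  # -> 4: a fourth-order intentional thought
```

A first-order system would only ever build attitudes of depth one ("you desire a vacation"); the higher-order systems discussed above build arbitrarily deep nestings.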

This was not yet conscious life because there are examples, both among humans and among other animals, of unaware higher-order intentionality. For example, animals cheat on each other all the time, and cheating is possible only if you are capable of dealing with the other animal’s intentional state (with the other animal’s desires and beliefs), but Dennett does not think that animals are necessarily conscious. In other words, he thinks that one can be a psychologist without being a conscious being.

Dennett claims that our greater “intelligence” is due not to a larger brain but to the ability to “off load” as much as possible of our cognitive tasks into the environment. We construct “peripheral devices” in the environment to which those tasks can be delegated. We can do this because we are intentional: we can point to those things in the environment that we left there. In this way the limitations of the brain do not matter anymore, as we have a potentially infinite area of cognitive processing. Most species rely on natural landmarks to find their way around and track food sources. But some species (at least us) have developed the skills to “create” their own landmark, and they can therefore store food for future use. They are capable of “labeling” the world that they inhabit. Individuals of those species alter the environment and then the altered environment alters their behavior. They create a loop to their advantage. They program the environment to program them.

Species that store and use signs in the environment have an evolutionary advantage because they can “off-load” processing. It is like “taking a note” that we can look up later, so we don’t forget something. If you cannot take a note, you may forget the whole thing.

Thanks to these artifacts, our mind can extend out into the environment. For example, the notepad becomes an extension to my memory.

These artifacts shape our environment. Our brains are semiotic devices that contain pointers and indices to the external world.

Semiotics: Signs and Messages

Semiotics provides a different perspective to study the nature and origin of language.

Semiotics, founded in the 1940s by the Danish linguist Louis Trolle Hjelmslev, had two important precursors in the USA philosophers Charles Peirce (whose writings were rediscovered only in the 1930s) and Charles Morris (who in 1938 had formalized a theory of signs).

Peirce reduced all human knowledge to the idea of “sign” and identified three different kinds of signs: the index (a sign which bears a causal relation with its referent); the icon (which bears a relation of similarity with its referent); and the symbol (whose relation with its referent is purely conventional). For example, the flag of a sport team is a symbol, while a photograph of the team is an icon. Movies often make use of indexes: ashes burning in an ashtray mean that someone was recently in the room, and clouds looming on the horizon mean it is about to rain. Most of the words that we use are symbols, because they are conventional signs referring to objects.

Morris defined the disciplines that study language according to the roles played by signs. Syntax studies the relation between signs and signs (as in “the” is an article, “meaning” is a noun, “of” is a preposition, etc.). Semantics studies the relation between signs and objects (“Piero is a writer” means that somebody whose name is “Piero” writes books). Finally, Pragmatics studies the relation between signs, objects and users (the sentence “Piero is a writer” may have been uttered to correct somebody who said that Piero is a carpenter).

The Swiss linguist Ferdinand DeSaussure introduced the dualism of the "signifier" (the word actually uttered) and the "signified" (the mental concept). ("Semiology" usually refers to the Saussurean tradition, whereas "semiotics" refers to the Peircean tradition. Semiotics, as opposed to Semiology, is the study of all signs.)

The Argentine semiotician Luis Prieto studied signs, in particular, as means of communication. For example, the Braille alphabet and traffic signs are signs used to communicate. A “code” is a set of symbols (the “alphabet”) and a set of rules (the “grammar”). The code relates a system of expressions to a set of contents. A “message” is a set of symbols of the alphabet that has been ordered according to the rules of the grammar. This is a powerful generalization: language turns out to be only a particular case of communication. A sentence can be reduced to a process of encoding (by the speaker) and decoding (by the listener).
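Prieto's reduction of a sentence to encoding and decoding over a code can be sketched in a few lines (the alphabet, grammar and contents below are invented purely for illustration):

```python
# A minimal illustration of Prieto's notion of a code: an alphabet (symbols),
# a grammar (rules for ordering symbols), and a mapping relating expressions
# to contents. The specific symbols and contents are made up.

ALPHABET = {"short", "long"}           # the set of admissible symbols
CONTENTS = {                           # the code: expressions -> contents
    ("short",): "danger",
    ("long",): "food",
    ("short", "long"): "danger near food",
}

def is_well_formed(message):
    """Grammar: a message is one or two symbols drawn from the alphabet."""
    return 1 <= len(message) <= 2 and all(s in ALPHABET for s in message)

def encode(content):
    """Speaker: find an expression of the code whose content matches."""
    for expression, meaning in CONTENTS.items():
        if meaning == content:
            return expression
    raise ValueError("content not expressible in this code")

def decode(message):
    """Listener: map a well-formed message back to its content."""
    if not is_well_formed(tuple(message)):
        raise ValueError("message violates the grammar")
    return CONTENTS[tuple(message)]

print(decode(encode("danger near food")))  # -> danger near food
```

In this generalized sense the Braille alphabet, traffic signs and human language differ only in their alphabets, grammars and content sets, not in the encode/decode process itself.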

The Hungarian semiotician Thomas Albert Sebeok views Semiotics as a branch of communication theory that studies messages, whether emitted by objects (such as machines), animals or humans. In agreement with René Thom, Sebeok thinks that human sign behavior has nothing special that distinguishes it from animal sign behavior, or even from the behavior of inanimate matter.

The Canadian psychologist Merlin Donald speculated on how the human mind developed. He argued that in the beginning there was only episodic thinking: early hominids could only remember and think about episodes. Later they learned how to communicate, and then they learned how to build narratives. Symbolic thinking came last. Building on this scenario, the Danish semiotician Jesper Hoffmeyer has drawn his own conclusions: in the beginning there were stories, and little by little individual words rose out of them. This implies that language is fundamentally narrative in nature; that language is corporeal, tied to motor-based behavior; and that the unit of communication among animals is the whole message, not the word.

Hoffmeyer has introduced the concept of “semiosphere”, the semiotic equivalent of the atmosphere and the biosphere, that incorporates all forms of communication, from smells to waves: all signs of life. Every living organism must adapt to its semiosphere or die. At all levels, life must be viewed as a network of “sign processes”. The very reason for evolution is death: since organisms cannot survive in the physical sense they must survive in the semiotic sense, i.e. by making copies of themselves. “Heredity is semiotic survival”.

René Thom, the French mathematician who invented catastrophe theory, aims at extending his method so as to "geometrize thought and language". Thom envisions a Physics of meaning, of significant form, which he calls "Semiophysics".

Following this generalization of signs, James Fetzer has even argued in favor of extending Newell and Simon's theory to signs: the mind not as a processor of symbols, but as a processor of signs.

Collective Cognition

What is, ultimately, the function of language? To communicate? To think? To remember? All of this and more. But, most likely, not only for the sake of the individual. Language's crucial function is to create a unit out of many individuals. Once we learn to speak, we become part of something bigger than ourselves. We inherit other people's memories (including the memories of people who have long been dead) and become capable of sharing our own memories with other people (even those who have not been born yet).

Thanks to language, the entire human race becomes one cognitive unit, with the ability to perceive, learn, remember, reason, and so forth. Language turns the minds of millions of individuals into gears at the service of one gigantic mind.

As the USA neuroscientist Paul Churchland once pointed out, language creates a collective cognition, a collective memory and intelligence.

Further Reading

Bickerton, Derek & Calvin, William: LINGUA EX MACHINA (MIT Press, 2000)

Bickerton, Derek: LANGUAGE AND SPECIES (Chicago Univ Press, 1992)

Churchland, Paul: ENGINE OF REASON (MIT Press, 1995)

Darwin, Charles: LANGUAGES AND SPECIES (1874)

Deacon, Terrence: THE SYMBOLIC SPECIES (W.W. Norton & C., 1997)

DeSaussure, Ferdinand: COURSE IN GENERAL LINGUISTICS (1916)

Dennett, Daniel: KINDS OF MINDS (Basic, 1998)

Donald, Merlin: ORIGINS OF THE MODERN MIND (Harvard Univ Press, 1991)

Dunbar, Robin: GROOMING, GOSSIP AND THE EVOLUTION OF LANGUAGE (Faber and Faber, 1996)

Gardenfors, Peter: HOW HOMO BECAME SAPIENS (Oxford Univ Press, 2003)

Gregory, Richard: MIND IN SCIENCE (Cambridge Univ Press, 1981)

Hoffmeyer, Jesper: SIGNS OF MEANING IN THE UNIVERSE (Indiana Univ. Press, 1996)

Joseph, Rhawn: NAKED NEURON (Plenum, 1993)

Lieberman, Philip: THE BIOLOGY AND EVOLUTION OF LANGUAGE (Harvard Univ Press, 1984)

Miller, Geoffrey: THE MATING MIND (Doubleday, 2000)

Morris, Charles: FOUNDATIONS OF THE THEORY OF SIGNS (University Of Chicago Press, 1938)

Niehoff, Debra: THE LANGUAGE OF LIFE: HOW CELLS COMMUNICATE IN HEALTH AND DISEASE (Joseph Henry Press, 2005)

Peirce, Charles: COLLECTED PAPERS (Harvard Univ Press, 1931)

Prieto, Luis: PRINCIPES DE NOOLOGIE (1964)

Sebeok, Thomas Albert: CONTRIBUTION TO A DOCTRINE OF SIGNS (Indiana Univ Press, 1976)

Thom, René: SEMIOPHYSICS (Addison-Wesley, 1990)

Wilson, Frank: THE HAND (Pantheon Books, 1998)

Half a Century of the "Boom"

December 1, 2012. This year marks fifty years since the starting gun of the most important literary movement on the continent, christened the "Boom" of Latin American literature by the Chilean critic Luis Harss in his book Los nuestros, which thereby instituted a new literary canon.

In 1962 two foundational novels were published: La ciudad y los perros, by Mario Vargas Llosa, and La muerte de Artemio Cruz, by Carlos Fuentes. In 1963 came Rayuela, by Julio Cortázar, and in 1967, Cien años de soledad, by Gabriel García Márquez.

These four writers and their countries, Peru, Mexico, Argentina and Colombia, are the pillars on which the "Boom" would rest, a movement that would revolutionize fiction worldwide and whose repercussions would carry Latin America, a continent until then regarded as a collection of banana republics, onto the cultural and literary front pages of every news outlet on the planet.

There are other guests at this literary feast, such as the Cuban Guillermo Cabrera Infante, who published his masterful Tres tristes tigres in 1968, or the Chilean José Donoso, who released his novel El obsceno pájaro de la noche in 1970.

The phrase coined by the Cuban Alejo Carpentier, characterizing Latin America as the continent of "lo real maravilloso" (the marvelous real), inspired the so-called "magical realism" that was enthroned as the house brand after Cien años de soledad.

None of this, however, was a product of chance, for before these four magnificent figures of continental literature there was already a tradition laid down by founding fathers such as the Guatemalan Nobel laureate Miguel Ángel Asturias, author of Hombres de maíz; the Peruvian José María Arguedas and his novel Los ríos profundos; the Mexican Juan Rulfo and his novel Pedro Páramo; the Cuban Alejo Carpentier and his Los pasos perdidos; as well as the Uruguayan Juan Carlos Onetti, La vida breve; the Argentine Jorge Luis Borges, Historia universal de la infamia; the Peruvian Ciro Alegría, El mundo es ancho y ajeno; the Paraguayan Augusto Roa Bastos, Hijo de hombre; and the Argentine Adolfo Bioy Casares, La invención de Morel (1940).

In that same decade of the sixties, however, two great writers curiously remained outside the "Boom", because its four seats already had their owners. We refer to the Argentine Ernesto Sábato and his novel Sobre héroes y tumbas (1961), and to the summit of twentieth-century Latin American literature, Paradiso, by the Cuban José Lezama Lima, published in 1966.

In parallel, the "Boom" had a poetic underpinning in giants of world poetry who continued the tradition of Rubén Darío, such as Gabriela Mistral, Pablo Neruda, Octavio Paz, Nicanor Parra, Roque Dalton García and Ernesto Cardenal.

Backed by a publicity machine of great draught, and championed by the Spanish literary agent Carmen Balcells and the Catalan publisher Carlos Barral, the "Boom" was, exceptionally, an excellent business built on excellent literature, one that broke the mold, placed the Latin American periphery at the center of the Spanish-language metropolis and swept the planet with incredible stories told as the most natural thing in the world.

Not for nothing did they come from the continent of seven colors, the land of magical realism, where fiction far outstrips reality, and dreams are the best way of interpreting that reality.

The Fable of the Eagle and the Hen

Globalization represents a new stage in the process of cosmogenesis and anthropogenesis. We have to enter it. But not in the way intended by the powers that control the world market, a competitive and in no way cooperative market, interested only in our material riches and reducing us to mere consumers.

We want to enter it sovereign, and conscious of the ecological, multicultural and spiritual contribution we can make.

One senses in the current government an unbounded enthusiasm for globalization. The president speaks of it without the nuances that would cast our singularity in its proper light. He has the capacity to be a voice of his own and not an echo of the voice of others.

For him and his allies I tell a story that comes from a small country in West Africa, Ghana, narrated by a popular educator, James Aggrey, at the beginning of the twentieth century, during the struggles for decolonization.

May it make them think.

Once upon a time a peasant went to the nearby forest to catch a bird to keep captive at home. He managed to catch an eagle chick. He put it in the henhouse with the hens. It grew up like a hen.

After five years, the man received a visit from a naturalist. Passing through the garden, the naturalist said: "That bird over there is not a hen. It is an eagle."

"Indeed," said the man. "It is an eagle. But I raised it as a hen. It is no longer an eagle. It is a hen like the others."

"No," replied the naturalist. "It is and always will be an eagle. For it has the heart of an eagle. That heart will one day make it soar to the heights."

"No," insisted the peasant. "It has become a hen and will never fly like an eagle."

So they decided to put it to the test. The naturalist took the eagle, raised it high and, challenging it, said: "Since you really are an eagle, since you belong to the sky and not to the earth, open your wings and fly!"

The eagle sat motionless on the naturalist's outstretched arm. It looked around distractedly. It saw the hens below, pecking at grain. And it hopped down to join them.

The peasant remarked: "I told you, it has turned into a plain hen."

"No," the naturalist insisted again. "It is an eagle. And an eagle will always be an eagle. We will try again tomorrow."

The next day, the naturalist climbed with the eagle to the roof of the house. He whispered to it: "Eagle, since you are an eagle, open your wings and fly!"

But when the eagle saw the hens below, pecking at the ground, it jumped down and landed beside them.

The peasant smiled and returned to the charge: "I told you, it has become a hen."

"No," the naturalist answered firmly. "It is an eagle, and it will always have the heart of an eagle. Let us try one last time. Tomorrow I will make it fly."

The next day, the naturalist and the peasant rose very early. They took the eagle and carried it to the top of a mountain. The sun was rising, gilding the mountain peaks.

The naturalist lifted the eagle upward and commanded: "Eagle, since you are an eagle, since you belong to the sky and not to the earth, open your wings and fly!"

The eagle looked around. It trembled, as if experiencing new life, but it did not fly. Then the naturalist turned it firmly toward the sun, so that its eyes might fill with light and take in the dimensions of the vast horizon.

That was when it opened its powerful wings. It drew itself up, sovereign over itself. And it began to fly, to fly upward, to fly ever higher. It flew. And it never came back.

Peoples of Africa (and of Brazil)! We were created in the image and likeness of God. But there were people who made us think like hens. And we still think that we really are hens. But we are eagles.

Therefore, brothers and sisters, open your wings and fly. Fly like the eagles. Never settle for the grain they throw at your feet for you to peck.

Translated into Spanish by Daniel Rodríguez (MCCLP), Mexico, 1997

Declaration of the Peoples' Summit, Santiago de Chile

En el marco da la Cumbre de los Pueblos realizada entre los días 25, 26 y 27 de Enero de 2013, en Santiago de Chile, las organizaciones y movimientos sociales y políticos de los diferentes países de América Latina, el Caribe y la Unión Europea declaramos lo siguiente:

Hoy, somos testigos de cómo los bienes naturales, los derechos y las personas han sido mercantilizadas en las naciones y pueblos de América Latina, Europa y el Caribe, producto de la lógica capitalista, que en su vertiente neoliberal y machista, permite su instalación y profundización a través de aparatos cívicos, políticos, militares.

Las relaciones existentes entre la Unión Europea y América Latina y el Caribe que priorizan los privilegios y ganancias de los inversionistas frente a los derechos de los pueblos a través de acuerdos comerciales y acuerdos bilaterales de inversiones, profundizan este modelo que perjudica a los pueblos de ambas regiones.

Es así, que estos Estados mercantilistas, las transnacionales y las corporaciones continúan siendo administradores y profundizadores de la pobreza y la desigualdad social en el mundo, amparados por un tipo de democracia representativa, de mano de la elite, que se aleja de los intereses de las grandes mayorías de nuestro pueblo.

Esta hegemonía del capital financiero se manifiesta entre otros en la privatización y mercantilización de los servicios públicos, el desmantelamiento del Estado de bienestar, la precarización del trabajo, el extractivismo, la usurpación, la destrucción y mercantilización de los bienes naturales y sociales propios del pueblo y el desplazamiento forzoso de los pueblos originarios, provocando las crisis alimentarias, energéticas, climáticas.

En la Unión Europea la crisis capitalista ha significado un verdadero golpe de Estado financiero que ha impuesto políticas de austeridad en contra de los derechos de los pueblos, de los derechos laborales, ambientales, etc. La troika europea (FMI, BCE, Comisión Europea) obliga a los Estados a endeudarse para salvar a los bancos, de modo que seamos los pueblos quienes paguemos la crisis provocada por ellos mismos.

Al mismo tiempo, es necesario visibilizar la creciente opresión y discriminación hacia las mujeres en América Latina, el Caribe y Europa.

No obstante este panorama que parece adverso, reconocemos procesos históricos y recientes surgidos de las luchas de nuestros pueblos en el mundo, que han logrado tensionar y agrietar las actuales lógicas y nos dan la esperanza de que otro mundo es posible.

De este modo, surge la necesidad de construir las bases de un nuevo modelo de sociedad que transforme las actuales lógicas y coordenadas políticas, económicas, sociales y culturales en todas nuestras naciones y pueblos, articulando a ambos lados del océano las luchas de los diferentes actores y organizaciones del campo popular.

Para alcanzar estos objetivos proponemos que:

Los derechos y bienes naturales arrebatados a nuestros pueblos deben ser recuperados por medio de la nacionalización, la comunitarización de los bienes, los servicios y los medios de producción, y el reconocimiento constitucional de la naturaleza como sujeto de derecho. Esto implica pasar de ser resistencia y movimientos reivindicativos a una alternativa que contenga una propuesta político-social integral de país.

Promover el paradigma del buen vivir, basado en el equilibrio del ser humano con la naturaleza y el medio ambiente y en los derechos de la tierra, al servicio de los pueblos, con una economía plural y solidaria.

Democracia directa, participativa y popular, y su concretización desde las bases sociales. Para ello es necesaria la integración de actores sociales y políticos del mundo, valorando las prácticas territoriales y propiciando el diálogo entre las instancias locales y globales.

Promover la integración y la participación política de los niños, las niñas y las juventudes desde un enfoque de género. Respeto a la libre determinación de los pueblos originarios del mundo, entendiéndolos como pueblos hermanos no sometidos a la territorialidad impuesta por la colonización. Esto, sumado a la promoción de la soberanía alimentaria en perspectiva de una autonomía territorial que permita a los pueblos y comunidades decidir qué producir y cómo hacerlo.

Frente al avance de la represión y la criminalización de la protesta y de los movimientos sociales y populares, debemos articularnos de tal manera de generar la fuerza necesaria para frenar el avance de las leyes antiterroristas y su aplicación en las comunidades indígenas de nuestros pueblos, así como la militarización imperialista que ha instalado bases militares en América Latina, Europa y el Caribe.

Sensibilizar, agitar y promover luchas contra las transnacionales mediante campañas de denuncia y boicot en todos los niveles.

Posicionar el feminismo como un proyecto político antipatriarcal y anticapitalista. Reconocer y promover los derechos de los migrantes y el derecho de los pueblos al libre tránsito entre las naciones.

Plena solidaridad con el pueblo palestino y con todos aquellos pueblos y naciones oprimidos por el poder colonizador y el imperialismo, así como repudio a las intervenciones cívico-militares en Honduras, Haití y Paraguay. Apoyamos los procesos de paz en Colombia, con la participación de los actores sociales y políticos. Solidaridad con el pueblo cubano contra el bloqueo, con Argentina en el proceso de recuperación de las Malvinas, con Bolivia y su demanda de salida al mar, con el pueblo venezolano en el proceso bolivariano y con los movimientos sociales en Grecia y España. En el caso de Chile, solidaridad con el movimiento estudiantil en defensa de la educación pública y gratuita, y con el pueblo-nación mapuche contra la represión ejercida por el Estado.

Acompañando la lucha por la soberanía de nuestros territorios en América Latina, es necesario luchar por el respeto de la soberanía de nuestro cuerpo como territorio propio de las mujeres.

Entendemos que la superación de la precarización laboral a la que se ven expuestas las mayorías de trabajadores en América Latina y el mundo pasa por un cambio estructural que altere las relaciones de propiedad y de producción de bienes y servicios, valorando la labor esencial que desempeñan los trabajadores y las trabajadoras como sustento sobre el cual se construye toda sociedad.

De manera transversal, debemos avanzar en la construcción de plataformas de lucha comunicacional que no sólo permitan develar y difundir las demandas y alternativas de nuestros pueblos frente al modelo hegemónico, sino también como forma de explicar las verdaderas causas de los problemas que hoy nos aquejan.

Debemos ser capaces de construir demandas unitarias que aglutinen a todos los actores sociales y pueblos en disputa y que a su vez nos permitan trazar un horizonte estratégico hacia el cual avanzar, articulando y organizando la unidad entre el movimiento sindical, social y político en América Latina, el Caribe y Europa. Esto debiera traducirse en una hoja de ruta de trabajo y de movilizaciones para el presente periodo, pero con perspectivas a largo plazo.

Al mismo tiempo, fortalecer la organización social y popular en cada sector de inserción, potenciando la amplificación de nuestras demandas a las grandes mayorías por medio de la politización y la movilización.

No podemos dividir más las instancias organizativas en las que estamos, conducir hacia un proyecto en la diversidad es el mayor desafío que se nos presenta para la generación de una alternativa real de poder popular. Romper con los sectarismos que fragmentan, dividen e impiden la construcción de unidad del campo popular, es una tarea urgente.

Frente al poder del bloque dominante, sólo la unidad y la solidaridad entre nuestros pueblos nos darán la fuerza necesaria para alcanzar nuestros más altos objetivos y vencer.

Santiago de Chile, enero de 2013

Alejo Carpentier o La sangrienta primavera de la historia

Un presupuesto surrealista permite a Carpentier construir el dualismo más constante de su obra: la oposición Europa/América. La percepción anquilosada de la realidad cotidiana, la historia convertida en la pequeña historia, es europea. La posibilidad de romper esta habitualidad y acceder a la “sobrerrealidad” que caracteriza a la visión surrealista, es americana. Tal vez así podamos simplificar la hinchada cuestión de lo real maravilloso, con el curioso agregado de que, para nuestro escritor, América es exclusivamente la cultura afrocaribeña, una cultura sin nada aborigen pues la despoblación de indígenas propició la repoblación a cargo de blancos, negros y amarillos.

América, contrafigura de la historia, opone religión a secularidad, arcaísmo a modernidad, mito a devenir, regeneración utópica a continuidad evolutiva: la promesa de dicha de la historia, la revolución. Esta caracterización cumple distintas derivas en las obras de Carpentier.

En El reino de este mundo estamos ante una América francamente africana, si vale la paradoja, la magia negra contra el arma blanca. El líder rebelde Mackandal pasa a convertirse en figura épica de los himnos populares y en personaje de las liturgias animistas del vudú.

América es mitología o, como prefiere precisar Carpentier, “ontología”, transformación del descubrimiento en revelación, mestizaje fecundo, carácter fantástico de lo negro y lo indígena. Su historia es la crónica de lo real maravilloso, es decir de lo surreal del surrealismo.

Carpentier caracteriza al negro sublevado como vital y potente, opuesto al blanco europeo, racional y desvaído. Aquellas características parecen denunciar su sesgo sobrenatural. Está sobre la realidad cotidiana y sobre la naturaleza. Es paranormal. Tiene una cualidad demiúrgica.

Mackandal puede ir y venir del mundo de los muertos, posee el don de la metamorfosis, controla a sus fieles, hace trabajar a los difuntos como zombies, sale volando de la hoguera donde acaba de ser incinerado. “El manco Mackandal, hecho un houngan del rito Radá, investido de poderes extraordinarios por varias caídas en posesión de dioses mayores, era el Señor del Veneno. Dotado de suprema autoridad por los Mandatarios de la otra orilla, había proclamado la cruzada de exterminio, elegido como estaba para acabar con los blancos y crear un gran imperio de negros libres en Santo Domingo.”

El blanco, aprisionado por su condición histórica, está destinado a pasar, a convertirse en pasado, pues la historia es consumación, exterminio y muerte. En cambio el negro, al poderse comunicar con “la otra orilla”, el mundo de las sombras, tiene acceso a una movediza eternidad, marcada por las reencarnaciones y retornos. Es trascendente y le basta invocar a sus dioses guerreros para asaltar con éxito la fortaleza de la Diosa Razón.

En Los pasos perdidos América es la primavera del tiempo, la tierra donde el mundo se regenera. En América se atesoran las energías que darán nueva vida a la exhausta cultura europea, dormida en el invierno de la razón.

El protagonista es un músico que percibe esas energías instintivas en los instrumentos de percusión. En la historia, el hombre europeo ha perdido sus pasos, alejándose de su origen, que es sagrado, extraviándose. Intentar recuperarlo por medio de la música, es inútil, pues al compositor le falta el trance del sacerdote.

En El siglo de las luces reaparece la superioridad de la magia negra sobre la ciencia blanca cuando el asmático Esteban es curado por los conjuros y bebedizos de Ogé. Mientras el ilustrado francés Victor Hugues cree en la revolución como estallido de la luz, aquél anuncia los trastornos causados por la “llegada de los tiempos” y el Apocalipsis. ¿De qué lado cae el cambio histórico? Carpentier no sabrá contestar.

También el arquitecto y la bailarina de La consagración de la primavera, hartos de la revolución surrealista, la bolchevique y la guerra civil española, marchan a América en busca de una primavera para consagrar. Ella quiere llevar a Europa el ballet de Stravinski que da nombre a la novela, pero “bailado a la cubana”. Tal vez, en clave alegórica, la revolución castrista.

La vuelta al origen hace de América el lugar de la utopía. En Carpentier adquiere la forma de la ciudad ideal, hecha a partir del grado cero de los tiempos, una fundación. En Los pasos perdidos es la obra de El Adelantado y se llama Puerto Anunciación. Es tarea de la libertad y en ella se confunden las direcciones del tiempo, de modo que el porvenir es memoria.

El Adelantado no advierte, sin embargo, que su plan reproduce el modelo de las ciudades históricas. Es una forma disimulada del fracaso utópico, similar a la de Victor en El siglo de las luces, cuando construye en el Amazonas una ciudad ideal destinada a ser devorada por la selva.

Un destino comparable aguarda, en Carpentier, a las revoluciones. Sobre el fondo cíclico y circular del tiempo, la revolución altera la naturaleza de las cosas y las jerarquías establecidas.

Sus líderes son juzgados y condenados por traidores ante los tribunales de la propia revolución, a menos que se conviertan en servidores del orden que intentaron subvertir, y que se restablece como algo natural.

El reino de negros fundado por Henri Christophe reproduce los mandos, crueldades y pompas del antiguo régimen. Victor y Esteban, emisarios de la masonería cubana, viajan a Francia y España en tiempos de la Revolución Francesa y vuelven a Cuba para divulgar sus ideales de igualdad. Llevan una guillotina.

Con el tiempo, Victor se hace militar y brilla por su represión de los sublevados. Los negros son liberados, se los rebautiza con nombres romanos y se les enseña el catecismo jacobino, pero siguen sometidos a los mismos y extenuantes trabajos de siempre. Bajo mosquiteros de tul y servido por hermosas mulatas, Victor decreta las ejecuciones en la guillotina.

La irrealizable utopía, al llevarse a la práctica, se convierte en tiranía; el revolucionario, en comisario terrorista de Estado. En principio, las nuevas autoridades no comercian con esclavos, pero acaban haciéndolo cuando los arrebatan a las potencias enemigas. La conclusión de Esteban es pesimista: “Cuidémonos de las palabras demasiado hermosas, de los Mundos Mejores creados por las palabras. Nuestra época sucumbe por un exceso de palabras. No hay más Tierra Prometida que la que el hombre puede encontrar en sí mismo.” En el exterior de la historia toda promesa decepciona. En el interior del individuo, se cumple. Las Luces se convierten en la sombra de un jardín.

Carpentier declaró su proyecto de escribir una novela sobre la revolución cubana. Nunca lo hizo. Sólo hay algunas referencias en La consagración de la primavera: los últimos combates contra Batista, la instalación de los revolucionarios en el poder, la frustrada invasión de la bahía de Cochinos.

La narradora se entera de esto leyendo revistas francesas, donde los castristas, con sus barbas y melenas, le parecen hombres de una nueva raza, similar a los revolucionarios franceses del 89. Cabe suponer que les espera el destino de Victor Hugues.

La palabra revolución tiene, en Carpentier, el significado de ciclo completo, de vuelta a empezar. Las sociedades se asientan sobre un pacto sagrado y quebrarlo es generar desorden e invocar la restauración. Los negros siguen con sus cultos de santería aunque los franceses les inculquen ideas racionalistas o el caudillo libertador, el culto católico.

Si ha habido algún intento de cambio, su fracaso redunda en decadencia y ruina, esa postrimería barroca que se armoniza con el barroquismo de las descripciones carpenterianas. Su narración tiende a la inmovilidad descriptiva, acentuada por la escasez de los diálogos. La historia se paraliza en tiempos muertos. Si la historia es cíclica como las estaciones del año, su primavera exige sacrificios y se vuelve sangrienta.

Carpentier sale románticamente del Siglo de las Luces: consciencia desdichada, desajuste entre mundo y deseo, desproporción entre lo limitado del hombre y lo inconmensurable del universo. La Ilustración intentó regular socialmente la felicidad, estableciendo un código de cosas razonablemente deseables.

Pero la historia es la antropología de la desdicha, muerte y devoración, tiniebla barroca y noche romántica. El reino del Hombre no es el mundo de los hombres, que se preguntan cuál será. Como dice el barbero Ti Noel, “el hombre nunca sabe para quién padece y espera.”

En algún momento, Carpentier absuelve al hombre de la infelicidad temporal, la mortalidad, por medio del arte. En su busca del momento original, presente absoluto sin antes ni después, donde ha de haber un signo incomparable del origen, el novelista inviste a un músico. Pues, en efecto, es la música, arte de la unidad, y no la literatura, arte de la escisión, la que puede recuperar el instante exento de muerte.

Hay connotaciones sexuales de la escena. Si la historia, reino de la muerte, es paterna, el origen, reino de la inmortalidad, es materno. No ya el Dios masculino de Occidente, sino la Madre de Dios. La identidad fundamental de todo lo existente es femenina. El principio subjetivo masculino introduce la finitud, la asunción de la muerte, la irregularidad y el desorden: la historia.

La paz ordenada y serena es materna, pero es también prenatal y carece de lenguaje articulado. Lo que hace humano al ser humano es desprenderse del origen, nacer. Esa es la marca, el tajo que instaura el tiempo, los pasos contados que se van convirtiendo en pasos perdidos.

Por volver al comienzo, América es la promesa de dicha de la historia porque es la promesa de retorno a la protección materna y al perdido paraíso donde no existe la muerte.

América es la feliz casa sin padre, el cuarto de los juegos, la utopía que es origen y paraíso, pero todo ello ilusorio porque no se pueden desandar los pasos perdidos en el tiempo, no puede retraerse la historia. Si se recupera el origen caótico y dichoso anterior al tiempo será para repetir la creación del tiempo y la refundación de la historia, con lo que el ciclo de la revolución volverá a empezar donde terminó para terminar donde empezó.

Obras de Alejo Carpentier

Écue-Yamba-O! (1933).

El reino de este mundo (1949).

Los pasos perdidos (1953).

Guerra del tiempo (1956).

El acoso (1958).

El Siglo de las Luces (1962).

El recurso del método (1974).

Concierto barroco (1974).

El arpa y la sombra (1978).

La consagración de la primavera (1978).

Visión de América.

Los advertidos (cuento).

Semejante la noche (cuento).

Viaje a la semilla (cuento).

Los fugitivos (cuento).

Poderoso caballero es Don Dinero

6 de febrero de 2013

La diferencia entre quienes tienen 100 millones de dólares y quienes tienen 200 no es de lujos, sino de poder. La cantidad depende del tamaño del país, pero es imposible escapar al poder del dinero en la política. Contra las dictaduras se puede luchar con más voluntad que recursos; pero en democracia, sin dinero no se puede hacer política. La competencia es el factor más importante para la generación de resultados de calidad. Cuando una competencia política es desigual por la concentración de poder económico, aunque sea democrática, el monopolio de poder se mantiene, no importa quién gobierne.

Si en nuestro país tomáramos como base los últimos 150 años, podríamos decir que la oposición ha gobernado 8 y quienes detentan el poder económico, 142. En otras palabras, estamos mal como resultado de la concentración de poder y no por lo que ha pasado en años recientes. Por eso tuvimos una guerra civil y ahora tenemos polarización política, “maras”, violencia, policía débil, seguridad privada fuerte, abandono del agro, economía improductiva que no crece, emigración masiva de trabajadores, desempleo, olvido de las pequeñas empresas y por eso nos financiamos permanentemente con deuda. Los problemas del actual gobierno son herencia estructural de cómo se gobernó por décadas. La codicia desmedida y la insensibilidad frente a la pobreza son rasgos culturales de los viejos poderes del país. Empobrecieron al Estado, se enriquecieron más, vendieron todo, sacaron el dinero del país, se globalizaron ellos y dejaron desglobalizado al país.

El Salvador es reconocido como un caso clásico de poder oligárquico y el supuesto ha sido que con la democracia ese poder desapareció. Sin embargo, lo que en realidad ocurrió es que el poder económico se concentró más y el control del poder político se refinó. Con una economía tan pequeña ahora “los 14” se volvieron “8”. Tal como lo establecen algunos estudios contemporáneos sobre oligarquías, si antes los oligarcas tenían un ejército de militares y policías para imponerse, en democracia utilizan un ejército compuesto por abogados, medios de comunicación, líderes de opinión, tecnócratas, especialistas en evasión fiscal; financieros que esconden, expatrían y movilizan capitales, gremios empresariales que se subordinan y políticos que les aseguran mantener al Estado como extensión de su patrimonio.

Para enfrentar un poder oligárquico es necesario dispersar el poder económico favoreciendo el surgimiento de nuevas élites. La existencia de diversidad de grupos económicos con visiones políticas distintas favorece el sistema de pesos y contrapesos que necesita la democracia. No bastan las posiciones de gobierno, que en democracia son temporales, es indispensable dispersar el poder económico para lograr un balance integral del poder. Una cosa es asumir la representación de los pobres y otra es ser un partido solo de pobres. Lo segundo sería un sindicato, pero no una fuerza política con perspectiva de poder.

Es indispensable que emerjan, se multipliquen y fortalezcan nuevos ricos con ideas que contribuyan a reconstruir el país. Una competencia democrática con poderes económicos más balanceados aumentaría el valor de los votos de los pobres y les permitiría a jueces, académicos, periodistas y políticos no tener que subordinarse ante los que ahora son los únicos empleadores privados importantes del país.

Muchos empresarios se han quejado siempre de que los grandes capitales les impiden crecer. Nada inquieta más a un sistema oligárquico que perder la exclusividad del poder económico y eso es precisamente lo que está ocurriendo en nuestro país. La llegada de la oposición al gobierno; la separación del expresidente Saca del partido ARENA representando a otros capitales y su proximidad con el presidente Funes; la independencia del PCN y del PDC y el surgimiento de las empresas de ALBA, son todos cambios positivos. Las divisiones sucesivas en la derecha no son oportunistas, son una consecuencia lógica del surgimiento de otros polos de poder económico. La vieja élite está perdiendo el control incluso de su propio candidato, que no representa a la clase, sino a los asalariados de ARENA. Por otro lado, los programas sociales del actual gobierno y la atención que da a los pequeños empresarios y a las mujeres están generando que muchos pobres dejen de ser conservadores y abandonen a la vieja derecha.

Estamos frente a un gran reacomodo histórico en la estructura de poder económico, político y social del país. Las imperfecciones de personas o grupos en este proceso son inevitables, ninguna élite nace santa, no existen cambios a la carta. Las reglas y el orden surgen del nuevo equilibrio. La curva de aprendizaje en la administración de los negocios de ALBA y su transición de empresas políticas a empresas de proyección social o privadas eficientes será complicada y en el camino morirán muchos millones de dólares. Sin embargo, este proceso transformará políticamente al FMLN. Impresiona que a nuestros oligarcas les preocupe ALBA aquí, pero que al mismo tiempo inviertan en la Nicaragua sandinista. Las acusaciones de corrupción a los disidentes de la derecha o los cuestionamientos a las empresas ALBA por el supuesto uso que hacen del Estado es como si los burros se pusieran a criticar a los orejones.

Las reacciones casi racistas de la vieja élite ante la aparición de nuevos ricos exhiben mucho dolor. Están como la patrona que llama “igualada” a la sirvienta. Es ridículo que la derecha les demande a los nuevos ricos del FMLN retornar al extremismo consecuente. Cuando los inmigrantes árabes llegaron al país se evitó que compraran tierras y se les despreció y discriminó socialmente. La oligarquía quería evitar que se fortalecieran económicamente. A lo largo de la historia, las divisiones en las filas del poder se han cobrado con muerte, exilio y golpes de Estado. Manuel Enrique Araujo, cafetalero asesinado en 1913; Arturo Araujo, terrateniente derrocado en 1931; Roberto Edmundo Canessa, cafetalero muerto por una golpiza policial en 1961; y Enrique Álvarez Córdova, cafetalero asesinado por la Policía de Hacienda en 1980. Igual mataron, cuando perdieron el control de la Iglesia Católica, al arzobispo Romero y a decenas de sacerdotes y monjas. La división encabezada por el expresidente Saca es la primera que ocurre bajo condición democrática; sin duda, hace medio siglo lo habrían asesinado.

A todos los disidentes de la derecha y a quienes intentaron cambiar al país desde posiciones moderadas, como Napoleón Duarte y muchos otros, se los acusó de corruptos, ladrones, mujereros, homosexuales, comunistas, sidosos y locos. Si ARENA ganara la próxima elección desmantelaría al bloque económico de Saca y a las empresas de ALBA, para quitarle poder a sus competidores y gobernar de nuevo por varias décadas. No son solo las buenas intenciones de los políticos las que obligan a gobernar bien, sino la incertidumbre de que pueden salir del gobierno frente a la existencia de otros polos de poder que les compiten; la construcción de ese nuevo balance de poderes es lo que está en juego en este momento en el país. Nuestra centenaria oligarquía debería aceptar la nueva realidad, competir e influir, pero ya no pretender controlar.

Deleuze and Guattari: Schizos, Nomads, Rhizomes

We live today in the age of partial objects, bricks that have been shattered to bits, and leftovers… We no longer believe in a primordial totality that once existed, or in a final totality that awaits us at some future date (Deleuze and Guattari 1983: p.42)

A theory does not totalize; it is an instrument for multiplication and it also multiplies itself… It is in the nature of power to totalize and … theory is by nature opposed to power (Deleuze 1977a: p.208)

Gilles Deleuze and Felix Guattari have embarked on postmodern adventures that attempt to create new forms of thought, writing, subjectivity, and politics. While they do not adopt the discourse of the postmodern, and Guattari (1986) even attacks it as a new wave of cynicism and conservativism, they are exemplary representatives of postmodern positions in their thoroughgoing efforts to dismantle modern beliefs in unity, hierarchy, identity, foundations, subjectivity and representation, while celebrating counter-principles of difference and multiplicity in theory, politics, and everyday life.

Their most influential book to date, Anti-Oedipus (1983; orig. 1972), is a provocative critique of modernity’s discourses and institutions, which repress desire and proliferate fascist subjectivities that haunt even revolutionary movements. Deleuze and Guattari have been political militants and perhaps the most enthusiastic proponents of a micropolitics of desire that seeks to precipitate radical change through a liberation of desire. Hence they anticipate the possibility of a new postmodern mode of existence where individuals overcome repressive modern forms of identity and stasis to become desiring nomads in a constant process of becoming and transformation.

Deleuze is a professor of philosophy who in the 1950s and 1960s gained attention for his studies of Spinoza, Hume, Kant, Nietzsche, Bergson, Proust and others. Guattari is a practicing psychoanalyst who since the 1950s has worked at the experimental psychiatric clinic, La Borde. He was trained in Lacanian psychoanalysis, has been politically active from an early age, and participated in the events of May 1968. He has collaborated with Italian theorist Antonio Negri (Guattari and Negri 1990) and has been involved in the ‘autonomy’ movement, which seeks an independent revolutionary movement outside of the structures of organized parties. Deleuze and Guattari’s separate careers first merged in 1969 when they began work on Anti-Oedipus. This was followed by Kafka: Toward a Minor Literature (1986; orig. 1975), A Thousand Plateaus (1987; orig. 1980), and numerous independent works by each author.

There are many interesting similarities and differences between their work and Foucault’s. Like Foucault, Deleuze was trained in philosophy and Guattari has worked in a psychiatric hospital, becoming interested in medical knowledge as an important form of social control. Deleuze and Guattari follow the general tenor of Foucault’s critique of modernity. Like Foucault, their central concern is with modernity as an unparalleled historical stage of domination based on the proliferation of normalizing discourses and institutions that pervade all aspects of social existence and everyday life.

Their perspectives on modernity are somewhat different, however. Most conspicuously, where Foucault tended toward a totalizing critique of modernity, Deleuze and Guattari seek to theorize and appropriate its positive and liberating aspects: the decoding of libidinal flows initiated by the dynamics of the capitalist economy. Unlike Foucault’s, Deleuze and Guattari’s work is less a critique of knowledge and rationality than of capitalist society; consequently, their analyses rely on traditional Marxist categories more than Foucault’s. Like Foucault, however, they by no means identify themselves as Marxists and reject dialectical methodology for a postmodern logic of difference, perspectives, and fragments. Also, while all three foreground the importance of theorizing microstructures of domination, Deleuze and Guattari more clearly address the importance of macrostructures as well and develop a detailed critique of the state.

Further, where Foucault’s emphasis is on the disciplinary technologies of modernity and the targeting of the body within regimes of power/knowledge, Deleuze and Guattari focus on the colonization of desire by various modern discourses and institutions. While desire is a sub-theme in Foucault’s later genealogy of the subject, it is of primary importance for Deleuze and Guattari. Consequently, psychoanalysis, the concept of psychic repression, engagements with Freudo-Marxism, and the analysis of the family and fascism play a far greater role in the work of Deleuze and Guattari than in Foucault’s, although their critique of psychoanalysis builds on Foucault’s critique of Freud, psychiatry, and the human sciences.

In contrast to Foucault, who emphasizes the productive nature of power and rejects the ‘repressive hypothesis’, Deleuze and Guattari readily speak of the ‘repression’ of desire, and they do so, as we shall argue, because they construct an essentialist concept of desire. In addition, Deleuze and Guattari’s willingness to champion the liberation of bodies and desire stands in sharp contrast to Foucault’s sympathies with the Greco-Roman project of mastering the self. All three theorists, however, attempt to decenter and liquidate the bourgeois, humanist subject. Foucault pursues this through a critical archaeology and genealogy that reduces the subject to an effect of discourse and disciplinary practices, while Deleuze and Guattari pursue a ‘schizophrenic’ destruction of the ego and superego in favor of a dynamic unconscious. Although Foucault later qualified his views on the subject, all three theorists reject the modernist notion of a unified, rational, and expressive subject and attempt to make possible the emergence of new types of decentered subjects, liberated from what they see to be the terror of fixed and unified identities, and free to become dispersed and multiple, reconstituted as new types of subjectivities and bodies.

All three writers have shown high regard for each other’s work. In his book Foucault (1988; orig. 1986: p. 14), Deleuze hails Foucault as a radically new thinker whose work represents ‘the most decisive step yet taken in the theory-practice of multiplicities’. For his part, Foucault (1977: p. 213) claims that Deleuze and Guattari’s work was an important influence on his theory of power, and he wrote a laudatory introduction to Anti-Oedipus. In his review of Deleuze’s work in “Theatrum Philosophicum” (1977: pp. 165-96), Foucault praises him for contributing to a critique of Western philosophical categories and to a positive knowledge of the ‘historical event’. Modestly downplaying his own place in history, Foucault even claims (1977: p. 165) that ‘perhaps one day, this century will be known as Deleuzian’. In the dialogue “Intellectuals and Power” (Foucault 1977: pp. 205-17), Foucault and Deleuze’s voices freely interweave in a shared project of constructing a new definition of theory, one that is always-already practice and ‘local and regional’ in character.