How the Language We Speak Affects the Way We Think
Linguistics and neuroscience find better answers to old questions.
Posted Feb 02, 2017 (Psychology Today)
As I teach linguistics, one of the most intriguing questions for my students is whether all human beings think in a similar way—regardless of the language they use to convey their thoughts—or if the language we speak affects the way we think. This question has entertained philosophers, psychologists, linguists, neuroscientists, and many others for centuries. And everyone has strong opinions about it.
The complex relationship between language, thought, and culture
Source: Antonio Benítez-Burraco
At present, we still lack a definitive answer to this question, but we have gathered evidence (mostly from typological analyses of languages and psycholinguistic studies) that gives us a good understanding of the problem. As I will try to show, the evidence argues in favor of a universal groundwork for perception and thought in all human beings, while language acts as a filter, enhancer, or framer of perception and thought.
Edward Sapir (1884-1939)
Source: Unknown (http://www.nutquote.com/quote/Edward_Sapir/7) [Public domain]
The story begins with the first American linguists who scientifically described some of the languages spoken by Native Americans. They discovered many striking differences compared to the languages they had learned in school (Ancient Greek, Latin, English, German, and the like). They found sounds never heard in European languages (like ejective consonants), unusual meanings encoded in the grammar (like parts of the verb referring to the shapes of objects), and new grammatical categories (like evidentiality, that is, the source of knowledge about the facts in a sentence). Not surprisingly, some of these linguists concluded that such unfamiliar linguistic systems must have an effect on the minds of their speakers. Edward Sapir, one of the most influential American linguists, wrote: “The worlds in which different societies live are distinct worlds, not merely the same worlds with different labels attached” (Sapir, 1949: 162). For centuries, people had thought that words were just labels for objects, and that different languages merely attached different strings of sounds to things—or, more accurately, to concepts. Now it was suggested that the world might be perceived differently by people speaking different languages. Or, more radically, that people could only perceive those aspects of the world for which their languages have words.
Really? A useful (and instructive) way of testing Sapir’s claim focuses on color perception. Color varies continuously (it depends on the wavelength of light), but it is perceived categorically. Interestingly, the number of basic color terms in any language is far smaller than the number of color tones we can perceive. Moreover, this number differs from one language to another. For instance, Russian has 12 basic color terms, whereas Dani, a language spoken in New Guinea, has only two: mili (for cold colors) and mola (for warm colors).
Kurulu Village War Chief Baliem Valley – Papua
Source: By Paul from Working and living in Jayapura (Papua Province) and Jakarta, Indonesia, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=2277626
Researchers found that, not surprisingly, Dani people are able to distinguish among different color tones (like red, yellow, and orange) despite labelling them identically (mola). They also found that people distinguish better between two color tones that are named differently (for instance, blue and green). Because different languages carve up the color continuum in different ways, speakers of different languages are expected to attend to colors differently. In some sense, Sapir was half-right.
This framing or filtering is the main effect we can expect language to have on perception and thought. Languages do not limit our ability to perceive the world or to think about it, but they focus our perception, attention, and thought on specific aspects of the world. This can be useful indeed. Chinese-speaking children learn to count earlier than English-speaking children because Chinese number words are more regular and transparent than English ones (in Chinese, “eleven” is “ten one”). Likewise, people speaking some Australian languages orient themselves in space better than English speakers (they often know north from south, even in darkness), plausibly because their languages use absolute spatial deictics. This means that when referring to a distant object, they do not say “that car” or “that tree over there,” but rather “the car to the north” or “the tree to the south.” Because they need to know the cardinal directions in order to assemble utterances correctly in their language, they are more accustomed than we are to paying attention to the cardinal points.
Australian language families.
Source: By Kwamikagami – Commons map: File:Australian Languages.png, CC BY-SA 3.0, https://en.wikipedia.org/w/index.php?curid=35933046
So, different languages focus the attention of their speakers on different aspects of the environment, whether physical or cultural. But how do we know which aspects? Essentially, the salient aspects are those that matter to the people who speak the language. We linguists say that these salient aspects are either lexicalized or grammaticalized. Lexicalizing a concept means having a word for it, which works as a shorthand for that concept. This is useful because you don’t need to explain (or paraphrase) the meaning you want to convey. Instead of saying, “that cold and white thing that falls from the sky in the cold days of winter,” you just say snow.
Obviously, we do not have words for everything. We only have words for concepts that are important or salient in our culture. This explains why the lexicons (or sets of words) of different languages are all quite different. The lexicon is like a big, open bag: Some words are coined or borrowed because you need them to refer to new objects, and they are put into the bag. Conversely, some objects fall out of use, and then the words for them are removed from the bag.
Some aspects of the world are encoded by languages even more deeply—to the extent that they are part of language grammars. You need to consider them whenever you build a sentence in that language. Linguists say that they are grammaticalized. Dyirbal, a language spoken in Northern Australia, for example, has four noun classes (similar to grammatical genders in European languages). The assignment of nouns to each class is apparently arbitrary: Class I encompasses nouns for animals and human males; class II encompasses nouns for women, water, fire, and fighting objects; class III encompasses only nouns for edible plants; and class IV is a residual class where all the remaining nouns are grouped together. This grammatical classification of nouns reflects a coherent view of the world, including an original mythology. For instance, though animals are assigned to class I, bird nouns are found in class II because Dyirbal people believed birds to be the spirits of dead women (whose nouns are also in class II). Likewise, the way people think about time is encoded deeply in the grammar of most languages. In some languages, like English, time is tripartite: past, present, and future. However, in a language like Yimas, spoken in New Guinea, there are four past tenses, ranging from recent events to the remote past. And there are languages, like Chinese, that lack grammatical tense altogether.
In summary, language functions as a filter on perception, memory, and attention. Whenever we construct or interpret a linguistic statement, we need to focus on specific aspects of the situation that the statement describes. Interestingly, brain imaging techniques now allow us to examine these effects from a neurobiological perspective. For example, in this interesting paper, the authors show that language affects the categorical perception of color—and that this effect is stronger in the right visual field than in the left. Discrimination of colors encoded by different words also provokes stronger and faster responses in the left-hemisphere language regions than discrimination of colors encoded by the same word. The authors conclude that the left posterior temporoparietal language region may serve as a top-down control source that modulates the activation of the visual cortex.
This is a nice example of current biolinguistic research (in a broader sense) helping to achieve a better and more balanced understanding of classic questions in linguistics—like the relationship between language and thought.