Chapter 4 of the book takes a slight detour from sound and music per se, providing instead a general introduction to neuroscience. Given the importance of the neuroscientific methods and approaches used in later chapters, it makes sense for the authors to include this before moving on to more specific topics in music. Since I am already writing another blog series on a neuroscience-focused book, I will cover these aspects of the chapter briefly and, as far as possible, try to link them back to the study of music psychology.
In this post, we'll begin by touching on some general issues facing the neuroscience of music as a discipline, explore the neural structures and functions most relevant to music, and introduce some common methods for studying the brain. Finally, we'll end by discussing the link (or lack thereof) between music and language abilities.
Big questions for the neuroscience of music
Tan et al. (2010) start by listing 3 big questions that the neuroscience of music faces more generally. Firstly, to what extent are musical abilities a product of our inherent genetics and/or our environments? This is the classic 'nature' vs. 'nurture' debate, and while often presented in a dichotomous fashion, the reality of how musical abilities develop likely falls somewhere between the 2 ends of the continuum. Additionally, epigenetics, or the study of how environmental influences can alter gene expression without changing the underlying DNA sequence, adds to the challenge of determining whether any biological evidence found can be used to support the 'nature' camp.
Secondly, to what extent are behaviours and psychological functions related to different areas of the brain? This is the exact question tackled in my blog series on Pessoa's 2022 book, The Entangled Brain, where he argues against the dominant modular view of the brain in favour of a more complex, networked one. In the context of music, are there brain regions whose sole purpose is the perception/execution of music? Is there a one-to-one function-to-structure mapping in the case of musical abilities, or does a many-to-many mapping exist instead?
The final question is a more philosophical one, drawn from the philosophy of mind: what is the exact nature of the mind, and how does it connect with the body? More relevantly, how does the mind connect with the brain, and can brain activity tell us anything about mental mechanisms? And if so, what can it tell us? While there are many different philosophies of mind one can adopt, the authors claim that neuroscience subscribes to the mind-brain identity thesis, a form of monism (i.e., there is only one kind of substance), as opposed to the historically more enduring dualism (i.e., there are 2 substances -- mind and body) promoted by Descartes.
Neural structure and function
The next part of the chapter looks at important neural structures and functions for the study of music psychology. For a brief overview of neuroanatomy, check out this post. Here, the authors point out different structures (see Fig. 1) that play a dominant role in hearing (e.g., the auditory cortex, the superior temporal lobe) and in motor control (e.g., the caudal and ventral frontal lobes, and the cerebellum), 2 functions that are important for the perception and production of music.

Fig. 1 Important brain regions for music perception and production (Tan et al., 2010)
One important debate in the neuroscience of music is the degree to which musical abilities are lateralised in the brain. Put another way, are musical abilities performed mainly by brain regions in one hemisphere, or in both hemispheres? Early findings provided support for the lateralisation of musical abilities. For instance, one study found that while nonmusicians processed melodies predominantly in their right hemispheres, musicians processed them in their left (Bever & Chiarello, 1974).
More recent discourse has shifted away from functional lateralisation, especially given the increased attention to neural plasticity, which refers to the potential for the brain and nervous system to adapt. In the case of music production, we can look to the existence of cortical maps, or regions in the brain that spatially map to specific muscles in the body. This means that there are literally localised regions in the motor cortex whose activation corresponds to the movement of a particular body part (e.g., activation of the left-hand region moves the left hand, and likewise for the right). To demonstrate neural plasticity, studies have generally found increased activation or size in the cortical maps for body parts heavily involved in playing a given instrument. For example, Bangert & Schlaug (2006) found anatomical differences favouring the left-hand brain regions in violinists and the right-hand brain regions in pianists.
A brief intermission: for now, I disagree with the idea of cortical maps, as they remind me of motor control theories that rely on the assumption of motor programs/representations and that view the brain as a master controller or puppet master of the body's degrees of freedom. Notwithstanding some issues like storage and novelty problems, I generally believe that movement control is largely governed by self-organisation principles, and while the brain does have its role, the part it plays is much smaller and different from what is conventionally argued. But enough said for now -- I'll definitely cover this in more detail in the future. Back to the chapter!
Methodologies of cognitive neuroscience
In this section, we'll cover some commonly used methods of measuring brain structure and activity, as well as inferring brain function. First up, we have the lesion method, which I have covered in this blog post. Crudely put, this method allows us to infer that a brain region is necessary for a particular function if damage to that region leads to deficits in that function. The inference is strongest when we also find the reverse pattern in other patients -- a so-called double dissociation, sketched below.
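As a toy illustration of that inference pattern (the patients, lesions, and tasks here are all invented for the sake of the example), here's the dissociation logic in Python:

```python
# Schematic toy of lesion-method inference -- all patients and tasks invented.
patients = [
    {"lesion": "right temporal", "melody": "impaired", "speech": "intact"},
    {"lesion": "left frontal", "melody": "intact", "speech": "impaired"},
]

for p in patients:
    # A region is inferred to be necessary for whichever tasks its damage impairs
    impaired = [task for task in ("melody", "speech") if p[task] == "impaired"]
    print(f"{p['lesion']} damage -> inferred necessary for: {', '.join(impaired)}")
```

Because the two (hypothetical) lesions impair complementary tasks, we get a double dissociation, which rules out the worry that one task is simply harder than the other.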
Next, we have electroencephalography (EEG, see Fig. 2), which measures changes in electrical activity in a part of the cortex by placing electrodes on the scalp directly above that region. The idea here is that when a brain area is engaged, its neurons fire and generate electrical signals; hence, electrical activity recorded at the scalp is a fairly direct measure of brain activity. In music research, researchers typically look for event-related potentials (ERPs, see Fig. 2): changes in electrical activity time-locked to a stimulus or task event, averaged over many trials so that background activity cancels out. While EEG technology provides excellent temporal resolution (i.e., it gives us a precise indicator of WHEN a process happens), it features poor spatial resolution (i.e., it gives us only a rough indicator of WHERE a process occurs).
Fig. 2 Outputs from EEG and ERPs (source: link)
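Since the ERP logic is essentially just averaging, a minimal NumPy simulation (with invented numbers, not data from the book) makes it concrete: any single trial is dominated by ongoing background EEG, but averaging across trials cancels whatever isn't time-locked to the event.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                          # sampling rate (Hz)
t = np.arange(0, 0.8, 1 / fs)     # 800 ms epoch, time-locked to the event
n_trials = 200

# Invented "true" ERP: a negative deflection peaking at ~400 ms
true_erp = -5e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05**2))

# Each single trial buries the ERP in much larger ongoing EEG "noise"
trials = true_erp + rng.normal(0.0, 20e-6, size=(n_trials, t.size))

# Averaging across trials cancels activity that is not time-locked to the
# event; the residual noise shrinks like 1/sqrt(n_trials)
erp_estimate = trials.mean(axis=0)

print(f"estimated peak latency: {t[erp_estimate.argmin()]:.3f} s")  # ~0.400
```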
Moving on, positron emission tomography (PET) begins by injecting radioactively labelled glucose molecules into participants' bloodstreams. Surrounding gamma-ray detectors can then determine where the glucose was absorbed, giving an indicator of the location of brain activity. Here, it is assumed that when a brain area is engaged, its neurons fire and use up energy (i.e., glucose) in the process; hence, glucose uptake is a proxy measure of brain activity. Still, this process has limited spatial resolution: the positrons emitted by the radioactive isotope travel a short distance before colliding with electrons, and it is this collision, not the uptake site itself, that produces the detected gamma rays, so determining the exact location of glucose uptake can be challenging. (PET's temporal resolution is also poor, since enough emissions must accumulate before a reliable image emerges.)
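To see why the positron's travel blurs localisation, here's a one-dimensional toy simulation (the millimetre scale and the distribution chosen are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_events = 10_000
uptake_site = 0.0  # hypothetical uptake location along one axis (mm)

# Each emitted positron travels a short, random distance before it
# annihilates with an electron; the scanner localises the annihilation
# (where the gamma rays originate), not the uptake site itself
positron_range = rng.exponential(scale=1.0, size=n_events)  # toy scale (mm)
direction = rng.choice([-1.0, 1.0], size=n_events)
annihilation = uptake_site + direction * positron_range

# The reconstructed image is therefore blurred around the true uptake site
print(f"localisation blur (std): {annihilation.std():.2f} mm")
```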
Finally, we have magnetic resonance imaging (MRI), which provides a high-resolution measure of brain structure. Note that this is a purely anatomical measure; unlike EEG or PET, standard MRI provides no information about brain activity. Measuring activity requires functional MRI (fMRI), which tracks the ratio of oxygenated to deoxygenated haemoglobin (the BOLD signal) in regions of interest, typically at roughly 2-second intervals. Here, the assumption is that when a brain area is engaged, its neurons fire and consume oxygen in the process; hence, oxygen uptake is an indirect measure of brain activity.
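One consequence of measuring blood oxygenation rather than firing itself is that the BOLD signal lags neural activity by several seconds. A minimal sketch, assuming the commonly used double-gamma approximation to the haemodynamic response (the parameter values are illustrative, not from the book):

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0                    # typical fMRI sampling interval (s)
t = np.arange(0, 30, TR)    # support of the haemodynamic response (s)

# Double-gamma approximation: an initial peak (~5-6 s) plus a small undershoot
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# A single brief stimulus presented at t = 10 s, over a 120 s scan
stimulus = np.zeros(60)
stimulus[5] = 1.0

# The BOLD signal is (to a first approximation) the stimulus train
# convolved with the HRF: blood oxygenation lags neural firing
bold = np.convolve(stimulus, hrf)[: stimulus.size]
print(f"BOLD peak at t = {bold.argmax() * TR:.0f} s")  # several seconds after 10 s
```

This lag is part of why fMRI trades temporal resolution for spatial resolution, the mirror image of EEG's trade-off above.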
The music-language link
In the final part of the chapter, Tan et al. provide an example of studying music with neuroscientific tools. Specifically, they discuss the link between music and language, and ask: To what extent do musical and language abilities utilise the same neural resources? Among other reasons, this is an interesting question because both language and music can be seen as media through which individuals communicate (primarily through auditory, but also visual, means). To answer it, researchers have typically sought evidence for integration or independence, both covered below.
Evidence for integration
On the one hand, we have evidence for integration, or the idea that music and language functions map onto similar brain regions. In studies measuring ERPs (Besson & Schön, 2003), violations of expectations in language (e.g., hearing an unexpected word) and in music (e.g., hearing a pitch that is out of key) lead to ERP changes in (presumably) similar brain regions, albeit in opposite directions (a negative-going deflection for language, a positive-going one for music).
Meanwhile, fMRI studies have revealed activation in regions surrounding Broca's area (a brain area commonly associated with speech production) after an unexpected chord is played (Koelsch et al., 2002) or when participants listen to unscrambled, compared to scrambled, versions of classical music (Levitin & Menon, 2003). These findings have led some to broaden the presumed role of Broca's area: rather than simply being responsible for speech production, it also seems to be heavily involved in processing grammatical elements, whether in music or language.
Finally, some lesion studies have shown that damage to the temporal lobe disrupts not only music recognition, but also the processing of 'music-like' features of language. Patel et al. (1998) found that stroke patients with temporal lobe damage were unable to distinguish between a rising pitch (signalling a question) and a falling pitch (signalling a statement) at the end of sentences, in addition to showing music recognition deficits.
Evidence for independence
On the other hand, we have evidence for independence, or the idea that the brain regions for music and language are spatially separate. For example, there is a general consensus that regions in the right and left hemispheres are primarily responsible for pitch recognition and language processing, respectively.
More evidence for independence comes from studying the processing of lexical tone -- pitch in tonal languages such as Mandarin, which use tone to convey semantic meaning. Using fMRI, Wong et al. (2004) found that English monolinguals processed the pitch contours of both English and Mandarin sentences in the right insular cortex. Meanwhile, English-Mandarin bilinguals showed greater right insula activation when listening to pitch contours from English sentences, and greater left insula activation for Mandarin sentences.
This suggests that pitch is processed in a more 'music-like' fashion when English is used, potentially because pitch plays a limited to non-existent role in conveying semantic meaning. However, when paired with a tonal language like Mandarin, pitch is processed in more language-related regions in the left hemisphere.
Here, the authors argue that because the stimulus in both cases is the same (i.e., both monolinguals and bilinguals are exposed to both languages), the distinction lies not in the input information, but in the mental processes occurring within the individual, depending on whether they speak one language or two. This is something I don't agree with, an objection admittedly motivated by my preference for ecological psychology. Why does the difference have to lie in the mental processing? What if, instead, there are many different kinds of information present (even for the same stimulus!), and individuals with different language backgrounds simply have different levels of sensitivity to different informational sources? That is, perhaps bilingual participants are highly attuned to a class of pitch information that specifies which of the four Mandarin tones is being communicated, whereas monolingual English speakers aren't. In this case, the difference lies in their ability to detect specifying information, not in processing, computing, or representing ambiguous nonspecifying information.
Concluding remarks
One more interesting argument the authors make is that the necessity of a brain region is revealed by lesion studies, while the contribution of a region is revealed by brain imaging. Here, they suggest that while some brain structures are specialised for specific musical functions, others simply contribute resources without being strictly necessary.
While this argument seemingly acknowledges the many-to-many function-to-structure mapping advocated by Pessoa (2022), I suspect it is still firmly rooted in the enduring tradition of modularity. For now, we'll end here and look forward to subsequent chapters that delve deeper into specific areas of music psychology research. Next up, the perception of musical pitch and melody!
References
Bangert, M., & Schlaug, G. (2006). Specialization of the specialized in features of external human brain morphology. European Journal of Neuroscience, 24(6), 1832–1834. https://doi.org/10.1111/j.1460-9568.2006.05031.x
Besson, M., & Schön, D. (2003). Comparison between language and music. In I. Peretz & R. Zatorre (Eds.), The cognitive neuroscience of music (pp. 269–293). Oxford University Press.
Bever, T. G., & Chiarello, R. J. (1974). Cerebral dominance in musicians and nonmusicians. Science, 185(4150), 537–539. https://doi.org/10.1126/science.185.4150.537
Koelsch, S., Gunter, T. C., Cramon, D. V., Zysset, S., Lohmann, G., & Friederici, A. D. (2002). Bach speaks: A cortical “language-network” serves the processing of music. NeuroImage, 17(2), 956–966. https://doi.org/10.1006/nimg.2002.1154
Levitin, D. J., & Menon, V. (2003). Musical structure is processed in “language” areas of the brain: A possible role for Brodmann Area 47 in temporal coherence. NeuroImage, 20(4), 2142–2152. https://doi.org/10.1016/j.neuroimage.2003.08.016
Patel, A. D., Peretz, I., Tramo, M., & Labreque, R. (1998). Processing prosodic and musical patterns: A neuropsychological investigation. Brain and Language, 61(1), 123–144. https://doi.org/10.1006/brln.1997.1862
Pessoa, L. (2022). The entangled brain: How perception, cognition, and emotion are woven together. MIT Press.
Tan, S., Pfordresher, P., & Harré, R. (2010). Psychology of music: From sound to significance. Psychology Press.
Wong, P. C. M., Parsons, L. M., Martinez, M., & Diehl, R. L. (2004). The role of the insular cortex in pitch pattern perception: The effect of linguistic contexts. Journal of Neuroscience, 24(41), 9153–9160. https://doi.org/10.1523/jneurosci.2225-04.2004