• Language and music

     

    Abstract: The hiphop community of practice encompasses a range of aesthetic values, norms, patterns, and traditions. Because of its growth over the last three decades, the community has come to include regionally specific networks linked together by community members who engage in meaningful practices and experiences. Expressed through common language ideologies, these practices contribute to members’ communal and individual identities while simultaneously providing platforms to articulate social understandings. Using the constructs of community of practice and social networks, this research project is an interpretive study grounded primarily in lyrics and interviews, investigating the linguistic patterns and language norms of hiphop’s southern network, with emphasis on the Atlanta, Georgia southern hiphop network. The two main goals are to gain an understanding of the role of school in the cultivation of the network and to identify the network’s relationship to schooling and education. The purpose is to identify initial steps for implementing a hiphop pedagogy in curriculum and instruction.

     

    Keywords: Hiphop community of practice, social network, language ideology, hiphop generation, indigenous research, schooling, education

     

    Summary: This study adds to the body of research on incidental vocabulary acquisition by investigating a previously overlooked source of vocabulary learning in EFL contexts: English pop songs. Pop songs are, in fact, an ideal source for incidental vocabulary learning because teenagers often spend large amounts of their free time listening to music, and in particular to pop songs (cf. Murphey 1990: 14), most of which today are in English. In addition, songs combine music and language, and there is some general evidence from neuroscientific research (cf. Schön et al. 2008; Kolinsky et al. 2009) that music may indeed enhance language learning. However, few empirical studies have addressed the specific issue of incidental vocabulary acquisition from songs, so that language teachers’ beliefs (cf. e.g. Medina 1990; Abbott 2002: 10) and anecdotal evidence are the main sources supporting such claims. By systematically analysing the belief that vocabulary can be learned from oral input such as pop songs, this paper approaches the issue from an empirical applied linguistics perspective. It also adds a further focus by investigating incidental vocabulary learning in out-of-school contexts: given the ubiquity of English-language media nowadays, EFL learners’ contact with the foreign language outside school is ever increasing.


    Abstract: Music and rhythm have been described as powerful aids to language learning, memory, and recall. But is this due to the structural and motivational properties of instrumental music and songs, or is there a relation between learners’ language aptitude and musical intelligence? It seems that everyone who feels motivated is able to learn other languages to some degree, as long as an appropriate learning method is used. However, learning foreign languages is not easy, as many variables need to be considered if the desired result is optimal language learning in a non-bilingual environment. Probably one of the main obstacles to learning a foreign language in this context is the lack of continuous target-language auditory input. While in first language acquisition babies start receiving auditory stimuli in their mother’s womb, in foreign language learning opportunities to receive auditory input are mainly limited to the classroom, the teacher, the classmates, and situations in which listening is included in the lesson. Language acquisition depends on interaction, and within interaction affect has been shown to be a mediating force for communication to become successful. For instance, teacher talk and parental talk share many similar features. Both can be described as simplified codes created to help the hearer learn and understand language (Arnold and Fonseca-Mora, 2007). They share features such as the frequent use of repetition, formulaic expressions, expansions, a preference for simplified vocabulary, changes in voice volume, and modification of intonational contours. These speech melodies are indicators of emotions, and they have a great impact on communication because, as Berger and Schneck (2003: 689) state: "Humans are not thinking machines that feel, but rather, feeling machines that think". These melodies become an aid to language learning. The exaggerated melodic contours found in adult speech directed to infants are considered intuitive parental behaviour guiding babies’ musical beginnings (Papousek 1996), but they are also seen as species-specific learning guidance towards language (Feu and Piñero 1996; Wermke and Mende 2009). Melodies, and music in general, are present in the language teaching context as well.


    Keywords: language acquisition, melodies, music, language teaching

     

    Abstract: Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to examine, using positron emission tomography (PET), the neural substrates involved in accessing the "song lexicon", understood as a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the syllable "la" sung on the original pitches (melody). The auditory stimuli were designed to be equally familiar to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound-type decision task (control) designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.

     

    Abstract: Songs naturally bind lyrics and melody into a unified representation. Using a subsequent memory paradigm, we examined the neural processes associated with binding lyrics and melodies during song encoding. Participants were presented with songs in two conditions: a unified condition (melodies sung with lyrics) and a separate condition (melodies sung with the syllable "la"). In both cases, written lyrics were displayed, and participants were instructed to memorize them by repeating them covertly or by generating mental images of the songs. We expected the unified condition to recruit the posterior superior temporal gyrus, known to be involved in perceptual integration of songs, as well as the left inferior frontal gyrus (IFG). Conversely, we hypothesized that the separate condition would engage a larger network, including the hippocampus to bind lyrics and melodies of songs, and the basal ganglia and the cerebellum to ensure the correct sequential coupling of verbal and musical information in time. Binding lyrics and melodies in the unified condition revealed activation of the left IFG, bilateral middle temporal gyrus (MTG), and left motor cortex, suggesting strong linguistic processing for this condition. Binding in the separate compared to the unified condition revealed greater activity in the right hippocampus as well as other areas, including the left caudate, left cerebellum, and right IFG. This study provides novel evidence for the role of the right hippocampus in binding lyrics and melodies in songs. Results are discussed in light of studies of binding in the visual domain and highlight the role of regions involved in timing and synchronization, such as the basal ganglia and the cerebellum.

     

    Keywords: Memory, Binding, Lyrics, Melodies, Songs, Hippocampus, Basal ganglia, IFG

     

     

    Abstract: We conducted two functional magnetic resonance imaging (fMRI) experiments to investigate the neural underpinnings of knowledge and misperception of lyrics. In fMRI experiment 1, a linear relationship between familiarity with lyrics and activation was found in left-hemispheric speech-related as well as bilateral striatal areas, which is in line with previous research on the generation of lyrics. In fMRI experiment 2, we employed so-called Mondegreens and Soramimi to induce misperceptions of lyrics, revealing a bilateral network including middle temporal and inferior frontal areas as well as the anterior cingulate cortex (ACC) and mediodorsal thalamus. ACC activation also correlated with the extent to which misperceptions were judged as amusing, corroborating previous neuroimaging results on the role of this area in mediating the pleasant experience of chills during music perception. Finally, we examined the areas engaged during misperception of lyrics using diffusion-weighted imaging (DWI) to determine their structural connectivity. These combined fMRI/DWI results could serve as a neurobiological model for future studies on other types of misunderstanding, which are events with a potentially strong impact on our social life.

     

    Keywords: Connectivity, DWI, fMRI, Lyrics, Misperception, Music

     

    Abstract: Expectations and prior knowledge can strongly influence our perception. In vision research, such top-down modulation of perceptual processing has been studied extensively using ambiguous stimuli, such as reversible figures. Here, we propose a novel method to address this issue in the auditory modality during speech perception by means of Mondegreens and Soramimi, which are song lyrics with the potential for misperception within one language or across two languages, respectively. We demonstrate that such phenomena can be induced by visual presentation of the alternative percept and occur with sufficient probability to exploit them in neuroscientific experiments. Song familiarity did not influence the occurrence of such altered perception, indicating that this tool can be employed irrespective of the participants’ knowledge of the music. On the other hand, previous knowledge of the alternative percept had a strong impact on the strength of altered perception, which is in line with frequent reports that these phenomena can have long-lasting effects. Finally, we demonstrate that the strength of changes in perception correlated with the extent to which they were experienced as amusing, as well as with the vocabulary of the participants as a source of potential interpretations. These findings suggest that such perceptual phenomena might be linked to the pleasant experience of resolving ambiguity, which is in line with Hermann von Helmholtz’s long-standing theory that perception and problem-solving recruit similar processes.


    Abstract: The medial prefrontal cortex (MPFC) is regarded as a region of the brain that supports self-referential processes, including the integration of sensory information with self-knowledge and the retrieval of autobiographical information. I used functional magnetic resonance imaging and a novel procedure for eliciting autobiographical memories with excerpts of popular music dating to one’s extended childhood to test the hypothesis that music and autobiographical memories are integrated in the MPFC. Dorsal regions of the MPFC (Brodmann area 8/9) were shown to respond parametrically to the degree of autobiographical salience experienced over the course of individual 30 s excerpts. Moreover, the dorsal MPFC also responded on a second, faster timescale corresponding to the signature movements of the musical excerpts through tonal space. These results suggest that the dorsal MPFC associates music and memories when we experience emotionally salient episodic memories that are triggered by familiar songs from our personal past. MPFC acted in concert with lateral prefrontal and posterior cortices both in terms of tonality tracking and overall responsiveness to familiar and autobiographically salient songs. These findings extend the results of previous autobiographical memory research by demonstrating the spontaneous activation of an autobiographical memory network in a naturalistic task with low retrieval demands.

     

    Keywords: emotion, episodic memory, fMRI, medial prefrontal cortex, tonality