Music listening while you learn: No influence of background music on verbal learning

An Erratum to this article was published on 08 February 2010

Abstract

Background

Whether listening to background music enhances verbal learning performance is still disputed. In this study we investigated the influence of listening to background music on verbal learning performance and the associated brain activations.

Methods

Musical excerpts were composed for this study to ensure that they were unknown to the subjects and designed to vary in tempo (fast vs. slow) and consonance (in-tune vs. out-of-tune). Noise was used as a control stimulus. 75 subjects were randomly assigned to one of five groups and learned the presented verbal material (non-words with and without semantic connotation) with and without background music. Each group was exposed to one of five different background stimuli (in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, and noise). The number of learned words served as the dependent variable. In addition, event-related desynchronization (ERD) and event-related synchronization (ERS) of the EEG alpha band were calculated as a measure of cortical activation.

Results

We did not find any substantial and consistent influence of background music on verbal learning. There was neither an enhancement nor a decrease in verbal learning performance during the background stimulation conditions. However, we found a stronger event-related desynchronization around 800 - 1200 ms after word presentation for the group that learned the verbal material while exposed to in-tune fast music. There was also a stronger event-related synchronization around 1600 - 2000 ms after word presentation for the group exposed to out-of-tune fast music.

Conclusion

Exposure to background music varying in tempo and consonance did not influence the learning of verbal material; there was neither an enhancing nor a detrimental effect on verbal learning performance. The EEG data suggest that the different acoustic background conditions evoked different cortical activations. The reason for these different cortical activations is unclear. The most plausible explanation is that when background music draws attention away from the learning task, verbal learning performance is kept constant by the recruitment of compensatory mechanisms.

Background

Whether background music influences performance in various tasks is a long-standing issue that has not yet been adequately addressed. Most published studies have concentrated on typical occupational tasks such as office work, conveyor-belt labour, or car driving [1–13]. These studies mainly concluded that background music has a detrimental influence on the primary (occupational) task. However, the influence of background music was modulated by task complexity (the more complex the task, the stronger the detrimental effect of background music) [4], personality traits (with extraverts being more prone to be influenced by background music) [2, 4–6], and mood [14]. In fact, these studies mostly emphasise that mood enhancement by pleasant and alerting background music enhances performance of monotonous tasks such as those during night shifts.

Whether background music influences performance of academic and school-related skills has also been investigated. A broad range of skills has been considered, including the impact of background music on learning mathematics, reading texts, solving problems, perceiving visual or auditory information, learning verbal material (vocabulary or poems), and making decisions [2, 8, 15–40]. The findings of these studies are mixed, but most of them revealed that background music exerts a detrimental influence on the primary academic task.

The present study was designed to readdress the question of whether background music enhances verbal learning. There are a number of reasons for this renewed interest. Firstly, only a few of the preceding studies have examined the effects of background music on verbal learning in particular [31, 41–45], reporting more or less detrimental effects of background music on verbal learning, whereas several scientifically weak contributions have suggested that listening to background music (in particular classical music) has beneficial effects on learning languages [46, 47]. Since verbal learning is an important part of academic achievement, we find it important to study the influence of background music on verbal learning more thoroughly. Secondly, the published studies used music of different genres (pop, classical), with or without vocals, musical pieces chosen to elicit different emotions, music with different tempi, or simple tones as background music. No study has as yet controlled for the effects of emotion, complexity, tempo, and associated semantic knowledge of the musical pieces. The major aim of the present study was therefore to control these variables.

  1.

    We used musical pieces unknown to the subjects. For this, we composed new musical pieces, avoiding any resemblance to well-known and familiar tunes. In doing this, we circumvented the well-known effect that particular contents of episodic and semantic memory are associated with musical pieces [48–50]. Hearing a familiar musical piece might otherwise activate episodic and semantic memory and lead to preferential or biased processing of the learned or to-be-learned verbal stimuli.

  2.

    A further step in avoiding activation of a semantic or episodic network was to use meaningless words. In combination with using unfamiliar musical pieces, this strategy ensures that established (or easy to establish) associations between musical pieces and particular words are not activated.

  3.

    The musical pieces were designed to evoke pleasantness and activation to different degrees. Based on the mood-activation hypothesis proposed by Glenn Schellenberg and colleagues, we anticipated that music evoking more pleasant affect might influence verbal learning more positively than music evoking negative emotions [51–53].

  4.

    Within the framework of the theory of changing state effects [54, 55], we anticipated that rapidly changing auditory information would distract verbal learning more seriously than slowly changing music. Thus, slower musical pieces would exert less detrimental effects on verbal learning than faster music.

  5.

    Given that the potentially beneficial effects of background music have also been explained by a more or less unspecific cortical activation pattern, which should be evoked by the music and would change the activation of the cortical network controlling learning and memory processes, we also recorded EEG measures during learning and recognition. Here, we used event-related desynchronization and event-related synchronization of the EEG alpha band as indices of cortical activation. Our interest in event-related synchronization and desynchronization in the alpha band relates to roughly two decades of work on alpha power demonstrating the relationship between alpha-band power and cortical activity [56]. In addition, several recent combined EEG/fMRI and EEG/PET papers strongly indicate that power in the alpha band is inversely related to activity in lateral frontal and parietal areas [57–59], and it has been shown that the alpha band reflects cognitive and memory performance. For example, good memory performance is related to a large phasic (event-related) power decrease in the alpha band [56].

Methods

Subjects

77 healthy volunteers took part in this experiment (38 men and 39 women). Two subjects were excluded because of data loss during the experiment. All subjects were recruited through advertisements placed at the University of Zurich and ETH Zurich. All subjects underwent evaluation to screen for chronic diseases, mental disorders, medication, and drug or alcohol abuse. Normal hearing ability was confirmed for all subjects using standard audiometry. For intelligence assessment, a short test [60] was used that is known to correlate with standard intelligence test batteries (r = 0.7 - 0.8). In addition, the NEO-FFI [61] was used to measure the personality trait "extraversion" because of its strong correlation with dual task performance [4–6]. All subjects were tested for basic verbal learning ability using a standard German verbal learning test (Verbaler Lern- und Merkfähigkeitstest) [62, 63]. All subjects were consistently right-handed, as assessed with the Annett-Handedness-Questionnaire [64]. All subjects indicated that they had not received formal musical education for more than five years during their school years and that they had not played any musical instrument in the last five years. We also asked the subjects whether they had previously learned while listening to music. Most of them confirmed having done so, and a few (n = 5) indicated having done so frequently. The sample characteristics of the tested groups are listed in Table 1. There were no statistical between-group differences in these measures. Each subject gave written, informed consent and received 30 Swiss Francs for participation. The study was carried out in accordance with the Declaration of Helsinki principles and was approved by the ethics committee of the University of Zurich.

Table 1 Mean sample characteristics of the five groups studied.

Study design

The basic principle of this study was to explore verbal memory performance under different acoustic background stimulation conditions. The subjects performed a verbal memory test (see below) while acoustic background stimuli were present (background+) or absent (background-). Four different musical pieces and a noise stimulus were used as acoustic background stimuli (in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise; for a description of these acoustic stimuli see below). The 75 subjects were randomly assigned to one of these five groups, each group therefore comprising 15 subjects. These five groups did not differ in terms of age, IQ, or extraversion/introversion (tested with Kruskal-Wallis tests).

In the background+ condition, participants were required to learn while one of the above-mentioned background stimuli was present. Thus, the experiment comprised two factors: a grouping factor with five levels (Group: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise) and a repeated measurements factor with two levels (Background: without acoustic background [background-] and with acoustic background [background+]). We also measured the electroencephalogram (EEG) during the different verbal learning conditions to explore whether the different learning conditions are associated with particular cortical activation patterns. The order of background stimulation (verbal learning with or without background stimulation) was counterbalanced across all subjects. There was an intermittent period of 12-14 minutes between the two learning sessions during which the subjects rated the quality of the background stimuli and rested for approximately 8 minutes.

Background stimuli

Several studies have shown that tempo and the level of consonance/dissonance of musical excerpts strongly determine the level of arousal and emotional feelings [51, 65, 66]. We therefore designed four different 16-minute musical pieces differing in tempo and tuning. The musical excerpts were computerised piano sounds designed using FL Studio 4 software [67]. We composed a musical excerpt in C major consisting of a melody and accompanying fundamental chords. This original musical excerpt was systematically varied in terms of tuning (in-tune, out-of-tune) and tempo (fast, slow), resulting in four different musical background stimuli (Figure 1). Two of these background stimuli were fast (in-tune fast, out-of-tune fast), and two of them were slow (in-tune slow, out-of-tune slow) (musical excerpts can be downloaded as supplementary material [68]). The in-tune excerpts comprised the typical semitone steps between the tones, while in the out-of-tune excerpts the melody was pitch-shifted one quarter-tone above the original pitch, resulting in an out-of-tune percept. The tempo of the musical excerpts was varied by changing the beats per minute (160 bpm for fast and 60 bpm for slow) [51]. In addition, we designed a noise stimulus (also 16 minutes long; brown noise) with a temporal envelope similar to that of the other four musical excerpts. In summary, we applied five different kinds of background stimuli: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, and noise.
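
Both manipulations are easy to express numerically. As a minimal sketch (assuming equal temperament; the actual stimuli were rendered in FL Studio), a quarter-tone shift corresponds to 50 cents, i.e. a frequency ratio of 2^(50/1200), and the tempo manipulation simply rescales the beat duration:

```python
# A quarter-tone is 50 cents; its frequency ratio in equal temperament
# is 2 ** (50 / 1200), roughly 1.0293.
QUARTER_TONE = 2 ** (50 / 1200)

def detune(freqs_hz):
    """Shift melody frequencies one quarter-tone upward (out-of-tune variant)."""
    return [f * QUARTER_TONE for f in freqs_hz]

def beat_duration_s(bpm):
    """Seconds per beat: 160 bpm (fast) vs. 60 bpm (slow)."""
    return 60.0 / bpm

c_major = [261.63, 293.66, 329.63, 349.23, 392.00]   # C4 D4 E4 F4 G4 (Hz)
print(detune(c_major))                               # out-of-tune melody
print(beat_duration_s(160), beat_duration_s(60))     # 0.375 s vs. 1.0 s per beat
```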

Figure 1

Schematic description of the musical excerpts used (left panel) and of the experimental design (right panel).

In a pilot study, 21 subjects (who did not take part in the main experiment) evaluated these stimuli according to the experienced arousal and valence on a 5-point Likert scale (ranging from 0 to 4). In-tune music was generally rated as more pleasant than both out-of-tune music and noise (mean valence rating: in-tune fast: 3.04, in-tune slow: 2.5, out-of-tune fast: 1.7, out-of-tune slow: 0.9, noise: 0.6; significant differences between all stimuli). In terms of arousal, the slow musical excerpts were rated as less arousing than the fast excerpts and the noise stimulus (mean arousal rating: in-tune fast: 2.4, in-tune slow: 0.8, out-of-tune fast: 2.14, out-of-tune slow: 1.05, noise: 1.95; significant differences between all stimuli). These five acoustic stimuli were used as background stimuli for the main verbal learning experiment. In this experiment, background stimuli were binaurally presented via headphones (Sennheiser HD 25.1) at approximately 60 dB.

Verbal memory test

Verbal learning was examined using a standard verbal learning test that is frequently used for investigations with German-speaking subjects (Verbaler Lerntest, VLT). This test has been shown to validly measure verbal long-term memory [62, 63]. The test comprises 160 items and includes neologisms that evoke either strong (80 items) or weak (80 items) semantic associations. In the test, most of the neologisms are novel (i.e., presented only once), while 8 of the neologisms are presented repeatedly (7 presentations each), resulting in a total of 104 novel trials and 56 repetition trials. In the procedure used here, subjects were seated in front of a PC screen in an electromagnetically shielded room and were asked to discriminate between novel neologisms (NEW) and those that were presented in previous trials (OLD, i.e., repetitions). Subjects were instructed to respond after the presentation of every word by pressing either the right or left button of a computer mouse (right for OLD, left for NEW). Each trial started with a fixation cross (0 - 250 ms) followed by the presentation of a particular word (1150 - 2150 ms). The inter-trial interval, that is, the time between the onsets of the words of two consecutive trials, was 6 seconds. Performance in this memory test was measured as the number of correct responses for recognition of new and old words. For this test we had two parallel versions (versions A and B), which allowed us to test the same subjects in the two different background conditions (i.e., [background-] and [background+]). The resulting trial structure is illustrated in the sketch below.
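
The following sketch illustrates this trial structure (illustrative item names and a random order only; the published test uses a fixed item list):

```python
import random

def build_vlt_sequence(n_items=160, n_old=8, n_reps=7, seed=0):
    """Sketch of a VLT-like trial list: 8 repeated neologisms x 7 presentations
    = 56 repetition trials, mixed among 104 unique neologisms."""
    rng = random.Random(seed)
    n_new = n_items - n_old * n_reps            # 160 - 56 = 104 novel trials
    trials = [(f"new_{i:03d}", "NEW") for i in range(n_new)]
    for i in range(n_old):
        # simplification: every presentation of a repeated item is tagged OLD,
        # although its first occurrence is necessarily novel to the subject
        trials += [(f"old_{i}", "OLD")] * n_reps
    rng.shuffle(trials)                         # the real test uses a fixed order
    return trials

trials = build_vlt_sequence()
assert len(trials) == 160
```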

Psychometrical measures

Several psychological measures were obtained after each experimental condition. First, the participants rated their subjective mood state using the MDBF questionnaire (Multidimensionaler Befindlichkeitsfragebogen) [69]. The MDBF comprises 12 adjectives (content, rested, restless, bad, worn-out, composed, tired, great, uneasy, energetic, relaxed, and unwell) for which the subjects had to indicate on a 5-point scale how well the particular adjective described their actual feeling (1 = not at all; 5 = perfectly). These evaluations are entered into summary scores along the three dimensions valence, arousal, and alertness. The acoustic background stimuli were evaluated using an adapted version of the Music Evaluation Questionnaire (MEQ) [70]. This questionnaire comprises questions evaluating the preference for the presented musical stimuli and how relaxing they are. In this questionnaire, subjects were also asked how they felt after listening to the music (i.e., cheerful, sad, aggressive, harmonious, drowsy, activated, and excited). All items were rated on 5-point Likert scales ranging from (1) not at all to (5) very strongly. On the basis of a factor analysis, the 10 scales were reduced to 3 scales: subjective pleasantness, activation (arousal), and sadness.

EEG recording

The electroencephalogram (EEG) was recorded from 30 scalp electrodes (Ag/AgCl) using a Brain Vision amplifier system (BrainProducts, Germany). Electrodes were placed according to the 10-20 system. Two additional channels were placed below the outer canthi of each eye to record electro-oculograms (EOG). All channels were recorded against a reference electrode located at FCz. EEG and EOG were analogue filtered (0.1-100 Hz) and recorded with a sampling rate of 500 Hz. During recording, impedances on all electrodes were kept below 5 kΩ.

EEG preprocessing

EEG data were preprocessed and analysed using BrainVision Analyzer (BrainProducts, Munich, Germany) and Matlab (Mathworks, Natick, MA). EEG data were off-line filtered (1-45 Hz) and re-referenced to a common average reference. Artefacts were rejected using an amplitude threshold criterion of ± 100 μV. Independent component analysis was applied to remove ocular artefacts [71, 72]. EEG data were then segmented into epochs (-1000 - 4000 ms) relative to the onset of the word stimulus.
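
The authors performed these steps in BrainVision Analyzer and Matlab; purely as an illustration, an equivalent pipeline can be sketched with the open-source MNE-Python package (the file name and ICA settings are hypothetical):

```python
import mne

raw = mne.io.read_raw_brainvision("subject01.vhdr", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=45.0)        # off-line band-pass 1-45 Hz
raw.set_eeg_reference("average")           # common average reference

# ICA-based removal of ocular artefacts (cf. Jung et al. [71, 72])
ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw)       # uses the recorded EOG channels
ica.exclude = eog_inds
ica.apply(raw)

# Epochs from -1000 to 4000 ms around word onset; drop epochs exceeding 100 µV
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, tmin=-1.0, tmax=4.0,
                    baseline=None, reject=dict(eeg=100e-6), preload=True)
```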

Analysis of the time course of event-related desynchronization and event-related synchronization was performed according to the classical method described elsewhere [73, 74]. We included NEW (i.e., neologisms) and OLD (i.e., presented previously) trials in the event-related synchronization/desynchronization analysis, and only those trials correctly identified as NEW and OLD by the participants. In this study, we calculated event-related synchronization/desynchronization in the alpha band. Several recent combined EEG/fMRI and EEG/PET papers strongly indicate that power in the alpha band is inversely related to activity in lateral frontal and parietal areas [57–59], and it has been shown that the alpha band reflects cognitive and memory performance. In the procedure used here, event-related synchronization/desynchronization in the alpha band was analysed by filtering the artefact-free segments with a digital band-pass filter (8-12 Hz). Amplitude samples were then squared and averaged across all trials, and a low-pass filter (4 Hz) was used to smooth the data. The mean alpha-band activity in the latency range -1000 - 0 ms relative to word stimulus onset was defined as the intra-experimental resting condition (i.e., the baseline condition). To quantify the power changes during verbal learning, event-related synchronization/desynchronization values were calculated according to the following formula: ERD/ERS = (band power_task - band power_baseline) × 100 / band power_baseline. Note that negative values indicate a relative decrease in alpha-band power (event-related desynchronization) during the experimental condition compared to the baseline condition, while positive values indicate an increase of alpha-band power during the experimental condition (event-related synchronization). In order to avoid multiple comparisons, event-related synchronization/desynchronization values were averaged over 10 time windows with a duration of 400 ms each, and were collapsed for the frontal (FP1, FP2, F7, F3, Fz, F4 and F8), central-temporal (T7, C3, Cz, C4 and T8), and parieto-occipital (P7, P3, Pz, P4, P8, O1, Oz and O2) electrode locations [75] (see also Figure 2). Taken together, we obtained a time course of event-related synchronization/desynchronization changes over three different cortical regions (frontal, central-temporal, parieto-occipital) and over 4000 ms after stimulus presentation (10 event-related synchronization/desynchronization values for the entire time course). For this paper, we restrict our analysis to the first 5 time segments after word presentation, and thus concentrate on the time interval of 0 - 2000 ms after word presentation.
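
A minimal NumPy/SciPy sketch of this ERD/ERS computation (our illustration, not the authors' analysis code) might look as follows:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500  # sampling rate in Hz

def erd_ers(epochs, fs=FS, band=(8.0, 12.0), t0=-1.0):
    """epochs: array (n_trials, n_samples) spanning -1000 to 4000 ms around
    word onset. Returns ERD/ERS in % relative to the pre-stimulus baseline;
    negative = desynchronization (ERD), positive = synchronization (ERS)."""
    # 1) band-pass filter each trial in the alpha band (8-12 Hz)
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    alpha = filtfilt(b, a, epochs, axis=-1)
    # 2) square the amplitude samples and average across trials
    power = (alpha ** 2).mean(axis=0)
    # 3) smooth with a 4 Hz low-pass filter
    lb, la = butter(4, 4.0 / (fs / 2), btype="low")
    power = filtfilt(lb, la, power)
    # 4) express as % change relative to the -1000 - 0 ms baseline
    n_base = int(-t0 * fs)                 # samples before word onset
    baseline = power[:n_base].mean()
    return (power - baseline) * 100.0 / baseline

def window_means(erd, fs=FS, t0=-1.0, win=0.4, n_win=10):
    """Average the ERD/ERS time course into ten 400 ms windows after onset."""
    start = int(-t0 * fs)
    return [erd[start + int(i * win * fs): start + int((i + 1) * win * fs)].mean()
            for i in range(n_win)]
```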

Figure 2

Schematic depiction of the electrode clusters used for averaging.

In order to examine possible hemispheric differences, we subsequently calculated left- and right-sided event-related synchronization/desynchronization for the frontal, central-temporal, and parieto-occipital electrodes (left frontal: FP1, F7, F3; right frontal: FP2, F4, F8; left central-temporal: T7, C3; right central-temporal: C4, T8; left parieto-occipital: P7, P3, O1; right parieto-occipital: P4, P8, O2). Some studies have reported asymmetric brain activation patterns during music perception [76, 77], whereas others have reported different findings [78, 79].

Statistical analysis of event-related synchronization/desynchronization

For the main analysis of the event-related synchronization/desynchronization, a four-way ANOVA with repeated measurements on the following factors was applied: Time Course (5 epochs after word stimulus presentation), Brain Area (3 levels: frontal, central-temporal, parieto-occipital), Background (2 levels: learning with acoustic background = background+, learning without acoustic background = background-), and the grouping factor Group (5 levels: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise). Following this, we computed a four-way repeated measurements ANOVA, including the event-related synchronization/desynchronization data obtained for the left- and right-sided electrodes of interest, to examine whether hemispheric differences might influence the overall effect. For this, we used the multivariate approach to handle the problem of heteroscedasticity [80]. Results were considered significant at the level of p < 0.01. We used this more conservative threshold to guard against problems associated with multiple testing. All statistical analyses were performed using the statistical software package SPSS 17.01 (MAC version). In the case of significant interaction effects, post-hoc paired t-tests were computed using the Bonferroni-Holm correction [81].
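
For illustration, the sequentially rejective Bonferroni-Holm procedure [81] used for the post-hoc tests can be sketched in a few lines (the p-values below are illustrative, not the study's full test family):

```python
import numpy as np

def holm_bonferroni(pvals, alpha=0.05):
    """Sketch of the Bonferroni-Holm procedure: test p-values in ascending
    order against alpha / (m - rank) and stop at the first non-rejection."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # all larger p-values are also retained
    return reject

print(holm_bonferroni([0.009, 0.025, 0.20], alpha=0.05))  # [True, True, False]
```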

In order to assess whether there are between-hemisphere differences in the cortical activations during listening to background music, we subjected the event-related synchronization/desynchronization data of the frontal, central-temporal, and parieto-occipital ROIs separately to a four-way ANOVA with Hemisphere (left vs. right), Group, Background, and Time as factors (Hemisphere, Background, and Time are repeated measurements factors). If background music, and especially background music of different valence, evoked different lateralization patterns, then the Hemisphere × Group or Hemisphere × Group × Background interactions should become significant. Thus, we were only interested in these interactions.

Results

Learning performance

Subjecting the verbal learning data to a 2-way repeated measurements ANOVA with repeated measurements on one factor (Background: background+ and background-) and the grouping factor (Group: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise) revealed neither significant main effects (Background: F(1,70) = 0.073, p = 0.788, eta2 = 0.001; Group: F(4,70) = 1.42, p = 0.235, eta2 = 0.075) nor a significant interaction (F(4,70) = 0.90, p = 0.47, eta2 = 0.049) (Figure 3 shows the means and standard errors).
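
The authors computed this mixed ANOVA in SPSS 17.01; as a sketch only, the same 2 × 5 design can be run with the open-source pingouin package (the data file and column names are hypothetical):

```python
import pandas as pd
import pingouin as pg  # assumed library; the authors used SPSS 17.01

# df: one row per subject x background condition, with columns 'subject',
# 'group' (five background-stimulus groups), 'background'
# ('background+' / 'background-') and 'score' (correct responses)
df = pd.read_csv("verbal_learning.csv")  # hypothetical data file

aov = pg.mixed_anova(data=df, dv="score", within="background",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])
```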

Figure 3

Mean verbal learning performance (number of correct responses) for the five experimental groups, broken down for learning with (background+, in red) and without (background-, in blue) musical background. ITF: in-tune fast, ITS: in-tune slow, OTF: out-of-tune fast, OTS: out-of-tune slow. Shown are means and standard errors (as vertical bars).

Emotional evaluation of acoustic background

The valence and arousal ratings for the five background conditions were subjected to separate ANOVAs. The valence measures were significantly different for the five background conditions (F(4,70) = 4.75, p < 0.002). Subsequently performed Bonferroni-corrected t-tests revealed significant differences between the in-tune and out-of-tune conditions (mean valence rating ± standard deviation: ITF = 3.5 ± 1.1; ITS = 3.1 ± 0.7; OTF = 2.7 ± 1.0; OTS = 2.3 ± 0.9; noise = 2.2 ± 1.1). For the arousal rating, we obtained no significant difference between the five conditions (F(4,70) = 1.07, p = 0.378).

The MEQ rating data were subjected to three 2-way ANOVAs with one repeated measurements factor (Background: background+ vs. background-) and the grouping factor (Group). There was a significant main effect for Group with respect to pleasantness, with the in-tune melodies receiving the highest pleasantness ratings and the noise stimulus the lowest (F(4,70) = 12.5, p < 0.001, eta2 = 0.42). There was also a trend towards an interaction between Background and Group (F(4,70) = 2.40, p = 0.06, eta2 = 0.12), qualified by reduced pleasantness ratings for the in-tune melodies in the condition in which the subjects were learning. For the sadness scale, we obtained a main effect for Background (F(1,70) = 8.14, p = 0.006, eta2 = 0.10), qualified by lower sadness ratings for the music heard while the subjects were learning.

The presence of an acoustic background also influenced the subjective experience of pleasantness (F(1,70) = 24.6, p < 0.001, eta2 = 0.26), with less pleasantness reported during learning while acoustic background stimulation was present. The subjective feelings of arousal and sadness did not change as a function of the different acoustic background conditions.

EEG data

The event-related synchronization/desynchronization data of the alpha band were first subjected to a 4-way ANOVA with repeated measurements on three factors (Time = 5 levels; Brain Area = 3 levels: frontal, central-temporal, and parieto-occipital; Background = 2 levels: background+ and background-) and one grouping factor (Group; 5 levels: in-tune fast, in-tune slow, out-of-tune fast, out-of-tune slow, noise). Where possible we used the multivariate approach to test the within-subjects effects, since this test is robust against violations of heteroscedasticity [80]. Figure 4 demonstrates the mean ERDs and ERSs as topoplots broken down for the 10 time segments and for learning with (background+) and without (background-) musical background. Figure 5 depicts the grand averages of event-related synchronization and desynchronization for the frontal, central-temporal, and parieto-occipital leads.

Figure 4

Mean ERDs and ERSs (in %) as topoplots, broken down for the 10 time segments and for learning with (background+) and without (background-) musical background. Blue indicates ERD and red ERS. The time segments are printed under the topoplots (upper panel). In the lower panel, the topoplots are shown broken down for the two most interesting time segments (3 and 5) and the different musical background conditions.

Figure 5

Time courses of the changes in alpha-band power after word presentation, broken down for the three brain regions of interest. Each time segment represents the mean ERD/ERS over a 400 ms segment (averaged across the different groups and the 2 background conditions). Indicated are the 10 time segments after word presentation. Negative values indicate a decrease in alpha-band power (event-related desynchronization, ERD), while positive values indicate an increase of alpha-band power (event-related synchronization, ERS) during learning compared to the baseline condition.

This complex ANOVA revealed several main effects and interactions. The event-related synchronization/desynchronization data showed a typical time course, with a strong event-related desynchronization peaking at the second time segment (400 - 800 ms after word presentation). After reaching the maximum event-related desynchronization, the alpha band synchronises again, with the strongest event-related synchronization at the 5th time segment (1600 - 2000 ms after word presentation). This time course is highly significant (F(4,67) = 106.2, p < 0.001, eta2 = 0.86). The time courses of event-related synchronization/desynchronization differ between brain areas, with larger event-related desynchronization for the parieto-occipital leads at the second time segment and larger event-related synchronization, also at the parieto-occipital leads, for the 5th time segment after word presentation (F(8,63) = 34.50, p < 0.001, eta2 = 0.81) (Figure 5). There was also a significant Background × Time × Group interaction (F(16,280) = 2.27, p = 0.004, eta2 = 0.11). In order to delineate this three-way interaction, we conducted two-way ANOVAs with Background and Group as factors separately for each time segment. There were significant Background × Group interactions only for time segments 3 and 5 (T3: F(4,70) = 2.6, p = 0.038, eta2 = 0.13; T5: F(4,70) = 2.7, p = 0.036, eta2 = 0.13). The interaction at time segment 3 (800 - 1200 ms after word presentation) was qualified by a larger event-related desynchronization for verbal learning with background music (background+) for the in-tune fast group (F(1,14) = 9.2, p = 0.009, eta2 = 0.397; significant after Bonferroni-Holm correction at the level of p = 0.05). The interaction at time segment 5 (1600 - 2000 ms after word presentation) was qualified by a larger event-related synchronization for the out-of-tune fast group during background+ (F(1,14) = 6.3, p = 0.025, eta2 = 0.13; marginally significant after Bonferroni-Holm correction at the level of p = 0.05). Figure 6 shows the mean event-related synchronization/desynchronization for time segments 3 and 5.

Figure 6

Means and standard errors for event-related desynchronization (ERD) (A) and event-related synchronization (ERS) (B) values at time segments 3 and 5 (800 - 1200 ms and 1600 - 2000 ms after word presentation, respectively). A) Mean ERD for the five experimental groups, broken down for learning with (background+, in red) and without (background-, in blue) musical background. B) Mean ERS for the five experimental groups, broken down for learning with (background+, in red) and without (background-, in blue) musical background. ITF: in-tune fast, ITS: in-tune slow, OTF: out-of-tune fast, OTS: out-of-tune slow.

The ANOVA conducted to examine potential between-hemisphere differences in event-related synchronization/desynchronization revealed that none of the interactions of interest (Hemisphere × Group, Hemisphere × Group × Background) was significant, even when the statistical threshold was relaxed to p = 0.10.

Discussion

The present study examined the impact of background music on verbal learning performance. For this, we presented musical excerpts that were systematically varied in consonance and tempo. According to the Arousal-Emotion hypothesis, we anticipated that consonant and arousing musical excerpts would influence verbal learning positively, while dissonant musical excerpts would have a detrimental effect on learning. Drawing on the theory of changing state effects, we anticipated that rapidly changing auditory information would distract verbal learning more seriously than slowly changing music. Thus, slower musical pieces were expected to exert less detrimental effects on verbal learning than faster music. In order to control for the global influence of familiarity with and preference for specific music styles, we designed novel in-house musical excerpts that were unknown to the subjects. Using these excerpts, we did not uncover any substantial and consistent influence of background music on verbal learning.

The effect of passive music listening on cognitive performance is a long-standing matter of research. The findings of studies on the effects of background music on cognitive tasks are highly inconsistent, reporting no effects, performance declines, or performance improvements (see the relevant literature mentioned in the introduction). The difference between the studies published to date and the present study is that we used novel musical excerpts and applied experimental conditions controlling for different levels of tempo and consonance. According to the Arousal-Emotion hypothesis, we hypothesised that positive background music arouses the perceiver and evokes positive affect. This hypothesis was not supported by our data, since there was no beneficial effect of positive background music on verbal learning. In addition, according to the theory of changing state effects, we anticipated that rapidly changing auditory information would distract verbal learning more seriously than slowly changing music. This theory was also not supported by our data.

As mentioned above, the findings of previously published studies examining the influence of background music on verbal learning and other cognitive processes are inconsistent, with more studies reporting no influence on verbal learning and other cognitive tasks. Before arguing too strongly for non-existent effects of background music on verbal learning, however, we will discuss the differences between our study and previous studies in this research area. Firstly, it is possible that the musical excerpts used did not induce sufficiently strong arousal and emotional feelings to exert beneficial or detrimental effects on verbal learning. Although the four different musical excerpts significantly differed in terms of valence and arousal, the differences were somewhat smaller in this experiment than in the pilot experiment. Had we used musical excerpts that induce stronger emotional and arousal reactions, the effect on verbal learning might have been stronger. In addition, the difference between the slow and fast music in terms of changing auditory cues might not have been strong enough to influence verbal learning. To date, however, no data are available to indicate "optimal" or more "optimal" levels of arousal and/or emotion, or of changes in auditory cues, for facilitating verbal learning.

A further aspect that distinguishes our study from previous studies is that the musical pieces were unknown to the subjects. It has been shown that music plays an important role in autobiographical memory formation, such that familiar, personally enjoyable, and arousing music can elicit or facilitate the retrieval of autobiographical memories (and possibly other memory contents) [48–50]. It is conceivable that the entire memory system (not only autobiographical memory) is activated (aroused) by this kind of music, which in turn improves encoding and recall of information. Recently, Särkämö et al. [82] demonstrated that listening to preferred music improved verbal memory and attentional performance in stroke patients, thus supporting the Arousal-Emotion hypothesis. However, the specific mechanisms responsible for improving memory functions while listening to music (if indeed present) are still unclear. It is conceivable that the unknown musical pieces used in our study did not activate the memory system and thus exerted no influence on the verbal learning system.

Although verbal learning performance did not differ between the different background stimulation conditions, there were some interesting differences in the underlying cortical activations. Before discussing them, we will outline the similarities in cortical activations observed for different background conditions. During learning, there was a general increase of event-related desynchronization 400-1200 ms after word presentation followed by an event-related synchronization in a fronto-parietal network. This time course indicates that this network is cortically more activated during encoding and retrieval in the first 1200 ms. After this, the activation pattern changes to event-related synchronization, which most likely reflects top-down inhibition processes supporting the consolidation of learned material [56].

Although the general pattern of cortical activation was quite similar across the different background conditions, we also identified some considerable differences. In the time window between 800 - 1200 ms after word presentation, we found stronger event-related desynchronization at frontal and parieto-occipital areas for the in-tune fast group only. Clearly, frontal and parieto-occipital areas are more strongly activated during verbal learning in the ambient setting of in-tune fast music but not in the other conditions. Frontal and parietal areas are strongly involved in different stages of learning. The frontal cortex is involved in encoding and retrieval of information, while the parieto-occipital regions are part of a network involved in storing information [83–87]. One reason for this event-related desynchronization increase could be that the fronto-parietal network devotes greater cortical resources to the verbal learning material in the context of in-tune fast music (which, incidentally, was the musical piece rated most pleasant and arousing). But we did not find a corresponding behavioural difference in learning performance. Thus, the activation increase may have been too small to be reflected in a behaviour-enhancing effect. A further possibility is that the in-tune fast music is the most distracting of the background music types, therefore eliciting more bottom-up driven attention than the other musical pieces and constraining the attentional resources available for the verbal material. The lack of a decline in learning performance suggests that attentional resource capacity was such that this distracting effect (if indeed present) was compensated for.

A further finding is the stronger event-related synchronization in the fronto-parietal network at time segment 5 (1600 - 2000 ms after word presentation) for the out-of-tune groups and especially for the out-of-tune-fast group. Event-related synchronization is considered to reflect the action of inhibitory processes after a phase of cortical activation. For example, Klimesch et al. [56] propose that event-related synchronization reflects a kind of top-down inhibitory influence. Presumably, the subjects exert greater top-down inhibitory control to overcome the adverse impact on learning in the context of listening to out-of-tune background music.

We did not find different lateralisation patterns for event-related synchronization/desynchronization in the context of the different background music conditions. This is in contrast to some EEG studies reporting between-hemisphere differences in neurophysiological activation during listening to music of different valence. Two studies identified a left-sided increase of activation especially when the subjects listened to positively valenced music, as opposed to negative music evoking a preponderance of activation in the right hemisphere (mostly in the frontal cortex) [76, 77, 88]. These findings correspond with studies demonstrating dominance of left-sided activation during approach-related behaviour and stronger right-sided activation during avoidance-related behaviour [89]. Thus, music eliciting positive emotions should also evoke more left-sided activation (especially in the frontal cortex), but not all studies support these assumptions. For example, the studies of Baumgartner et al. [78] and Sammler et al. [79] did not uncover lateralised activations in terms of alpha power changes during listening to positive music. Sammler et al. identified an increase in midline theta activity rather than a lateralised activation pattern. In addition, most fMRI studies measuring cortical and subcortical activation patterns during music listening did not report lateralised brain activations [90–95]. All of these studies report mainly bilateral activation in the limbic system. One study also demonstrates that the brain responses to emotional music change substantially over time [96]. In fact, a differentially lateralised activation pattern due to the valence of the presented music is not a typical finding. However, we believe that the particular pattern of brain activation and possible lateralisation patterns depend on several additional factors influencing brain activation during music listening. For example, experience with or preference for particular music are potential candidates as modulatory factors. Which of these factors are indeed responsible for lateralised activations cannot be clarified on the basis of current knowledge.

Future experiments should use musical pieces with which subjects are familiar and which evoke strong emotional feelings. Such musical pieces may influence memory performance and the associated cortical activations entirely differently. Future experiments should also systematically examine the effect of pre-experimentally present or experimentally induced personal beliefs in the ability of background music to enhance performance and, depending on the findings, control for this in subsequent studies. There may also be as yet undocumented, strong inter-individual differences in the modulatory impact of music on various psychological functions. Interestingly, subjects differ in their attention and empathy styles, and this may influence the performance of different cognitive functions.

Limitations

A methodological limitation of this study is the use of artificial musical stimuli that were unknown to the subjects. In general, we listen to music we really like when we have the opportunity to deliberately choose it. Thus, if subjects learned while listening to music they really appreciate, the results might be entirely different. However, this has to be shown in future experiments.

Conclusion

Using different kinds of background music varying in tempo and consonance, we found no influence of background music on verbal learning. There were only changes in cortical activation in a fronto-parietal network (as measured with event-related desynchronization) around 800 - 1200 ms after word presentation for the in-tune fast group, most likely reflecting a larger recruitment of cortical resources devoted to the control of memory processes. For the out-of-tune groups, we found stronger event-related synchronization in a fronto-parietal network around 1600 - 2000 ms after word presentation, which is thought to reflect stronger top-down inhibitory influences on the memory system. We suggest that this top-down inhibitory influence is at least in part a response to the slightly more distracting out-of-tune music and enables the memory system to adequately reengage in processing the verbal material.

References

  1. Boyle R, Coltheart V: Effects of irrelevant sounds on phonological coding in reading comprehension and short-term memory. Q J Exp Psychol A. 1996, 49: 398-416. 10.1080/027249896392702.

  2. Crawford HJ, Strapp CM: Effects of vocal and instrumental music on visuospatial and verbal performance as moderated by studying preference and personality. Pers Indiv Differ. 1994, 16: 237-245. 10.1016/0191-8869(94)90162-7.

  3. Fox JG, Embrey ED: Music: an aid to productivity. Appl Ergon. 1972, 3: 202-205. 10.1016/0003-6870(72)90101-9.

  4. Furnham A, Allas K: The influence of musical distraction of varying complexity on the cognitive performance of extroverts and introverts. Eur J Personality. 1999, 13: 27-38. 10.1002/(SICI)1099-0984(199901/02)13:1<27::AID-PER318>3.0.CO;2-R.

  5. Furnham A, Trew S, Sneade I: The distracting effects of vocal and instrumental music on the cognitive test performance of introverts and extraverts. Pers Indiv Diff. 1999, 27: 381-392. 10.1016/S0191-8869(98)00249-9.

  6. Furnham A, Bradley A: Music while you work: The differential distraction of background music on the cognitive test performance of introverts and extraverts. Appl Cognitive Psych. 1997, 11: 445-455. 10.1002/(SICI)1099-0720(199710)11:5<445::AID-ACP472>3.0.CO;2-R.

  7. Hirokawa E: Effects of music listening and relaxation instructions on arousal changes and the working memory task in older adults. J Music Ther. 2004, 41: 107-127.

  8. Hirokawa E, Ohira H: The effects of music listening after a stressful task on immune functions, neuroendocrine responses, and emotional states in college students. J Music Ther. 2003, 40: 189-211.

  9. Jancke L, Musial F, Vogt J, Kalveram KT: Monitoring radio programs and time of day affect simulated car-driving performance. Percept Motor Skill. 1994, 79: 484-486.

  10. Kellaris JJ, Kent RJ: The influence of music on customer's temporal perception. J Consum Psychol. 1992, 4: 365-376. 10.1016/S1057-7408(08)80060-5.

  11. Miskovic D, Rosenthal R, Zingg U, Oertli D, Metzger U, Jancke L: Randomized controlled trial investigating the effect of music on the virtual reality laparoscopic learning performance of novice surgeons. Surg Endosc. 2008

  12. Nomi JS, Scherfeld D, Friederichs S, Schafer R, Franz M, Wittsack HJ, Azari NP, Missimer J, Seitz RJ: On the neural networks of empathy: A principal component analysis of an fMRI study. Behav Brain Funct. 2008, 4: 41-10.1186/1744-9081-4-41.

  13. Oldham G, Cummings A, Mischel L, Schmidhe J, Zhan J: Listen while you work? Quasi-experimental relations between personal-stereo headset use and employee work responses. J Appl Psychol. 1995, 80: 547-564. 10.1037/0021-9010.80.5.547.

  14. Sousou SD: Effects of melody and lyrics on mood and memory. Percept Motor Skill. 1997, 85: 31-40. 10.2466/PMS.85.5.31-40.

  15. Hallam S, Price J, Katsarou G: The effects of background music on primary school pupils' task performance. Educ Stud. 2002, 28: 111-122. 10.1080/03055690220124551.

  16. Bailey N, Areni CS: Background music as a quasi clock in retrospective duration judgments. Percept Motor Skill. 2006, 102: 435-444. 10.2466/PMS.102.2.435-444.

  17. Balch WR, Bowman K, Mohler L: Music-dependent memory in immediate and delayed word recall. Mem Cognition. 1992, 20: 21-28.

  18. Belsham RL, Harman DW: Effect of vocal vs non-vocal music on visual recall. Percept Motor Skills. 1977, 44: 857-858.

  19. Brochard R, Dufour A, Després O: Effect of musical expertise on visuospatial abilities: evidence from reaction times and mental imagery. Brain Cognition. 2004, 54: 103-109. 10.1016/S0278-2626(03)00264-1.

  20. Clark L, Iversen SD, Goodwin GM: The influence of positive and negative mood states on risk taking, verbal fluency, and salivary cortisol. J Affect Disord. 2001, 63: 179-187. 10.1016/S0165-0327(00)00183-X.

  21. Cockerton T, Moore S, Norman D: Cognitive test performance and background music. Percep Motor Skill. 1997, 85: 1435-1438.

  22. Davidson CW, Powell LA: The effects of easy listening background music on the on-task-performance of fifth-grade children. J Educ Res. 1986, 80: 29-33.

  23. Etaugh C, Michals D: Effects of reading comprehension on preferred music and frequency of studying music. Percept Motor Skill. 1975, 41: 553-554.

  24. Etaugh C, Ptasnik P: Effects of studying to music and post-study relaxation on reading comprehension. Percept Motor Skill. 1985, 55: 141-142.

  25. Felix U: The contribution of background music to the enhancement of learning in suggestopedia: A critical review of the literature. Journal of the Society of Accelerated Learning and Teaching. 1993, 18: 277-303.

  26. Fogelson S: Music as a distractor on reading-test performance of eighth grade students. Percept Motor Skill. 1973, 36: 1265-1266.

  27. Hilliard MO, Tolin P: Effect of familiarity with background music on performance of simple and difficult reading comprehension tasks. Percept Motor Skill. 1979, 49: 713-714.

  28. Johnson JK, Cotman CW, Tasaki CS, Shaw GL: Enhancement of spatial-temporal reasoning after a Mozart listening condition in Alzheimer's disease: a case study. Neurol Res. 1998, 20: 666-672.

  29. Hetland L: Listening to music enhances spatial-temporal reasoning: evidence for the "Mozart effect". J Aesthet Educ. 2000, 34: 105-148. 10.2307/3333640.

  30. Miller LK, Schyb M: Facilitation and interference by background music. J Music Ther. 1989, 26: 42-54.

  31. Nittono H: Background instrumental music and serial recall. Percept Motor Skill. 1997, 84: 1307-1313.

  32. Nittono H, Tsuda A, Akai S, Nakajima Y: Tempo of background sound and performance speed. Percept Motor Skill. 2000, 90: 1122-10.2466/PMS.90.3.1122-1122.

  33. Register D, Darrow AA, Standley J, Swedberg O: The use of music to enhance reading skills of second grade students and students with reading disabilities. J Music Ther. 2007, 44: 23-37.

  34. Salamé P, Baddeley A: Effects of background music on phonological short-term memory. Q J Exp Psychol A. 1989, 41: 107-122.

  35. Schreiber EH: Influence of music on college students' achievements. Percept Motor Skill. 1988, 66: 338-

  36. Schueller M, Bond ZS, Fucci D, Gunderson F, Vaz P: Possible influence of linguistic musical background on perceptual pitch-matching tasks: a pilot study. Percept Motor Skill. 2004, 99: 421-428.

  37. Stainback SB, Stainback WC, Hallahan DP: Effect of background music on learning. Except Child. 1973, 40: 109-110.

  38. Stainback SB, Stainback WC, Hallahan DP, Payne JS: Effects of selected background music on task-relevant and task-irrelevant learning of institutionalized educable mentally retarded students. Train Sch Bull (Vinel). 1974, 71: 188-194.

  39. Tucker A, Bushman BJ: Effects of rock and roll music on mathematical, verbal, and reading comprehension performance. Percept Motor Skill. 1991, 72: 942-10.2466/PMS.72.3.942-942.

  40. Wolf RH, Weiner FF: Effects of four noise conditions on arithmetic performance. Percept Motor Skill. 1972, 35: 928-930.

  41. Ellermeier W, Hellbruck J: Is level irrelevant in "Irrelevant speech"? Effects of loudness, signal-to-noise ratio, and binaural unmasking. J Exp Psychol Human. 1998, 24: 1406-1414. 10.1037/0096-1523.24.5.1406.

  42. Iwanaga M, Ito T: Disturbance effect of music on processing of verbal and spatial memories. Percept Motor Skill. 2002, 94: 1251-1258.

  43. Klatte M, Kilcher H, Hellbruck J: The effects of temporal structure of background noise on working memory: Theoretical and practical implications. Z Exp Psychol. 1995, 42: 517-544.

  44. Klatte M, Meis M, Sukowski H, Schick A: Effects of irrelevant speech and traffic noise on speech perception and cognitive performance in elementary school children. Noise Health. 2007, 9: 64-74. 10.4103/1463-1741.36982.

  45. Wallace WT, Rubin DC: Wreck of the old 97: A real event remembered in song. Remembering reconsidered: Ecological and traditional approaches to the study of memory. Edited by: Neisser U, Winograd E. 1988, Cambridge, England: Cambridge University Press, 283-310.

  46. Lozanov G: Suggestology and outlines of suggestopedy. 1978, New York: Gordon & Breach

  47. Schiffler L: Suggestopädie und Superlearning - empirisch geprüft. 1989, Frankfurt am Main: Moritz Diesterweg Verlag

  48. Eschrich S, Munte TF, Altenmuller EO: Remember Bach: an investigation in episodic memory for music. Ann N Y Acad Sci. 2005, 1060: 438-442. 10.1196/annals.1360.045.

  49. Eschrich S, Munte TF, Altenmuller EO: Unforgettable film music: the role of emotion in episodic long-term memory for music. BMC Neurosci. 2008, 9: 48-10.1186/1471-2202-9-48.

  50. Jancke L: Music, memory and emotion. J Biol. 2008, 7: 21-

  51. Husain G, Thompson W, Schellenberg E: Effects of musical tempo and mode on arousal, mood, and spatial abilities. Music Percept. 2002, 20: 151-171. 10.1525/mp.2002.20.2.151.

  52. Nantais K, Schellenberg E: The Mozart Effect: An artifact of preference. Psychol Sci. 1999, 10: 370-373. 10.1111/1467-9280.00170.

  53. Thompson WF, Schellenberg EG, Husain G: Arousal, mood, and the Mozart effect. Psychol Sci. 2001, 12: 248-251. 10.1111/1467-9280.00345.

  54. Banbury SP, Macken WJ, Tremblay S, Jones DM: Auditory distraction and short-term memory: phenomena and practical implications. Hum Factors. 2001, 43: 12-29. 10.1518/001872001775992462.

  55. Jones DM, Alford D, Macken WJ, Banbury SP, Tremblay S: Interference from degraded auditory stimuli: linear effects of changing-state in the irrelevant sequence. J Acoust Soc Am. 2000, 108: 1082-1088. 10.1121/1.1288412.

  56. Klimesch W: EEG alpha and theta oscillations reflect cognitive and memory performance: a review and analysis. Brain Res Brain Res Rev. 1999, 29: 169-195. 10.1016/S0165-0173(98)00056-3.

  57. Laufs H, Holt JL, Elfont R, Krams M, Paul JS, Krakow K, Kleinschmidt A: Where the BOLD signal goes when alpha EEG leaves. Neuroimage. 2006, 31: 1408-1418. 10.1016/j.neuroimage.2006.02.002.

  58. Laufs H, Kleinschmidt A, Beyerle A, Eger E, Salek-Haddadi A, Preibisch C, Krakow K: EEG-correlated fMRI of human alpha activity. Neuroimage. 2003, 19: 1463-1476. 10.1016/S1053-8119(03)00286-6.

  59. Oakes TR, Pizzagalli DA, Hendrick AM, Horras KA, Larson CL, Abercrombie HC, Schaefer SM, Koger JV, Davidson RJ: Functional coupling of simultaneous electrical and metabolic activity in the human brain. Hum Brain Mapp. 2004, 21: 257-270. 10.1002/hbm.20004.

  60. Lehrl S, Gallwitz A, Blaha L: Kurztest für Allgemeine Intelligenz. 1992, Göttingen: Hogrefe Testzentrale

  61. Borkenau P, Ostendorf F: NEO-FFI NEO-Fünf-Faktoren Inventar. 2008, Göttingen: Hogrefe Testzentrale

  62. Helmstaedter C, Durwen HF: VLMT: Verbaler Lern- und Merkfähigkeitstest: Ein praktikables und differenziertes Instrumentarium zur Prüfung der verbalen Gedächtnisleistungen. Schweiz Arch Neurol. 1990, 141: 21-30.

  63. Helmstaedter C, Lendt M, Lux S: VLMT -Verbaler Lern-und Merkfähigkeitstest. 2001, Göttingen: Hogrefe Testzentrale

  64. Annett M: A classification of hand preference by association analysis. Br J Psychol. 1970, 61: 303-321.

  65. Blood AJ, Zatorre RJ: Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. P Natl Acad Sci USA. 2001, 98: 11818-11823. 10.1073/pnas.191355898.

  66. Blood AJ, Zatorre RJ, Bermudez P, Evans AC: Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat Neurosci. 1999, 2: 382-387. 10.1038/7299.

    Article  Google Scholar 

  67. FL studio 4 software. http://flstudio.image-line.com/

  68. Soundfiles used in this experiment. http://www.neurowissenschaft.ch/mmeyer/BBF

  69. Steyer R, Schwenkmezger PNP, Eid M: 1997, Göttingen: Hogrefe

  70. Nater UM, Abbruzzese E, Krebs M, Ehlert U: Sex differences in emotional and psychophysiological responses to musical stimuli. Int J Psychophysiol. 2006, 62: 300-308. 10.1016/j.ijpsycho.2006.05.011.

    Article  PubMed  Google Scholar 

  71. Jung TP, Makeig S, Humphries C, Lee TW, McKeown MJ, Iragui V, Sejnowski TJ: Removing electroencephalographic artifacts by blind source separation. Psychophysiology. 2000, 37: 163-178. 10.1017/S0048577200980259.

    Article  CAS  PubMed  Google Scholar 

  72. Jung TP, Makeig S, Westerfield M, Townsend J, Courchesne E, Sejnowski TJ: Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clin Neurophysiol. 2000, 111: 1745-1758. 10.1016/S1388-2457(00)00386-2.

    Article  CAS  PubMed  Google Scholar 

  73. Pfurtscheller G, Lopes da Silva FH: Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol. 1999, 110: 1842-1857. 10.1016/S1388-2457(99)00141-8.

    Article  CAS  PubMed  Google Scholar 

  74. Pfurtscheller G, Andrew C: Event-Related changes of band power and coherence: methodology and interpretation. J Clin Neurophysiol. 1999, 16: 512-519. 10.1097/00004691-199911000-00003.

    Article  CAS  PubMed  Google Scholar 

  75. Jausovec N, Habe K: The "Mozart effect": an electroencephalographic analysis employing the methods of induced event-related desynchronization/synchronization and event-related coherence. Brain Topogr. 2003, 16: 73-84. 10.1023/B:BRAT.0000006331.10425.4b.

    Article  PubMed  Google Scholar 

  76. Schmidt LA, Trainor LJ: Frontal brain electrical activity (EEG) distinguishes valence and intensity of musical emotions. Cognition Emotion. 2001, 15: 487-500.

    Article  Google Scholar 

  77. Tsang CD, Trainor LJ, Santesso DL, Tasker SL, Schmidt LA: Frontal EEG responses as a function of affective musical features. Ann N Y Acad Sci. 2001, 930: 439-442.

    Article  CAS  PubMed  Google Scholar 

  78. Baumgartner T, Esslen M, Jancke L: From emotion perception to emotion experience: emotions evoked by pictures and classical music. Int J Psychophysiol. 2006, 60: 34-43. 10.1016/j.ijpsycho.2005.04.007.

    Article  PubMed  Google Scholar 

  79. Sammler D, Grigutsch M, Fritz T, Koelsch S: Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music. Psychophysiology. 2007, 44: 293-304. 10.1111/j.1469-8986.2007.00497.x.

    Article  PubMed  Google Scholar 

  80. O'Brien RG, Kaiser MK: MANOVA method for analyzing repeated measures designs: An extensive primer. Psychol Bull. 1985, 97: 316-333. 10.1037/0033-2909.97.2.316.

    Article  PubMed  Google Scholar 

  81. Holm S: A simple sequentially rejective multiple test procedure. Scand J Stat. 1979, 65-70.

    Google Scholar 

  82. Särkämö T, Tervaniemi M, Laitinen S, Forsblom A, Soinila S, Mikkonen M, Autti T, Silvennoinen HM, Erkkilä J, Laine M, Peretz I, Hietanen M: Music listening enhances cognitive recovery and mood after middle cerebral artery stroke. Brain. 2008, 131: 866-876. 10.1093/brain/awn013.

    Article  PubMed  Google Scholar 

  83. Eustache F, Desgranges B: MNESIS: towards the integration of current multisystem models of memory. Neuropsychol Rev. 2008, 18: 53-69. 10.1007/s11065-008-9052-3.

    Article  PubMed Central  PubMed  Google Scholar 

  84. Gabrieli JD: Cognitive neuroscience of human memory. Annu Rev Psychol. 1998, 49: 87-115. 10.1146/annurev.psych.49.1.87.

    Article  CAS  PubMed  Google Scholar 

  85. Gabrieli JD, Poldrack RA, Desmond JE: The role of left prefrontal cortex in language and memory. Proc Natl Acad Sci USA. 1998, 95: 906-913. 10.1073/pnas.95.3.906.

    Article  PubMed Central  CAS  PubMed  Google Scholar 

  86. Poldrack RA, Gabrieli JD: Memory and the brain: what's right and what's left?. Cell. 1998, 93: 1091-1093. 10.1016/S0092-8674(00)81451-8.

    Article  CAS  PubMed  Google Scholar 

  87. Tulving E: [Episodic memory: from mind to brain]. Rev Neurol (Paris). 2004, 160: S9-23.

    Article  CAS  Google Scholar 

  88. Altenmüller E, Schürmann K, Lim VK, Parlitz D: Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns. Neuropsychologia. 2002, 13: 2242-2256. 10.1016/S0028-3932(02)00107-0.

    Article  Google Scholar 

  89. Davidson RJ: Anterior electrophysiological asymmetries, emotion, and depression: conceptual and methodological conundrums. Psychophysiology. 1998, 35: 607-614. 10.1017/S0048577298000134.

    Article  CAS  PubMed  Google Scholar 

  90. Gosselin N, Samson S, Adolphs R, Noulhiane M, Roy M, Hasboun D, Baulac M, Peretz I: Emotional responses to unpleasant music correlates with damage to the parahippocampal cortex. Brain. 2006, 129: 2585-2592. 10.1093/brain/awl240.

    Article  PubMed  Google Scholar 

  91. Green AC, Baerentsen KB, Stødkilde-Jørgensen H, Wallentin M, Roepstorff A, Vuust P: Music in minor activates limbic structures: a relationship with dissonance?. Neuroreport. 2008, 19: 711-715. 10.1097/WNR.0b013e3282fd0dd8.

    Article  PubMed  Google Scholar 

  92. Khalfa S, Schon D, Anton JL, Liégeois-Chauvel C: Brain regions involved in the recognition of happiness and sadness in music. Neuroreport. 2005, 16: 1981-1984. 10.1097/00001756-200512190-00002.

    Article  PubMed  Google Scholar 

  93. Mitterschiffthaler MT, Fu CH, Dalton JA, Andrew CM, Williams SC: A functional MRI study of happy and sad affective states induced by classical music. Hum Brain Mapp. 2007, 28: 1150-1162. 10.1002/hbm.20337.

    Article  PubMed  Google Scholar 

  94. Koelsch S, Fritz T, Schlaug G: Amygdala activity can be modulated by unexpected chord functions during music listening. Neuroreport. 2008, 19: 1815-1819. 10.1097/WNR.0b013e32831a8722.

    Article  PubMed  Google Scholar 

  95. Lerner Y, Papo D, Zhdanov A, Belozersky L, Hendler T: Eyes wide shut: amygdala mediates eyes-closed effect on emotional experience with music. PLoS ONE. 2009, 4: e6230-10.1371/journal.pone.0006230.

    Article  PubMed Central  PubMed  Google Scholar 

  96. Koelsch S, Fritz TV, Cramon DY, Muller K, Friederici AD: Investigating emotion with music: an fMRI study. Hum Brain Mapp. 2006, 27: 239-250. 10.1002/hbm.20180.

    Article  PubMed  Google Scholar 


Acknowledgements

We would like to thank Monika Bühlmann, Patricia Meier, Tim Walker, and Jasmin Sturzenegger for their diligent work in collecting the data, which they also used for their Master's theses in psychology. We thank Marcus Cheetham for helpful comments on an earlier draft of this paper.

Author information


Corresponding author

Correspondence to Lutz Jäncke.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

LJ and PS both designed the experimental paradigm, performed the statistical analysis and drafted the manuscript. All authors read and approved the final manuscript.

An erratum to this article is available at http://dx.doi.org/10.1186/1744-9081-6-11.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Jäncke, L., Sandmann, P. Music listening while you learn: No influence of background music on verbal learning. Behav Brain Funct 6, 3 (2010). https://doi.org/10.1186/1744-9081-6-3


Keywords