Musical syntax

They are built from the 12 possible pitch classes per octave (A, A♯, B, C, C♯, D, D♯, E, F, F♯, G, G♯), and the different scale tones are not equal in their structural stability.
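To make the pitch-class inventory concrete, the following minimal Python sketch (purely illustrative, not drawn from the sources discussed here; the ordering from A is arbitrary) maps the twelve pitch classes to integers and shows that a major scale selects only seven of them:

```python
# Illustrative sketch: the 12 pitch classes per octave mapped to
# integers 0-11 (ordered from A here purely for illustration).
PITCH_CLASSES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def pitch_class_number(name: str) -> int:
    """Return the index (0-11) of a pitch class within this ordering."""
    return PITCH_CLASSES.index(name)

# A major scale built on A uses only 7 of the 12 pitch classes,
# illustrating that the scale tones form a structured subset.
A_MAJOR = ["A", "B", "C#", "D", "E", "F#", "G#"]
print([pitch_class_number(p) for p in A_MAJOR])  # [0, 2, 4, 5, 7, 9, 11]
```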

Considering the last two basic aspects of linguistic syntax, namely the importance of the order of subunits for the meaning of a sentence and the fact that words take on abstract grammatical functions defined by context and structural relations, it seems useful to analyse the hierarchical structure of music in search of analogous relationships.

The very notion of "ornamentation" points to the fact that some events in a musical context are less important than others for forming an idea of the general gist of a sequence.

In fact, the most common hypothesis holds that music is organized into structural levels, which can be pictured as the branches of a tree.
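As a purely illustrative sketch of this idea (the node labels and helper function below are hypothetical, not taken from the literature discussed here), such a hierarchy can be represented as a simple recursive tree structure in which deeper branches correspond to more subordinate, ornamental events:

```python
# Illustrative sketch: a tree of musical events, where structurally
# important events dominate ornamental ones on lower branches.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str                              # e.g. a chord or a single tone
    children: List["Node"] = field(default_factory=list)

def depth(node: Node) -> int:
    """Tree depth: shallow nodes carry the structural gist,
    deeper nodes correspond to subordinate, ornamental events."""
    return 1 + max((depth(c) for c in node.children), default=0)

# A toy phrase: a tonic dominating a dominant, which in turn
# dominates an ornamental passing tone.
phrase = Node("I", [Node("V", [Node("passing tone")]), Node("I")])
print(depth(phrase))  # 3
```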

More recently, however, strong evidence has accumulated for the second point of view, namely that syntax reflects abstract cognitive relationships.

This means it would be misguided simply to search for musical analogues of linguistic syntactic entities such as nouns or verbs.

The second aspect is to compare the processing of musical and linguistic syntax in order to find out whether they affect each other or even overlap significantly.

Verifying such an overlap would support the thesis that syntactic operations (musical as well as linguistic) are modular.

As these regularities are stored in long-term memory, predictions about subsequent chords are made automatically when listening to a musical phrase.

The violation of these automatically generated predictions leads to the observation of so-called ERPs (event-related potentials, stereotyped electrophysiological responses to an internal or external stimulus).

One is the MMN (mismatch negativity), which was first investigated only with physical deviants such as deviations in frequency, sound intensity or timbre (referred to as phMMN) and has since also been shown for changes of abstract auditory features such as tone pitches (referred to as afMMN).

Both the ERAN and the MMN are ERPs indicating a mismatch between predictions based on regularities and actually experienced acoustic information.

Since the ERAN long seemed to be a special variant of the MMN, the question arises why the two are distinguished today.

These are chords that are consonant when played in isolation, but which are inserted into a musical phrase in which they are only distantly related to the harmonic context.

In contrast, the ERAN rests upon representations of music-syntactic regularities that exist in a long-term memory format and are learned during early childhood.

From these observations the thesis can be formed that the MMN is essential for establishing and maintaining representations of the acoustic environment and for processes of auditory scene analysis.

Only the ERAN, however, is completely based on learning: it builds up a structural model that is established with reference to representations of syntactic regularities already existing in a long-term memory format.

Further support for this thesis comes from the fact that under propofol sedation, which mainly affects the frontal cortex, the ERAN is abolished while the MMN is only reduced.

Finally, the amplitude of the ERAN is reduced under ignore conditions, whereas the MMN is largely unaffected by attentional modulation.

This method deals with the question of how the structure and function of the brain relate to behavioural outcomes and other psychological processes.

Case reports have shown that amusia (a deficiency in the fine-grained perception of pitch that leads to musical tone-deafness and can be congenital or acquired later in life, e.g. through brain damage) is not necessarily linked to aphasia (severe language impairment following brain damage), and vice versa.

Furthermore, research using electroencephalography has shown that difficulties or irregularities in musical as well as in linguistic syntax elicit ERPs that resemble each other.

In fact, the concept of modularity itself can help to make sense of the different and apparently contradictory findings in neuropsychological research and neuroimaging.

The comparison of the syntactic processing of language and music is based on three theories, which are mentioned here but not explained in detail.

The language theories contribute the idea that comprehending the structure of a sentence consumes processing resources.

As in language, this is associated with a "processing cost due to the tonal distance" (Patel, 2008), which means that more resources are needed to activate low-activation items.

Overall, these theories lead to the "shared syntactic integration resource hypothesis", as the areas from which low-activation items are activated could be the neural correlate of the overlap between linguistic and musical syntax.

Strong evidence for the existence of this overlap comes from studies in which music-syntactic and linguistic-syntactic irregularities were presented simultaneously.

They showed an interaction between the ERAN and the LAN (left anterior negativity; an ERP elicited by linguistic-syntactic irregularities).

From these facts it can be reasoned that the ERAN relies on neural resources related to syntactic processing (Koelsch, 2008).