Melisma Stochastic Melody Generator: Tonality Parameters

This page explains the parameters of the MSMG relating to tonality.

The MSMG chooses pitches from a probability distribution, or "key profile", which gives the probability of each pitch as a function of the current key [1]. The distribution used depends on the parameter "key profile type". When key profile type=1, the distribution is as follows:

Major keys

   1  b2   2  b3   3   4  #4   5  b6   6  b7   7   <- scale degree
  .2   0 .12   0 .16 .12   0 .16   0 .12   0 .12   <- probability

Minor keys

   1  b2   2  b3   3   4  #4   5  b6   6  b7   7   <- scale degree
  .2   0 .12 .16   0 .12   0 .16 .12   0   0 .12   <- probability

For example, in major keys, the tonic scale degree (1) has a probability of .2. This means that if the chosen key is C major, a given note has a probability of .2 of being a C (scale degree 1 of C major). D, which is scale degree 2 of C major, has a probability of .12; Db (C#), which is b2, has a probability of 0. Notice that the pitches of the current major or minor scale are relatively probable (the minor profile assumes the harmonic minor scale), while all other pitches have zero probability. Within the scale, pitches of the tonic triad have higher probabilities than the other scale tones, and the tonic has the highest probability of all.

If "key profile type" is set to 0, a "binary" distribution is used which simply assigns probabilities of .142 (1/7) to all tones within the current major or minor scale and 0 to non-scalar tones.

The "tonality factor" controls the "flatness" of the distribution. If the parameter is set at 1.0, the distribution will be exactly as described above (depending on the key profile type). As the parameter is lowered towards 0, the distribution becomes flatter, so that chromatic degrees have a higher probability of occurring. If the parameter is 0, the distribution is perfectly flat: all 12 scale degrees are equally likely.

The parameters "key" and "mode" allow one to determine the key: "key" determines the tonic (C, Db, D, etc.) and "mode" determines major or minor. If "key" is set to -1, a tonic will be randomly chosen; if set to a number 0 to 11, the corresponding tonic will be used (C=0, Db=1, D=2, etc.). If "mode" is set to -1, a random choice will be made between major and minor; if 0, a major key will be chosen; if 1, a minor key.

An important qualification to all this: there is another factor involved in the choice of pitches, namely proximity. Unless the "proximity factor" is very low, the program prefers small intervals between adjacent pitches. (This also explains how the program decides between different pitches of the same pitch class.) Essentially, the profiles given above are deformed to give higher values to pitches close to the previous pitch. This may make the overall distribution of pitch classes in a melody somewhat different from that given by the key profiles; analysis of the program's output has confirmed this (scale degrees on either side of a half-step occur more often than others).
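
The exact deformation the program applies is not described here; the sketch below substitutes an exponential falloff with distance as one plausible stand-in, with the MIDI pitch range chosen arbitrarily:

import math
import random

def choose_pitch(prev_pitch, profile, tonic, proximity_factor,
                 low=48, high=84):
    # Weight each candidate MIDI pitch by its key-profile value and
    # by its closeness to the previous pitch. The exponential term
    # is an assumption, not MSMG's actual formula.
    pitches = list(range(low, high + 1))
    weights = [profile[(p - tonic) % 12]
               * math.exp(-proximity_factor * abs(p - prev_pitch))
               for p in pitches]
    return random.choices(pitches, weights=weights)[0]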

The parameter "tonic endpoints" determines whether or not the first and last notes of the melody will be the tonic pitch: 0=no, 1=yes.

Try this experiment: Set tonality factor=1.0, key=-1 (random), mode=0 (major), tonic_endpoints=0 (no obligatory tonic pitch at endpoints), profile type=0 (scalar profile). Generate a melody and listen to it. Does this melody project a strong sense of "key" to you? Can you sing the tonic pitch? (Do you feel you know what note the melody will end on?) This experiment tests the degree to which we are able to orient ourselves tonally simply from hearing an evenly-distributed major scale.

Another similar experiment: Set the parameters as above, with mode set to -1; a random choice is then made between major and minor, and a melody is generated using an even distribution of pitches from the scale. Can you tell which mode is being used? (Try this experiment also with a small child: does the child sense that certain melodies are "happy" and others are "sad"?)

As a final point about tonality, you will probably sense that the melodies generated by the program are in some ways quite unsatisfactory: rambling and incoherent. This is largely because the model knows nothing about harmony. Indeed, one important (negative) lesson of the current model is how much harmonic structure contributes to satisfying melodies. For an alternative melody generator that does take harmonic structure into account, see the OPTM model of Dirk-Jan Povel [2].

1. Carol Krumhansl, Cognitive Foundations of Musical Pitch (Oxford: Oxford University Press, 1990); David Temperley, The Cognition of Basic Musical Structures (Cambridge: MIT Press, 2001).

2. Dirk-Jan Povel, "A Model for the Perception of Tonal Melodies," in C. Anagnostopoulou, M. Ferrand, and A. Smaill (eds.), Music and Artificial Intelligence (Heidelberg: Springer, 2002). Povel's model is available for download.