Monday, November 3, 2014

Modeling Rests in Composed Music

There are at least two types of rests in music. The first type is written into the score by the composer. The second type occurs naturally during performance: musicians pause, extend, chop, attack, cheat, and move notes around within the measure in order to emote, interpret, and express.

There are many more rests of the second type in human-performed music than of the first. The first type is represented in the score, but the second type makes an enormous difference in how the music is perceived stylistically. Recognizing, categorizing, and modeling both types of rest is a goal of Organ Donor, with the expectation that introducing proper amounts of "space" into algorithmically produced music will make it sound more like a human is playing it. Being able to create different models of resting based on a desired style would be a very powerful and useful result.
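
As a first pass at categorizing, a silent gap can be sorted by its length relative to the file's time resolution. The sketch below is only an illustration: the sixteenth-note threshold and the ticks-per-beat default are hypothetical stand-ins for values that would have to be fit against real scores.

    def classify_gap(gap_ticks, ticks_per_beat=480):
        """Guess whether a silent gap is a performance rest or a scored rest.
        The sixteenth-note threshold is a made-up starting point."""
        if gap_ticks <= 0:
            return 'no rest'
        if gap_ticks < ticks_per_beat / 4:  # shorter than a sixteenth note
            return 'performance rest (articulation, e.g. staccato)'
        return 'candidate scored rest'

    print(classify_gap(66))   # a short articulation gap
    print(classify_gap(480))  # a full beat of silence, likely notated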

Another area of investigation is the expected return distance to root notes, known in Markov chain analysis as the Mean First Passage Time (MFPT). I suspect these statistics may be useful for creating believable phrasing, or for uncovering patterns that reveal other hidden structures in composed music. Examining return distances for both types of rests, as well as for notes, will help improve our understanding of the role and effect rests play in composition and style.
This area of math (MFPT) is used in a wide variety of fields to answer pragmatic questions about physical systems. There's no guarantee that it will provide repeatable or useful results in music, but we have the tools (thanks to the Python library pykov) to start finding out.
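
As a concrete starting point, here is a minimal sketch of an MFPT computation with pykov. The three states and their transition probabilities are invented for illustration; in practice they would be estimated from note-transition counts in parsed MIDI.

    import pykov

    # Toy chain: the root pitch, any other pitch, and silence.
    # Each state's outgoing transition probabilities sum to 1.
    chain = pykov.Chain({
        ('root',  'root'): 0.2, ('root',  'other'): 0.6, ('root',  'rest'): 0.2,
        ('other', 'root'): 0.3, ('other', 'other'): 0.5, ('other', 'rest'): 0.2,
        ('rest',  'root'): 0.5, ('rest',  'other'): 0.5,
    })

    # Expected number of steps before first reaching 'root' from each state.
    print(chain.mfpt_to('root'))  # {'other': 3.0, 'rest': 2.5}

The same call, run over a chain estimated from a real part, would give the return distance to any chosen pitch, or to the rest state itself.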
Here are some results from parsing MIDI files of performed music. The rests captured don't exist in the score. Some notes are staccato, and you can see the additional "distance" the musician included to create the desired effect. You can also see additional distance punctuating the end of the phrase. This is the second violin part from the second movement of Beethoven's Seventh Symphony.
Note 64 had tick duration 409
a rest had tick duration 66
Note 64 had tick duration 110 (staccato note)
a rest had tick duration 138 (results in longer space between notes)
Note 64 had tick duration 116 (staccato note)
a rest had tick duration 121 (results in longer space between notes)
Note 64 had tick duration 410
a rest had tick duration 67
Note 64 had tick duration 392 (ended a bit early, end of a phrase)
a rest had tick duration 91
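For reference, here is a minimal sketch of the kind of parsing that produces output like the above. It uses the mido library (an assumption; the post doesn't name its MIDI parser) and assumes a monophonic track, so any silence between one note's end and the next note's onset is treated as a rest. The filename is hypothetical.

    import mido

    mid = mido.MidiFile('beethoven7_mvt2_violin2.mid')  # hypothetical file
    track = mid.tracks[0]

    abs_ticks = 0         # running time in ticks
    onsets = {}           # pitch -> onset tick
    last_note_off = None  # tick of the most recent note-off

    for msg in track:
        abs_ticks += msg.time  # track messages carry delta times in ticks
        if msg.type == 'note_on' and msg.velocity > 0:
            # Silence between the previous note-off and this onset is a rest.
            if last_note_off is not None and abs_ticks > last_note_off:
                print('a rest had tick duration', abs_ticks - last_note_off)
            onsets[msg.note] = abs_ticks
        elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
            if msg.note in onsets:
                print('Note', msg.note, 'had tick duration',
                      abs_ticks - onsets.pop(msg.note))
                last_note_off = abs_ticks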
Modeling music requires obedience to aesthetics, and this is where the difficulty - possibly the impossibility - lies. However, I cannot think of anything more worthy of analysis, modeling, machine learning, and algorithmic design. More soon!
