Saturday, October 10, 2015
Software published
We've caught up on publishing the Organ Donor software to our GitHub project. Not only have we updated the Organelle software to the latest version as of our most recent deployment at the San Diego Maker Faire, but we've also released the Arduino console software for both versions of the Opus 1 console, capturing all the deployed versions back to the beginning.
Thursday, September 17, 2015
Organ Donor Successfully Completes Deployment at Burning Man
Organ Donor successfully completed a deployment at Burning Man
2015 as part of Sol Diego's Wonderlust Arcade installation. The five-day
deployment was located with 28 other regional projects under tents at the base
of The Man. The “Midway” was open 24 hours a day from event
start until 5pm Friday before the burn.
The Wonderlust Midway
installation included a forced-perspective building, a variety of arcade and
midway games, and a Zoltar booth. Games were designed and built by members of
the Sol Diego team. An article about the forced perspective construction and
the games can be found at:
http://www.sdcitybeat.com/sandiego/article-14497-sol-diego-brings-immersive-art-to-burning-man.html
Setup took Organ Donors Paul and Abraxas about 8 hours over two
days to complete, including ferrying components and tools out to the Midway
(with a 5 mph speed limit). Conditions were windy and dusty, with visibility
falling to zero at times. Organ Donor setup had to work with and around all the
other teams setting up their art installations.
Substantial changes to the console and software were made from
the previous deployment. A new console design was introduced. The minimum
desired software functionality was completed on the second day, about an hour before
event start. More ambitious software plans, including touchscreen support and
graphical user interface features, would have to wait for a later deployment.
The new organ console (version 2.0) improved stability and
function. Sturdy legs from IKEA, a cut down IKEA tabletop, and a custom
laser-cut cabinet were key elements of the new console, replacing the
lightweight folding stage stand and small custom control panel. The console
would no longer tip over (or blow away!) as easily, and had improved aesthetics.
The two manuals (that is, organ keyboards) and MIDI combiner and coupler
management software were carried over from version 1.0 of the console. Version
2.0 added a sheet music stand, an LCD touch-screen, a selector knob, and a
laser-engraved diagram to label and explain
the stops and coupler buttons. The touch-screen and selector knob were managed
by a Raspberry Pi 2 with software written in Python, and the active coupler
diagram was managed by an Arduino MEGA 2560.
No substantial changes were made to the pipes, racks, windchest,
or blower box. A minor rearrangement of the pipe positions around the rack was
necessary to accommodate the shape of the limited space available.
The first failure was with the windchest, which is made of
laser-cut acrylic. The front bottom left edge of the windchest leaked during
the first pressurization. While the proper solvent-based acrylic cement could
be purchased in Reno, that would require a lengthy trip. Fortunately, Organ
Donor Bigun had acrylic cement in his kit. We borrowed a tube, applied the
cement, and clamped the windchest closed. This seam held for the duration of
the event, possibly because we left the clamps in place. Organ Donor Bigun
recommended the addition of a square acrylic rod glued along the seam on the
inside of the windchest as a reinforcement. Since the seam is somewhat long,
this reinforcement would reduce the amount of flex that probably caused the
seam to pop.
The second failure involved both keyboards. When tested after a few
hours of dust storm during
setup, about half the keys on both manuals were no longer working. We suspected
dust fouling the contacts inside the keyboard. With the dust storm continuing
and worsening, the keyboards were removed and taken back to Copper Home, Organ
Donor's support trailer at Wonderlust Camp. The keyboards were disassembled and
inspected. Each key has a series of blue rubber boots that provide domes for
each key to press down upon. A contact beneath each dome is actuated when the
dome is compressed. Dust had worked its way beneath the rubber boots. The factory design looked more than
adequate for normal conditions, but wasn't up to being inundated with playa
dust.
A repair was proposed. We would thoroughly clean the contacts and
the rubber boots, then use silicone sealant to completely seal the dust boots
to the circuit board. The rest of the interior of the keyboard would be allowed
to collect dust. Since the rest of the keyboard consisted of mechanical action
and the components on the circuit board, confidence was high that the repair
would work.
Both dusty keyboards, and the clean pair of backup keyboards,
were treated with silicone sealant. In order to replace the dust boots, tool
improvisation was required. The rubber plugs that anchor the dust boots would
not fit back into the holes by finger pressure. Very small holes were observed
at the top of each of the rubber plugs/feet. An unwound paper clip worked
perfectly to refit the rubber anchor feet. The strip of dust boots was placed
in the correct position, then the paper clip gently pressed into the hole over
the top of each plug/foot. The foot then slipped into the hole with no
difficulty.
Photos can be found here:
https://www.flickr.com/photos/w5nyv/albums/72157658511444682
After the keyboards were treated with sealant, they were returned
to the console in the Midway. On the final day, one of the repaired keyboards
failed, with just two keys no longer responding to key presses. This was
swapped out for one of the backup keyboards. This keyboard worked the rest of
the day until close of Midway. The other repaired keyboard lasted the entire
event without failures. Later examination showed that we left gaps in the
silicone sealant at each of the places where keys failed.
Software functionality for the Midway exhibit consisted of two
modes, keyboard and jukebox. Jukebox mode was where Organ Donor played files
from the songs directory in the Raspberry Pi. Keyboard mode was where the
participant played the keyboard. Participants could play the keyboard at any
time, but keyboard mode turned off any MIDI signals being sent to the windchest
from the Raspberry Pi.
The Organ Donor received overwhelmingly positive feedback. Conservatory
students, amateur musicians, and people with no keyboard experience at all
were encouraged to play Organ Donor.
One participant, Anthony Decognito, made up songs
extemporaneously about other participants. He inquired as to their city of
origin, made up a melody, and improvised a song. This was hugely successful.
Several pop-up concerts were held by people who happened to have
large amounts of music memorized. The team greatly appreciated the many
participants who freely shared their talents and training. Crowds
gathered in waves to listen and play.
The jukebox mode was freely used. While several lost and found
items were recovered, no obvious abuse occurred. While at least one participant
used a very unconventional body part to play the keyboard, Organ Donor was
unscathed by heavy participant use.
The complete set of photos from the deployment can be found here:
https://www.flickr.com/photos/w5nyv/albums/72157656113224673
We found that most people didn't really study the coupler
diagram, and were generally unwilling to read the verbose text-mode displays on
the LCD to understand how to switch between keyboard and jukebox modes. This
wasn't entirely surprising, but it did spark some discussion and decisions on
how to improve the console for version 3.0.
With some strategic text placement, the coupler diagram could
perhaps be improved to the point of needing little explanation. During
exhibition, it did not take much additional explanation to make the coupler
diagram come alive, but the fact that it took any at all means there is room
for improvement in this interface. Plans are in place to improve it for San
Diego Maker Faire
(3-4 October 2015).
For the LCD screen that showed status and gave instructions for
jukebox vs. keyboard mode, it was felt that a big image on the screen and
callouts on the knob would improve ease of operation.
Upon return to San Diego, the blower box, windchest, and pipes
were cleaned with compressed air and damp cloths, and Organ Donor was set up
for San Diego Maker Faire improvements.
Anyone interested in the project is welcome to follow along and
is invited to consider becoming an Organ Donor. The project needs skills of all
types, including machine learning, coding, user interface design, game theory,
carpentry, laser cutting, 2D and 3D modeling, 3D printing, and many other
areas. Contact Abraxas or Paul by sending a message through this site.
Tuesday, June 16, 2015
Digging Into Questions About Entropy
Here's a graph that represents two things. First, a lot of work completed. Second, a lot of work that needs to be done!
These three curves are the bits of entropy per sliding window location in a work by Buxtehude (Prelude and Fugue in G Minor). The width of the window is set by the Kemeny constant of the MIDI track. There are three tracks: Swell, Great, and Pedal.
On a pipe organ, the main manual (keyboard) is called the Great. It is usually the bottom manual on two-manual instruments, or the middle manual on three-manual instruments. The upper manual is called the Swell. The Pedal is usually the very lowest notes in the piece. This is often played with the feet on the pedalboard.
Each of these tracks represents the music that would be played on the corresponding part of the instrument. Each part can be voiced on a completely separate rank of pipes, creating a layered sound.
In the MIDI version, each of the tracks is examined mathematically. First, a Markov chain is derived. This is a table that shows how likely it is for any particular note to follow a particular note. Starting with the first note and going all the way to the end, all combinations of the notes that follow the previous note are recorded, and then the probabilities calculated.
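The table-building step described above can be sketched in a few lines of Python. The melody and function names here are illustrative, not the project's production code:

```python
from collections import Counter, defaultdict

def transition_table(notes):
    """Estimate first-order Markov transition probabilities
    from a sequence of MIDI note numbers."""
    counts = defaultdict(Counter)
    for current, following in zip(notes, notes[1:]):
        counts[current][following] += 1
    table = {}
    for note, followers in counts.items():
        total = sum(followers.values())
        table[note] = {nxt: n / total for nxt, n in followers.items()}
    return table

# A short melodic fragment (MIDI note numbers):
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
print(transition_table(melody)[62])   # → {64: 0.5, 60: 0.5}
```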
Next, the Kemeny constant is found. This is the expected number of steps from a starting note to a randomly selected note chosen from the Markov chain's stationary distribution. No matter which starting note is selected from the piece, it takes about the same number of steps to reach the randomly selected note. This number of steps is the width of our sliding window. The window function slides over the track. For each window, the entropy of the windowed sample is measured.
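For the curious, here is a minimal sketch of computing the Kemeny constant from a transition matrix, using the eigenvalue identity K = Σ 1/(1 − λ) over the non-unit eigenvalues. This is one common convention (some definitions differ by exactly 1), and the two-state matrix is just a toy example:

```python
import numpy as np

def kemeny_constant(P):
    """Kemeny constant of a Markov chain with transition matrix P:
    the expected number of steps to reach a state drawn from the
    stationary distribution, the same for every starting state.
    Computed as the sum of 1/(1 - lambda) over non-unit eigenvalues."""
    eigvals = np.linalg.eigvals(P)
    # Drop the eigenvalue equal to 1 (the stationary eigenvalue).
    idx = np.argmin(np.abs(eigvals - 1.0))
    rest = np.delete(eigvals, idx)
    return float(np.real(np.sum(1.0 / (1.0 - rest))))

# Two-state example: eigenvalues are 1.0 and 0.7, so K = 1/(1-0.7)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(round(kemeny_constant(P), 3))   # → 3.333
```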
What we're looking for is places where the entropy changes dramatically. This would potentially indicate a local change in the entropy of the piece, which may indicate a compositional change or transition in the work. Identifying macro-phrases like this may be helpful in constructing algorithmic compositions that better emulate human composition.
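The sliding-window entropy measurement itself is straightforward. A simplified sketch (in practice the window width would come from the Kemeny constant, not be hand-picked):

```python
import math
from collections import Counter

def window_entropy(notes, width):
    """Shannon entropy in bits of each sliding-window sample
    over a sequence of notes."""
    out = []
    for i in range(len(notes) - width + 1):
        counts = Counter(notes[i:i + width])
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total)
                 for c in counts.values())
        out.append(h)
    return out

# Entropy rises as the material moves from repetition to variety:
notes = [60, 60, 60, 60, 62, 64, 65, 67]
print([round(h, 2) for h in window_entropy(notes, 4)])
```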
As you can tell from the graph, the tracks do not line up. The Pedal track is much shorter than the Great, which is shorter than the Swell. Therefore, the number of windows evaluated is not the same across the three tracks. This means that the samples are not aligned in time if they are simply listed along the horizontal axis. The samples need to be normalized for observed time. This (using the timestamps in the MIDI file to align the samples) is the next task in the design of this part of the software.
Saturday, May 23, 2015
What is a chord?
We've been working hard on the math that is underneath the surface of Organ Donor behavior. We're now beginning to grapple with pitches, or notes, instead of just pitch classes. A pitch class in western classical music is the name of one of the 12 semitones on the scale (C, F, G, etc.).
A pitch also includes the octave or register (C4, F6, G2, etc.).
So, up until now we've been working with 12-element vectors, where each position stands for a pitch class. A triad would be something like this:
[1,0,0,0,1,0,0,1,0,0,0,0]
The 1's indicate the presence of a note at that semitone. There are 12 semitones, and three of them are played.
But this doesn't tell you which octave the pitches are in.
With a small change, this information can be recorded in the vector.
[1,0,0,0,2,0,0,3,0,0,0,0]
Here, the first note of the triad (at semitone position 0) is in the first octave. The second note of the triad (at semitone position 4) is in the second octave. The third note of the triad (at semitone position 7) is in the third octave.
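One way to build such a vector from MIDI note numbers. Note that the octave numbering convention here is ours for illustration (it is not scientific pitch notation, where MIDI 60 is C4):

```python
def octave_vector(midi_notes):
    """Encode MIDI note numbers as a 12-element vector where each
    position is a pitch class and the value is the octave number,
    with 0 meaning that pitch class is absent. Assumes at most one
    note per pitch class, as in the triads above."""
    vec = [0] * 12
    for n in midi_notes:
        vec[n % 12] = n // 12   # octave numbering convention is ours
    return vec

# C in octave 1, E in octave 2, G in octave 3, matching the example:
print(octave_vector([12 * 1 + 0, 12 * 2 + 4, 12 * 3 + 7]))
# → [1, 0, 0, 0, 2, 0, 0, 3, 0, 0, 0, 0]
```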
Now, when handling this vector, I can make a list of which octaves the pitches are in. When I rearrange or invert the chord, I can make sure that the octave information survives.
For some transformations, we don't yet have a method for preserving this information. For example, when getting the prime form of a chord, we are really calculating what class the chord falls into. The registration of the notes in the chord doesn't matter, because prime form encompasses many variations of registration. Going from prime form back to vectors means you are generating pitch-class representations (the vector has all 1's), but not pitch representations (the vector carries octave numbers).
Something that occurred to us is that using 1's for generic presence of a pitch class gets confusing if you also take 1 to mean that it's from the lowest octave. This will be fixed, but we're not yet sure what the very best and most clever way forward is!
-Abraxas3d
Tuesday, May 19, 2015
And Another Software Repository on GitHub
Greetings all! Here is the GitHub software repository for the user interface code for Organ Donor:
https://github.com/OrganDonor/Organelle
This is the code that runs on the "Organelle", which will be the console from which functions and basic operations can be controlled.
This is distinct from the console code, which is the code that makes sure that all the manuals (keyboards) are properly integrated into a single MIDI stream, and makes sure that not too many keys are pressed at once (which would pull too much current and pop a fuse).
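The key-limiting idea can be sketched like this. The limit value and the event handling are hypothetical placeholders (the actual console code runs on an Arduino and its fuse budget isn't given here):

```python
MAX_KEYS = 10  # hypothetical limit; the real fuse budget isn't stated

held = set()   # notes currently sounding

def handle_midi(status, note, velocity):
    """Merge note events into one stream, refusing note-ons that would
    exceed the simultaneous-key limit. Returns the event to forward,
    or None to drop it."""
    if status == 0x90 and velocity > 0:              # note on
        if len(held) >= MAX_KEYS:
            return None                              # too many valves open
        held.add(note)
    elif status == 0x80 or (status == 0x90 and velocity == 0):  # note off
        held.discard(note)
    return (status, note, velocity)
```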
Monday, November 3, 2014
Modeling Rests in Composed Music
There are at least two types of rests in music. The first are the ones the composer wrote into the score. The second are the ones that naturally occur during playing. Musicians pause, extend, chop, attack, cheat, and move notes around within the measure in order to emote, interpret, or express.
There are many more of the second type of rests in human-performed music than the first. The first type are represented on the score, but the second type makes an enormous difference in how the music is perceived stylistically. Recognizing, categorizing, and modeling both types of these rests is a goal of Organ Donor, with the expectation that introducing proper amounts of "space" into algorithmically produced music will create music that sounds more like a human is playing it. Being able to create different models of resting based on desired style would be a very powerful and useful result.
Another area of investigation is the minimum return distance from root notes, or Mean First Passage Times. I suspect that there might be some utility from these statistics in terms of creating believable phrasing - or uncovering patterns that reveal other hidden structures in composed music. Examining the minimum return distance for both types of rests as well as notes will help improve the understanding of the role and effect rests play in composition and style.
This area of math (MFPT) is used in a wide variety of fields to answer pragmatic questions about physical systems. There's no guarantee that it will provide repeatable or useful results in music, but we have the tools (thanks to the python library pykov) to start the process of finding out.
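pykov packages these computations for us, but the underlying calculation is a small linear solve. A minimal numpy sketch of mean first passage times into a single state:

```python
import numpy as np

def mean_first_passage(P, target):
    """Mean first passage times into `target` from every state,
    solving m_i = 1 + sum over k != target of P[i, k] * m_k."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]   # transitions among non-target states
    m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    full = np.zeros(n)
    full[others] = m
    return full

# Two-state toy chain: from state 0 it takes on average
# 1/0.1 = 10 steps to first reach state 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(mean_first_passage(P, 1))
```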
Here's some results from parsing MIDI files of performed music. The rests captured don't exist in the score. Some notes are staccato, and you can see the additional "distance" the musician included to create the desired effect. You can also see additional distance punctuating the end of the phrase. This is the violin 2 part from Beethoven's 7th, second movement.
Note 64 had tick duration 409
a rest had tick duration 66
Note 64 had tick duration 110 (staccato note)
a rest had tick duration 138 (results in longer space between notes)
Note 64 had tick duration 116 (staccato note)
a rest had tick duration 121 (results in longer space between notes)
Note 64 had tick duration 410
a rest had tick duration 67
Note 64 had tick duration 392 (ended a bit early, end of a phrase)
a rest had tick duration 91
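The parsing that produces a listing like the one above can be sketched as follows. The message stub stands in for whatever MIDI parser is used (a library such as mido yields messages with the same fields); the actual tooling isn't shown in this post:

```python
from collections import namedtuple

# Stand-in for a parsed MIDI message: type, note, velocity,
# and delta time in ticks since the previous message.
Msg = namedtuple("Msg", "type note velocity time")

def durations(track):
    """Return ("note"/"rest", tick duration) pairs for a
    monophonic track."""
    events, now, on_at, off_at = [], 0, None, None
    for msg in track:
        now += msg.time                    # delta ticks -> absolute ticks
        is_on = msg.type == "note_on" and msg.velocity > 0
        is_off = (msg.type == "note_off"
                  or (msg.type == "note_on" and msg.velocity == 0))
        if is_on:
            if off_at is not None and now > off_at:
                events.append(("rest", now - off_at))   # gap before note
            on_at = now
        elif is_off and on_at is not None:
            events.append(("note", now - on_at))
            off_at, on_at = now, None
    return events

track = [Msg("note_on", 64, 80, 0), Msg("note_off", 64, 0, 409),
         Msg("note_on", 64, 80, 66), Msg("note_off", 64, 0, 110)]
print(durations(track))
# → [('note', 409), ('rest', 66), ('note', 110)]
```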
Modeling music requires obedience to aesthetics, and this is where the difficulty - possibly the impossibility - lies. However, I cannot think of anything more worthy of analysis, modeling, machine learning, and algorithmic design. More soon!
Monday, October 20, 2014
Independent Parallel Tracks and Hidden Markov Models
OK, so now we have code that analyzes each track of a multi-track MIDI file and creates transition tables. These transition tables are used to generate new music that has some influence from the analyzed track.
However, each new track is used independently. While the behavior of each new track is based completely on the statistics of the analyzed track, the tracks will not sound like they were "composed together" when they are recombined.
Organ Donor Frank Brickle says, "To keep them together you actually need to model the interaction in some way. Looking ahead a bit, you can probably see why a hidden Markov model of all the activity is one of the best ways of coordinating the subordinate parts."
OK, so what does this mean?
A hidden Markov model is one where the observation and the state are separated. The simplest example is a coin. Usually you see the coin and can read whether it came up heads or tails. In a hidden Markov model the coin is hidden, as if behind a screen, and the observations (heads or tails) are read out to an audience (or user, or participant, or contestant).
One of the jobs of the contestant is to figure out how many states are required to best explain the observations. For example, for five minutes the coin flipping produced about half heads and half tails. Then it suddenly changed, and the observations were mostly tails for four minutes. Then mostly heads for three minutes. Then it went back to a fair distribution for the rest of the session.
One way to explain this is with three coins: a fair coin, a heads-heavy coin, and a tails-heavy coin. The person behind the screen switched from one coin to another and read off the resulting observations. The number of states in this hidden Markov model would be three. Each coin is a state. Each state has an alphabet of two possible values, and each state's alphabet has a different probability distribution.
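The coin story can be simulated directly. Everything here (the switch probability, the coin biases) is made up for illustration:

```python
import random

random.seed(7)

# Three hidden states (coins) with different heads probabilities.
P_HEADS = {"fair": 0.5, "heads_heavy": 0.9, "tails_heavy": 0.1}
STAY = 0.95   # probability of keeping the same coin each flip

def sample(n_flips, start="fair"):
    """Emit observations from a hidden sequence of coin states."""
    state, observations, states = start, [], []
    coins = list(P_HEADS)
    for _ in range(n_flips):
        states.append(state)
        observations.append("H" if random.random() < P_HEADS[state]
                            else "T")
        if random.random() > STAY:   # occasionally switch coins
            state = random.choice([c for c in coins if c != state])
    return observations, states

obs, hidden = sample(20)
print("".join(obs))    # the audience sees only this...
print(hidden[:5])      # ...the coin sequence stays behind the screen
```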
I believe our job is to figure out how to keep the tracks working together when new music is created. Analyzing each track separately stays in the toolbox, but analyzing the entire piece, and using that analysis to coordinate the production of new tracks must be done as well.
Tuesday, October 7, 2014
First New Music from Bach Violin Solo - Quick Sample
Here's the first simple example of the sort of files we're trying to produce for Organ Donor.
This file was created by taking the statistics of a Bach violin solo (the Gigue from Partita No. 2) and analyzing them in the following way. Each note was examined (with a software program) to find out which note followed. After all the notes were counted up, we calculated the probability of which note would follow any particular note that had occurred in the piece.
We then picked a random note, and "rolled the loaded dice" to see which note we should go with next. Once we saw which note we came up with, we did it again. We collected a 100-note-long sample.
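The "loaded dice" step looks like this in Python. The toy transition table is illustrative, not from the Bach analysis:

```python
import random

# A toy transition table of the kind described above:
# {note: {following note: probability}}
table = {60: {62: 1.0},
         62: {60: 0.5, 64: 0.5},
         64: {62: 1.0}}

def generate(table, start, length):
    """Repeatedly roll the loaded dice to pick each following note."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:          # the last note never had a successor
            break
        notes = list(followers)
        weights = [followers[n] for n in notes]
        out.append(random.choices(notes, weights=weights)[0])
    return out

print(generate(table, 60, 10))
```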
For the original song that was analyzed, here's Hilary Hahn playing it:
https://www.youtube.com/watch?v=7eXzlg2Xcgg
For our very simplified example song:
http://www.delmarnorth.com/audio/bach_nmo.mid
(MIDI file)