Wednesday, October 5, 2011

Audio applications in music21

My name is Jordi Bartolomé Guillén, and I spent this past summer collaborating on the music21 project at MIT. It was a fantastic experience for me!

I started my time in Cambridge by getting to know music21. First, I developed a converter from NoteWorthy Composer notation into music21.

Later, I developed different audio applications using music21:

- Transcriber: software that records monophonic music through the laptop's microphone and displays the corresponding score on screen (a minimal usage sketch follows this list).

- Score Follower: it displays the score you want to play and, using the laptop's microphone, detects which part of the score you are currently playing. It then turns the page automatically when it decides the moment is right. With this software you can play a piece without having to turn pages by hand. It could be useful in concerts!

- Repetition game: finally, I developed a two-player game in which each player must repeat the sequence of notes played by the other and then add a new one. The first player to miss a note loses!
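These applications grew into music21's audioSearch package. Here is a minimal sketch of driving the Transcriber from Python, under the assumption that the transcriber module and its runTranscribe() entry point match what landed in music21; check the current documentation before relying on the exact names.

from music21.audioSearch import transcriber

# Record roughly ten seconds of monophonic playing from the microphone,
# transcribe it, and open the resulting score (names assumed, see above).
myScore = transcriber.runTranscribe(useMic=True, seconds=10.0,
                                    show=True, plot=False)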

Here are some pictures of the applications. I hope you can enjoy them soon!



P.S.: Michael, Chris, Tina, Jose and Neena, thank you for this summer!

Thursday, September 29, 2011

BMT: The All-Purpose Braille Music Transcriber

I'm continuing my quest to develop an all-purpose Braille music transcriber in music21. This is a funding proposal submitted to the Undergraduate Research Opportunities Program (UROP) at MIT for the fall 2011 term. Hopefully by December we'll be able to transcribe chords into Braille!

Reading and writing have, over the centuries, mostly relied on a person's ability to see. It was not until the 1800s that Louis Braille invented a system of raised dots that allowed the visually impaired to read by substituting the sense of touch. In similar fashion, the blind have been deprived of playing and singing from musical scores for most of history, since reading and writing music on a staff also relies on sight. Fortunately, Braille also developed a system to represent written music as raised dots.

Braille music notation is noticeably different from the standard musical notation of representing notes on a staff, as shown below.


Happy Birthday" in Braille Music


⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠩⠼⠉⠲⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠼⠁⠀⠐⠑⠄⠵⠫⠱⠀⠳⠟⠀⠑⠄⠵⠫⠱⠀⠪⠗⠀⠑⠄⠵⠨⠱⠺⠀⠓⠄⠷⠻⠫
⠨⠙⠄⠽⠺⠳⠀⠪⠗⠣⠅


Braille music notation is not only different from standard musical notation but also just as complex and difficult to master. Blind musicians need special training to read Braille music, and sighted persons seeking to transcribe music into Braille need extensive training in how the two notations correspond and where they differ.

This raises the question: what happens the day a visually impaired yet excellent singer wants to sing in the MIT Concert Choir? Assuming he or she is able to read Braille music notation, there needs to be a way of providing him or her with the translated repertoire.

One way of doing so is to engage the services of a transcriber. However, because of the complexities described above, very few transcribers are available to do this. Another solution is transcription software from the open market, but such software tends to be very expensive. What if free software could do the same? That is exactly what BMT, the Braille music transcriber I have been developing, aims to be.

I started developing BMT in July 2011 under a direct-funding UROP as part of music21, a project in MIT Music and Theater Arts that aims to develop software useful for musical analysis. Drawing on my years of training in both music and computer science, borrowing from code already written for music21, and consulting closely with a certified Braille transcription manual[1], I have come up with a rudimentary transcriber.

Right now, BMT can translate melodies such as the example above: melodies containing musical elements including, but not limited to, notes, rests, bar lines, key signatures, and time signatures. Each musical element corresponds to a Braille character or series of characters. More complex elements supported include fingering, slurs, changes of key or time signature, and beaming. In short, most of the concepts in twelve of the first sixteen chapters of the transcription manual have been implemented, and more than 100 examples from those chapters transcribe correctly.
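To give a flavor of what a BMT call looks like, here is a minimal sketch that transcribes the opening of the melody above. It assumes the translate.objectToBraille() entry point that the braille package eventually settled on, so the exact call may differ in the release you have.

from music21 import converter
from music21.braille import translate

# Opening phrase of "Happy Birthday" in 3/4, entered as TinyNotation.
melody = converter.parse("tinyNotation: 3/4 d8. d16 e4 d g f#2")

# objectToBraille() (assumed name) returns the transcription as a string
# of Unicode Braille characters.
print(translate.objectToBraille(melody))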

However, there is functionality I aim to add during this UROP. The most notable item is support for chords: sets of notes played or sung at once. Not only are chords omnipresent in keyboard music, but the harmonies of multiple parts played or sung together can always be reduced to chords. Hence, chords are used as a method of instruction in musical composition classes at MIT and at universities across the world. Implementing chords would be a big step toward translating an introductory composition text such as Marjorie Merryman's The Music Theory Handbook[2], which is used at MIT. Chords and related concepts are covered over five chapters, approximately 100 pages, and are a leap beyond single melodies because the vertical and the horizontal aspects of the transcription must be handled simultaneously.

Another concept to implement is musical repeats: bar lines at the beginning and end of sections that are to be repeated. This is a relatively simple concept in print, but the manual spends three chapters, approximately 100 pages, covering it.

Other important concepts to add include the division of long measures whose transcription would overflow a Braille line, which is limited to forty characters, as well as increased support for musical expressions and special note modifiers such as ornaments. In sum, the first sixteen chapters comprise only 150 pages of a 500+ page manual; there is much to be done before BMT can become an official Braille music transcriber.

With a completed BMT, all the editable musical scores available online will become available to visually impaired musicians. The music21 project, still alpha code, also benefits: every example must be translated into music21 objects before being transcribed into Braille, which uncovers plenty of bugs and has led to many new features being implemented or improved. I also believe that the new coding techniques developed will carry over to my musical improvisation software, whose further development is currently on hiatus.[3] Most of all, I believe this project will continue to give me more of the engineering confidence I have been seeking over the course of my time at MIT.

[1] Mary Turner De Garmo. Introduction to Braille Music Transcription, Second Edition. 2005; revised and edited by Lawrence R. Smith, Music Braille Transcriber.

[2] Marjorie Merryman. The Music Theory Handbook. Schirmer, 1996.

[3] Jose Cabal-Ugaz. "fbRealizer: A 21st-Century Approach to a Centuries-Old Musical System." UROP Summer Proposal, 2011.

Wednesday, August 31, 2011

Abjad v.2 released


Woke up this morning to the great news that our friends Trevor Bača and Víctor Adán have released v.2 of Abjad, their Python-based, LilyPond-powered, flexible music notation system (docs and installation instructions here). Abjad is a system for composers to build up scores from reusable, flexible elements while keeping precise control over notational details.

Some music21 users may have already noticed an "abj" directory in music21 and seen the documentation at http://web.mit.edu/music21/doc/html/moduleAbjTranslate.html on how notes and simple streams can be translated from music21 to Abjad. Since both projects use similar hierarchies, including spanners and containers (Streams in music21), there is a lot of compatibility between the two. If a complete translator were implemented (volunteers?), Abjad would offer music21 users high-quality LilyPond output and better tools for working with tuplets, staves with independent time signatures, and other rhythmic and layout features that music21 does not yet have. Abjad users would gain access to music21's ability to parse many notational formats and work natively with intervals and harmonics, its extensive scale collections (including Scala microtonal scales), and more. Getting these two projects to work more closely with each other is a win-win for everyone. Congrats to Trevor and Víctor!

Tuesday, August 30, 2011

Nancarrow level tempo changes in music21

The newest SVN releases (pre-alpha 12) of music21 read and write tempo-change information to and from MIDI. (We have supported tempo i/o with musicxml since previous versions and read tempo in from Humdrum/Kern, NoteWorthy, abc, Musedata, and probably others; the newest releases also export it to Braille music notation.) Here's an example that inserts a tempo change before each note (the default referent is a quarter note) to trace a smooth, sine-wave oscillation of tempo between 60 bpm and 600 bpm, two full cycles over 100 notes.


from music21 import tempo, note, stream
import math

minTempo = 60   # bpm at the bottom of the sine wave
maxTempo = 600  # bpm at the top
period = 50     # notes per full sine cycle

s = stream.Stream()
for i in range(100):
    # Map the sine wave onto [0, 1], then onto [minTempo, maxTempo].
    scalar = (math.sin(i * (math.pi * 2) / period) + 1) * 0.5
    bpm = ((maxTempo - minTempo) * scalar) + minTempo
    s.append(tempo.MetronomeMark(number=bpm))
    s.append(note.Note('g3'))
s.show('midi')



And the output:



This feature lets you create pieces with the kinds of precise tempo changes that Conlon Nancarrow painstakingly punched into his player-piano compositions. We'll soon have demos showing how to use these features to give different parts independent tempo marks, weaving separate strands of music through your works. On the analytical side, importing precise tempo marks can open up new avenues for research on performance and interpretation, such as comparing the tempos performers choose with those marked in the score.

To use the latest (not thoroughly tested) SVN version of music21, we recommend developing with Eclipse; see Using Music21's SVN version with Eclipse. Otherwise, wait a few weeks for the next release, which brings lots of other improvements.

Tuesday, August 23, 2011

Alpha 11 released (and recent updates)

I haven't done a good job keeping this blog abreast of changes to music21 lately, but there have been a ton. Here are the new features of alphas 9-11.


new in alpha 9

· IMPORTANT: corpus.parseWork() is now corpus.parse() to better match converter.parse()

· TimeSignatures’ .beatCount is now read-write. Additional partition options for MeterSequences.

· Added features to the RomanText format for specifying analyses (via Roman numerals) for pieces

· TimeSignatures can include “slow 6/8”, “fast 6/8”, etc., to specify whether a 6/8 measure has six beats or two.

· Better configuration options and configuration assistant

· node.py renamed to xmlnode.py – useful primarily if you’re writing a new translator for an xml-based format

· stripTies() has more options

· Changes to environment.UserSettings objects now propagate instantly (usually), so you can keep working without restarting.

· figuredBass.realizer – new module for automatic realization of figured bass (paper coming soon)

· text.TextExpression() class for handling most text expressions, with fonts, etc. They can be positioned by quarterLength offsets to represent text occurring between beats, and they display properly in musicxml (thanks to Michael Good for the help)

· bug fix: pitch.flipEnharmonic() no longer has octave problems.

· added subclasses of key.WeightKeyAnalysis for other weights such as AardenEssen and others (as discussed on this list recently)

· Key analyses routines now return a Key object with a .correlationCoefficient attribute (thanks Rachel Hall!)

· Dynamics objects can be freely positioned to take place between notes.

· plots now plot chords properly (please re-generate any old images that plotted chord data!)

· derivationChain() method and derivesFrom property on Streams will return the Stream that generated this one (via .flat, .notes, etc.) – really useful for context checking.

· analysis/patel – tools for testing Aniruddh D. Patel's analysis theories, such as nPVI (the Normalized Pairwise Variability Index) and Melodic Interval Variability

· note.lyric – adding a lyric with a hyphen at the start or end will (unless overridden with applyRaw = True) automatically set it as a beginning or end syllable.

· Unicode accidentals via pitch.accidental.unicode

· MICROTONES! C~ = C half-sharp, D` = D half-flat. Microtone objects allow setting any number of cents between notes. Note that .ps now always returns a float representing the MIDI note with microtonal precision, and .frequency works with Microtones too! (See the sketch after this list.)

· pitch.isTwelveTone() will say whether the note is a half-sharp, etc. or not.

· pitch.convertMicrotonesToQuarterTones() and pitch.convertQuarterTonesToMicrotones() let you decide if you want to represent C4 + 57cents as C4+57c or C~4+7c.


· harmonicFromFundamental() and harmonicAndFundamentalFromPitch() will let you get pitches representing, say, the 7th harmonic of D#3 – with proper microtones! For spectral composers, they will also find potential fundamentals for a given pitch (with the number of cents by which the pitch is off)

· Note.fullName, Duration.fullName, and Pitch.fullName gives a verbose description of the element

· Stream.recurse() will recursively find every element in the stream. The stream in which each element is found is set as its .activeSite – unlike with .flat or .semiflat, the .offset stays relative to the element's container. recurse(streamsOnly=True) is a good way to get only the substreams.

· chord.fromForteClass() will give you a chord (built on C4) that matches the given Forte class

· chord.fromIntervalVector() does the same thing if you have an interval vector. If it’s a Z-related chord, the first form of the chord is returned.

· chord.getZRelation() will return the other Z-related form of a chord.

· REPEATS! repeat.py has repeat marks for dal segno, da capo, 1st/2nd (3rd, 4th, nth) endings, etc., all of which play back properly on .show(). stream.expandRepeats() will expand the repeats (a big plus for abc import)

· instrument.instrumentFromMidiProgram() gives a full-fledged music21 object given a midi program (0-127)

· Stream.transpose() is now recursive.

· better Mac installation docs.
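To make the microtone bullets above concrete, here is a quick sketch using the calls named in this list; the printed values are what I would expect rather than verified output from this release.

from music21 import pitch

p = pitch.Pitch('C~4')        # C half-sharp
print(p.ps)                   # 60.5 – MIDI number with microtonal precision
print(p.frequency)            # frequency honors the microtone too
print(p.isTwelveTone())       # False – a half-sharp is not twelve-tone

q = pitch.Pitch('C4')
q.microtone = 57              # 57 cents sharp, via a Microtone object
# Re-spell as a quarter-tone plus a small microtone, per the bullet above:
print(q.convertMicrotonesToQuarterTones())   # something like C~4(+7c)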



new in alpha 10

· Incompatible change: stream.notes no longer returns rests, just notes and chords. Use stream.notesAndRests for the old functionality.

· improvements to Scale, including new documentation to be presented at ICMC next month (links coming soon)

· FEATURE EXTRACTION: 60% of jSymbolic features and many native features implemented in the features modules. (paper to be presented soon – stay tuned!)

· Better docs for 64-bit windows and all tests pass on 64-bit systems.

· Expansions of ornaments – see expressions.realizeOrnaments()

· corpus includes Mozart and Haydn string quartets and more folk airs. (see acknowledgements for thanks)

· search.py – rhythmic (and future melodic) search module with wildcards (first version)

· Musedata stage 1 files are now supported

· Scala scales – ScalaScale represents potentially microtonal scales read from the Scala format; music21 can now read any file in .scl format! (See the sketch after this list.)

· Repeats are correctly translated in/out of musicxml

· Augmented 6th classification in chord.
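Here is a hedged sketch of the Scala support mentioned above: load a .scl file and list the resulting pitches. The two-argument ScalaScale(tonic, sclFile) form is my assumption, and 'slendro.scl' is a placeholder filename.

from music21 import scale

# 'slendro.scl' stands in for any Scala file on disk.
sc = scale.ScalaScale('c4', 'slendro.scl')
print(sc.getPitches())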

Newest updates in alpha 11:

· Huge performance boost on stream manipulation – you’ll notice it just from using it.

· Repeat brackets display properly

· Improved abc conversion of pickup measures and repeats.

· Figured basses correctly handle resolutions of augmented 6ths and many other chords.

· Bug fixes on some accidental display output.

· transparent caching of streams (for example, stream.flat will be faster on a second call if the underlying stream hasn’t been changed)

· Empty voices (often output by Finale) are silently removed when converting from musicxml.

· Automatic correct MIDI channel distribution for instruments AND MICROTONAL PITCH BENDS! Your 19-tone piano trio should play back properly now (at least with the default synths on Mac and PC).

· medren – convertToHouseStyle and its subroutines will change the default style for printing music to better reflect some editors’ ideas of the proper representation of Renaissance music.

· makeChords – bug fix on overlapping rests.

· TimeSignatures and KeySignatures are imported properly from conductor tracks in midi import.

· ConcreteScale(pitches=[pitchList]) will now create a scale from the given pitches – useful for treating a chord as an infinite scale of notes. (See the sketch after this list.)

· tuning = ScalaScale('py12.scl'); tuning.tune(score) – will retune a score to a given temperament (with playback).

· Tempo import and export between music21 and musicxml

· additional corpus items (including incipits of 14th c. Virelais)

· most modules are now more unicode compliant.
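A sketch combining the two scale bullets above: build a ConcreteScale from arbitrary pitches, then retune a score with a Scala temperament. The tune() call and the single-argument ScalaScale form are taken straight from the bullet; 'py12.scl' is the file named there, and the Bach chorale is just a convenient corpus example.

from music21 import corpus, scale

# A chord's pitches treated as an (infinitely repeating) scale:
sc = scale.ConcreteScale(pitches=['C4', 'E-4', 'F#4', 'A4'])
print(sc.getPitches('C3', 'C5'))

# Retune a score to a Scala temperament, with microtonal MIDI playback:
score = corpus.parse('bach/bwv66.6')
tuning = scale.ScalaScale('py12.scl')
tuning.tune(score)
score.show('midi')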



Newest (super-beta features)

· Preliminary conversion of NoteWorthy Composer .nwctxt files (input only; see the sketch after this list)

· Very preliminary output into Braille Music notation

· Noteheads are properly output from music21 (but not converted in yet).
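A minimal sketch of the NoteWorthy import, assuming converter.parse() dispatches on the .nwctxt extension the way it does for other formats; the path below is a placeholder.

from music21 import converter

# '/path/to/song.nwctxt' is a hypothetical file exported as text
# from NoteWorthy Composer.
s = converter.parse('/path/to/song.nwctxt')
s.show()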



And of course new demos, docs, and examples.

Check it all out at http://web.mit.edu/music21/

Monday, April 18, 2011

music21 on EchoNest

Thanks to the work of Jonathan Marmor, music21 now has a proof-of-concept link with The Echo Nest, a great, mostly open framework for audio analysis with many Python bindings. See the Transcribe Melodies link for more information. Full source code is available here. Thanks to Jonathan and to The Echo Nest for making this linkage possible. While music21's primary focus will always be symbolic data, we love audio analysis and hope to add more features that make music information retrieval and analysis on audio files easier.

http://the.echonest.com/platform/showcase/

Tuesday, January 4, 2011

Music Informatics at City University, London

Passing on this good information from Tillman Weyde at City University, London:

City University London is offering 75 Research Studentships to begin in October 2011. The Music Informatics research group in the Department of Computing at City University would particularly like to encourage PhD applications in the area of Music Computing and Music Informatics.


Research interests in the Music Informatics research group include:

- music information retrieval
- computational musicology
- musical applications of machine learning
- semantic music representation
- systems and user interfaces for music e-learning and music information retrieval

Information about the Music Informatics research group and the Research Studentship application procedures can be found online at

http://www.soi.city.ac.uk/organisation/doc/research/mi/

and

http://www.city.ac.uk/research/resdegrees/studentships.html

The closing date for applications is 31 January 2011. If you are interested in applying, get in touch with Tillman Weyde (tweyde@uos.de) soon to discuss further details.