Sunday, October 28, 2012

New Orleans news and Josquin Research Project

Several items of interest:

1) The joint meeting of the American Musicological Society, Society for Music Theory, and Society for Ethnomusicology will take place on Nov 1-4 in New Orleans.  There are several sessions of interest for music21 users: a panel on Corpus Research led by ELVIS collaborator Ian Quinn on Saturday morning; a discussion of MEI on Oct 31, the day before the conference; a panel on musical databases for medieval and Renaissance music on Thursday afternoon (concurrent with my less digital paper on Italian influence in early fifteenth-century music); the meeting of the Computational Music Theory group; and several other meetings that are slipping my mind right now -- all in all, an important place for digital musicology!

2) The New England Chapter of the American Musicological Society will host its Winter meeting on Saturday, Feb 2 at Tufts University in Medford, MA (a Boston suburb).  The call for papers has just gone out requesting:
abstracts of up to 300 words for papers and roundtable sessions. Submissions in the area of digital musicology are of particular interest, but proposals on all musicological topics are welcome. Abstracts should be submitted by Monday, 26 November 2012 via email to jsholes at bu.edu (Jacquelyn Sholes)
I hope that people working on digital musicology will choose to apply.   
3) Craig Sapp's recent blog post details the work being done by our friends in the Josquin Research Project at Stanford University.  In addition to creating a number of wonderful tools for analyzing their data online or with Humdrum, the JRP has made all their data available, primarily in Humdrum's KERN format.  Music21 reads these files, including the new KERN rhythm extension.  The results of the JRP so far will be part of the Thursday afternoon AMS session in New Orleans.
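Since the JRP data is plain **kern, loading a downloaded file into music21 takes a single call. A minimal sketch (the filename here is a hypothetical local download; substitute any .krn file from the JRP site):

    from music21 import converter

    # parse a Josquin Research Project file in Humdrum **kern format
    jrpScore = converter.parse('jrp_download.krn')  # hypothetical filename
    jrpScore.show()  # open it in your notation program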

Happy analyzing!

Thursday, September 27, 2012

Music21 in Polish, in Germany, and in Montreal

In Polish

A nice description of what we do, and music21's connection (I'd say, deep indebtedness) to Humdrum, appears in this article in Ruch Muzyczny, Rok LVI, Nr. 19.  Translated from the Polish:

Two programs that work with symbolically encoded data deserve particular attention. Back in the 1990s David Huron developed the analytical system Humdrum Toolkit, in which various interchangeable score-representation formats (the basic one being **kern) form the material to be manipulated (e.g., classified, restructured, contextualized), making it possible to search for all sorts of patterns or for similarities among different types of information. An alternative to Humdrum today is a set of tools for "computer-aided musicology": the Music21 system (http://mit.edu/music21), based on the Python programming language. Symbolic data for analysis can be obtained from a variety of sources (e.g., from the Finale editor and MusicXML) and exported to various formats, including **kern [MSC: actually, we don't export Kern yet, but we do import it] and MIDI, which considerably extends the system's functionality. After giving the command "from music21 import *", one can carry out many simple tasks, such as visualizing a short melody, building a twelve-tone matrix, or printing note names beneath the notes (in a chosen language convention). The real strength of the package, however, lies in its higher-level objects - Pitches, Chords, Durations, TimeSignatures, Intervals, or Instruments - which perform more complicated analyses, such as finding the leading tone of the current key (which changes over the course of a piece). A sophisticated component of Music21 is its graphics modules, which make it possible, for example, to visualize the tonal profile of a chosen piece or to reveal correlations between particular musical parameters (say, between pitch and dynamics) that are usually hard to spot in a score.
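Those "simple tasks" really are that short. Here is a minimal sketch of two of the ones the reviewer names (the melody and tone row are arbitrary examples):

    from music21 import converter, serial

    # visualize a short melody
    littleMelody = converter.parse('tinynotation: 3/4 c4 d8 f g16 a g f#')
    littleMelody.show()

    # build and print a twelve-tone matrix from an arbitrary row
    print(serial.rowToMatrix([2, 1, 9, 10, 5, 3, 4, 0, 8, 7, 6, 11]))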
 

Reports from Germany

In July of 2012, the music21 project went to Germany, sponsored by the generosity of the Germany Seed Fund of the MIT MISTI program, of the German government, and of the Seaver Institute.  There we met with Hans-Peter Kriegel of LMU München and his great database lab, including Vladimir Viro of Peachnote, and worked together on future collaboration projects.  After recovering from jet-lag for a day with a quick trip to Salzburg, we worked all day (and most of the night) on the ICE to Hamburg.  The next day we presented our work (along with our absent collaborator Christopher Reyes) on "Interoperable Digital Musicology Research via music21 Web Applications" at the Digital Humanities Conference.  We then continued our discussion and coding in Berlin, where we met up with members of the musicology community there and took a trip to Leipzig to pay our homage to Bach.

Thanks to Beth Hadley, a video of the trip is now available from the MISTI website.  Thanks to all our funders, collaborators, and friends.

Montreal ELVIS Collaboration

Work on the ELVIS (Electronic Locator of Vertical Interval Successions) project--a multinational cooperation between the US, Canada, and the UK, via the Digging into Data challenge grant--continues.  I'm ecstatic to learn about VIS, a Python-based music visualization system by Christopher Antila of the McGill team, built on top of music21 and Lilypond.  (Read about it here)

The McGill ELVIS group is greatly increasing our understanding of Renaissance and later polyphony, and music21 is proud to be a part of their work.

v.1.3 released; music21 at Grace Hopper

Two important items in the music21 world:

Beth Hadley and music21 at Grace Hopper

If you're in Baltimore, whether you're attending the Grace Hopper Women in Computing conference or not, you'll definitely want to attend Beth Hadley's presentation on "Porting Computer-Aided Musicology using music21 to the Cloud" on Thursday evening.  Beth has done fantastic work integrating feature extraction, analysis of popular-music leadsheets, and (most recently and coming soon) Vladimir Viro's Peachnote extractions of IMSLP with music21 and Amazon Web Services.  She is also a key player in the music21 TheoryAnalyzer tool (with Lars Johnson) that will play a big part in the future of online music theory education.  Beth is a sophomore in Computer Science (Course VI) at MIT.

Music21 v.1.3 Released

Music21 version 1.3 has been released, and is available at http://code.google.com/p/music21/downloads/list for Mac (.tar.gz), PC (.exe) or as a Python .egg file.  Upgrading consists of simply downloading the new version and using the installation instructions (http://mit.edu/music21/doc/html/install.html).  You should not need to uninstall a previous version unless it’s extremely old.

Version 1.3 contains a number of bug fixes, much improved documentation (we’re beginning a rewrite of our user’s guide over the next few months), and new features.  Of particular importance is greatly increased support for Lilypond output, support that will continue to expand soon.  N.B. -- this is the first version that removes some method calls that were not placed in the best modules, were obsolete, or duplicated functionality that could be found elsewhere, so for the first time in a while, we caution that upon upgrading you may need to change some parts of your code, especially if you were using some of these features:

* Advanced MusicXML features (beyond .show(‘musicxml’) or .write(‘musicxml’)) have been changed: .mx and .musicxml have been removed from music21 objects, and the musicxml subpackage has been broken into smaller modules.  To get the same output as obj.musicxml, call musicxml.m21ToString.fromMusic21Object(obj); to get the same functionality as .mx, look for the appropriate method in musicxml.toMxObjects or musicxml.fromMxObjects.  (See the sketch after this list.)

* Advanced MIDI features (beyond .show(‘midi’) or .write(‘midi’)) have been changed: .midifile is now midi.translate.music21ObjectToMidiFile(obj).

* Stream freezing/unfreezing (now “thawing”) is handled by the new freezeThaw module.  jsonpickle has been removed as an option since it was not dereferencing objects properly.

* Obsolete methods in the note module (compactNoteInfo, pitchNames, setAccidental [use .accidental = Accidental(...) instead], noteFromDiatonicNumber, sendNoteInfo) have been removed.

* The __repr__ (representation) for Pitch objects is now <music21.pitch.Pitch G#4> instead of G#4 (etc.).  A large-scale standardization of all __repr__s to begin with <music21. is underway.  TimeSignatures have also been affected.  String representations of both classes remain the same.

* Almost all keyword arguments that shadowed Python built-in names (dir, format, map, min, max) have been renamed.  In the vast majority of cases, users will have been passing these arguments positionally, so nothing will have changed.  In the few cases where we believe many people will have used them as named arguments, we have left them alone.
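Here is a minimal sketch of the two most common migrations, using the module paths named above (the TinyNotation fragment is just a placeholder score):

    from music21 import converter, freezeThaw
    from music21.musicxml import m21ToString

    s = converter.parse('tinynotation: 3/4 c4 d e')

    # old: xmlString = s.musicxml
    xmlString = m21ToString.fromMusic21Object(s)

    # old: Stream freezing with a jsonpickle option; new: the freezeThaw module
    sf = freezeThaw.StreamFreezer(s)
    data = sf.writeStr()          # pickled string representation of the Stream

    st = freezeThaw.StreamThawer()
    st.openStr(data)
    thawed = st.stream            # a full copy of the original Stream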

So those are the incompatibilities.  What about the reasons to upgrade:

* Much better docs: the documentation now actually matches v.1.3.

* harmony improvements, esp. in chordSymbolFromChord.

* melodic voice-leading analysis in the analysis.theoryAnalyzer package.

* bug fixes and improvements in scale, abc, medren.  Octaveless pitches now choose more sensible octaves in scales.

* improved serial module

* 50% speedup in startup.  Full IDLE compatibility.   Lots of little speedups everywhere.

In other news: the music21list is being split into two lists – a discussion list (music21list) and an announcement-only list (music21-announce).   All messages sent to music21-announce will also appear on music21list, so if you want both discussions and announcements there’s no need to subscribe to the new list.  But if you would like a lower level of email activity, please subscribe to the announce list and unsubscribe from this one (or, better, stay subscribed but turn off email delivery so you can easily turn it back on when you have a question).

Sunday, July 29, 2012

Music21 in LinuxMagazin.de (auf Deutsch)

A short introduction to music21 has appeared in a blog post at Linux-Magazin:
http://www.linux-magazin.de/NEWS/Music-21-Python-Toolkit-fuer-Musikwissenschaftler

My favorite sentence (translated):
"In recent years, the use of information technology in the humanities and the arts has developed from a marginal hobby of interested geeks into a recognized tool for all researchers, writes project leader Michael Scott Cuthbert"

If only I could actually write that well auf Deutsch!

The music21 team had a great time in Germany visiting with our colleagues at LMU München and at the DH2012 conference in Hamburg, in addition to sampling Currywurst in Berlin, Bach arcana in Leipzig, and fine beer and warm people everywhere.  Thank you to our German friends and the German government for supporting our work.

music21 v.1.1 released

A new version of music21, v.1.1, has been released.  It incorporates six weeks' worth of feature enhancements, documentation improvements, and bug fixes.  For the next few 1.x releases, we're focusing (in this order) on better documentation and tutorials, making our method calls more robust (for instance, on larger scores with many voices), and applying music21 to more musicological topics.  But there will always be some time for adding new features as well.

---
Music21 has added a robust service-oriented architecture and a set of web applications that should enable music21 users to work more easily over the web.  See the paper (with Beth Hadley, Lars Johnson, and Christopher R. Reyes) at http://web.mit.edu/music21/papers/Cuthbert_Hadley_Johnson_Reyes_Music21_SOA.pdf .  These tools are still in beta, so the interface may change slightly in future releases.

A new architecture for producing Lilypond output has been released.  Most end users will see little change, but we will be able to continue improving our Lilypond output with this rewrite.

Improvements in serial and post-tonal tools.  Commands such as isLinkChord(), isCombinatorial(), isAllInterval(), etc. will help people working on the music of Elliott Carter and other recent non-tonal composers.  Fixed some bugs in our provided tone rows.
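A quick sketch of how these checks are called (the row below is an arbitrary twelve-tone row, not one of the provided historical rows):

    from music21 import serial

    # build a ToneRow from a list of pitch classes and query its properties
    row = serial.pcToToneRow([0, 1, 4, 9, 5, 8, 3, 10, 2, 11, 6, 7])
    print(row.isAllInterval())   # are the eleven successive intervals all different?
    print(row.isLinkChord())     # all-interval row containing an all-trichord hexachord?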

Many bugs in docs have been squashed with our new documentation test suite.

For those working with Bach Chorales, see the corpus.chorales module which allows you to get chorales according to your favorite numbering system and lazily parses them for speed purposes.
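A minimal sketch, assuming the Iterator interface and the numberingSystem keyword that the module provides:

    from music21 import corpus

    # iterate over the chorales in Riemenschneider numbering;
    # each one is parsed lazily, only when the loop reaches it
    for chorale in corpus.chorales.Iterator(numberingSystem='riemenschneider'):
        print(chorale.metadata.title)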

Methods for finding repeated or similar sections have been added to repeat.py -- they are very powerful but not yet easy to use.  Version 1.2 will add a simple interface to this.

Incompatible change:  TinyNotation now supports time signatures in the input.  It is best to preface the string with "tinynotation: ".  For instance, "converter.parse('tinynotation: 3/4 C4 D E').makeMeasures()" will give a measure of 3 quarter notes in 3/4.

The Goldberg Variations have been added to the corpus (thanks to the Open Goldberg Variations project).  The Art of Fugue was already there, but you might not have known that it was bwv1080.  Now it's "bach/artOfFugue_bwv1080".
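Loading it is now a single call (only the quoted path comes from the note above):

    from music21 import corpus

    artOfFugue = corpus.parse('bach/artOfFugue_bwv1080')
    artOfFugue.show()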

Basic support for realtime MIDI playback of streams in midi.realtime for users with pygame installed.  Thanks to Joe Codeswell for the code (original post).  A portaudio version (with less lag at the end of a Stream) is on its way...
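A minimal sketch of the pygame-based playback, assuming the StreamPlayer class in midi.realtime (pygame must be installed):

    from music21 import corpus
    from music21.midi import realtime

    s = corpus.parse('bwv66.6')
    player = realtime.StreamPlayer(s)  # assumed class name
    player.play()                      # blocks until playback finishes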

Further improvements to variants: now you can load in two streams and mark the differences between them as variants.  Works even if Streams are of different lengths!

Bug fixes and improvements in Harmony objects, RomanNumeral processing (including the rntxt format), and more.  A sample of 20 Bach Chorales in rntxt format is now included with music21; thanks to Dmitri Tymoczko for the contribution and suggestions.
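RomanNumeral objects work in both directions; a minimal sketch (the chord and keys are chosen arbitrarily):

    from music21 import roman, chord, key

    # from a figure and a key to pitches
    rn = roman.RomanNumeral('V65', key.Key('c'))
    print(rn.pitches)

    # from a chord back to a Roman numeral in a given key
    c = chord.Chord(['C4', 'E4', 'G4'])
    print(roman.romanNumeralFromChord(c, key.Key('C')).figure)  # 'I'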
---

Thanks to Evan Lynch, Varun Ramaswamy, Carl Lian, Daniel Manesh, Beth Hadley, and Lars Johnson for many contributions to the latest code.

We also thank the Seaver Institute and the NEH/Digging into Data Challenge and our colleagues on the ELVIS project for their continued support.

Tuesday, July 10, 2012

Music21 in the Boston Globe; Lilypond

Sunday's Boston Globe has an excellent article by Leon Neyfakh titled "When Computers Listen to Music, What Do They Hear?" which includes a great discussion of the latest techniques in computational musicology, including a number of references to music21 (among them a graphic only in the print edition).  Here's one of my quotes in the article:

“You get a bird’s eye view of something where the details are so fascinating—where the individual pieces are so engrossing—that it’s very hard for us to see, or in this case hear, the big picture...of context, of history, of what else is going on,” said Cuthbert. “Computers are dispassionate. They can let us hear things across pieces in a way that we can’t by even the closest study of an individual piece.”
For anyone who already knows music21: I'd appreciate it if any adventurous Lilypond hackers/users would upgrade to the newest SVN version and test out the Lilypond support there.  We've completely rewritten our Lilypond exporter as an object-oriented system with the aim of catching it up to MusicXML in the near future.  It's a ground-up reconception, so there may be some bugs.  It still doesn't support a lot of things, but it's now a flexible system that we can expand in the future.
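Testing takes only a couple of lines; a sketch (the format names follow music21's .show()/.write() conventions, and rendering requires a LilyPond installation):

    from music21 import corpus

    b = corpus.parse('bwv66.6')
    b.write('lilypond', fp='bwv66_6.ly')  # write LilyPond source to a file
    # or, to render directly through LilyPond:
    # b.show('lily.png')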

Wednesday, July 4, 2012

music21 at Hamburg Digital Humanities conference

The music21 team will be presenting at the Hamburg Digital Humanities conference on July 16 (Monday) in the workshop, Service-oriented Architectures (SOAs) for the Humanities: Solutions and Impacts.  Our paper is:

Michael Scott Cuthbert, Beth Hadley, Lars Johnson and Christopher Reyes - Interoperable Digital Musicology Research via music21 Web Applications

Hope to see many of you in Hamburg.  Beth, Lars, and I will also be in Munich (before) and Berlin (after) in case there is music21 interest there.

Friday, June 15, 2012

music21 v.1.0 released!

Dear music21 users,

I’m extremely proud to announce that Version 1.0 (not alpha, not beta, not even omicron or omega) of music21 has been released. The toolkit has undergone five years of extensive development, application, documentation, and testing, and we are confident that the fundamentals of the software will be sufficient for computational musicology and digital humanities needs both large and small.

This release marks a milestone in our software development, but it also reflects much wider changes in the scope of digital humanities and computational music research. Over the past few years, digital humanities tools have gone from being the small province of a few geeks who were also interested in the arts and humanities to being recognized as fundamentally important for all researchers. This welcome change has brought with it an obligation for those of us who create such tools to make them more accessible and easier to use for people whose programming skills are non-existent or just developing, while at the same time not crippling advanced features for expert users. MIT’s cuthbertLab hopes that music21 has met these goals, but we know that it’s the community that will let us know how we’re doing.

Music21 is based on computational musicology research principles first developed in the work of Walter Hewlett, David Huron (author of the amazing Humdrum toolkit, from which we take our inspiration), Michael Good (of MusicXML), and many others. We stand on the shoulders of giants and know that a third generation of research tools will someday make music21 obsolete; we look forward to that time, but hope that the work we’ve done will put that day into the quite distant future.

I want to thank the contributors on this list and the developers we’ve had at MIT and elsewhere. Of particular note are the students who have contributed including (hoping I’m not leaving anyone out): Thomas Carr, Nina Young, Amy Hailes, Jackie Rogoff, Jared Sadoian, Jane Wolcott, Jose-Cabal Ugaz, Neena Parikh, Jordi Bartolomé-Guillen, Tina Tallon, Beth Hadley, Lars Johnson, Chris Reyes, Daniel Manesh, Lawson Wong, Ryaan Ahmed, Carl Lian, Varun Ramaswamy, and Evan Lynch. Thanks also to MIT’s Music and Theater Arts Section (Janet Sonenberg, Chair) and the School of Humanities Arts and Social Sciences (Deborah Fitzgerald, Dean), and to the NEH and other organizations participating in the Digging into Data Challenge Grant. A special note of gratitude goes to the Seaver Institute, the earliest supporter of music21, whose generous contributions over the past three years have made everything possible.

The release of v. 1.0 also marks a turnover in the staff of music21. Christopher Ariza, a prominent composer, music technologist, and music theorist, has been Lead Programmer since summer 2009 and Visiting Assistant Professor at MIT. In addition to his incredible vision for music21, Chris has brought new directions in music technology to MIT. We are sad to lose him, but wish him the best in his future work in Python for industry and hope to occasionally see his work as Lead Programmer Emeritus for the project. And we take a moment to welcome Ben Houge as new Lead Programmer for 2012-13. Ben is a composer of acoustic and electronic compositions, primarily for video games, and brings a wealth of knowledge on embedding musical systems within large, complex computer systems.

Version 1.0 brings the following features added since beta 0.6.3 in April:
  • Improved documentation for Humdrum users at http://web.mit.edu/music21/doc/html/moduleHumdrum.html
  • Serialization/freezing/unfreezing (still beta): store complete streams with all data as cPickle (local) or json (alpha; interchangeable) via the Stream.freeze() and Stream.unfreeze() commands.
  • Variant objects – store Ossia and other variant forms of a Stream. Stream.activateVariants lets you move between variant and default readings.
  • Better work on FiguredBass (including pickup measures) and Braille output (thanks Jose!)
  • Support for .french, .dutch, .italian, and .spanish fixed-do solfeg names (.german was already there). Scale .solfeg() gives the English relative-do solfeg syllable. (See the example after this list.)
  • Better MIDI import. Fixed PDFtoMusic musicxml import.
  • Much more powerful web applications (see paper to be produced this week and presented at the Hamburg Digital Humanities conference). See the webapps folder. (thanks Lars)
  • Works with MRJob for parallel work on Amazon Web Services Elastic Map Reduce (thanks Beth)
  • TheoryAnalyzer objects help find (or eliminate) musical occurrences such as passing tones, neighbor tones, parallel fifths, etc. (thanks Beth)
  • .show(‘vexflow’) will render simple scores in Vexflow (thanks Chris 8-ball Reyes!)
  • MusicXML now outputs audible dynamics. Improved ornament handling.
  • ABC files with omitted barlines at ends of lines are fixed (common abc variant)
  • Improved handling of Roman Numerals including automatic creation from chords.
  • Huge speedup to chordify() for large scores. (n.b. this version also works with PyPy for even more speedups). Faster corpus searches.
  • Gracenotes all work!
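A small example of the language support mentioned above (a sketch: I'm assuming, as the bullet suggests, that solfeg() takes a pitch target):

    from music21 import pitch, scale

    p = pitch.Pitch('E-4')
    # the same note name in five languages
    print(p.german, p.french, p.italian, p.spanish, p.dutch)

    sc = scale.MajorScale('A-')
    print(sc.solfeg(pitch.Pitch('C5')))  # relative-do syllable of C in A-flat major ('mi')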


So with v. 1.0 we can say that the project is done and no more features will be added, right? Hardly! This summer we continue work on a number of things, such as automatically creating variants from two or more versions of the same score; automatically finding repeating or similar sections in scores; hugely expanding the corpus of 14th-16th-century music; support for medieval notation; better Vexflow and Javascript support; cached Features for quick machine learning on the corpus; and tons of improved docs.

Thanks again to everyone who has made the project possible. If you have examples of work you’ve done with music21, please share it with me or the list. shameless plug: It’ll be a lot harder to keep working on music21 if I’m unemployed, and your testimonials will help me with the tenure process next year.

     -- Myke

Thursday, June 7, 2012

Process Music with music21

Last July, I was watching this gorgeous video showing the waves created by a set of pendulums of different (but simple ratio) lengths after they're simultaneously released:


(courtesy Harvard Science Center demonstrations)
I thought about ways that such a demonstration could be recreated musically using simple processes.  After writing the music21 code to do so, I discovered that several other composers have done similar things, so I don't claim that it's absolutely original, but I wanted to share the possibilities.  First, the opening of the score and the recording:



Play here directly: Or if that doesn't work: Click here
Here's the music21 code used to make it (an unhighlighted version is in the music21-tools GitHub repository under composition.phasing):
    from music21 import scale, pitch, stream, note, chord, clef, tempo, duration, metadata

    def pendulumMusic(show=True,
                      loopLength=160.0,
                      totalLoops=1,
                      maxNotesPerLoop=40,
                      totalParts=16,
                      scaleStepSize=3,
                      scaleType=scale.OctatonicScale,
                      startingPitch='C1'):
        totalLoops = totalLoops * 1.01  # run just past one full loop so the final notes sound
        jMax = loopLength * totalLoops

        p = pitch.Pitch(startingPitch)
        if isinstance(scaleType, scale.Scale):
            octo = scaleType
        else:
            octo = scaleType(p)
        s = stream.Score()
        s.metadata = metadata.Metadata()
        s.metadata.title = 'Pendulum Waves'
        s.metadata.composer = 'inspired by http://www.youtube.com/watch?v=yVkdfJ9PkRQ'
        parts = [stream.Part(), stream.Part(), stream.Part(), stream.Part()]
        parts[0].insert(0, clef.Treble8vaClef())
        parts[1].insert(0, clef.TrebleClef())
        parts[2].insert(0, clef.BassClef())
        parts[3].insert(0, clef.Bass8vbClef())
        for i in range(totalParts):
            j = 1.0
            while j < (jMax + 1.0):
                # choose which staff gets the note, by pitch-space value
                ps = p.ps
                if ps > 84:
                    active = 0
                elif ps >= 60:
                    active = 1
                elif ps >= 36:
                    active = 2
                else:
                    active = 3

                # quantize the offset to the nearest 32nd note
                jQuant = round(j * 8) / 8.0
                establishedChords = parts[active].getElementsByOffset(jQuant)
                if len(establishedChords) == 0:
                    c = chord.Chord([p])
                    c.duration.type = '32nd'
                    parts[active].insert(jQuant, c)
                else:
                    c = establishedChords[0]
                    pitches = list(c.pitches)
                    pitches.append(p)
                    c.pitches = pitches
                # each part strikes at a slightly different rate, like the pendulums
                j += loopLength / (maxNotesPerLoop - totalParts + i)
                # j += (8 + (8 - i)) / 8.0
            p = octo.next(p, stepSize=scaleStepSize)

        parts[0].insert(0, tempo.MetronomeMark(number=120, referent=duration.Duration(2.0)))
        for i in range(4):
            parts[i].insert(int((jMax + 4.0) / 4) * 4, note.Rest(quarterLength=4.0))
            parts[i].makeRests(fillGaps=True, inPlace=True)
            parts[i] = parts[i].makeNotation()
            s.insert(0, parts[i])

        if show:
            # s.show('text')
            s.show('midi')
            s.show()
One nice thing that you can do is call pendulumMusic with different arguments, such as:
        pendulumMusic(show=True,
                      loopLength=210.0,
                      totalLoops=1,
                      maxNotesPerLoop=70,
                      totalParts=64,
                      scaleStepSize=1,
                      scaleType=scale.ChromaticScale,
                      startingPitch='C1')
Play here directly: Or if that doesn't work: Click here

which gives a denser score with parts that sound like Nancarrow. Or this version:
        pendulumMusic(show=True,
                      loopLength=210.0,
                      totalLoops=1,
                      maxNotesPerLoop=70,
                      totalParts=12,
                      scaleStepSize=5,
                      scaleType=scale.ScalaScale('C3', '13-19.scl'),
                      startingPitch='C2')
Play here directly: Or if that doesn't work: Click here. This should produce a 19-tone version of the same piece.

Happy process music composing!

(Updated 2021 September to replace Flash links from 2012 with HTML 5 and point to the correct Github repository) 

Wednesday, May 9, 2012

Music21 speedups in Chordify and with PyPy

The biggest recurring complaint about music21 is its speed when working with large scores. I wanted to point out two resources that are available in the latest SVN releases. Both will appear in the next public release, but some people might want to try them already:

  1. Some parts of chordify move from O(m^2) time to O(m), where m is the number of measures in a part – for very large scores this means a huge speedup (usually noticeable after about 100 measures). (See the sketch after this list.)
  2. Music21 works with PyPy, a sped-up rewrite of Python 2.7. The only parts that don’t work are the plotting routines, since matplotlib and numpy haven’t yet been ported to PyPy. Most operations will see their running time roughly halved; the exception is parsing a file a second or subsequent time, which changes little (the first parse, however, is quite a bit faster).
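For anyone who hasn't used it, chordify() collapses a polyphonic score into a single line of chords; a minimal sketch:

    from music21 import corpus

    b = corpus.parse('bwv66.6')
    reduction = b.chordify()  # one Part whose chords stack all simultaneously sounding notes
    reduction.show('text')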

Work on running music21 on multiple systems is proceeding, so we should be able to demonstrate that soon.

Thanks for the patience. My motto is “make it work first, make it faster later,” which I sometimes translate as, “we’ve waited 200 years for a tool that can analyze thousands of works at once; we can wait another 20 minutes.” But that doesn’t mean we’re not working all the time to make music21 run as fast as we can.

Friday, April 27, 2012

Music21 External Overview

R. Michael Winters of McGill University has written up a short summary of some of the uses of music21 on his website. Thanks to R. Michael for his work and the shoutout!

Music21 was also a part of Florence Levé et al.'s work on rhythm extraction from polyphonic music at ISMIR 2011. See the paper here. It also helped enable Patrick Mennen's thesis "Pattern Recognition and Machine Learning based on Musical Information". Thanks also to Andrew Hankinson et al. for the shoutout in their paper "Creating a large-scale searchable digital collection from printed music materials" and to David Lewis et al. in their paper "Tools for Music Scholarship and their Interactions: A Case Study".

We're also thankful for the writeup about music21 by Douglas Mason in his article for the Department of Energy (p. 5) which won a 2011 DOE Emerging Writer Essay award. Douglas will be presenting aspects of music21 for data visualization at SIGGRAPH.

A new version of music21 was recently released. For the first time, regular and noCorpus versions are available (the latter for embedding in systems with low memory space or for fully LGPL'd needs). Download it at Google Code.

Saturday, February 11, 2012

music21 Theory Analyzer

The music21 Theory Analyzer, currently under development by the music21 team at MIT, has the potential to transform the way students and teachers approach music theory education. The package provides analysis tools to identify common counterpoint errors, such as parallel fifths and improper resolutions, in a student’s assignment. It can then display the results directly to the student, or send an email to the professor containing the results coupled with an annotated version of the student’s assignment.


In nearly all introductory music theory courses, students are taught the rules of common-practice contrapuntal composition. In 1725 Johann Joseph Fux published what is often considered the first “textbook” on composition, Gradus ad Parnassum, in which he outlined the many rules of counterpoint in the style of Palestrina. Surprisingly, the approach to teaching music theory has not changed much since its publication. Students learn the rules by reading a textbook, listening to musical excerpts, and studying with their teacher. They are asked to complete written compositional assignments in which they adhere to these strict rules. The teacher then must go through the assignments, checking for each rule. The entire process is fairly laborious and tedious, which can be discouraging for both student and teacher.

The music21 Theory Analyzer uses the Python-based music21 toolkit to transform the way students and teachers approach common-practice music theory education. The package pre-grades student assignments by analyzing them for common-practice errors, checking the accuracy of textual responses, and returning results to the student’s professor.

The project began at the Boston Music HackDay in November 2011, where a small proof-of-concept music theory checker site was developed. Since then, the project has expanded in functionality and features. The package’s curriculum is specifically tailored to one of the most commonly used music theory textbooks, The Musician’s Guide to Theory and Analysis, published by W.W. Norton & Company, Inc.

The package is currently implemented as a plugin for the open-source music notation editor, MuseScore. Through the plugin, students navigate to the exercise they wish to complete, and the exercise is loaded from the music21 server.


The student reads instructions for the exercise and completes the assignment, which often involves part-writing above or below a cantus firmus in addition to several textual components, such as labeling harmonic intervals. The student may then submit the assignment to their professor via email from within the plugin.

The professor then receives an email with the results of the music21 theory analyzer. This email contains a list of the comments generated by the analysis regarding the student’s assignment.


The package’s modular design allows different assignments to be easily analyzed for different subsets of music theory rules. For example, a typical novice-level part-writing assignment might check for basic counterpoint errors, such as parallel motion by fifth or octave and improper resolutions of dissonant harmonic intervals. The assignment would only be checked for counterpoint rules learned for that assignment, disregarding more complex rules taught later in the course. The package can also analyze textual input submitted by the student, dynamically determining accuracy by comparing the responses to the notes the student actually wrote.
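As a rough sketch of what such a check looks like in code (the module path and function names follow the analysis.theoryAnalyzer package mentioned in the v.1.3 notes above, but treat them as assumptions, since the package is still under development):

    from music21 import converter
    from music21.analysis import theoryAnalyzer  # assumed module path

    ex = converter.parse('student_exercise.xml')  # hypothetical student submission
    theoryAnalyzer.identifyParallelFifths(ex)     # assumed function names
    theoryAnalyzer.identifyParallelOctaves(ex)
    print(theoryAnalyzer.getResultsString(ex))    # one text comment per flagged error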


Additionally, the results email includes an attachment with an annotated version of the student’s exercise. The score is colored according to the errors identified, allowing the professor to locate the student’s mistakes more easily.

Music21 Theory Analyzer is designed as a pre-grading and instructional tool. The package may be easily adapted for use by both students and professors, serving as a tremendous educational aid.

The package is currently under development, although we welcome comments and suggestions. Future plans include expanding the analysis routines to include a larger suite of music theory concepts. We are also investigating additional interface options beyond MuseScore. This package is being developed as a UROP project by MIT undergraduates Beth Hadley and Lars Johnson, with support from the lab’s principal investigator Michael Scott Cuthbert, lead programmer Chris Ariza, and fellow UROP student Jose Cabal-Ugaz.

Music21 + ELVIS = NEH/Digging into Data Grant

The music21 project, as part of a larger project called ELVIS studying chords from 1300-1900, has just received an NEH/Digging into Data grant on the order of $500k (of which $175k will go towards music21 projects). Read more here.