
Musikmesse 2019 Meeting Minutes

The W3C Music Notation Community Group met in the Apropos room (Hall 3.C) at Messe Frankfurt during the 2019 Musikmesse trade show, on Thursday 4 April 2019 between 2:30 pm and 4:30 pm.

CG co-chairs Michael Good, Adrian Holovaty, and Daniel Spreadbury chaired the meeting, with 29 members of the CG and interested guests attending. The presentations from the meeting are posted at:

W3C MNCG Musikmesse 2019 Presentation

Daniel Ray from MuseScore recorded the meeting and has posted the video on YouTube. The video starting times for each part of the meeting are included in the headings below.

Attendees (3:42)

After Michael gave an introduction to the Music Notation Community Group, we started the meeting by having each of the attendees introduce themselves. Here are the attendees in alphabetical order by organization:

  • Dominique Vandenneucker, Arpege / MakeMusic
  • Dorian Dziwisch, capella-software
  • Dominik Hörnel, capella-software
  • Markus Hübenthal, capella-software
  • Bernd Jungmann, capella-software
  • Christof Schardt, Columbus Soft
  • Matthias Leopold, Deutsche Zentralbücherei für Blinde
  • James Sutton, Dolphin Computing
  • Karsten Gundermann, self
  • Bob Hamblok, self
  • James Ingram, self
  • Simon Barkow-Oesterreicher, Lugert Verlag
  • Mogens Lundholm, self
  • Michael Good, MakeMusic
  • Daniel Ray, MuseScore / Ultimate Guitar
  • Gerhard Müllritter, Musicalion
  • Johannes Kepper, The Music Encoding Initiative
  • Tom Naumann, Musicnotes
  • Christina Noel, Musicnotes
  • Reinhold Hoffmann, Notation Software
  • Martin Marris, Notecraft Europe
  • Heiko Petersen, self
  • Alex Plötz, self
  • Dominik Svoboda, self
  • Martin Beinecke, SoundNotation
  • Adrian Holovaty, Soundslice
  • Frank Heckel, Steinberg
  • Daniel Spreadbury, Steinberg
  • Cyril Coutelier, Tutteo (Flat.io)

capella-software Sponsor Introduction (8:10)

capella-software sponsored this year’s meeting reception. Dominik Hörnel, capella’s CEO, introduced the company and its product line. Most of the company’s products support MusicXML.

capella is following (and sometimes contributing to) the MNX discussions with great interest. Dominik thanked Joe Berkovitz for his pioneering work on MNX and welcomed Adrian for continuing Joe’s work.

Introduction from Adrian Holovaty (12:49)

Adrian offered his own introduction, given that this was his first meeting serving as co-chair. He works on a website called Soundslice, which is focused on music education. Soundslice includes a notation rendering engine that consumes and produces MusicXML. In another life, he was the co-creator of the Python web framework Django, from which he has retired as one of its Benevolent Dictators for Life. He is looking forward to continuing Joe's work on MNX as co-chair of the MNCG.

SMuFL 1.3 and 1.4 (14:25)

Daniel provided a quick update on SMuFL 1.3 and SMuFL 1.4. There are currently 16 issues in scope for SMuFL 1.4, including improvements to font metadata, numbered notation, and chromatic solfège. More issues are welcome.

Alex asked how many glyphs are included in SMuFL so far. Daniel wasn’t sure (he believes around 3,500) but will find out. After the meeting he reported that SMuFL 1.3 has 2,791 recommended glyphs and 505 optional glyphs. The Bravura font currently has 3,523 glyphs, including 227 glyphs that are duplicated at standard Unicode code points.

MusicXML 3.2 (18:49)

Michael presented the current plans for a MusicXML 3.2 release, developed together with the group's ongoing work on MNX. There are currently about 20 open issues in the MusicXML 3.2 milestone, focused on improved support for parts, improved XML tool support, and documentation clarifications. MusicXML 3.2 does not try to address any of the new use cases for MNX, or feature requests that are better handled in a new format free of MusicXML's compatibility constraints.

Michael opened up discussion about whether the group believes it is a good idea for MusicXML 3.2 development to proceed in parallel with MNX development. Christina expressed concerns about splitting the group's energies between MusicXML and MNX, but noted that this also depends on when we expect to see MNX-Common finished. Reinhold followed up, saying that the question is not just when MNX-Common will be finished within the community group, but when it will be implemented by major vendors for both import and export, and when music will be available in MNX-Common format.

Adrian believes that a huge part of the solution to MusicXML and MNX-Common co-existence and migration is to have automated converters between MusicXML and MNX-Common. Adrian was more optimistic than Michael on the timeline for finishing MNX-Common within the community group.

Daniel Ray believes that MusicXML and MNX will co-exist for some time, with specification, conversion, and adoption being three phases of MNX development. He asked if there are ways to speed up MNX and MusicXML development: for instance, could MusicXML development be broken up into smaller fragments by releasing more frequently? Michael responded that having more contributors and more early implementations can help to speed things along. Given MusicXML's compatibility requirements, it is important to have multiple working implementations of major new features before a final MusicXML version release.

James Sutton asked about the need for parts support, since parts can already be generated from the data in the score. Michael replied that formatting, for example, is a current customer concern: manual formatting that differs from automatic formatting currently gets lost in transfer between programs.

James Sutton also asked about using differences between score and parts instead of full copies of both score and parts. Christina and Michael replied that this is more the approach that MNX adopts. Specifying differences can get very complex very quickly. MusicXML’s simpler approach can lead to a faster release, while we work out a solution for MNX-Common that is more appropriate for native applications.

James Ingram is in favor of continuing MusicXML and MNX as separate projects, but we need to keep clear the interface and boundaries between the two projects. We do not know if MNX is going to succeed, so we need to keep improving MusicXML in the meantime.

Frank elaborated on an earlier question from James Sutton about importing score and part differences into a notation program if MusicXML 3.2 treats them as separate sets of data. Michael replied that this is already implemented for the next maintenance update for Finale, as well as for a future implementation of SmartMusic. These implementations use XML processing instructions rather than the new features planned for MusicXML 3.2. We will want to test this during MusicXML 3.2 development to make sure that other vendors also find this usable for their products with their different implementations of score and part relationships.

Michael summarized the results of the discussion as showing general support for continuing MusicXML development alongside MNX development. However, we should be careful to limit and target the scope of MusicXML releases so we do not slow down MNX development. This sense of the room matched what we heard earlier in the year at the NAMM meeting.

DAISY Consortium and Braille Music (49:50)

Matthias Leopold from the Deutsche Zentralbücherei für Blinde (DZB or German Central Library for the Blind) introduced the work of the DAISY Consortium on Braille music.

Braille music is the international standard notation for blind musicians, developed by the blind Parisian organist Louis Braille. Organizations around the world provide manual braille music translation. This relies on a diminishing pool of expertise, so the goal is to provide fully automatic translation of music scores into braille music. Braille music has some specific challenges because of the optical nature of printed music notation compared to the more semantic nature of braille notation.

Some software is at least partially accessible to blind musicians; capella, for example, has supported this for a long time. But there are many problems with programs such as Sibelius, and better software tools are needed.

The project is an initiative of Arne Kyrkjebø from the Norwegian Library for the Blind. Dr Sarah Morley-Wilkins is coordinating the project from the UK, and the project relies on input from Haipeng Hu, a blind musician from China.

Because Braille music relies on semantic relationships, not just optical relationships, there are issues that need improvement with original source files, conversion tools, and making sure that all concepts can be expressed directly in MusicXML and MNX-Common.

Simon from Forte asked if there was a good way to evaluate the quality of the MusicXML files exported from his tool. Matthias replied that having such a tool is a goal of the group, but it is a difficult problem: how can you automatically evaluate what the right results are?

Daniel Ray commented that there is an initiative at MuseScore to improve support for braille music in the OpenScore project.

Martin Marris commented that the change to a Qt UI in Sibelius 7 prevented access to the UI elements. Matthias said he has not seen any blind users running Sibelius, but his understanding is that it is the most accessible of the main notation programs.

MNX (59:19)

Adrian provided an update on Joe Berkovitz’s whereabouts (he is a successful sculptor and a grandfather) and what Adrian has been doing for the past few months. He has put together an introductory document for MNX-Common to explain what it is intended to be and why we are doing it. This is posted on the MNX GitHub page.

A second recent decision is that MNX-Common and MNX-Generic will be two separate formats. The original idea was that an MNX file could be either a more generic SVG-type file or a more semantic MNX-Common file, and the application opening the file would have to decide how to handle it. We decided against this for three reasons: it would be confusing for end users, confusing for developers, and there is no huge benefit in combining them. So we have decided to split the specification into two, but have not yet done the work.

Bob asked whether we will call the two formats different things. Adrian answered that MNX was always intended to be a codename, but we need to settle on some names soon. Michael said that we would like the names to indicate that they are a family of specifications, since we would like to provide semantic support for other kinds of music notation in the future.

Written and Sounding Pitch Introduction (1:05:25)

We have a fundamental decision to make for MNX-Common: should pitches be represented as written pitch or as sounding pitch? The two differ for transposing instruments, whether in individual parts or in a transposing score.
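
To make the distinction concrete: a B-flat clarinet sounds a major second lower than written, so a written D4 sounds as C4. Here is a minimal sketch of the two views of that note, using hypothetical field names rather than actual MNX syntax:

    # Two hypothetical encodings of the same B-flat clarinet note
    # (illustrative field names only, not actual MNX syntax).
    written_encoding = {"step": "D", "octave": 4}   # what the player reads
    sounding_encoding = {"step": "C", "octave": 4}  # what the listener hears

    # A transposition record lets software derive one pitch from the other.
    # MusicXML expresses this idea with its <transpose> element; here it is
    # shown as a plain dictionary for illustration.
    transposition = {"diatonic": -1, "chromatic": -2}  # down a major second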

Adrian outlined the options of storing sounding pitch, storing written pitch, and variations that allow duplicate or alternate pitches. This question of how to represent pitch gets at some big questions about MNX-Common:

  • Are we encoding musical ideas or musical documents?
  • Do we prioritize the performer’s or listener’s perspective?
  • What is the ground truth: the sound or the visual display?
  • How important is XML readability vs. a reference implementation?
  • Is redundancy between score and parts inevitable, or is every part-specific notation derivable from a score given the right hints?

Written and Sounding Pitch Discussion (1:15:02)

James Sutton said that philosophically, data duplication is terrible as it introduces errors due to mismatches, so we should store minimal data. Because music is sound, we should store the sounding pitch, and the transpositions can be handled algorithmically.

Christof advocated for the exact opposite: storing written pitch. His team did not enjoy implementing MusicXML; it suffered from divergent implementations from different vendors and was too flexible. Developers need a comfortable, understandable format. He likes being able to see the direct correspondence between the page of music he is looking at and the XML markup, without being forced to transpose mentally. Any kind of transposition will often cause changes to stems, slurs, and much more. Make it as joyful as possible for developers to implement this format; the decision should be guided by practical concerns more than philosophical questions.

Adrian asked whether having a reference implementation would help. James Ingram said that a reference implementation would not be a substitute for readable XML markup. Christof said that a reference implementation would not fit into his development workflow.

Johannes disputed that philosophically music is sound. If so, we have never heard music from Bach, Beethoven, or Mozart since all we have from them is paper. Printed or written music is also music. Technically, it’s hard to understand transposing instruments when scanning a printed score using OMR. It would be easier for scanning software to adjust the transpositions than to change all the pitches. He sees no reason to encode the sounding pitch as primary, though both written and sounding pitch could be present – that is a separate discussion. If only one pitch is to be encoded, it should be the written pitch. This is also the way both MEI and MusicXML do it.

Christina said that from a philosophical standpoint, a piece of sheet music is the composer's way of communicating with the musician; sound is the end result of the musical idea. However, as someone who works for a publishing firm, she noted that publishers work very hard on the details of the displayed music notation. These details need to be correctly specified in the files being imported and exported. Even if we are going to turn it into something else, we need to be able to reproduce the original written document. She feels we should encode both the written pitch and the sounding pitch so that both are available.

Dominik Svoboda thinks both the sound and the visual display are ground truth. If parts are messy, you annoy all of the players in your orchestra; the visual display is just as important as the sound. Perhaps AI or neural network technology could bridge the gaps between written and sounding pitch?

Daniel Ray questioned whether there is a consensus on what MNX is for: is it for the composer, the performer, or the developer? Everything should be in service of the end user; anything that simplifies things for the developer should only be in service of simplifying things for the end user. What excites him is a common native format, because it would maximize everybody's investment in the format compared to an exchange format. On the subject of what should be in the file format: transposition should not be in there. Instead, the instrument should be defined, so that the software can infer transposition information from its knowledge of the instrument. We may also be too focused on a fixed presentation format, e.g. a publisher's printed page; instead we should be designing and thinking about a more fluid representation of music notation. What are the unique advantages of digital technology versus merely making more portable paper?

Christina said it is important that layout information can be encoded; otherwise publishers won't buy in. But these visual details should be kept as separate as possible from the representation of the semantic information. Musicnotes supports the idea of responsive layouts, but it is difficult to do that and have everything still look nice. If you have to specify every single written and sounding pitch for every note, it becomes very complicated very fast. Written spellings are very important for layout purposes.

Daniel Ray asked what delegations should exist. Currently the publisher and the composer decide everything, and the consumer simply consumes what they are given. In the digital world we can delegate some of those decisions to the user, e.g. reading a B-flat clarinet part in alto clef if they really want to. Adrian said that regardless of what solution we come up with, that will always be possible: changing instruments, transpositions, etc. on the fly will remain possible regardless of the decisions made in the choice of pitch encoding.

Bernd strongly supported Christof's initial point to encode the written pitch. If we were only going to encode sounding pitch, we wouldn't need to encode anything beyond what MIDI can define. It seems impossible that any automated conversion from MusicXML to MNX will be 100% accurate; longer term, the need will be for dedicated MNX import and export to avoid a lossy intermediate conversion to or from MusicXML. There are many cases where enharmonic decisions are editorial choices, and those choices need to be encoded. The capella format uses written pitch. The people dealing with scores are the players, and what they are talking about, the written pitch, is the important thing.

Reinhold agrees with both Christof and Bernd. We should use written pitch, capturing the editorial decisions while still being able to play the music correctly.

Mogens said that for him music is sound, and resolving the transposed pitch is possible algorithmically if we know the source and destination keys. He likes the idea of encoding both pitches because it makes everything possible, and the redundancy is not so bad. For instance, he likes the idea of being able to specify a different playback pitch without affecting the notation, to capture the specifics of a particular mode or idiom, e.g. microtonal inflections in folk music.

James Ingram thinks that doubly encoding written and sounding pitch is the answer, especially for being able to handle microtones. For sounding pitch we could use an absolute pitch, combining a MIDI note number with cents: e.g. 60.5 would be a quarter-tone (50 cents) higher than middle C.
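
As a rough sketch of that scheme, assuming 12-tone equal temperament with A4 (MIDI 69) tuned to 440 Hz:

    def midi_to_hz(midi: float) -> float:
        """Convert a fractional MIDI note number to a frequency in Hz.

        Assumes 12-tone equal temperament with A4 (MIDI 69) = 440 Hz.
        The fractional part is in semitones, so 0.5 = 50 cents.
        """
        return 440.0 * 2.0 ** ((midi - 69.0) / 12.0)

    print(midi_to_hz(60.0))  # middle C: ~261.63 Hz
    print(midi_to_hz(60.5))  # a quarter-tone above middle C: ~269.29 Hz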

Based on his experience with making braille translations, Matthias believes that software that needs to do different things needs different formats. Building one format for programs that could do everything gets very complicated. We might want to divide into four or five different formats for different types of applications, and provide transformers between those formats.

Cyril would prefer to have the written pitch encoded. Whatever we choose, we need to be able to transpose accurately. This requires both the chromatic and diatonic information for transpositions. MusicXML does not require both, which causes problems. We should try to fix this for MNX-Common. We also need layout information, including global layout and part layout.
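
As a sketch of why both values matter, consider a hypothetical transposition helper (not part of any specification) working on spelled pitches in MusicXML-style step/alter/octave form. The chromatic interval alone fixes the sounding distance but not the spelling; the diatonic interval resolves the enharmonic choice:

    STEPS = ["C", "D", "E", "F", "G", "A", "B"]
    # Semitone position of each natural step within an octave.
    STEP_SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

    def transpose(step, alter, octave, diatonic, chromatic):
        """Transpose a spelled pitch by a diatonic step count plus a
        chromatic semitone count, preserving the intended spelling."""
        index = STEPS.index(step) + diatonic
        new_step = STEPS[index % 7]
        new_octave = octave + index // 7  # floor division handles downward moves
        old_semitones = 12 * octave + STEP_SEMITONES[step] + alter
        new_natural = 12 * new_octave + STEP_SEMITONES[new_step]
        new_alter = old_semitones + chromatic - new_natural
        return new_step, new_alter, new_octave

    # Written D4 in a B-flat clarinet part, down a major second to sounding pitch:
    print(transpose("D", 0, 4, diatonic=-1, chromatic=-2))  # ('C', 0, 4)
    # The same chromatic distance with the wrong diatonic value produces a
    # different, unwanted spelling:
    print(transpose("D", 0, 4, diatonic=0, chromatic=-2))   # ('D', -2, 4), D double-flat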

Bob said we are encoding notation, and the performer is using notation to make music for the listener, so we need to focus on the performer rather than the listener. From this perspective it is clear that we must encode the written pitch.

Frank said that the harp is another interesting example. Even though it is not a transposing instrument, it is often written in an enharmonically different key because of the technical way the instrument is played. For instance, B major will often be notated C flat major for the harp. This type of special case should not be forgotten in the encoding, however it is done.

To avoid consistency issues, Dominique does not want the pitch data to be duplicated. We need to focus on encoding notation because this format is for notation apps, not MIDI sequencers. Encoding written pitch is better for notation apps because it is one line of code to get to sounding pitch, but the reverse is not true. If we want to use this format as a native format, we need the data to match what is displayed on the screen as much as possible. If we have to transpose the notation on the fly just to display it, that will be difficult to do and will slow down native apps, while transposing for playback is very easy to do.
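
In MIDI terms, that asymmetry looks roughly like the following sketch (illustrative values assumed; the reverse direction is not a pure function because it requires an editorial enharmonic decision):

    written_midi = 62             # written D4 in a B-flat clarinet part
    chromatic_transposition = -2  # the instrument sounds a major second lower

    # Written-to-sounding pitch is a single addition:
    sounding_midi = written_midi + chromatic_transposition  # 60 = middle C

    # The reverse is not well-defined without extra information: sounding
    # MIDI 60 could be spelled C4, B#3, or D-double-flat-4 in the part.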

Martin's only objection to using sounding pitch is that the enharmonic choice of the written pitch is an editorial decision. Editors often make enharmonic changes in parts, and it is important to be able to encode them.

Simon said that there could be programs that don’t care about sound at all. Having the written pitch as the primary data is easier for those programs to deal with.

Dominik also spoke in favor of using written pitch. We all agree that we have different perspectives on the music and that there are semantic, performance, and presentation descriptions that should all be present in the specification. The semantics are about music notation: not about how music sounds, but how it is written. Therefore we should use written pitch.

As a non-developer, Tom would also opt for the written pitch. We are trying to encode the recipe for the muffins, not the muffins themselves.

Christof said that when Joe started this work he defined a set of roles but left out developers. We should try to make it fun to develop using MNX; end users will benefit from developers joyfully bringing them these features.

Michael closed the meeting. It was great to hear from many more voices that had not been present in our online discussions. For next steps, the co-chairs will now come up with a final proposal to present to the group.
