The Music Notation Community Group develops and maintains format and language specifications for notated music used by web, desktop, and mobile applications. The group aims to serve a broad range of users engaging in music-related activities involving notation, and will document these use cases.
The Community Group documents, maintains and updates the MusicXML and SMuFL (Standard Music Font Layout) specifications. The goals are to evolve the specifications to handle a broader set of use cases and technologies, including use of music notation on the web, while maximizing the existing investment in implementations of the existing MusicXML and SMuFL specifications.
The group is developing a new specification to embody this broader set of use cases and technologies, under the working title of MNX. The group is proposing the development of an additional new specification to provide a standard, machine-readable source of musical instrument data.
The W3C Music Notation Community Group met in the TEC Tracks Meetup space in the Hilton Anaheim (Level 3, Room 7) during the 2018 NAMM trade show, on Friday, January 26, 2018 between 10:30 am and 12:00 noon.
The meeting was chaired by CG co-chairs Joe Berkovitz, Michael Good, and Daniel Spreadbury, and was attended by 20 members of the CG and interested guests. The handouts from the meeting can be found at
Philip Rothman from the Scoring Notes blog recorded the meeting and has posted the video on YouTube. The video starting times for each part of the meeting are included in the headings below.
Introduction to the W3C MNCG (Starts at 0:41)
Michael Good introduced the W3C Music Notation Community Group. This meeting was part of NAMM’s TEC Tracks Meetup sessions, so several people attending were not members of the group.
Michael discussed the history of the group, its progress in 2017 in releasing MusicXML 3.1 as a Community Group Final Report, and its plans for 2018. The 2018 plans include work on the next-generation MNX project, as well as releasing a SMuFL update as a Community Group Final Report.
Group Introductions (Starts at 5:52)
We went around the room and each of the 20 attendees introduced themselves and their interest in the Music Notation Community Group. The attendees in order of their introduction on the video are:
Daniel Spreadbury, Steinberg (co-chair)
Jeff Kellem, Slanted Hall
Kevin Weed, self
Tom Nauman, Musicnotes
Jon Higgins, Musicnotes
Adrian Holovaty, Soundslice
Derek Lee, Groove Freedom
Philip Rothman, NYC Music Services
Jeremy Sawruk, J.W. Pepper
Bruce Nelson, Alfred
Mark Adler, MakeMusic
Steve Morell, NiceChart
Jon Brantingham, Art of Composing Academy
Evan Balster, Interactopia
Fabrizio Ferrari, Virtual Sheet Music
Simon Barkow-Oesterreicher, Forte Notation / Uberchord
Chris Koszuta, Hal Leonard
Doug LeBow, self
Joe Berkovitz, Risible (co-chair)
Michael Good, MakeMusic (co-chair)
These attendees covered a wide range of the music notation community. In addition to software developers, there were composers, performers, music preparers and engravers, publishers, and publication and production directors.
MNX (Starts at 21:00)
Joe Berkovitz led a discussion of the current status and future directions for the next-generation MNX project. Given the variety of attendees, Joe tried to balance the discussion between the perspectives of both developers and users of music notation standards.
Currently there are three parts of MNX:
CWMNX is the most familiar part for conventional Western music notation. We can think of this as the next generation of MusicXML, and hope that it will take the place of what would have been MusicXML 4.0.
GMNX is a general music notation format. It emerged from the group’s discussions of how we could encode arbitrary music, not necessarily part of the Western music literature. There is a role for a literal format that encodes a linkage between arbitrary vector graphics and sound. Many applications for Western music notation could use it as well.
The MNX Container covers the need to package an ensemble of files together in a way that reflects the needs of a compound document. This part is in the most primitive state now and needs to be built out further.
Why Start Again and Work on MNX vs MusicXML? (Starts at 29:50)
MusicXML predated the Internet delivery of music when print was still king. The MusicXML format includes several print-based assumptions such as page breaks and credits (page-attached text) that cause problems for more flexible, mobile, and web-based ways of delivering music.
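As a concrete illustration of those assumptions, MusicXML lets a file fix page layout directly in the encoding. The coordinates and text below are invented for the example:

```xml
<!-- A page break fixed at a particular measure -->
<print new-page="yes"/>

<!-- Page-attached text: a credit placed on page 1 of the printed score -->
<credit page="1">
  <credit-type>title</credit-type>
  <credit-words default-x="595" default-y="1600"
                font-size="24" justify="center">Symphony No. 1</credit-words>
</credit>
```

Both elements bake print-page decisions into the document, which is exactly what gets in the way of reflowing the same music on a phone screen or in a browser.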
The success of MusicXML and the web has also created more music notation use cases that people want to address. A key one is for the model of the standard to be closer to the model that you would use for building an interactive notation program. Michael elaborated on why this was an explicit non-goal for MusicXML back in 2000, when MusicXML was trying to create a standard exchange format in the wake of unsuccessful prior efforts such as NIFF and SMDL.
Times have changed since then. We now have a product developer community that has seen the benefits of music notation exchange standards. We also have many more links to the music publisher community than what MusicXML had in 2000.
Where Are We Now? (Starts at 36:40)
We do not have very much yet for MNX. There is a draft specification, but it only covers perhaps 1/4 to 1/3 of what MusicXML does. There are no reference applications, there are not many examples, and there are lots of open issues.
The hope is to have a complete draft of the specification by the end of 2018, though that may be optimistic. At that point the vendor community will not be rushing to build MNX support, but we do expect to see experimental implementations. This is fine – if you don’t have implementations, you don’t learn.
Container Format (Starts at 41:17)
The MNX container format tries to do a better job of representing document hierarchies than MusicXML’s opus document type, which nobody appears to be using. Another goal is to provide a more solid approach to metadata compared to what we have today in MusicXML. Different score types can be included in the container, including CWMNX, GMNX, and other score types such as neumes that might be developed in the future.
Michael asked about using a zip file as an alternative or supplement to the XML format container. Joe replied that zip is just one of many ways we could package an archive, and Michael will file an issue on this.
Michael raised a second question about including digital rights management in the container format. Jeremy Sawruk replied that we should look at the HTML5 video debacle and not specify DRM ourselves. We should not preclude vendors adding DRM, but that should be done at the vendor level.
Doug LeBow raised an issue about being able to identify a creation and usage history for music within the metadata. In his experience with Disney, music gets repurposed and reused all the time, and people need to know where different parts came from. Joe suggested that Doug enter issues so that we can capture his knowledge of these use cases. Joe also mentioned that MNX intends for metadata to be present at any level in the document, not just at the score or collection level.
CWMNX Highlights (Starts at 50:35)
Sequences and directions are at the core of the new organization of musical material in CWMNX. In MusicXML you can hop back and forth between voices and times at will. CWMNX takes MusicXML’s cursor approach to ordering music and makes it much more constrained.
In CWMNX, music from a single voice is arranged into a sequence of events, including rests, notes, and chords. Directions are elements that are not events; unlike events, they can have their own offsets into the container they belong to. Dividing things into sequences and directions can make it easier to both encode and decode music notation. It provides a more natural mapping to data structures such as voices that are common among music notation applications.
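A sketch of the idea, using element names in the spirit of the early CWMNX draft (the exact syntax is still in flux, and the dynamics markup here is hypothetical):

```xml
<measure>
  <!-- One voice: an ordered sequence of timed events (notes, rests, chords) -->
  <sequence>
    <event value="/4"><note pitch="C4"/></event>
    <event value="/4"><note pitch="D4"/></event>
    <event value="/2"><rest/></event>
  </sequence>
  <!-- A direction is not an event; it carries its own offset into the measure -->
  <directions>
    <dynamics location="0">p</dynamics>
  </directions>
</measure>
```

The constraint is the point: events can only be appended in time order within their sequence, rather than hopping between voices and times as MusicXML allows.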
MNX tries to make a clear distinction between semantic markup, such as “a C4 quarter note,” and presentation information. Presentation information could be left out and the application could still create readable music, though not necessarily looking as good as you might like. Examples of presentation information include fonts, changes from standard positioning, size, and color. Presentation information in CWMNX is referred to as styles, a clear reference to HTML styles and CSS.
A third category of data in CWMNX is interpretation. This is more general than MusicXML’s sound element. Interpretation can specify that irrespective of what the semantics indicate, here is how some music should be played, using a MIDI-like description.
Michael added that MusicXML handles some of MNX interpretation data not only with the sound element, but with pairs of elements that indicate what is played vs how music looks. One example is using the tie element for playback and the tied element for appearance. These paired elements are a common source of confusion among MusicXML developers. MNX can offer a more systematic approach to addressing the same underlying problem.
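For example, a note that begins a tie in MusicXML carries both paired elements, one for sound and one for appearance:

```xml
<note>
  <pitch><step>C</step><octave>4</octave></pitch>
  <duration>4</duration>
  <tie type="start"/>        <!-- playback: the sound continues into the next note -->
  <type>quarter</type>
  <notations>
    <tied type="start"/>     <!-- appearance: draw the tie curve -->
  </notations>
</note>
```

Developers who emit only one of the pair get music that either looks right but plays wrong, or plays right but looks wrong — the confusion Michael described.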
CWMNX includes the concept of profiles. A “standard” profile would cover the great majority, but not everything, of what is in conventional Western music notation. Multi-metric music is one of the biggest examples of something that would be in CWMNX but might not be in the standard profile.
We want to support the concept of house styles in CWMNX. This includes font, distance, and other layout information that applies across an entire score. We want to easily substitute one style for another depending on context, enabling responsive styling for music notation.
CWMNX Discussion (Starts at 1:03:00)
Joe asked the group how far CWMNX should go in describing a normative style of positioning for conventional Western music notation. Should it try to do this, and if so, how far should this go? What would the benefits and drawbacks be?
Daniel Spreadbury said that if we go in this direction, then we have to specify norms, and specify them quite thoroughly. That will be difficult to do.
Kevin Weed asked what happens if we don’t have these standards in MNX. What’s the alternative? The alternative is what happens now, where each application decides for itself how to interpret the formatting.
Doug LeBow referred to orchestrator use cases where people split up between Finale and Sibelius to write a single cue under high time pressure, with different people writing for different instruments. Without standards for appearance between applications you would lose control over quality and stylistic consistency in the final music product.
Chris Koszuta said that Hal Leonard has been trying to get their digital files to the pristineness of the printed score. They have worked very hard to get to that point with MusicXML over the past several years, but are not quite there yet. To get the same control of the nuances in digital as you have in print, you need some agreed-upon standards. If not, when things fail and you have to go back to do additional work at the publisher, that’s tens of thousands of files with all the time and money associated with that.
Hal Leonard has been converting into MusicXML over the past four years but still runs into customer problems because a digital service doesn’t do something quite right yet. Customers really do notice these details. Chris hopes we can get to some level of agreement and control where it’s fluid and things are fun, instead of being a lot of extra work to create the next step of interactive music notation. If we don’t lock things down now, we will be fiddling with these details for years and years ahead.
Tom Nauman said that a lot of Musicnotes’ use of MusicXML is inbound. Everything they import has to be tweaked to be satisfactory to the customer. Chris followed up that when Hal Leonard does content deals with partners, they don’t want to provide messy files where the partner has to do extra work.
Daniel said that if we do encode positioning information, we have to lock it down and agree. It will take a long time, but if we don’t do it and things aren’t absolutely black and white, applications won’t be predictable. In other aspects of MNX we are trying to have just one way to encode things, as with sequences. Positioning would be the same way.
Steve Morell raised the point that most developers focus on their MusicXML import, but MusicXML export has less attention paid to it. Is there any way to incentivize export as well as import quality? Doug agreed – there is so much back-and-forth exchange in today’s workflows for musicians that both directions need to work equally well. Joe replied that when we have widely adopted, free, probably open source MNX viewers in browsers, that would provide an incentive to improve export.
GMNX (Starts at 1:16:42)
GMNX is a “graphics plus media” type of format. The notation is an SVG file. Musical sound or performance is either an audio file or a MIDI-like list of timed events. The time relationships can then be linked between the graphics and sound, and applications don’t really need to know what the notation is. Many practice and performance applications don’t need more than this.
Joe has made GMNX demos available online for examples from Fauré, Hebrew cantillation, and aleatoric music from Lutosławski. GMNX might even be applied sooner than CWMNX since it is much simpler.
Adrian Holovaty asked how we could get performance synchronization points from GMNX into CWMNX? The synchronization feature in GMNX would be useful for applications that do know the semantics of music notation. Joe asked Adrian to file an issue so we can address this.
Evan Balster asked a question about longer-term intent and if MNX was something that could be embedded within HTML browser documents in the future, like math and SVG. Joe replied that there will be a namespace immediately, and it could be viewable in a browser once there is a decent JavaScript library that supports it.
Conclusion (Starts at 1:22:30)
At this point we concluded the meeting. We had productive discussions and look forward to these conversations continuing. We hope to figure out a way to have these conversations more often than our once or twice a year meetings at NAMM and Musikmesse.
That evening we had a dinner at Thai Nakorn in Garden Grove. This photo of the dinner attendees is courtesy of Tom Nauman.
Attendees from bottom left, going clockwise: Matthew Logan, Michael Johnson, Philip Rothman, Adrian Holovaty, Jon Higgins, Joe Berkovitz, Doug LeBow, Tyler LeBow, Daniel Spreadbury, Vili Robert Ollila, John Barron, Michael Good, Jeff Kellem, Evan Balster, Jeremy Sawruk, Tom Nauman, Simon Barkow-Oesterreicher, Steve Morell, Kevin Weed, and Laura Weed.
We had a great meeting in Anaheim yesterday. Here’s a video of the presentation and discussion; many thanks to Philip Rothman of scoringnotes.com for recording our session!
The co-chairs are hosting a meeting at The NAMM Show in Anaheim, CA for CG members and for any interested attendees of the show. The meeting will take place this Friday (January 26, 2018) from 10:30 am to 11:55 am, in the Hilton Anaheim in Room 7 on Mezzanine Level 3. The discussion will focus on MNX, a next-generation markup language for encoding notated music (see the preceding post).
This is an important milestone for the group, and we’d like to thank everyone who has contributed to the many email threads and issues that helped move MNX forward so far. We’re excited at the prospect of moving the group’s work to a new level, one which can take a fresh look at some of the problems in music notation encoding.
Some of the significant ground covered in this draft includes:
A proposed semantic encoding for conventional Western music notation named “CWMNX”. This encoding takes MusicXML as a point of departure, but includes many improvements to syntax, style, content and structure. (See spec and examples.)
A new type of literal encoding called “GMNX”, which links SVG graphics to audio/performance data via the time dimension. This encoding is particularly suited to drive music practice and performance applications. It also tries to remove bias towards notational idioms by avoiding the encoding of semantics: in GMNX, notations are just shapes and regions, and all audible content is encoded separately. A common timeline serves to connect notations and their audible counterparts. (See spec and examples.)
The group will be discussing MNX as well as other topics at the forthcoming NAMM Show in Anaheim, CA on Friday January 26, 2018; see this link for details.
We also expect to hold a meeting later in the year at Musikmesse in Frankfurt. Details forthcoming.
Today is a major milestone for the W3C Music Notation Community Group. We have published our first W3C Community Group Final Report for MusicXML Version 3.1.
MusicXML 3.1 is the first MusicXML release since August 2011, and the first release by the W3C Music Notation Community Group. As you can see from our GitHub issue list for the V3.1 milestone, we addressed 80 issues during the MusicXML 3.1 development process. When you remove issues that addressed bugs introduced during the beta test and issues involving the move to the W3C, MusicXML 3.1 resolved 65 substantive issues. They fall into 4 major categories:
37 issues involved better support for the Standard Music Font Layout (SMuFL). These issues fell into 3 more categories:
Adding new elements and enumeration values to represent SMuFL symbols
Adding attributes and values to specify a particular SMuFL glyph in MusicXML extension elements
Adding the ability to combine text with arbitrary music symbols identified by a SMuFL glyph name
16 issues involved documentation improvements.
3 issues involved packaging:
The change to the .musicxml extension for uncompressed files
The new mimetype file in compressed .mxl files
New Uniform Type Identifiers for MusicXML files
9 issues involved other fixes for appearance and semantics:
Adding height and width to images
Adding grace cue notes
Adding measure number display text
Adding id attributes to uniquely identify many MusicXML elements
Adding a combination of slash and regular notation within a single staff
Adding highest / lowest notes without displaying leger lines
Adding parentheses to accidental marks displayed above or below notes
Adding more polygon options for enclosures
Adding more playback information for lyrics
Many people have contributed to the MusicXML 3.1 release in addition to my work as editor. Daniel Spreadbury’s invention and advocacy of the SMuFL font standard provided the main impetus for this release. MusicXML needed to improve its SMuFL support in order to maintain its current level of interoperability. SMuFL also provided the technology needed to solve formerly difficult problems such as the mixture of text with arbitrary musical symbols.
Joe Berkovitz led the creation of the W3C Music Notation Community Group and moving responsibility for MusicXML and SMuFL to the new group. Joe’s work on the next-generation MNX format also freed MusicXML 3.1 to focus on shorter-term, tactical changes to improve interoperability between notation applications. Ivan Herman from the W3C helped get the community group up and running.
Many other W3C Music Notation Community Group members contributed to MusicXML 3.1. Jeremy Sawruk and Matthew Briggs checked in changes to the GitHub repository. James Sutton, Mogens Lundholm, Bo-Ching Jhan, Evan Brooks, and Matthew Briggs wrote up GitHub issues that were addressed in MusicXML 3.1. Martin Marris and Peter Trubinov suggested the idea behind one of MusicXML 3.1’s key features for adding SMuFL support. Mogens Lundholm suggested using the .musicxml extension for uncompressed MusicXML files. L. Peter Deutsch’s review improved the content of the Community Group Report. Hans Vereyken, Glenn Linderman, Richard Lanyon, Adrian Holovaty, Reinhold Hoffmann, Nicolas Froment, Jim DeLaHunt, Michael Cuthbert, and Eric Carraway contributed to the GitHub issue discussions. Many more members contributed to the discussions on the group mailing list.
We look forward to seeing more applications adopt the features of the MusicXML 3.1 format to improve the exchange of digital sheet music files between applications. We plan to release a SMuFL Community Group Report early next year, and to continue work on the next-generation MNX project. Thanks to everyone in the W3C Music Notation Community Group for their contributions to a productive 2017.
Since our group’s priorities have become clearer over the past couple of months, the co-chairs have been working on revisions to the group’s charter in order to reflect the group’s current set of priorities.
The main proposed changes to the charter are as follows:
Update the timeline for the releases of MusicXML 3.1 and SMuFL 1.2
Explicitly identify the MNX container format, CWMNX and GMNX representation formats as the main deliverables of the group’s work
Propose that the initial versions of these three formats should be published by the end of 2018
Define the development of test suites to support MNX, CWMNX and GMNX, and the development of software to aid in the conversion of MusicXML documents to CWMNX documents
The procedure for changing a community group’s charter is that the revisions are proposed to the members of the group, who then have a period of 30 days to vote whether to accept or reject the revisions. Before we embark on the formal voting process, we would like to invite comments and feedback from members of the group.
The co-chairs propose that we have a period of two weeks for gathering feedback on the charter, and we ask that all feedback should be posted to the public-music-notation-contrib@w3.org mailing list before Monday 3 July. On or after Monday 3 July, the co-chairs will announce the formal start of the vote for the approval of the new charter.
Thank you in advance for taking the time to review the proposed changes to the charter. The co-chairs look forward to hearing your feedback.
We are happy to announce that beta versions of the Dolet 7 for Finale plug-in are now available from MakeMusic. You can download both the Mac installer and the Windows installer.
Dolet 7 for Finale adds support for reading and writing MusicXML 3.1 files. MusicXML 3.1 is the latest version of the MusicXML format and the first one to be developed within the W3C Music Notation Community Group.
By making Dolet 7 for Finale available for beta testing, we hope to assist other music software developers who want to add MusicXML 3.1 support, since they can now test exchanging MusicXML 3.1 files with Finale.
The MusicXML 3.1 features that Dolet 7 for Finale supports include:
Uncompressed MusicXML 3.1 files are now saved with a .musicxml file extension by default (issue 191).
Finale expressions with a mix of Maestro musical symbols and text are now exported and imported (issue 163).
Finale expressions with a mix of text and note symbols from other Finale built-in fonts are now exported (issue 163).
Unexpected symbols in Finale articulations can now be exported in a way that can be exchanged with other applications (issue 107).
Unexpected symbols in MusicXML files can now be imported into Finale articulations (issue 107).
Parenthesized accidental marks are now supported (issue 218).
Circled noteheads for percussion notation are now supported (issue 91).
The two styles of percussion clef are now distinguished during export and import (issue 64).
The n dynamic character is now supported (issue 52).
The sfzp and pf dynamics now use the corresponding new MusicXML elements (issue 52).
Highest / lowest notes without leger lines now use the standard MusicXML 3.1 feature for greater interoperability (issue 184).
Arrowhead characters in the Engraver Text fonts are now supported (issue 183).
Enclosures with 5 to 10 sides are now supported (issue 86).
The Dolet 7 for Finale plug-in is a 32-bit plug-in that works with Finale 2009 through Finale 2014.5 on Mac and Windows.
Please share your experiences with MusicXML 3.1 and the Dolet plug-in at the W3C Music Notation Community Group. If you find a problem with MusicXML 3.1 in your testing, please post an issue at the MusicXML GitHub repository.
Thank you for your help in making the MusicXML format an ever more powerful way to exchange scores between applications that use music notation.
The first beta version of the MusicXML 3.1 format is now available for developer testing. To get a copy, go to the MusicXML GitHub repository at https://github.com/w3c/musicxml and click the “Clone or download” button. You can then either download a zip file or clone the Git repository on your local machine.
MusicXML 3.1’s main new feature is greatly enhanced SMuFL support. Many more SMuFL symbols are now represented directly by MusicXML elements, such as the <flip/> element for the brass techniques character located at U+E5E1. Other symbols can be specified by their SMuFL canonical glyph name in the MusicXML extension elements, using the “smufl” attribute. For example, there is no separate MusicXML 3.1 element for a brass valve trill, but it can be specified using the other-technical element:
<other-technical smufl="brassValveTrill"/>
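In a complete file, that extension element sits inside a note’s notations like any other technical indication. The surrounding note content below is just for context:

```xml
<note>
  <pitch><step>C</step><octave>4</octave></pitch>
  <duration>1</duration>
  <type>quarter</type>
  <notations>
    <technical>
      <!-- No dedicated element exists, so the SMuFL glyph name fills the gap -->
      <other-technical smufl="brassValveTrill"/>
    </technical>
  </notations>
</note>
```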
In addition to enhanced SMuFL support, MusicXML 3.1 has several other new features to improve interchange between music notation applications. These include:
A new “id” attribute is available on over 40 elements to allow specification of unique identifiers.
Musical symbols may be arbitrarily interspersed with text using the new <symbol> and <credit-symbol> elements.
Grace cue notes are now supported.
The measure element has a new “text” attribute to allow specification of non-unique measure numbers within a part. One case where this is helpful is when multiple movements are combined in a single MusicXML file, with each new movement starting at measure 1.
Ties are now allowed within metronome marks.
Images can now specify their height and width for scaling.
A new “time-only” attribute for the <lyric> element allows precise specification of which lyric is associated with which time through a repeated section of music.
A new “let-ring” type value for the <tied> element makes it easier to represent “let ring” or “laissez-vibrer” ties.
Additional polygonal enclosure shapes are allowed.
There is direct support for highest / lowest notes that are displayed without ledger lines.
MusicXML now has a recommended Uniform Type Identifier for macOS and iOS applications.
Compressed MusicXML files now have a recommended mimetype file to make it easier for applications to detect what type of zip file they are working with. This is similar to what is used in EPUB and other zip-based file formats.
The recommended file extension for uncompressed MusicXML 3.1 files is now “.musicxml” rather than “.xml”.
Many parts of the documentation have been clarified.
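Putting the packaging changes together: a compressed .mxl file is a zip archive whose first entry is the new mimetype file, alongside the standard META-INF/container.xml that points at the root score. The file names below are illustrative:

```xml
<!-- Archive layout (illustrative):
       mimetype                 contains the single line: application/vnd.recordare.musicxml
       META-INF/container.xml
       score.musicxml
     META-INF/container.xml identifies the root file: -->
<container>
  <rootfiles>
    <rootfile full-path="score.musicxml"
              media-type="application/vnd.recordare.musicxml+xml"/>
  </rootfiles>
</container>
```

As in EPUB, keeping the mimetype entry first and uncompressed lets applications sniff the archive type without unpacking the whole file.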
We look forward to hearing about your experience in implementing these new MusicXML 3.1 features. If you run into bugs or other issues with MusicXML 3.1, please create a new issue in the MusicXML GitHub repository.
The W3C Music Notation Community Group met in the Logos/Genius rooms (Hall 9.1) at Messe Frankfurt during the 2017 Musikmesse trade show, on Friday 7 April 2017 between 2.30pm and 4.30pm.
The meeting was chaired by CG co-chairs Joe Berkovitz, Michael Good, and Daniel Spreadbury, and was attended by about 40 members of the CG. A complete list of the attendees can be found at the end of this report, and the slides presented can be found here.
Peter Jonas from MuseScore recorded the meeting and has posted both video and audio recordings. Each recording is in two parts. The first part is for MusicXML and SMuFL, the second is for MNX. The videos are here:
Daniel Spreadbury (Steinberg, CG co-chair) presented a SMuFL 1.2 update. 33 of 42 issues are already fixed. The largest remaining issues involve the implementation of Kahnotation and Spatialization Symbolic Music Notation (SSMN).
MusicXML 3.1
Michael Good (MakeMusic, CG co-chair) presented a MusicXML 3.1 update. 60 of 68 issues are already addressed. This is the last chance for a while to include anything not currently in scope, because after this most efforts will be focused on MNX. SMuFL 1.2 and MusicXML 3.1 should be released at the same time.
SMuFL coverage is one of the main goals in MusicXML 3.1. SMuFL glyphs can now be represented in MusicXML using new elements, new entries in existing enumerations, or the new smufl attribute, which specifies a glyph for an existing element.
MusicXML 3.1 is adding a unique ID attribute to around 40 elements (e.g. measures, notes, directions, etc.) to allow unique identification of specific elements.
Musical symbols embedded in text can now be represented more cleanly using the new symbol and credit-symbol elements. These use SMuFL canonical glyph names as their values. You can’t use them by themselves: they must be used within a sequence of words or credit-words elements.
Grace cue notes can now be specified in MusicXML 3.1. This was a limitation of MuseData previously carried over to MusicXML.
You can now have ties in metronome marks. You can specify height and width values for embedded images. The new time-only attribute for lyrics allows you to specify which lyric is used on which pass through the music in repeats. The tied element can now have a let-ring type instead of requiring separate tied elements with start and stop types as used before. You can now use non-unique numbers for bar numbers to be displayed, which is helpful for scores that include multiple works.
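Two of these features as they appear in the markup (surrounding note and measure content abbreviated): the new let-ring tie type, and a non-unique display number on a measure:

```xml
<!-- A "let ring" tie needs no matching stop element -->
<notations>
  <tied type="let-ring"/>
</notations>

<!-- Measure 33 of the file, displayed as measure 1 of a new movement -->
<measure number="33" text="1">
  <!-- music data -->
</measure>
```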
Remaining work includes decisions on Uniform Type Identifiers (UTIs) for macOS and iOS and on media/MIME types. Michael asked if there were any objections to making this constellation of changes. Reinhold Hoffmann (Notation Software) asked whether there was any real additional value in changing the recommended file extension for uncompressed MusicXML files, since it could cause some user confusion. Mogens Lundholm suggested that we should use .musicxml since there is no reason to stick to three-character suffixes. The meeting reached consensus that we should make these changes, with a preference for .musicxml as the recommended file extension. Michael will update the GitHub issues after the meeting.
A consideration of some things not currently in scope then followed. Is it OK to leave the remaining glyphs in the SMuFL ranges for plucked, string, and wind techniques only indirectly represented in MusicXML, via the other-technical element with the smufl attribute? Some remaining documentation issues are due to a lack of clarity in the MusicXML specification that has caused implementations to differ, and Michael doesn’t want to make documentation changes that would make existing applications wrong.
Michael is looking for implementations other than Finale and the Dolet for Finale/Sibelius plug-ins. James Sutton raised his hand as being likely to want to implement it soon. Gustaf, Soundslice, and Komp will all try to be among the initial implementations.
Reinhold asked whether it would make sense to participate only on the import side rather than also on the export side. Reinhold would want files from other applications to verify the import. He will discuss the schedule and see if this is possible.
MNX
Joe Berkovitz (Hal Leonard / Noteflight, CG co-chair) presented the current state of the MNX proposal. The current state of the proposal is a rough draft with a lot of gaps. It’s not a spec as it’s not rigorous enough, but the idea was to get something out to the community as quickly as possible.
Roadmap, Tradeoffs, and Rationales
There is a trade-off between three desirable qualities: semantics, interoperability, and generality. You can only have two of the three. A lot of the current discussion is around the ability to encode more music. Rich semantics and a large vocabulary mean an enormous spec and therefore a big implementation surface, which means far fewer implementations. A lot of generality and interoperability means worse semantics: instead, you use a general way of describing your subject matter, trying not to model too much and letting things speak for themselves. MusicXML occupies the middle ground between generality and interoperability, but MNX is going to allow a sense of compliance that can be clearly stated, which means a much tighter specification that prioritises interoperability.
The roadmap therefore looks like this: now, we’re looking at a way of encoding idiomatic Common Western Music Notation (CWMN), in the ballpark of what you see in works between 1600 and 1900. This does leave out many works, but it is intended to be roughly as expressive as, but more interoperable than, MusicXML.
As a corollary to this, we also need to take on the responsibility of developing a more general approach that would allow (more or less) any graphical approach to music that has time as an element of it. This will probably necessitate the graphics having to speak for themselves. We don’t yet have any recommendations on this: we should study some of the precedents (e.g. SMIL – Synchronized Multimedia Integration Language).
What about the middle ground? Scores that use many aspects of CWMN but which go outside of the conventions: we could call these “CWMN-inspired”. Ligeti is one example, but you could go a long way beyond Ligeti to include things that include invented notations alongside CWMN. Should we try and shoehorn this into the same structure we use for idiomatic CWMN, or should we instead move it towards the more general effort? The chairs think it has to go the second way, in order to protect interoperability and avoid having to build out the semantics further and further.
Dominik Hörnel asked about music that doesn’t fit cleanly into these two approaches, e.g. other music written before 1600 that isn’t specifically CWMN. Joe suggested that we might want to develop other profiles that draw upon some of the concepts we use in CWMN where appropriate. For dialects that have a shared understanding in the world, we can have idiomatic representations within the MNX family. Michael said that, when feasible, we could share the commonalities between different repertoires and allow the differences to be different.
Mogens Lundholm asked for examples of things outside CWMN. Joe and Michael suggested Berio, Cage, aleatoric music where the performer has great freedom to play or not play various notes, etc.
Alexander Plötz asked whether we have a hard definition for idiomatic CWMN: can we describe what the boundaries are? We are going to have to decide what they are. We seek consensus but in the end we might not get there on everything.
Alexander also asked whether it would be appropriate to tackle the more general or more obscure notations, e.g. the Okinawan notation, now rather than later. Joe thinks that these might be good test cases for MNX(graphics+time).
James Ingram said that it’s a mistake to think that CWMN has stopped developing: there are more general forms of CWMN that could be accommodated. James said that all music notations have “event symbols”, which carry both meaning and a temporal element. CWMN has well-formed and efficient event symbols that we should build on rather than stating that evolution has come to an end.
Joe said that some time from now we will be able to identify the shared understandings that have emerged from the latter-day developments of western music, and address them with encodings that convey their spirit and meaning, but it’s the shared understanding that we don’t yet have in order to fulfil that goal. James responded that CWMN has been developed over hundreds of years to become very efficient and legible, and it would be a mistake to think that we need to forget this tradition and develop something else. As a composer it seems stupid that you can only put symbols in particular places: why not put them between them? James would prefer to generalise rather than to constrain.
Alexander asked whether MNX(graphics+time) should have a timeline with goals? Joe responded that we plan to include a rough timetable in the roadmap. Alexander said he would be more reassured if there were a timetable.
Alexander said that we are focused on interoperability: but who settled that? Michael replied that it’s in the charter of the W3C Music Notation Community Group. MEI is not as focused on interoperability, for example, as MNX is.
Hans Vereyken (neoScores) said that it’s very important to keep the focus as narrow as possible. When you think about how it’s going to be adopted, narrowness is important in order to speed adoption. We won’t catch everything, but if the focus is there, we can develop more profiles. If we start too broad then we can’t get implementations.
Johannes Kepper (MEI) seconded this appeal to focus on a core subset, then only add things when there is a consensus from a larger group: only when there is substantial need should a feature be included later. Johannes said that this is a lesson learned from the development of MEI. Laurent Pugin (RISM) added that it would be a good idea to look at the other modules in MEI as a means of adding things to MNX: don’t make a new niche from an existing niche.
Christof Schardt (Columbus Soft) commented from the viewpoint of the implementer. Software exists because we accept limits, so in order to make MNX a success, we have to agree on the constraints. Supporting idiomatic CWMN is enough. In the discussion group, some of our voices are slowing down the process. He expressed trust in the chairs and their ability to set the limits.
Alon Shacham (Compoze) said the group’s responsibility is to minimize the number of symbols, to make constraints.
Adrian Holovaty (Soundslice) suggested that since our focus is on interoperability, should we perhaps survey (say) the capabilities of the 30 top scoring programs to find the boundaries of what interoperability really means in a more scientific way?
Interoperability
Shifting to interoperability, why are we doing another format for CWMN? How do we achieve interoperability? Profiles and content types are mechanisms to prevent application developers from having to do everything: even within CWMN there will need to be a core profile that will exclude some more esoteric features.
We haven’t yet talked much about the MNX rendering model, but we need something that will help us to specify whether a rendering is conformant. L. Peter Deutsch and Christof Schardt have brought this up many times over the years. We need to allow enough wiggle room for applications to do what they think is right but hopefully we can arrive at something that allows us to specify what the baseline is.
CSS
Joe said that CSS is one of the hot button aspects of the spec. You could do MNX without CSS, but it’s the chairs’ view that it would be more painful without it. CSS is not there because CSS is a “webby” technology, though of course it is. It’s there because it’s a rule-based approach to styling in a way that is orthogonal to the structure of the mark-up language itself. Because it’s orthogonal it’s easy to ignore if need be. Also consider the use of CSS for the performance information: CSS can replace various sound-related elements in MusicXML. It could even inject specific MIDI-like interpretations that override the semantic representation.
The CSS selector/rule system may not capture the way that the appearance of items is defined in scorewriter software. So maybe we should step away from it, at least from the point of view of exchange.
Matt Briggs (Semitone) said that as a C++ programmer, he is not very familiar with CSS. It could be written in XML elements rather than as a CSS string. Ad hoc styles look like “stringly-typed data”. If interoperability is a goal, then “easy to ignore” should not be a selling point. How do you constrain CSS content? It’s basically a list of properties that each element can have. Can we generate code from CSS like we can from XML schemas? Joe will follow up.
James Sutton asked about temperament. Michael replied that MusicXML does not support temperament directly: it does it by allowing you to specify precise MIDI pitches for each part. Could it be handled as part of the performance rules via CSS? James thinks it should be defined in an intrinsic manner for the pitch of each note.
James Ingram commented that the CSS could go into the SVG produced by the scoring application.
Christof expressed support for CSS because it can be applied to a standard set of data in lots of ways, e.g. to change the appearance of an existing score, in a powerful way. Producing variations of a single score is a key use case and using CSS makes that possible.
Cursors, Syntaxes, and System Notations
Joe realizes that the compact syntaxes in MNX could be controversial given that MusicXML has a very fine-grained element approach. For cursors and positioning, there are many possibilities for positioning directions in arbitrary places and we will need to decide.
Christof said we should avoid events having positions. He likes the fact that sequences don’t have positions, and thinks we should restrict positions to only non-event items. Offset vs. position: Christof thinks that offsets are semantically stronger than positions.
Michael agreed with Christof and mentioned that ticks can provide a high-resolution offset that is simpler than MusicXML’s divisions concept. Daniel said he could not disagree more with what Christof and Michael have both said. Joe said that we will need to discuss this issue on the list; as you can see, even the co-chairs disagree on this point.
Alexander proposed nesting sequences within an event to allow the positioning of non-note events within a longer note event.
Matt said that it could all be done without cursors. Events could be listed in sequential order, but with their position in time specified. Johannes says that this is how MEI also works, via “control events”.
Next Steps
Joe said our next steps are to apply course corrections based on this discussion, provide more MNX examples, and create and approve a roadmap document. After this we need to establish “beachhead” specifications for MNX(cwmn) elements and style properties, and begin reference implementation. We will continue open design discussion of MNX(graphics+time) in the background.
Generalized Notation
Joe said it should be possible to associate regions with arbitrary graphics with a progression of time, coupled with audio resources such as MP4 and MIDI data. In one set of applications, a semantic format could be converted into a graphics/audio format: compiling a score in a way that could be rendered by simple applications, with the graphics markup tracing back to the semantic data.
James Ingram discussed the Fauré “Après un rêve” MNX example. The system element could represent tempo by specifying that the bar is 3000ms long: at 3/4 and 60 beats per minute, each bar lasts 3 seconds. Instead of specifying durations as lengths in quarter notes, you could specify them in milliseconds, so that a quarter note becomes 1000ms. You could then specify a duration as, say, 980ms, or add an additional event that is a grace note. The bar doesn’t have to add up in terms of CWMN; it just has to add up in time. It could contain 5 quarter notes, even of different lengths, provided they add up to 3000ms. Tuplets are like beams: if a tuplet lasts for 2 eighths, it lasts for 1000ms, and the elements inside the tuplet can have any values provided they add up to 1000ms. It’s useful to make them look the way they do in CWMN because that is easier to read, but they could use any other symbols. So far this has been about metronomic time, but in recordings measures are all of different lengths, so this can be adjusted.
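James’s idea could be sketched roughly as follows. This is purely illustrative, hypothetical markup, not real MNX syntax: the element and attribute names are invented here to show how events need only add up in milliseconds (1000 + 980 + 20 + 1000 = 3000), not in CWMN note values:

```xml
<!-- Hypothetical sketch: a 3/4 bar at quarter = 60 bpm lasts 3000ms -->
<system bar-duration-ms="3000">
  <measure>
    <event duration-ms="1000"/>  <!-- a "quarter note" -->
    <event duration-ms="980"/>   <!-- slightly short of a quarter -->
    <event duration-ms="20"/>    <!-- e.g. a grace note filling the gap -->
    <event duration-ms="1000"/>
  </measure>
</system>
```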
James Sutton commented that you’re basically throwing away all the higher order information in order to make something that looks like MIDI.
Additional Questions
Christof asked about the score container mechanism. We expect that applications will not accept every content type. MusicXML doesn’t provide a means of handling collections of music: the opus document type is not implemented by anybody. The MNX container should therefore handle both multi-movement works of the same type and different pieces of different types.
The meeting then moved on to a reception sponsored by Steinberg. Thanks to Daniel Spreadbury for arranging the sponsorship and taking the meeting minutes.
Attendees
Dominique Vandenneucker, Arpege / MakeMusic
Sam Butler, Avid
Amit Gur, BandPad
Antonio Quatraro, Biblioteca Italiana per i Ciechi
Carsten Bönsel, self
Dominik Hörnel, capella software
Bernd Jungmann, capella software
Christof Schardt, Columbus Soft
Alon Shacham, Compoze
James Sutton, Dolphin Computing
László Sigrai, Editio Musica Budapest
Sébastien Bourgeois, Gardant Studios
Joe Berkovitz, Hal Leonard / Noteflight
Edward Guo, IMSLP
James Ingram, self
Mogens Lundholm, self
Grégory Dell’Era, MakeMusic
Michael Good, MakeMusic
Heath Mathews, MakeMusic
Thomas Bonte, MuseScore
Peter Jonas, MuseScore
Johannes Kepper, The Music Encoding Initiative
Michael Avery, MusicFirst
Senne de Valck, neoScores
Hans Vereyken, neoScores
Reinhold Hoffmann, Notation Software
Chris Swaffer, PreSonus
Alexander Plötz, self
Laurent Pugin, RISM
Dietmar Schneider, self
Matt Briggs, Semitone
Martin Beinicke, Soundnotation
Adrian Holovaty, Soundslice
Daniel Spreadbury, Steinberg
Stijn Van Peborgh, Tritone
Mehdi Benallal, Tutteo
Cyril Coutelier, Tutteo
Mark Porter, Universität Erfurt