
Musikmesse 2017 Meeting Minutes

The W3C Music Notation Community Group met in the Logos/Genius rooms (Hall 9.1) at Messe Frankfurt during the 2017 Musikmesse trade show, on Friday 7 April 2017 between 2.30pm and 4.30pm.

The meeting was chaired by CG co-chairs Joe Berkovitz, Michael Good, and Daniel Spreadbury, and was attended by about 40 members of the CG. A complete list of the attendees can be found at the end of this report, and the slides presented can be found here.

Peter Jonas from MuseScore recorded the meeting and has posted both video and audio recordings. Each recording is in two parts: the first part covers MusicXML and SMuFL, and the second covers MNX. The videos are here:

The audio files are available at https://soundcloud.com/musescore/sets/musicxml-meeting-musikmesse-2017.

SMuFL 1.2

Daniel Spreadbury (Steinberg, CG co-chair) presented a SMuFL 1.2 update. 33 of 42 issues are already fixed. The largest remaining issues are the implementation of Kahnotation and Spatialization Symbolic Music Notation (SSMN).

MusicXML 3.1

Michael Good (MakeMusic, CG co-chair) presented a MusicXML 3.1 update. 60 of 68 issues are already addressed. This is the last chance for a while to include anything not currently in scope, because after this release most effort will be focused on MNX. SMuFL 1.2 and MusicXML 3.1 should be released at the same time.

SMuFL coverage is one of the main goals of MusicXML 3.1. SMuFL glyphs can now be represented in MusicXML in one of three ways: via new elements, via new entries in the enumerations used by existing elements, or via the new smufl attribute, which specifies a glyph for an existing element.
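As a rough illustration of the third mechanism, here is a sketch of the new smufl attribute on an existing element (codaSquare is a canonical glyph name from the SMuFL specification; the surrounding markup is abbreviated):

    <direction>
      <direction-type>
        <!-- smufl selects an alternate SMuFL glyph for the coda sign -->
        <coda smufl="codaSquare"/>
      </direction-type>
    </direction>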

MusicXML 3.1 is adding a unique ID attribute to around 40 elements (e.g. measures, notes, and directions) so that specific elements can be identified unambiguously.
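A minimal sketch of what this looks like, with hypothetical id values:

    <!-- the id attribute is the MusicXML 3.1 addition; its values are chosen by the application -->
    <measure number="1" id="m1">
      <note id="m1-n1">
        <pitch><step>C</step><octave>4</octave></pitch>
        <duration>4</duration>
        <type>quarter</type>
      </note>
    </measure>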

Musical symbols embedded in text can now be represented more cleanly using the new symbol and credit-symbol elements, which take SMuFL canonical glyph names as their values. They cannot be used by themselves: they must appear within a sequence of words or credit-words elements.
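For example, a credit such as “♩ = 120” might be encoded along these lines (a sketch assuming the draft 3.1 syntax; metNoteQuarterUp is the SMuFL canonical name for the metronome quarter-note glyph):

    <credit page="1">
      <credit-words>Tempo: </credit-words>
      <!-- credit-symbol takes a SMuFL canonical glyph name as its value -->
      <credit-symbol>metNoteQuarterUp</credit-symbol>
      <credit-words> = 120</credit-words>
    </credit>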

Grace cue notes can now be specified in MusicXML 3.1. This removes a limitation inherited from MuseData that was previously carried over into MusicXML.
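A sketch of a grace cue note under the draft 3.1 content model (the pitch and type values are illustrative):

    <note>
      <grace/>
      <cue/>
      <!-- grace notes carry no duration element; cue marks cue-sized rendering -->
      <pitch><step>D</step><octave>5</octave></pitch>
      <voice>1</voice>
      <type>eighth</type>
    </note>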

MusicXML 3.1 also adds several smaller features. You can now have ties in metronome marks. You can specify height and width values for embedded images. The new time-only attribute for lyrics lets you specify which lyric is used on which pass through repeated music. The tied element can now take a let-ring type instead of requiring separate tied elements with start and stop types as before. And you can now use non-unique measure numbers for display, which is helpful for scores that include multiple works.
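For instance, a let-ring tie and a first-pass-only lyric might look like this (a sketch assuming the draft 3.1 syntax; the note and lyric values are illustrative):

    <note>
      <pitch><step>A</step><octave>3</octave></pitch>
      <duration>8</duration>
      <type>half</type>
      <notations>
        <!-- a single let-ring tie replaces paired start/stop ties -->
        <tied type="let-ring"/>
      </notations>
      <lyric number="1" time-only="1">
        <!-- time-only="1" restricts this lyric to the first pass through a repeat -->
        <syllabic>single</syllabic>
        <text>ah</text>
      </lyric>
    </note>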

Remaining work includes decisions on file type identification: recommended file extensions, UTIs for macOS and iOS, and media/MIME types. Michael asked if there were any objections to making this constellation of changes. Reinhold Hoffmann (Notation Software) asked whether there was any real additional value in changing the recommended file extension for uncompressed MusicXML files, since it could cause some user confusion. Mogens Lundholm suggested that we should use .musicxml, since there is no reason to stick to three-character suffixes. The meeting reached consensus that we should make these changes, with a preference for .musicxml as the recommended file extension. Michael will update the GitHub issues after the meeting.

A consideration of some things not currently in scope then followed. Is it OK to leave the remaining glyphs in the SMuFL ranges for plucked, string, and wind techniques only indirectly represented in MusicXML, via the other-technical element with the smufl attribute? Some remaining documentation issues are due to a lack of clarity in the MusicXML specification that has caused implementations to differ, and Michael doesn’t want to make changes to the documentation that would make existing applications wrong.

Michael is looking for implementations other than Finale and the Dolet for Finale/Sibelius plug-ins. James Sutton raised his hand as being likely to want to implement it soon. Gustaf, Soundslice, and Komp will all try to be among the initial implementations.

Reinhold asked whether it would make sense to participate only on the import side rather than also on the export side; he would want files from other applications to verify the import. He will review the schedule and see if this is possible.

MNX

Joe Berkovitz (Hal Leonard / Noteflight, CG co-chair) presented the current state of the MNX proposal. It is a rough draft with a lot of gaps, and not yet a spec, as it is not rigorous enough; the idea was to get something out to the community as quickly as possible.

Roadmap, Tradeoffs, and Rationales

There is a trade-off between three desirable qualities: semantics, interoperability, and generality; you can only have two of the three. A lot of the current discussion is about the ability to encode more music. Rich semantics and a large vocabulary mean an enormous spec and therefore a big implementation surface, which means far fewer implementations. A lot of generality and interoperability means weaker semantics: you use a general way of describing your subject matter, trying not to model too much and letting things speak for themselves. MusicXML occupies the middle ground between generality and interoperability, but MNX is going to allow a sense of compliance that can be clearly stated, which means a much tighter specification that prioritises interoperability.

The roadmap therefore looks like this: right now, we are looking at a way of encoding idiomatic Common Western Music Notation (CWMN), in the ballpark of what you see in works written between 1600 and 1900. This does leave out many works, but the encoding is intended to be roughly as expressive as MusicXML while more interoperable.

As a corollary to this, we also need to take on the responsibility of developing a more general approach that would allow (more or less) any graphical approach to music that has time as an element of it. This will probably necessitate the graphics having to speak for themselves. We don’t yet have any recommendations on this: we should study some of the precedents (e.g. SMIL – Synchronized Multimedia Integration Language).

What about the middle ground: scores that use many aspects of CWMN but go outside its conventions? We could call these “CWMN-inspired”. Ligeti is one example, but you could go a long way beyond Ligeti to works that place invented notations alongside CWMN. Should we try to shoehorn these into the same structure we use for idiomatic CWMN, or should we instead move them towards the more general effort? The chairs think it has to go the second way, in order to protect interoperability and avoid having to build out the semantics further and further.

Dominik Hörnel asked about music that doesn’t fit cleanly into either approach, e.g. music written before 1600 that isn’t specifically CWMN. Joe suggested that we might want to develop other profiles that draw upon some of the concepts we use in CWMN where appropriate. For dialects that have a shared understanding in the world, we can have idiomatic representations within the MNX family. Michael said that, when feasible, we could share the commonalities between different repertoires and allow the differences to be different.

Mogens Lundholm asked for examples of things outside CWMN. Joe and Michael suggested Berio, Cage, and aleatoric music where the performer has great freedom to play or not play various notes.

Alexander Plötz asked whether we have a hard definition for idiomatic CWMN: can we describe what the boundaries are? We are going to have to decide what they are; we seek consensus, but in the end we might not get there on everything.

Alexander also asked whether it would be appropriate to tackle the more general or more obscure notations, e.g. Okinawan notation, now rather than later. Joe thinks these might be good test cases for MNX(graphics+time).

James Ingram said that it’s a mistake to think that CWMN has stopped developing: there are more general forms of CWMN that could be accommodated. James said that all music notations have “event symbols” that carry meaning and a temporal element. CWMN has well-formed and efficient event symbols that we should build on, rather than declaring that its evolution has come to an end.

Joe said that some time from now we will be able to identify the shared understandings that have emerged from the latter-day developments of western music, and address them with encodings that convey their spirit and meaning; it is that shared understanding that we don’t yet have. James responded that CWMN has developed over hundreds of years to become very efficient and legible, and it would be a mistake to think we need to forget this tradition and develop something else. As a composer, it seems stupid that you can only put symbols in particular places: why not between them? James would prefer to generalise rather than constrain.

Alexander asked whether MNX(graphics+time) should have a timeline with goals. Joe responded that we plan to include a rough timetable in the roadmap. Alexander said he would be more reassured if there were a timetable.

Alexander said that we are focused on interoperability, but who settled on that? Michael replied that it’s in the charter of the W3C Music Notation Community Group. MEI, for example, is not as focused on interoperability as MNX is.

Hans Vereyken (neoScores) said it’s very important to keep the focus as narrow as possible: when you think about how the format will be adopted, narrowness speeds adoption. We won’t catch everything, but if the focus is there, we can develop more profiles later. If we start too broad, we can’t get implementations.

Johannes Kepper (MEI) seconded this appeal to focus on a core subset, adding things only later, when there is consensus from a larger group and substantial need. Johannes said this is a lesson learned from the development of MEI. Laurent Pugin (RISM) added that it would be a good idea to look at the other modules in MEI as a means of adding things to MNX: don’t make a new niche from an existing niche.

Christof Schardt (Columbus Soft) commented from the viewpoint of the implementer: software exists because we accept limits, and in order to make MNX a success we have to agree on the constraints. Supporting idiomatic CWMN is enough. Some voices in the discussion group are slowing down the process. He expressed trust in the chairs and their ability to set the limits.

Alon Shacham (Compoze) said the group’s responsibility is to minimize the number of symbols and to set constraints.

Adrian Holovaty (Soundslice) suggested that, since our focus is on interoperability, we should perhaps survey the capabilities of (say) the 30 top scoring programs, to find the boundaries of what interoperability really means in a more scientific way.

Interoperability

Shifting to interoperability: why are we doing another format for CWMN, and how do we achieve interoperability? Profiles and content types are mechanisms to prevent application developers from having to do everything: even within CWMN there will need to be a core profile that excludes some of the more esoteric features.

We haven’t yet talked much about the MNX rendering model, but we need something that will help us to specify whether a rendering is conformant. L. Peter Deutsch and Christof Schardt have brought this up many times over the years. We need to allow enough wiggle room for applications to do what they think is right but hopefully we can arrive at something that allows us to specify what the baseline is.

CSS

Joe said that CSS is one of the hot-button aspects of the spec. You could do MNX without CSS, but it is the chairs’ view that it would be more painful without it. CSS is not there because it is a “webby” technology, though of course it is; it is there because it is a rule-based approach to styling that is orthogonal to the structure of the markup language itself. Because it is orthogonal, it is easy to ignore if need be. Also consider the use of CSS for performance information: CSS could replace various sound-related elements in MusicXML, and could even inject specific MIDI-like interpretations that override the semantic representation.

The CSS selector/rule system may not capture the way the appearance of items is defined in scorewriter software, so maybe we should step away from it, at least from the point of view of exchange.

Matt Briggs (Semitone) said that, as a C++ programmer, he is not very familiar with CSS. The styling could be written as XML elements rather than as a CSS string; ad hoc styles look like “stringly-typed data”. If interoperability is a goal, then “easy to ignore” should not be a selling point. How do you constrain CSS content? It is basically a list of properties that each element can have. Can we generate code from CSS as we can from XML schemas? Joe will follow up.

James Sutton asked about temperament. Michael replied that MusicXML does not support temperament directly: it does so by allowing you to specify precise MIDI pitches for each part. Could temperament be handled as part of the performance rules via CSS? James thinks it should be defined intrinsically as part of the pitch of each note.

James Ingram commented that the CSS could go into the SVG produced by the scoring application.

Christof expressed support for CSS because it can be applied to a standard set of data in many powerful ways, e.g. to change the appearance of an existing score. Producing variations of a single score is a key use case, and CSS makes that possible.

Cursors, Syntaxes, and System Notations

Joe realizes that the compact syntaxes in MNX could be controversial, given MusicXML’s very fine-grained element approach. For cursors and positioning, there are many possible ways of positioning directions in arbitrary places, and we will need to decide.

Christof said we should avoid events having positions. He likes the fact that sequences don’t have positions, and thinks we should restrict positions to non-event items only. On offset vs. position, Christof thinks that offsets are semantically stronger than positions.

Michael agreed with Christof and mentioned that ticks can provide a high-resolution offset that is simpler than MusicXML’s divisions concept. Daniel said he could not disagree more with what Christof and Michael have both said. Joe said that we will need to discuss this issue on the list; as you can see, even the co-chairs disagree on this point.

Alexander proposed nesting sequences within an event to allow the positioning of non-note events within a longer note event.

Matt said that it could all be done without cursors: events could be listed in sequential order, but with their position in time specified. Johannes said that this is also how MEI works, via “control events”.

Next Steps

Joe said our next steps are to apply course corrections based on this discussion, provide more MNX examples, and create and approve a roadmap document. After this we need to establish “beachhead” specifications for MNX(cwmn) elements and style properties, and begin a reference implementation. We will continue open design discussion of MNX(graphics+time) in the background.

Generalized Notation

Joe said it should be possible to associate regions of arbitrary graphics with a progression of time, coupled with audio resources such as MP4 and MIDI data. In one set of applications, a semantic format could be converted into a graphics/audio format: compiling a score into a form that simple applications can render, with the graphics markup tracing back to the semantic data.

James Ingram discussed the Fauré “Après un rêve” MNX example. The system element could represent tempo by specifying that the bar is 3000ms long: in 3/4 time at 60 quarter-note beats per minute, each bar lasts 3 seconds. Instead of specifying durations as lengths in quarter notes, durations could be specified in milliseconds, so a quarter note becomes 1000ms. You could then specify a duration as, say, 980ms, or add an additional event that is a grace note. The bar doesn’t have to add up in terms of CWMN; it just has to add up in time. You could have 5 quarter notes in a bar, even with different lengths, provided they add up to 3000ms. Tuplets are like beams: if a tuplet lasts for 2 eighths, it lasts for 1000ms, so the elements inside the tuplet can have any values provided they add up to 1000ms. It is useful to make them look as they do in CWMN because they are easier to read, but they could use any other symbols. So far this assumes metronomic time, but with recordings the measures all have different lengths, so the durations can be adjusted.
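A worked version of that arithmetic, assuming a strictly metronomic tempo (the formula generalises the 3/4-at-60bpm case James described):

    \[
    t_{\text{bar}} = \text{beats per bar} \times \frac{60000\ \text{ms}}{\text{beats per minute}} = 3 \times \frac{60000}{60} = 3000\ \text{ms}
    \]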

James Sutton commented that you’re basically throwing away all the higher order information in order to make something that looks like MIDI.

Additional Questions

Christof asked about the score container mechanism. We expect that applications will not accept every content type. MusicXML doesn’t provide a means of handling collections of music: the opus document type is not implemented by anybody. In MNX, both multi-movement works of the same type and different pieces of different types should be handled within the same container.

The meeting then moved on to a reception sponsored by Steinberg. Thanks to Daniel Spreadbury for arranging the sponsorship and taking the meeting minutes.

Attendees

Dominique Vandenneucker, Arpege / MakeMusic
Sam Butler, Avid
Amit Gur, BandPad
Antonio Quatraro, Biblioteca Italiana per i Ciechi
Carsten Bönsel, self
Dominik Hörnel, capella software
Bernd Jungmann, capella software
Christof Schardt, Columbus Soft
Alon Shacham, Compoze
James Sutton, Dolphin Computing
László Sigrai, Editio Musica Budapest
Sébastien Bourgeois, Gardant Studios
Joe Berkovitz, Hal Leonard / Noteflight
Edward Guo, IMSLP
James Ingram, self
Mogens Lundholm, self
Grégory Dell’Era, MakeMusic
Michael Good, MakeMusic
Heath Mathews, MakeMusic
Thomas Bonte, MuseScore
Peter Jonas, MuseScore
Johannes Kepper, The Music Encoding Initiative
Michael Avery, MusicFirst
Senne de Valck, neoScores
Hans Vereyken, neoScores
Reinhold Hoffmann, Notation Software
Chris Swaffer, PreSonus
Alexander Plötz, self
Laurent Pugin, RISM
Dietmar Schneider, self
Matt Briggs, Semitone
Martin Beinicke, Soundnotation
Adrian Holovaty, Soundslice
Daniel Spreadbury, Steinberg
Stijn Van Peborgh, Tritone
Mehdi Benallal, Tutteo
Cyril Coutelier, Tutteo
Mark Porter, Universität Erfurt
