2020 Online Meeting Minutes
The W3C Music Notation Community Group met online in a Zoom meeting on Thursday, 30 April 2020, between 2:00 pm and 4:00 pm UTC.
CG co-chairs Michael Good, Daniel Spreadbury, and Adrian Holovaty chaired the meeting, with 37 members of the CG and interested guests attending. The presentations from the meeting are posted at:
W3C MNCG Online 2020 Presentation
We recorded the meeting on Zoom and have posted it on YouTube. Zoom provides an automatic transcription feature, so a complete transcript is also available via closed captioning. The video starting times for each part of the meeting are included in the headings below.
Much of the group discussion happened in the Zoom chat window. We have incorporated highlights into these minutes, and you can also see the full chat log.
Attendees
Here are the meeting attendees in alphabetical order by organization:
- Arshia Cont, Antescofo
- Dominique Vandenneucker, Arpege / MakeMusic
- Haipeng Hu, BrailleOrch / Open Braille Music
- Dominik Hörnel, capella-software
- Markus Hübenthal, capella-software
- Christof Schardt, Columbus Soft
- James Sutton, Dolphin Computing
- James Ingram, self
- Oleksii Sapov, Internationale Stiftung Mozarteum
- Bonnie Janofsky, self
- Jim DeLaHunt, Keyboard Philharmonic
- Michael Good, MakeMusic
- Jason Wick, MakeMusic
- Michael Cuthbert, MIT
- Peter Jonas, MuseScore / OpenScore
- Martin Keary, MuseScore
- Daniel Ray, MuseScore
- Benjamin Bohl, The Music Encoding Initiative
- Christina Noel, Musicnotes
- Matan Daskal, Newzik
- Marin Fauvel, Newzik
- Kenzi Noike, self
- Reinhold Hoffmann, Notation Software
- Philip Rothman, NYC Music Services / Scoring Notes
- James Opstad, self
- Eric Carraway, percuss.io
- Laurent Pugin, RISM
- Jeremy Sawruk, self
- Matt Briggs, Semitone (Komp)
- Ben Spratling, Sing Accord
- Jeff Kellem, Slanted Hall
- Adrian Holovaty, Soundslice
- Daniel Spreadbury, Steinberg
- George Litterst, TimeWarp Technologies
- Cyril Coutelier, Tutteo (Flat.io)
- Roger Firman, UKAAF / MSA
- Fabrizio Ferrari, Virtual Sheet Music
Introduction (0:22)
Michael welcomed everybody to the meeting. He outlined the rules of engagement for holding a meeting over Zoom, encouraging the participants to use the “raise hand” feature in the Participants list.
Michael introduced the W3C Music Notation Community Group and encouraged those in attendance to join the CG if they would like to contribute.
MusicXML 3.2 (5:17)
Michael introduced the goals for MusicXML 3.2, in particular improving the support for parts and scores. The intent is to continue to work on both MusicXML and MNX in parallel, but to defer to the MNX effort any use cases that cannot be solved without major changes to MusicXML.
Michael outlined the six main themes for the MusicXML 3.2 release, which can be seen in the V3.2 milestone in the GitHub repository, with a target completion date of the first quarter of 2021.
Improved Parts Support (9:04)
There are currently 8 issues in scope with the Parts label in the GitHub repository.
At the time of MusicXML’s inception 20 years ago, notation software had no support for keeping the score and parts linked together or stored in the same file, so the format was designed to model the experience of working with paper scores. Today things are very different, and in particular there is a specific need for Finale and SmartMusic to handle parts in a more sophisticated way. The approach followed to date is to embed a separate MusicXML file for each part alongside the score file, with a linking mechanism to correlate each part in the score document with its individual part document.
A goal is also to improve the interchange of part information even when exchanging a single uncompressed MusicXML file rather than a compressed .mxl archive containing multiple MusicXML documents. MusicXML currently handles concert pitch score documents poorly; one way to improve this is to provide information about the transposition of each part to be used when the score is viewed in transposed pitch (issue 279).
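As an illustrative sketch of the direction discussed in issue 279, a concert pitch score could carry the transposition to apply when a part is displayed in transposed pitch. The element names below (for-part, part-transpose) are assumptions drawn from that discussion, not final MusicXML:

```xml
<attributes>
  <divisions>1</divisions>
  <key><fifths>0</fifths></key>
  <!-- Hypothetical sketch: when this concert-pitch part is viewed as a
       transposed B-flat clarinet part, write it a major second higher. -->
  <for-part>
    <part-transpose>
      <diatonic>1</diatonic>
      <chromatic>2</chromatic>
    </part-transpose>
  </for-part>
</attributes>
```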
Michael pointed out that one constraint on the design is the need to avoid particular patents. Jim DeLaHunt asked if these IP issues were enumerated anywhere, and Michael noted that the main issue concerns a patent held by Avid in the area of linked parts.
Machine Listening (16:15)
Machine listening applications include Metronaut and SmartMusic. These kinds of applications have different requirements and may need additional information that is not encoded notationally in the score. MusicXML already provides the sound and play elements for supplemental information that is not represented directly in the score. The proposal is to add a new listen element, with a counterpart player element that would allow individual instruments to be grouped by player, handling divisi and so on.
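A hedged sketch of how the proposed elements might fit together; all element and attribute names here are assumptions based on the proposal, not final MusicXML:

```xml
<!-- In the part list: a proposed player element groups instruments
     by the person playing them, which helps with divisi. -->
<score-part id="P1">
  <part-name>Violin I</part-name>
  <player id="PL1">
    <player-name>Desk 1</player-name>
  </player>
</score-part>

<!-- On a note: a proposed listen element carries machine-listening
     hints, such as waiting for a particular player at this point. -->
<note>
  <pitch><step>C</step><octave>5</octave></pitch>
  <duration>4</duration>
  <listen>
    <wait player="PL1"/>
  </listen>
</note>
```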
Michael outlined the different requirements that may exist for synchronization, including situations where the computer needs to wait for the performer, and those where the performer is expected to keep up with the computer.
Bob Hamblok had raised concerns about the amount of additional data that might be required to represent this new information for machine listening applications, but Michael pointed out that the amount of extra data should not be significant.
Arshia Cont spoke to reassure other members that the additional data for synchronization etc. can be useful to applications that are interested in playback, as they can be used to improve humanisation of playback etc. Arshia expressed his strong support for this initiative in MusicXML 3.2.
Michael Cuthbert pointed out that “the sum total of all MusicXML ever generated might be about the same amount of data as this 40-part video chat today is going to create, so from my perspective, let’s keep the data in our encoding!”
Tools (26:07)
Arshia Cont suggested that introducing an XML catalog would be helpful. When validating a MusicXML file, validators fetch the schema or DTD from its web location, and XML catalogs make it much easier to validate against a local copy instead.
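XML catalogs are an existing OASIS standard, so the shape of such a catalog can be sketched concretely; the local file paths below are illustrative:

```xml
<!-- catalog.xml: an OASIS XML catalog mapping the public MusicXML DTD
     location to a local copy, so validation does not hit the network. -->
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <public publicId="-//Recordare//DTD MusicXML 3.1 Partwise//EN"
          uri="schema/partwise.dtd"/>
  <system systemId="http://www.musicxml.org/dtds/partwise.dtd"
          uri="schema/partwise.dtd"/>
</catalog>
```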
We cannot require an XML namespace in MusicXML without breaking compatibility with older MusicXML files, so the proposal is instead to define a standard namespace for use only when MusicXML is combined with other XML vocabularies, such as when embedding MusicXML into other documents.
Michael spoke about some of the issues that exist with using code generation to create classes based on MusicXML, most of which now have solutions. The goal is to document those solutions, and to ensure that any new features are not implemented in a way likely to introduce problems with code generation tools.
Another idea is to introduce XSLT stylesheets to automatically generate ID attributes for items in MusicXML.
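One way such a stylesheet could work is as an identity transform that adds a generated id attribute to elements lacking one. This is an illustrative sketch, not a published stylesheet, and it arbitrarily targets note elements:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Identity transform: copy everything through unchanged. -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- For note elements without an id, add one via generate-id(). -->
  <xsl:template match="note[not(@id)]">
    <xsl:copy>
      <xsl:attribute name="id">
        <xsl:value-of select="generate-id()"/>
      </xsl:attribute>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```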
It is also the intention to update the DTD and schema locations to reflect their current secure HTTPS locations.
Jim asked whether this tooling support might include additional features for producing a concise set of differences between two scores, e.g. to facilitate the exchange of small differences like annotations. Michael pointed out that this might be more an application-level problem and not something that we would necessarily tackle as part of the format itself. Michael Cuthbert noted that this kind of diffing is very difficult – he has had two master’s students working on the problem without completely solving it.
Gaps in Appearance (34:33)
Michael outlined a number of presentational items that cannot currently be described adequately in MusicXML, including some enclosure types, the appearance of measure numbering for multi-bar rests, and so on.
Percussion grids (staves with wider than normal spacing) cannot be easily represented in MusicXML, so Michael proposes some changes to help improve this.
Michael also proposes that it might be time to tackle some long-standing alignment issues, including the vertical positioning of rests and the horizontal extents of barlines.
Gaps in Playback (41:11)
One proposal is to add support for swing in the form of an explicit ratio, rather than relying on the duration and type values of note elements to define the difference between the printed and played values. Daniel Ray suggested using a set of descriptive swing terms rather than a numeric specification.
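As a sketch of what a ratio-based encoding might look like, the swing element and its children below are assumptions drawn from the proposal, not final MusicXML:

```xml
<!-- Hypothetical sketch: play pairs of written eighth notes with a
     2:1 long-short swing ratio, attached to the sound element. -->
<sound>
  <swing>
    <first>2</first>
    <second>1</second>
    <swing-type>eighth</swing-type>
  </swing>
</sound>
```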
To allow better support for doublings, it’s important to be able to change virtual instruments mid-part in the same way as MIDI instruments can currently be changed.
The sounds.xml file hasn’t been updated since MusicXML 3.0, despite having been defined as a separate file precisely so that it could be updated on a faster cycle than MusicXML itself. The time has come to add at least a tin whistle sound, and potentially further sounds as needed. Michael invites proposals for any other sounds that are not currently covered.
Benjamin Bohl says: “In terms of performance parameters I’d like to point you to the ‘Music Performance Markup’ model for describing performance parameters: https://github.com/axelberndt/MPM”
Documentation (47:07)
Improving the MusicXML documentation is a never-ending task. Michael welcomes a volunteer to help improve the design of Roman numerals for harmonic analysis, and Michael Cuthbert stepped forward to take this on.
Michael also says that moving the existing documentation for MusicXML 3.0 to the W3C site is a goal, but it’s challenging because it was authored in a moribund version of a proprietary tool.
Postponed to MNX (50:37)
Issues that will require greatly expanding the semantics of MusicXML for text and lines are going to be deferred to MNX.
Next Steps (52:23)
Michael asked the group whether this seems like a reasonable scope for the new version. Michael is also inclined to give MusicXML 3.2 the version number 4.0 instead.
Benjamin Bohl suggested that we might follow semantic versioning, but Michael is not keen on this as he doesn’t feel this is especially appropriate for slow-moving formats like MusicXML.
Support for 3.2 and 4.0 initially appeared to be roughly even among the attendees. Michael proposed concluding the topic after the meeting, perhaps with a formal poll among CG members in due course. In a show of hands, 4.0 came out ahead roughly 2:1.
Community Group Membership (57:17)
Michael once again reiterated that if you want to contribute to any of the CG’s projects, you need to join the group. If you represent a company working in the field of music software, it’s important that you join as a representative of your company.
SMuFL (1:01:38)
Daniel outlined the current state of SMuFL 1.4, which is not really any further forward than last year. He reiterated that issues are always welcome.
Michael Cuthbert introduced two issues concerning text SMuFL fonts. He pointed out that Bravura Text is very large because of the many combining glyphs that allow symbols to be raised or lowered by up to 8 staff positions, which makes the font less useful for subsetting and similar tasks.
He also pointed out that the use of Bravura Text in user interfaces is difficult because the size of, say, a sharp relative to a treble clef means that showing them at the same point size would produce very different visual sizes. Daniel’s initial reaction was that using Bravura Text in a user interface is not necessarily an appropriate goal for text SMuFL fonts, but it’s perhaps something that we could tackle as a separate effort.
MNX (1:12:00)
Adrian introduced the MNX Common by Example page, showing that the focus has moved to building the specification around specific atomic examples rather than from the top down. When designing new features, we are now working “examples first” and coming up with analogous MusicXML code to help make it more illustrative.
Benjamin Bohl proposed adding MEI as a third column to the examples page, which Michael opposes on the grounds that we should perhaps keep focus on the formats currently maintained by the CG. Adrian agreed, and said that it’s important to keep the examples very focused for the time being.
James Ingram asked to what extent the specification can be trusted. Adrian said that the spec can be trusted, but it’s easier to understand if you look at the MNX Common by Example file.
Benjamin Bohl shared some resources that the MEI community have been working on to ease the interchange with MNX:
The Music Encoding Initiative (MEI) sees the need to better connect with the MNX community, so we did some work to support this: namely, providing an MEI Basic customization that fits the current development status of MNX, and providing a converter from MNX to MEI.
Customization in ODD (XML meta-schema format by the Text Encoding Initiative): mei-basic.xml
mnxconverter (1:21:09)
Adrian announced that we have begun work on a tool to convert MusicXML to MNX, which was released via a new W3C repository on GitHub soon after the meeting ended.
Adrian expounded on why Python was chosen as the language. It’s popular, well-understood, and runs on every operating system. Benjamin Spratling expressed concern that it can’t run on iOS. Jim DeLaHunt and Michael pointed out that it should be possible to run the converter on a website and have an iOS app use a web service to provide conversion.
An eventual goal is to provide a production-ready two-way conversion between MusicXML and MNX. Two-way conversion will be useful to help drive adoption. It will also hopefully be possible to use it as a MusicXML “cleaner,” allowing you to ingest MusicXML and output MusicXML that uses a more consistent approach, making it easier to write MusicXML importers.
Adrian also explained the non-goals for the project, to keep it tightly focused on providing a conversion process and nothing else.
Laurent asked whether the converter provided a direct correspondence between classes and MNX elements, and Adrian confirmed that this is the case.
Peter Jonas asked why XSLT was not used to provide the translation. Adrian’s hypothesis is that performing the conversion in XSLT is either impossible, or possible but extremely difficult. Michael added that XSLT’s strengths don’t really lie in this kind of large-scale conversion project.
Adrian reflected on his experience working on the converter and said that he was happy to report that the format feels good in practice, a testament to the work Joe did on the bones of the specification.
It is hoped that mnxconverter will help to grow understanding and appreciation of the core MNX concepts, and provide another open-source approach to MusicXML parsing.
Matt Briggs asked whether it might be possible to create XSDs directly using reflection from the Python code. Michael said that we have no XSD schema for MNX at the moment, but there will be a schema available in the future. Benjamin Bohl suggested using ODD as it can compile to XSD, RNG, and so on.
Should I Start Using MNX? (1:43:15)
Adrian said that it’s probably not yet time to start implementing MNX unless you want to be on the bleeding edge, but it’s a good idea to become more familiar with the core concepts.
MNX vs. MNX-Common (1:44:04)
Adrian provided a quick recap of the “MNX” name and how it was originally conceived as a kind of umbrella term to allow the packaging of many different types of music data. The group then talked about two specific flavours, MNX-Common and MNX-Generic, but this is clumsy. The proposal is instead to rename MNX-Common to simply MNX, and MNX-Generic to MGX.
Christina Noel pointed out that the names MNX-Common and MNX-Generic were hard-fought, and felt that now is not the right time to rename the formats again. Michael Cuthbert agreed.
Peter Jonas pointed out that people will abbreviate to MNX anyway, and this could cause confusion, and the kinds of practical issues that Adrian elucidated last year.
Jeremy Sawruk proposed perhaps keeping MNX-Generic but dropping “Common” from MNX.
James Sutton pointed out that there is at present very little in common between MNX-Common and MNX-Generic, and it’s unlikely that they’ll become more unified, so keeping a shared name is itself confusing.
Start Implementation Milestone (1:53:03)
Adrian stated that the goal is to reach the milestone of the project being stable enough to be able to encourage developers to start implementing MNX-Common by the end of 2020.
Comments and Questions (1:54:30)
Daniel Ray asked whether MNX represents the best path forward. In the MIDI 2.0 specification, space has been carved out for “notation over the wire,” and a couple of the companies in the MMA are, he says, working on this. He thinks this could be a common native format rather than a common interchange format. Michael responded as a member of the Standard MIDI File 2.0 working group: in his view, the MMA community is nowhere near as far along as Daniel suggests, nor is it the best community to develop notation formats, because it doesn’t have the same concentration of expertise in the field as this W3C CG.
Benjamin Spratling asked if there were open-source implementations of MusicXML rendering that work everywhere, for example using HTML or SVG. OpenSheetMusicDisplay, MuseScore, and Verovio were mentioned as possibilities, depending on exactly what you are looking for.
Jim DeLaHunt asked if there was already a place to discuss issues with scores as digital native documents such as formats, tools, and making editorial decisions in transcription projects, whether at the CG or elsewhere. Nobody could think of a place that was already doing this. The forums at notat.io might be the closest. W3C Community Groups don’t provide really great web tools for this type of discussion that is not related to specification and software development. Raising this as an issue on GitHub might attract some other ideas and suggestions from the community.
At the end of the meeting, 24 of the attendees had their cameras on for the group photo in the video thumbnail.