
Proposed CG Agenda Changes

Thanks to everyone for your thoughtful discussion of the agenda for the Music Notation Community Group. We have posted a summary of the discussion on the CG Wiki at https://www.w3.org/community/music-notation/wiki/Agenda_Discussion. In this post, the co-chairs would like to return to how that discussion affects the group's short-term and long-term agenda.

Our initial proposal for an agenda contained four short-term and three long-term projects:

Short Term:

  • Document music notation use cases
  • Build an initial MusicXML specification
  • Add support for use of SMuFL glyphs within MusicXML
  • Identify and fix any remaining gaps or adoption barriers in SMuFL

Long Term:

  • Improve formatting support in MusicXML
  • Build a complete MusicXML specification document
  • Add Document Object Model (DOM) manipulation and interactivity to MusicXML

Looking at the discussion, we saw a clear emphasis on the need for use cases, as well as the need to address foundational and structural issues in MusicXML. After reviewing all the responses, the co-chairs agree that these two areas deserve top priority, and that it makes sense to treat them sequentially. First we should focus on developing, refining and filtering a set of use cases for our notation format. After we have adopted a solid set of use cases, we can then turn to the foundational/structural issues in the standard with a much better shared understanding of our goals.

We believe document format discussions are best driven by use cases, expressed in terms of the needs and actions of different classes of users, as well as the nature and contents of the documents involved. Many of the discussions so far have been framed in terms of technology choices like MusicXML vs. MEI, XML vs. JSON, or IEEE 1599 vs. SMIL. We would like to begin re-framing these discussions to clearly identify user needs and document contents served by specific features in these technologies, rather than on whether we should pick technology A vs. technology B. Eventually we can shift to finding technical solutions that satisfy the use cases we deem worth addressing, in the context of this group’s charter.

Building an initial MusicXML specification met with some thoughtful pushback. Peter Deutsch proposed that the specification effort should wait until the format is redesigned in some key areas to be more easily specifiable. Others mentioned that it may not be wise to do short-term work that would then need to be redone based on long-term goals.

People asked what adding support for use of SMuFL glyphs within MusicXML meant, and why a font standard influences a music representation standard. Our feeling is that there is an immediate practical need to add more musical symbols to the MusicXML format. A MusicXML update that includes greater support for SMuFL's expanded vocabulary of musical symbols will keep interoperability between applications high. We believe that this can be done with little risk of future rework based on long-term goals.
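
As a rough illustration only (the element and attribute names here are guesses on our part, not a committed design for the 3.1 update), a MusicXML notation could reference a SMuFL canonical glyph name directly, letting applications exchange symbols that the current element set does not cover:

    <notations>
      <ornaments>
        <!-- Hypothetical: the "smufl" attribute is shown for illustration only;
             "ornamentHaydn" is an existing SMuFL canonical glyph name. -->
        <other-ornament smufl="ornamentHaydn">Haydn ornament</other-ornament>
      </ornaments>
    </notations>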

Identifying and fixing any remaining gaps or adoption barriers in SMuFL did not receive much feedback. There appears to be a feeling that the existing specification is in good shape, and mostly in need of incremental improvements in symbol coverage.

We therefore propose that our short-term work cover three main areas:

  • Document the use cases for music notation formats in a way that can help drive decisions about the structural and conceptual changes needed in the standard.
  • Create a very focused, short-term MusicXML 3.1 update that addresses the need for broadening MusicXML’s symbol vocabulary and fixes some documentation bugs. This update will avoid any areas that are likely to be redone if structural changes are made in the future. We already have an initial take on what would be included in a MusicXML 3.1 update on GitHub at https://github.com/w3c/musicxml/labels/V3.1.
  • Produce incremental SMuFL updates as needed for enhanced symbol coverage.

We propose that these three items proceed in parallel. Joe will lead the use case documentation, Michael will lead the MusicXML 3.1 update, and Daniel will lead the SMuFL updates. We anticipate that the use case work will receive most of the group’s focus.

Our plan is for the co-chairs to put together an initial use case draft on the community group Wiki to illustrate what exactly we are thinking of when we talk about use cases. We plan to get that out within the next week or two.

Please let us know your thoughts on this proposed agenda update on the public-music-notation-contrib mailing list.

Michael, Joe, and Daniel

One Response to Proposed CG Agenda Changes

  • [Flying-by-the-seat-of-my-pants response to Agenda preparations and content as a whole]

    The fundamental problem with MusicXML (and probably MEI, which I’m unfamiliar with) is that it does not provide an intuitive base for use within a data-driven paradigm, which is otherwise revolutionising visualisation – and expectations.

    This is, with absolute certainty, holding back the development of advanced musical applications such as the following (screenshots only; it also uses MusicXML):
    https://www.pinterest.com/cantillate/dynamic-world-music-instruments-and-theory-models/

    A data-driven visualisation library’s primary expectation is something that can be coerced into looking like an array, but the path between MusicXML’s syntax and this is tortuous. In the longer term, I feel there is a compelling argument for a simplified array-like ‘part, div and duration’ exchange format. I could imagine this being based on JSON, which effortlessly supports arrays.
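
    To make that concrete, here is a purely speculative sketch (all field names invented for illustration; this is not any existing format) of the kind of array-like ‘part, div and duration’ structure I have in mind, something a visualisation library could consume directly:

      // Hypothetical JSON-like structure; names are illustrative only.
      const score = {
        divisions: 4,              // divs per quarter note
        parts: [
          {
            id: "P1",
            events: [              // one flat, array-like event list per part
              { div: 0, duration: 4, pitch: "C4" },
              { div: 4, duration: 2, pitch: "D4" },
              { div: 6, duration: 2, pitch: "E4" },
              { div: 8, duration: 8, pitch: "F4" }
            ]
          }
        ]
      };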

    Moreover, with modern data visualisation libraries facilitating algorithmic positioning *by default*, I don’t see positioning information as meriting any place in what is effectively a data transfer protocol. It simply adds ballast to the payload.

    JSON usage will continue to spread parallel to MusicXML – if initially only as a result of derivation from MusicXML using tools such as:
    https://www.npmjs.com/package/musicjson3

    The real power of JSON will emerge as its more intuitive array representation is recognised and exploited. At this point, JSON and MusicXML usage will clearly diverge.

    For the meantime, the focus should therefore clearly be on attribute and value naming conventions (mappings to font formats, be they SVG or legacy).

    Finally (and to my mind most important of all), musical diversity worldwide is under profound threat from the Western musical system and the rise of the ‘industry monoculture’.

    This cultural loss affects us all. A third area of focus should therefore be the most rapid integration possible of those world music notation forms able, by whatever means, to exploit the staff.

    It follows that MusicXML or its descendants will need to prove very fast and flexible. My feeling, however, is that the very complexity of MusicXML will slow any such advance.

    [Postscript] Am I right in thinking that the test scripts offered by Soundslice deal only with the comparatively trivial case of a single part or voice? The vast bulk of handling complexity lies in the realm of multipart, piano and hybrid scores.

