The Music Notation Community Group develops and maintains format and language specifications for notated music used by web, desktop, and mobile applications. The group aims to serve a broad range of users engaging in music-related activities involving notation, and will document these use cases.
The initial task of the Community Group is to maintain and update the MusicXML and SMuFL (Standard Music Font Layout) specifications. The goals are to evolve the specifications to handle new use cases and technologies, including greater use of music notation on the web, while maximizing the existing investment in implementations of MusicXML 3.0 and SMuFL.
Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.
We hope that you have all had a happy holiday season and new year. Our progress at the W3C Music Notation Community Group was not quite as fast during the second half of 2016 as we had hoped, but we expect to pick up the pace in 2017.
Our first order of business is to finish the incremental releases of MusicXML 3.1 and SMuFL 1.2. Our new goal is to deliver these by the end of the first quarter of 2017.
We have made a lot of progress on MusicXML 3.1 in recent months. Of the issues currently listed as in scope for version 3.1, 35 are closed and 21 are still open. With consistent focus on moving forward, it seems reasonable to expect to finish this within the next three months.
SMuFL 1.2 is largely complete. The main outstanding issue is whether to bring the Kahnotation symbols for tap dance into SMuFL 1.2.
While Michael focuses on finishing MusicXML 3.1 and Daniel focuses on finishing SMuFL 1.2, Joe plans to focus on the MNX proposal for the next generation of music notation encoding. This work is planned to start in February so that there will be concrete proposals for the community to discuss at Musikmesse in April.
Several people have asked about a Community Group meeting at the NAMM show in Anaheim later this month. We have not scheduled a formal meeting for two main reasons: 1) we don’t have a lot of agenda items that seem to require a face-to-face meeting at this time, and 2) our Musikmesse meetings have been much better attended than our NAMM meetings.
However, it would be great to have an informal meet-up during the show. We have planned for dinner on Friday, January 20 at 7:30 pm. This will be at Thai Nakorn in Garden Grove. Michael and Daniel both plan to attend. Let us know if you plan to be there too.
We would plan our next formal meeting to be at Musikmesse in Frankfurt, where we could discuss some concrete proposals for MNX. We hope that we can have the same venue as in previous years, and similarly schedule it for Friday afternoon, April 7. If any company is interested in sponsoring the reception after the meeting, please let us know.
Thank you for your continued interest in the W3C Music Notation Community Group. We look forward to working together with you in the coming year.
Michael Good, Joe Berkovitz, and Daniel Spreadbury
W3C Music Notation Community Group Co-Chairs
In this post we’re taking the first of a series of steps that will establish a specification for the notation encoding standard that’s the subject of our work in this CG. The steps here are focused on a name, a mission and a high-level approach.
A Name, Of Sorts
We’re proposing a code name for the next standard: something we can use as a label, but which is temporary in nature rather than a committed naming decision. We feel it’s not right to stick with MusicXML as the name for this project: the name may suggest to some that the project will address the same use cases in the same ways as MusicXML, or that there is some bias against evaluating ideas from other quarters.
For this code name, we’re proposing to call this project “MNX” for the time being. We don’t have to love this choice yet, because the name is replaceable. We just need something easy to say, easy to type and a bit open-ended. It stands for… “Music Notation X”, where X might mean the X in “extensibility”, or the X in “next”, or the call of the unknown, or the Roman numeral 10, or, perhaps, just a plain old letter of the alphabet.
What is MNX About?
Now that we have a name, however temporary, we’d like to make some statements about what MNX is intended to accomplish. These goals are only partly derived from the many use cases and scenarios discussed so far. They also represent the co-chairs’ ideas about how the CG should spend its time, and where the payoffs lie for stakeholders.
An Open Framework, But A Focused One
MNX provides an overall framework for encoding works of music of many different kinds. As such, it anticipates that more than one encoding system may be used, perhaps even within the same document.
The MNX schema will begin by describing a notation-neutral container that is concerned solely with a document’s metadata, attribution and organization. This part of the framework does not reference any notational system.
Inside the MNX container are one or more body elements of encoded music notation. These body elements can contain embedded documents of various types registered by some mechanism and policy to be determined. An MNX “body type” consists of an XML schema or sub-schema applying to some MNX body. An MNX body may also cite one or more document profiles that govern how its body is encoded, above and beyond the constraints of the XML schema itself.
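As a purely hypothetical illustration of the container-and-body arrangement described above (none of the element or attribute names below has been proposed or decided; they are invented for this sketch), a document carrying one CWMN-encoded body might look something like:

```xml
<!-- Hypothetical sketch only: all names here are placeholders,
     not part of any specification. -->
<mnx>
  <!-- Notation-neutral container: metadata, attribution, organization. -->
  <head>
    <title>Prelude in D-flat major</title>
    <creator role="composer">Frédéric Chopin</creator>
  </head>
  <!-- A body citing its registered body type and an optional
       document profile governing how it is encoded. -->
  <body type="cwmn" profile="basic-keyboard">
    <!-- encoded music notation goes here -->
  </body>
</mnx>
```

The point of the sketch is the separation of concerns: nothing in the container references a notational system, while each body declares the body type (and optionally the profiles) needed to interpret it.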
There will be a way for those who wish to create their own digital representation of notated music to do so, and plug it into an MNX document. However, only some set of recommended types is likely to be supported by other parties and their software projects, so there is an obvious incentive to use a recognized type to encode a work rather than inventing one’s own. The work of the CG will focus on a small number of such recognized body types.
MNX comes bundled with a ready-made body type that supports Common Western Music Notation (CWMN), and which is biased towards a semantic representation of music. The encoded music is not required to reference any specific visual or aural rendering, although such information can be optionally included. This goal does not presume that there is a single unitary definition of CWMN, but the ways to tweak its boundaries must necessarily be limited in order to have a standard that solidly represents the vast majority of CWMN repertoire. Other body types will no doubt be invented to accommodate the gaps that result, and which make different design tradeoffs.
The CWMN body type and its associated document profiles are the heart of MNX: our primary intent is for MNX to support uses of CWMN across the various user communities described in our use cases document, serving creative, academic and commercial interests alike. The hope is that MNX will at least inherit the degree of support that MusicXML enjoys today, and will also address many unmet needs of other musical constituencies.
The initial work of the CG, therefore, will lie in elaborating the CWMN body type for MNX. Other body types will fall inside the circle of recommended types, of course, while yet others will be works-in-progress or experimental prototypes. These other body types will, at first, be the work of voluntary subgroups.
Something Borrowed, Something New
MNX is not starting from scratch: there’s a lot of useful work to draw on. We should strive to avoid reinventing the wheel. Rather than repeat some of the areas where MNX intends to innovate (e.g. styling, profiles), this section looks at some of the key existing work that we can make use of today.
MEI does a great job of organizing musical texts with their attached metadata in an arbitrary structure (the “container” spoken of above). MEI’s approach to metadata overall is very thoughtful and more comprehensive than what is available in MusicXML.
MusicXML has a rich semantic vocabulary for CWMN proven by time and adoption.
MusicXML can cope with a wide range of conflicts between visual notation and the intended aural rendering.
The MEI CWMN schema has a simpler approach to organizing elements that is more friendly to a DOM-based approach.
Other projects including CMME and NeumesXML have looked at encoding neumes and mensural notation, and these should be examined for potential adoption or adaptation.
The best next step would be to create a family of spec documents that begin to capture the most significant ideas that we want to employ. These initial drafts will document concrete directions that can be discussed. They will follow the overall structure and formatting of other W3C specs in HTML, and will draw on W3C’s internal specification toolset ReSpec.
We propose that the initial emphasis be placed on two specifications:
The MNX container. MEI has already done a great job developing the concepts needed for a good approach to a container, and not much new ground may need to be broken. The key challenge is that the notion of body type will need to be worked out and elaborated.
MNX support for CWMN encoding. We expect the CWMN encoding piece to draw on the terms, concepts and vocabulary of MusicXML, but reworked to better support current encoding and programming best practices. A key part of this work will be the development of a family of CWMN document profiles that can significantly lighten the development burden. To leverage MusicXML adoption, the specification will be developed together with automated software to convert existing MusicXML files into MNX CWMN files.
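To give a flavor of what the automated MusicXML-to-MNX conversion software mentioned above might begin with (a sketch only: the target `mnx`, `head`, `body` and `part` element names are invented here for illustration, since no MNX schema yet exists), a converter could start by lifting a MusicXML document’s metadata and part list into a notation-neutral container:

```python
import xml.etree.ElementTree as ET

# A minimal MusicXML 3.0 score-partwise document to convert.
MUSICXML = """<?xml version="1.0"?>
<score-partwise version="3.0">
  <work><work-title>Prelude</work-title></work>
  <part-list>
    <score-part id="P1"><part-name>Piano</part-name></score-part>
  </part-list>
  <part id="P1"><measure number="1"/></part>
</score-partwise>"""

def musicxml_to_mnx_shell(musicxml: str) -> str:
    """Lift metadata from a MusicXML document into a hypothetical
    MNX container. All MNX element names here are placeholders,
    not part of any published specification."""
    root = ET.fromstring(musicxml)
    title = root.findtext("work/work-title", default="Untitled")
    part_names = [sp.findtext("part-name", default="")
                  for sp in root.iter("score-part")]

    # Build the notation-neutral container...
    mnx = ET.Element("mnx")
    head = ET.SubElement(mnx, "head")
    ET.SubElement(head, "title").text = title
    # ...and a CWMN body shell into which the measure-by-measure
    # content would later be converted.
    body = ET.SubElement(mnx, "body", {"type": "cwmn"})
    for name in part_names:
        ET.SubElement(body, "part", {"name": name})
    return ET.tostring(mnx, encoding="unicode")

print(musicxml_to_mnx_shell(MUSICXML))
```

The real work, of course, lies in converting the notational content of each part, but starting from the container keeps the metadata translation cleanly separated from the CWMN encoding itself.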
We expect another round of discussion on the CG list, naturally, but we wanted to take this opportunity to share our thoughts so that we can begin together on the next, very exciting, phase of work.
Joe Berkovitz, Michael Good and Daniel Spreadbury
W3C Music Notation Community Group Co-Chairs
The W3C Music Notation CG met in Genius/Logos (Hall 9.1) at Messe Frankfurt during the 2016 Musikmesse trade show, on Friday 8 April 2016 between 2.30pm and 4.30pm.
The meeting was chaired by CG co-chairs Joe Berkovitz, Michael Good, and Daniel Spreadbury, and was attended by about 40 members of the CG. A complete list of the attendees can be found at the end of this report, and the slides presented can be found here.
SMuFL 1.2 update
Daniel Spreadbury (Steinberg, CG co-chair) presented a brief summary of the state of the SMuFL 1.2 development effort, with 30 issues currently open and an expected delivery date of no later than the end of Q3 2016.
There were no substantive questions or discussion raised by this update.
MusicXML 3.1 update
Michael Good (MakeMusic, CG co-chair) presented a brief summary of the state of the MusicXML 3.1 development effort, with 37 issues currently open and an expected delivery date of no later than the end of Q3 2016. Michael also explained the basic procedure of how issues will be resolved using the GitHub issue/discussion/pull request workflow, and offered help on behalf of the co-chairs to any member of the CG who is daunted by or has questions about this workflow.
James Sutton (Dolphin Software) expressed concern about the noisiness of the emails generated by the GitHub issue/pull request workflow. He suggested that the ideal solution would be to provide a series of opt-ins/opt-outs for different kinds of automatic emails, if possible.
ACTION: The co-chairs agreed to investigate what possibilities might exist with their contacts at the W3C.
Werner Wolff (Notengrafik Berlin) prefaced Joe’s presentation on how user stories should inform the capabilities of the new notation representation by asking how we as a CG should engage the wider music-writing community and get to the core of what music notation really means.
James Ingram suggested that the requirements identified by the MPEG-sponsored effort to define a new representation for music notation should be included in our user stories.
ACTION: James Ingram to produce a link to the MPEG user stories.
What should the scope of the effort be?
The attendees discussed what kinds of musical works should be considered in scope for the capabilities of a new representation format. Joe cited a couple of examples, including George Crumb’s Makrokosmos (with its circular staves) and Frédéric Chopin’s Prelude no. 15, the “Raindrop” Prelude (with note values that appear to exceed the time signature), as works that might be sufficiently complex that some aspects could be considered out of scope.
James Ingram and Werner Wolff were both of the opinion that all scores of all kinds should be representable in the standard. Zoltan Komives (Tido) argued that certainly if Chopin is considered out of scope, the scope is too narrow. Christof Schardt (PriMus Software) argued that the current version of MusicXML can represent the visual appearance of Chopin’s Raindrop Prelude quite adequately, by reproducing the techniques required in engraving programs like Sibelius and Finale to produce the desired appearance.
Werner Wolff raised the question of where graphical notation, as distinct from CWMN, can be considered to start. Does, say, a heart-shaped notehead constitute a graphical notation?
James Ingram suggested that it would be possible to use a combination of a purely visual representation (e.g. SVG) and a purely aural/temporal one (e.g. MIDI) to make it possible to capture these different semantic dimensions.
Ron Regev (Tonara) observed that there is a conflict between making the standard all-encompassing on the one hand and easy to work with on the other, mirroring a point made in Joe’s slides that the tighter the semantic restrictions, the easier the format generally is to work with.
Joe presented the idea of encoding profiles for documents in the new notation representation, as a means of expressing the intent behind the encoding and informing a consuming application (and end user) what capabilities an application must have to be able to work with that particular document.
Thomas Weber suggested that the “menu” approach taken by the various kinds of Creative Commons license, presenting content creators with a set of checkbox options for what kinds of uses are permitted and prohibited, might be an approach to how an encoding profile could be made. Joe suggested that in fact each individual profile might be more like one of the checkboxes in the CC licensing set-up process.
Jan Rosseel (Scora) pointed out that if these profiles are going to work, they will need to be enforced in the editing applications used to author the content as well as in the documents themselves.
Zoltan Komives explained that profiles are a core part of the MEI framework, where they are known as customisations, and summarised their role as a contract between the producer of the data and the consumer of that data.
Joe went on to present some suggestions about how the new notation representation could be architected, including the idea of cleanly separating semantic, visual styling, and performance (or playback) data, much as the semantic and visual dimensions of web pages are separated into HTML and CSS. He also proposed that following the DOM approach makes it easy to create interactive experiences driven from symbolic music representations. Joe demonstrated this with a toy application that uses a combination of MusicXML data, jQuery, HTML, CSS, and Noteflight’s embeddable MusicXML renderer to produce a simple music theory quiz in a few dozen lines of code.
Note that there is not yet any public, online version of the software shown here.
Adrian Holovaty (Soundslice) asked whether the proposal for CSS-like description of visual aspects of notation would actually use CSS, or a new language? Joe responded that it would make sense to borrow some of the CSS entities directly (e.g. color) but that there would be a lot of work to do in defining entities that make sense for music notation (e.g. dimensions might want to be expressed in stave units rather than in, say, pixels or points).
Zoltan Komives commented that music notation is a means of describing art with art, adding that Tido has found CSS to be insufficient to describe the visual aspects of music notation, and has already made some progress in defining a new language that attempts to do this. Joe asked Zoltan if he could share any observations about the unsuitability of CSS.
Action: Zoltan to prepare some comments for the CG about Tido’s experiences with using CSS to style music visually.
James Ingram presented the idea that the visual dimension can be thought of purely in terms of space, and the performance or aural dimension can be thought of purely in terms of time: in his view, everything is either time or space. This seemed to be a controversial view among the attendees of the meeting, with Alexander Plötz asking whether information about the forces required to perform a work (e.g. labeling one of the staves as being played by a flute) would be considered “space” or “time” in James’s division of responsibilities, to which James replied that it would be “space.”
Thomas Weber commented that he felt it would be necessary to extend the DOM in the same way that SVG has done in order to make the kinds of high-level interactive experiences outlined by our user stories possible; in particular, Thomas expressed concern about how to handle the complex relationships between different entities, e.g. to ensure that if you edit the duration of one note in a bar, this may well have consequences for other notes in the same and indeed other bars. Music is not as cleanly hierarchical as other standards or types of content. Michael suggested that XPath or other similar technologies might be useful to help link separate entities together and move towards solving this problem.
Adrian Holovaty expressed concern about using DOM programming to achieve these interactive user stories because this approach implies that the music notation representation is transmitted in full to the client’s computer, which may have implications both for performance and security (e.g. rights management). Adrian explained that although Soundslice uses MusicXML for the representation of the music, it is transmitted to the end user’s browser by way of an intermediate format. He expressed concern that developers and rights holders alike might find obstacles and objections to this approach.
Thoughts on scope and feasibility
As the meeting drew towards its close, the attendees returned to the discussion of the scope of what the CG can hope to achieve.
Werner Wolff appealed for keeping the scope as broad as possible, while recognising that the music industry is small in comparison with other industries, and resources (time and money) are comparatively scarce. However, he did not want the CG’s work to immediately head to the lowest common denominator and leave many niches of musical expression on the outside.
Reinhold Hoffmann (Notation Software) countered that the CG’s work must be market-driven, based on what is feasible from a time and effort perspective, and geared towards the needs of consumers; in other words, a pragmatic approach.
Christof Schardt argued that the new representation format must break compatibility with MusicXML in order to be able to solve the big problems. Michael responded that he agreed that breaking changes would be necessary, but cautioned against making breaking changes solely on the grounds of preferring the elegance of a new solution. To minimise the effort required of applications and technologies that already support MusicXML, if a use case is already adequately met by an existing capability of MusicXML, the CG should not be in a hurry to throw that capability away purely because we have come up with something more elegant.
The co-chairs thanked the attendees for their participation in the meeting, which closed with a drinks reception generously sponsored by Newzik.
Manfred Knauff, Apple
Dominique Vandenneucker, Arpege / MakeMusic
Jan Angermüller, self
Ainhoa Esténoz, Blackbinder
Sergio Peñalver, Blackbinder
Gorka Urzaiz, Blackbinder
Brenda Cameron, Cambrian Software
Dominik Hörnel, capella software
Bernd Jungmann, capella software
Wincent Balin, Columbus Soft
Christof Schardt, Columbus Soft
James Sutton, Dolphin Computing
Hans Jakobsen, Earmaster
James Ingram, self
Michael Good, MakeMusic
Thomas Bonte, Musescore
Mogens Lundholm, MusicXML-Player
Bob Hamblok, neoScores
Aurélia Azoulay, Newzik
Pierre Madron, Newzik
Raphaël Schumann, Newzik
Reinhold Hoffmann, Notation Software
Martin Marris, Notecraft Services
Joe Berkovitz, Noteflight
Werner J Wolff, Notengrafik Berlin
Thomas Weber, Notengrafik Berlin
Werner Eickhoff-Maschitzki, Notengrafik Eickhoff, Freiburg
For the past 3 years we have had a MusicXML community meeting at the Musikmesse fair in Frankfurt. This year we will have another meeting: the first meeting of the W3C Music Notation Community Group in Europe.
The meeting will be held on Friday, April 8 from 2:30 pm to 5:30 pm at the Logos/Genius conference room in Hall 9.1. As in past years, we plan to have a 2-hour meeting followed by a 1-hour reception sponsored by Newzik.
Our proposed agenda is to discuss the group’s progress on MusicXML 3.1 and SMuFL 1.2, and review the notation use cases that we expect to guide the group’s future work. We welcome your suggestions on changes or additions to this agenda.
Please sign up on our Google form at http://bit.ly/1U1SUIP if you plan to attend the meeting. This will help ensure that we have enough room and refreshments for everyone.
You will need a Musikmesse ticket to attend the meeting. These cost 25 euros and are available online at www.musikmesse.com.
We look forward to seeing you in Frankfurt!
Michael Good, Daniel Spreadbury, and Joe Berkovitz
W3C Music Notation Community Group co-chairs
Participants and potential participants in the W3C Music Notation Community Group met in Anaheim, California on January 23, 2016 during the annual NAMM show.
Ryan Sargent from MakeMusic photographed the attendees. From left to right they are:
Daniel Spreadbury, Steinberg (co-chair)
Michael Good, MakeMusic (co-chair)
Joe Berkovitz, Hal Leonard/Noteflight (co-chair)
Martin Marris, Notecraft
Joe Pearson, Avid Technology
Kazuo Hikawa, Crimson Technology / AMEI
Deryk Rachinski, CCLI
Michael Johnson from MakeMusic and Yasunobu Suzuki from Roland also attended, but left before the photo was taken at the end of the meeting.
The agenda for the meeting included discussions of:
An introduction and background about the formation of the Community Group, led by Joe Berkovitz
SMuFL 1.2 updates, led by Daniel Spreadbury
MusicXML 3.1 updates, led by Michael Good
A group discussion of what’s next after SMuFL 1.2 and MusicXML 3.1
Introduction and Background on the Community Group
Joe Berkovitz started the meeting with a brief introduction and background on the formation of the Community Group.
The World Wide Web Consortium (W3C) is an organization that creates specifications, and can offer many resources to assist in their creation. The W3C is also the author of many other specifications relevant to music notation applications, such as CSS, HTML5, the SVG graphics format, and Web Audio.
Moving development of MusicXML and SMuFL to a consortium provides an open playing field and open governance for developing support for additional use cases beyond MusicXML’s initial focus on exchange and archiving. It is great to have MakeMusic and Steinberg’s contributions of MusicXML and SMuFL, and Michael and Daniel’s continued participation as co-chairs. The consortium provides an opportunity to re-examine things and make things better for everyone.
The specifications from the W3C are produced by Working Groups and are called Recommendations. We are on a different path at the W3C, called a Community Group. Community Groups have open membership and no membership cost. One does not need to be a W3C member to be a participant in a W3C Community Group. Community Group processes are more informal than Working Group processes. Whereas Working Groups produce Recommendations, Community Groups produce Community Group Reports.
Ideally the output of this Community Group will make a transition to a Working Group at some point. Costs are an important issue for the music notation community, and the W3C is working on more cost-effective methods of joining the consortium, such as single-specification membership.
Evan Brooks mentioned that it is brilliant to have both MusicXML and SMuFL in the Community Group as that provides critical mass for music notation standards. Jeremy Sawruk agreed, saying this was fantastic for developers.
SMuFL 1.2 Updates
Daniel Spreadbury discussed what is being planned for SMuFL (Standard Music Font Layout) 1.2. The major proposed updates are 4 to 5 dozen new characters, and some updates to the metadata layout for features like cutouts. These proposals are all in GitHub as issues with a v1.2 label. We will discuss the issues in GitHub and then proceed to update the repository. Anybody in the Community Group can submit a pull request on GitHub, but only the co-chairs can merge the pull requests into the mainline for either SMuFL or MusicXML.
We welcome all feedback on the specification for things that are not clear. Evan Brooks has already helped in this regard by pointing out inconsistencies between the SMuFL specification and the Bravura font that have been cleaned up in both SMuFL and Bravura.
Possible additions beyond SMuFL 1.2 include more dedicated support for tablature, and improving metadata to specify what glyphs are present in a font in an indirect way that is independent of changing font technology.
This was followed by a lot of discussion on building and using SMuFL fonts:
Font Creation. Martin Marris wants to create a new font and asked how to create the SMuFL metadata. Daniel replied that there are scripts available for both FontLab and FontForge to assist with this. Both Daniel and Martin use FontLab. Daniel found that he needed to use Adobe Illustrator to get the stem corrections right and then bring them into FontLab.
Testing New SMuFL Fonts. Testing new SMuFL fonts is tricky, since existing applications have at best incomplete SMuFL support that doesn’t fully meet testing needs. Joe Berkovitz suggested that the group publish the HTML testbed for SMuFL that Noteflight developed earlier in the SMuFL process, which should help with testing.
SMuFL Font Profiles. SMuFL contains thousands of glyphs. It doesn’t make economic sense for all fonts to support all glyphs. Jeremy suggested some sort of “minimum viable profile” for SMuFL. This was discussed earlier in the SMuFL process and the decision at the time was not to provide a profile, but this could be revisited. Note that while SMuFL arranges glyphs into categories, the categories are not developed with profiling in mind. Infrequently used symbols are combined with frequently used symbols within a single category. Evan noted that having a minimum required profile would still not remove the need for an application to have some sort of backup in place in case the current font does not have a given character, but it might reduce the frequency of problems.
Families of Fonts. With older music technologies, fonts like Maestro and Opus comprised several different families with different names (e.g. Maestro Percussion, Opus Special) for different symbols. Martin asked if this was something that new fonts should consider doing. Daniel advised against breaking a SMuFL font into two or more parts, while recognizing that keeping everything in one font requires more modern font technologies such as OpenType or AAT. Larger fonts do increase the need for updates to the entire font when individual glyphs are changed, in which case a user would need to keep an older version of the font for use with existing documents and projects in case the appearance changes. However, this can happen whether or not a font is split up.
Web Resources. There are still some open questions about how to best package SMuFL fonts as web resources.
SMuFL and MusicXML. Questions came up about a possible SMuFL namespace for MusicXML, and how integrating SMuFL more closely into MusicXML reflects the push and pull between the semantic and graphic roles in MusicXML. This led us into our next topic, a discussion of MusicXML 3.1.
MusicXML 3.1 Updates
Michael Good discussed what is planned for the updates in MusicXML 3.1, and the motivation for a MusicXML 3.1 release.
The discussion the group just started about the role of semantics and graphics in MusicXML is one of many structural issues with music notation markup that the group will want to consider in the future. However, the MusicXML community needs an update sooner than we will resolve these deeper issues. The focus is on bug fixes, minor features, and expanded symbol support.
The primary goal of the MusicXML 3.1 release is to keep interoperability alive right now at the same level as MusicXML 3.0 provides. In particular, with the upcoming release of more applications with SMuFL font support, we need support for more SMuFL glyphs.
Michael’s goal is to get a MusicXML 3.1 update out within the next few months. This is likely still not to have support for every single SMuFL glyph. Nor does it mean that each element with a list of possible glyphs will support every variant as a MusicXML schema enumeration. That may happen in some situations, but in others, elements will be able to designate a specific SMuFL glyph in a semantics-free way, referring to the glyph’s unique SMuFL name rather than a code point.
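As a sketch of how such a designation might look (hypothetical at this stage; the actual MusicXML 3.1 design was still under discussion), an element could carry the glyph’s canonical SMuFL name in an attribute while using a catch-all semantic value:

```xml
<!-- Hypothetical sketch: a notehead whose exact appearance is not
     covered by an enumerated MusicXML value, identified instead by
     a SMuFL canonical glyph name rather than a code point. -->
<notehead smufl="noteheadCircledBlackLarge">other</notehead>
```

This keeps the schema enumeration small while still letting a document reference any of the thousands of SMuFL glyphs unambiguously.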
MusicXML 3.1 will also fix many documentation bugs. Those will likely be among the first issues addressed, after converting the MusicXML licensing language to the W3C Community Group licensing language.
Like SMuFL, MusicXML has moved to a Git repository on GitHub. Contributors need to join GitHub and follow the Git feature-branch and pull request workflow.
Evan suggested that MusicXML 3.1 should address the issue of whether the endpoints of grouping symbols (such as 8va octave-shift elements) are defined to be inclusive or exclusive of the last member. Michael responded that this was a good idea. The co-chairs encouraged Evan to join the community group as a participant, create a GitHub account, and add this issue to the MusicXML repository.
What’s Next After SMuFL 1.2 and MusicXML 3.1
Joe Berkovitz began the discussion of what’s next after the short-term SMuFL and MusicXML releases. As a group we need to know what is in scope and out of scope in order to make progress. Otherwise there can be endless raising of esoterica, with no way for people to reach a shared understanding of what is important and in scope, and what is not.
To this end, we are creating a music notation use case document. The initial draft is on the group Wiki. The document is organized by user roles. At this point we are trying to be as inclusive as possible in collecting user roles and user actions. Eventually we will need to decide what’s in and what’s out, but not yet. Collection is an important step in the process. Joe read several of the use cases from different user roles currently in the document to provide a flavor for the range of issues under consideration.
The group returned to the discussion of semantic and/or visual focus raised by Evan earlier. Joe mentioned that one example of a way to separate the visual from semantic is by using CSS, rather than the mix of elements and attributes that is currently in MusicXML 3.0. The use of CSS classes could remove a lot of redundant formatting information that is currently in MusicXML files. CSS media profiles and queries could help with responsive design for music notation.
As with web documents, not all formatting needs to follow a class in a style sheet – one-off style attributes could be used in elements when needed. The types of formatting used would also likely differ from what CSS provides, such as positioning for notes. Jeremy suggested this could lead to an “MSS”, or music style sheet.
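A purely hypothetical sketch of what this separation might look like (MusicXML 3.0 defines no class attribute and no style sheet mechanism; the class name and style properties below are invented for illustration):

```xml
<!-- Hypothetical sketch only: MusicXML 3.0 has no "class" attribute
     and no external style sheet mechanism. A matching "MSS" rule
     might read:
       .cue { font-size: cue; stem-direction: up; }
     so that the note element itself carries only semantics: -->
<note class="cue">
  <pitch><step>G</step><octave>5</octave></pitch>
  <duration>1</duration>
  <type>eighth</type>
</note>
```

In MusicXML 3.0, by contrast, presentation details of this kind are repeated as attributes on each affected element.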
There are problems that CSS has not solved that are important to music notation, such as pagination. This is currently a big topic in the EPUB community. The W3C digital publishing group is looking at this – an instance of possible future collaboration between the Community Group and W3C Working Groups.
The current use case document has a technical requirement section that is similar to developer-specific use cases. One such case involves what happens when a user clicks on a note in a web application. A more hierarchical model that moves notes into a containing chord object (similar to a NoteRest in Sibelius ManuScript, or an entry in Finale) could make this type of interaction work more easily.
Currently applications have to map MusicXML into an interactive structure suitable for the application’s needs. Previous versions of MusicXML deliberately did not design for this use case, but times have changed. Applications may still need to do this mapping, but wouldn’t it be easier if the mapping were more natural? This is just one example of where the Community Group can take a fresh look at things to help move digital music notation standards forward.
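As a rough illustration of the kind of hierarchical model discussed above, here is a minimal sketch in Python of a chord object that contains its notes. All of the class and function names are invented for illustration; none come from MusicXML or any existing specification.

```python
# Hypothetical sketch of a hierarchical event model, in the spirit of
# a NoteRest in Sibelius ManuScript or an entry in Finale: the chord
# owns its notes, so interactions resolve to one containing object.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    step: str      # pitch step, e.g. "C"
    octave: int    # scientific octave number

@dataclass
class Chord:
    duration: int                            # in divisions of a quarter note
    notes: List[Note] = field(default_factory=list)

@dataclass
class Measure:
    events: List[Chord] = field(default_factory=list)

def event_at(measure: Measure, index: int) -> Chord:
    """A click handler can resolve directly to the containing chord."""
    return measure.events[index]

m = Measure(events=[Chord(duration=2, notes=[Note("C", 4), Note("E", 4)])])
assert len(event_at(m, 0).notes) == 2
```

With a flat note list, as in MusicXML today, the same interaction requires the application to first reassemble which notes belong to which chord.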
This month we plan to begin working on MusicXML 3.1, a short-term update to the MusicXML format to add greater coverage of musical symbols, along with targeted bug fixes and feature enhancements. We will also start work on SMuFL 1.2, a short-term update to the SMuFL standard.
We will be using the W3C’s MusicXML and SMuFL repositories on GitHub as the focus point for this work. If you do not already have an account on GitHub, please sign up for a free account at https://github.com. This will let you fully participate in the discussions. Once you sign up, it would be great if you could enter your GitHub ID and a brief summary of your music notation interests at the group’s Wiki page for contributors.
If you have used Git before, this will likely be familiar. If not, the terminology may be new, but the ideas are pretty simple. We will be using Git’s feature branch workflow. Each issue will be worked on in its own named “feature branch” that branches off of the main GitHub branch. That main branch is called gh-pages in the W3C repositories. The feature branches will have names related to the issues.
Once the work for each issue is completed, there will be a request to pull the feature branch into the main gh-pages branch. This pull request serves as an opportunity for anybody following the issue to review the solution and propose changes. Once the group reaches agreement on the review, the changes will be merged from the feature branch into the main branch.
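For those new to the workflow, the whole cycle can be tried locally in a throwaway repository. This is only a sketch: the branch name, file, and commit messages are invented, and on GitHub the final step would happen through a pull request rather than a local merge.

```shell
# Create a throwaway repository whose main branch is gh-pages,
# as in the W3C repositories
git init demo && cd demo
git config user.email "you@example.com" && git config user.name "You"
git checkout -b gh-pages
echo "spec" > spec.txt
git add spec.txt && git commit -m "Initial spec"

# Work on one issue in its own named feature branch
git checkout -b issue-42-octave-shift     # branch name is hypothetical
echo "clarify octave-shift endpoints" >> spec.txt
git commit -am "Clarify octave-shift endpoints"

# After review (on GitHub, via a pull request), merge the feature
# branch back into gh-pages
git checkout gh-pages
git merge --no-ff issue-42-octave-shift -m "Merge octave-shift fix"
```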
The main gh-pages branch in each repository will always reflect the latest reviewed, working version of the MusicXML schemas and the SMuFL specification and metadata files, respectively. The feature branches will be used for work in progress.
We have set up the Git repositories and the contributors mailing list so that the mailing list will get notifications of new issues, pull requests, and merges. This will only be part of the work going on at GitHub. The best way to follow and participate in the work is to “watch” the MusicXML and/or SMuFL repository once you have a GitHub account.
Git and GitHub may seem a little complicated the first time you use them. However they have become very popular for open source software development projects, including W3C projects like those at the Audio Working Group. Git and GitHub provide structure and transparency for software projects, so you can always see what changes were made, who made them, when they were made, and why.
We look forward to getting to work on the first W3C Music Notation Community Group updates for these important standards. Please let us know if you have questions about the work process. In the meantime, work will also continue on the use cases for a longer-term music notation format standard. See you on GitHub!
Happy New Year everyone! We are happy to announce that thanks to the work of Joe Berkovitz and NAMM staff, we will be having a W3C Music Notation Community Group meeting during the upcoming NAMM Show in Anaheim, California.
The meeting will be on Saturday, January 23 from 10:30 am to 12:00 noon in the Hilton Oceanside room. The agenda will include:
Community Group status update
Use case document discussion
MusicXML 3.1 updates
SMuFL 1.2 updates
Please let us know if you have suggestions for additional agenda topics.
Since the meeting is in the Hilton rather than the Anaheim Convention Center, you do not need to be registered for the NAMM show in order to attend the meeting.
We apologize for the short notice about the meeting. This was our first time working with NAMM staff regarding a W3C music notation event, so we have been learning as we go.
The room holds 30 people. We expect that will be sufficient given the short notice and the attendance at the NAMM MusicXML meeting in 2012. For planning purposes it would be helpful if you could let us know if you plan to attend. Feel free to send an email to Michael or to respond on the blog or mailing list.
For those who can’t make it to NAMM, we do hope to have a meeting at Musikmesse in Frankfurt this April, as we have done with the MusicXML community the past three years. We hope to provide more advance notice for that event.
We look forward to seeing many of you at this meeting!
2015 has been a major year for music notation standards with the formation of the W3C Music Notation Community Group. We are now at 222 members, making this the fifth largest community group at the W3C. Thanks to everybody for your interest and participation!
We recently started work on a music notation use cases document. This is on the Music Notation Community Group Wiki, and participants in the Community Group may edit it.
There are a lot of use cases in this document already. The two things we need are more complete descriptions for the existing use cases, and coverage for any use cases we may not have included yet.
We think it is best if the use cases are edited by people who are either in the role a use case describes, or implementing solutions for people in that role. For some roles this is pretty easy. Lots of members of this list are performers, and composers, arrangers, teachers, and students are also easy to find.
Some roles are more specialized, though. The use cases for musicologists, for instance, do not have any descriptions yet. The use cases associated with education, accessibility, and convergence with web and EPUB technologies are also missing descriptions. If you are in one of these roles, or supporting people in these roles, we especially welcome your contributions to describing the use cases in more detail. Our headlines and one-paragraph descriptions are a starting point, but more detail will be needed to ensure these use cases are addressed effectively in the future.
This use case work is a first step toward an updated music notation format that builds on the success of MusicXML 3.0 for music document interchange and expands it to new uses. In the meantime, though, both MusicXML and SMuFL have some short-term needs that we want to start addressing later this month or in early January. These will be MusicXML 3.1 and SMuFL 1.2 updates.
MusicXML 3.1 will be focused on adding greater coverage for musical symbols, along with targeted bug fixes and feature enhancements. The goal is to maintain and improve MusicXML 3.0’s high level of document interoperability, without distracting from the longer-term work of the community group. You can see the current list of MusicXML 3.1 issues in the MusicXML GitHub repository.
Similarly, SMuFL 1.2 adds coverage for more musical symbols, and addresses other issues that have been raised by the SMuFL community over the past several months. You can see the current list of SMuFL 1.2 issues in the SMuFL GitHub repository.
We have added integration of the GitHub repositories into the public-music-notation-contrib mailing list so that group contributors will be notified of progress as the MusicXML 3.1 and SMuFL 1.2 issues are worked through in GitHub.
Thank you again for all your interest in this group. 2015 has been a year of transition and new beginnings for music notation standards. We look forward to 2016 when we expect the group to deliver its first updates to the MusicXML and SMuFL formats, and to start more extensive work on an updated web music notation format for the future.
Michael Good, Joe Berkovitz, and Daniel Spreadbury
W3C Music Notation Community Group Co-Chairs
Thanks to everyone for your thoughtful discussion of the agenda for the Music Notation Community Group. We have posted a summary of the discussion on the CG Wiki at https://www.w3.org/community/music-notation/wiki/Agenda_Discussion. In this post, the co-chairs would like to bring this discussion back to its effect on the short-term and long-term agenda of the group.
Our initial proposal for an agenda contained four short-term and three long-term projects:
Document music notation use cases
Build an initial MusicXML specification
Add support for use of SMuFL glyphs within MusicXML
Identify and fix any remaining gaps or adoption barriers in SMuFL
Improve formatting support in MusicXML
Build a complete MusicXML specification document
Add Document Object Model (DOM) manipulation and interactivity to MusicXML
Looking at the discussion, we saw a clear emphasis on the need for use cases, as well as the need to address foundational and structural issues in MusicXML. After reviewing all the responses, the co-chairs agree that these two areas deserve top priority, and that it makes sense to treat them sequentially. First we should focus on developing, refining and filtering a set of use cases for our notation format. After we have adopted a solid set of use cases, we can then turn to the foundational/structural issues in the standard with a much better shared understanding of our goals.
We believe document format discussions are best driven by use cases, expressed in terms of the needs and actions of different classes of users, as well as the nature and contents of the documents involved. Many of the discussions so far have been framed in terms of technology choices like MusicXML vs. MEI, XML vs. JSON, or IEEE 1599 vs. SMIL. We would like to begin re-framing these discussions to clearly identify user needs and document contents served by specific features in these technologies, rather than on whether we should pick technology A vs. technology B. Eventually we can shift to finding technical solutions that satisfy the use cases we deem worth addressing, in the context of this group’s charter.
Building an initial MusicXML specification met with some thoughtful pushback. Peter Deutsch proposed that the specification effort should wait until the format is redesigned in some key areas to be more easily specifiable. Others mentioned that it may not be wise to do short-term work that would then need to be redone based on long-term goals.
People asked what adding support for use of SMuFL glyphs within MusicXML meant, and why a font standard influences a music representation standard. Our feeling is that there is an immediate practical need to add more musical symbols to the MusicXML format. A MusicXML update that includes greater support for SMuFL’s expanded vocabulary of musical symbols will keep interoperability between applications high. We believe that this can be done with little risk of future rework based on long-term goals.
Identifying and fixing any remaining gaps or adoption barriers in SMuFL did not receive much feedback. There appears to be a feeling that the existing specification is in good shape, and mostly in need of incremental improvements in symbol coverage.
We therefore propose that our short-term work cover three main areas:
Document the use cases for music notation formats in a way that can help drive decisions about the structural and conceptual changes needed in the standard.
Create a very focused, short-term MusicXML 3.1 update that addresses the need for broadening MusicXML’s symbol vocabulary and fixes some documentation bugs. This update will avoid any areas that are likely to be redone if structural changes are made in the future. We already have an initial take on what would be included in a MusicXML 3.1 update on GitHub at https://github.com/w3c/musicxml/labels/V3.1.
Produce incremental SMuFL updates as needed for enhanced symbol coverage.
We propose that these three items proceed in parallel. Joe will lead the use case documentation, Michael will lead the MusicXML 3.1 update, and Daniel will lead the SMuFL updates. We anticipate that the use case work will receive most of the group’s focus.
Our plan is for the co-chairs to put together an initial use case draft on the community group Wiki to illustrate what exactly we are thinking of when we talk about use cases. We plan to get that out within the next week or two.
Please let us know your thoughts on this proposed agenda update on the public-music-notation-contrib mailing list.
As a next step, the co-chairs will be working to adapt our initial agenda to reflect the ideas raised during this discussion. We hope to get that out to further discussion this week and see if we can start coming to a group consensus on how to proceed.