Audio Description work moved to DAPT
Our previous work on “ADPT” has now been absorbed into a new Recommendation Track specification called DAPT, being worked on by the Timed Text Working Group (TTWG). The latest Editor’s Draft is at https://w3c.github.io/dapt/ and the TTWG recently approved its publication as a First Public Working Draft. When the URL for that is available I’ll post it. TTWG has also agreed a working mode in which any changes made to the Editor’s Draft are published as a new Working Draft, at least until the specification moves to Candidate Recommendation, so the two should stay closely synchronised.
DAPT stands for “Dubbing and Audio description Profiles of TTML2” and incorporates most of what was in ADPT. The Requirements were published as a separate document at https://www.w3.org/TR/dapt-reqs/; given the large overlap between the requirements for creating audio description scripts and those for creating dubbing scripts, it made sense to combine the two into a single specification. Having worked on it as Editor for a while now, alongside my co-Editor, Cyril Concolato of Netflix, I think it still does!
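To give a flavour of what that means in practice, here is a minimal sketch of a single timed script event expressed in plain TTML2. It is illustrative only: DAPT defines additional vocabulary and constraints in the Editor’s Draft that are not shown here, and the timing and text are invented for the example.

```xml
<!-- Illustrative only: one timed description event in plain TTML2.
     DAPT adds its own vocabulary and constraints on top of this;
     see the Editor's Draft for the normative details. -->
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div begin="10s" end="13s">
      <p>A red car pulls up outside the house.</p>
    </div>
  </body>
</tt>
```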
Next steps
During the Working Draft stage, we can make substantive changes in response to wide review feedback, which can come from anyone, especially you! This stage also provides a patent exclusion opportunity, in case anyone has IPR in this specification that they want to flag.
Our goal in producing this specification is to make it easier for everyone to write, exchange, archive and play back dubbed or described visual media, across the whole chain: scripting, recording and mixing.
So please think about whether the specification meets your needs, and if it does not, let us know. That can be by email or, even better, by raising an issue on the GitHub repository. The ways to provide feedback are listed in the “More details about this document” section right at the top of the document.
After that…
After the Working Draft stage the next step is Candidate Recommendation, in which we will invite people to implement the specification and tell us about it. We will be producing a test suite against which we can verify that those implementations meet the specification, and logging implementations so that we can demonstrate that we have met the bar for moving to W3C Recommendation.
Who is supporting this?
The Editors work for the BBC and Netflix, and we have been talking with many others in the industry too. Nobody has yet told me they think this is a bad idea! If you would like to participate and support this, or even if you can see the benefit of the work but can’t commit much time right now, why not tell the group?
What are the big questions now?
Several issues marked as “question” are open at the moment, covering aspects such as referenced versus embedded audio, audio encodings, and what extended support for SSML might be useful.
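To illustrate the first of those, here is a sketch of the two audio models using TTML2’s existing audio vocabulary; DAPT may constrain or extend this, and the file names, timings and content below are invented for the example.

```xml
<!-- Sketch of referenced vs embedded audio using TTML2 vocabulary.
     All names and content are invented; DAPT may constrain this model. -->
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <head>
    <resources>
      <!-- Embedded audio: the recording travels inside the document. -->
      <data xml:id="clip1" type="audio/wave">(base64-encoded audio here)</data>
    </resources>
  </head>
  <body>
    <!-- Referenced audio: points at an external recording. -->
    <div begin="10s" end="13s">
      <p><audio src="recordings/event1.wav"/>A red car pulls up outside.</p>
    </div>
    <!-- Embedded audio: refers to the data element in the head. -->
    <div begin="20s" end="24s">
      <p><audio src="#clip1"/>She opens the front door.</p>
    </div>
  </body>
</tt>
```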
One question I have is how much support the editing process might need. Creating a script can take a while: do people need to be able to add markup showing the work in progress, or what they need to come back to? Does that markup need to be standardised?
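As a purely hypothetical illustration of what such markup could look like, editorial state might be carried as document metadata, for example via TTML2’s generic ttm:item mechanism. Nothing like this is currently in the draft, and the item name and value below are invented.

```xml
<!-- Hypothetical only: not in any draft. One conceivable way to mark an
     event as work in progress, using TTML2's ttm:item metadata element.
     The name "workflowStatus" and its value are invented for illustration. -->
<tt xmlns="http://www.w3.org/ns/ttml"
    xmlns:ttm="http://www.w3.org/ns/ttml#metadata" xml:lang="en">
  <body>
    <div begin="10s" end="13s">
      <metadata>
        <ttm:item name="workflowStatus">needs-review</ttm:item>
      </metadata>
      <p>A red car pulls up outside the house.</p>
    </div>
  </body>
</tt>
```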
Another is whether we need to define different classes of implementation: one for support of audio features and one for support of the script (text) transcription, translation and adaptation. Does it make sense to introduce a stronger distinction between audio description features and dubbing features, or do some people want to use the audio recording and mixing capabilities that we need for audio description within their dubbing workflows?
I really want to take part!
Great! If you can, please join the newly chartered Timed Text Working Group, but if not, your contributions are still welcome. I’m happy to talk this through as Chair of TTWG and of this Community Group – you can email me to start a conversation in private, or email this community group, or reply to this post to do so more publicly.