TPAC/2015

TPAC2015 takes place Monday through Friday, 26-30 October 2015, in Sapporo, Japan.

On Wednesday, 28 October, we hold a "Plenary Day." As we did for TPAC2014, we will organize most of the day as "camp-style" breakout sessions. We invite you to add to or comment on Breakout session ideas. The people at the meeting that day will build the Session Grid, drawing from ideas socialized in advance and new ideas proposed the day of the meeting.

  • Questions? See the FAQ.
  • See also past TPAC meetings.

Wednesday 28 October Plenary Day Schedule

Plenary sessions take place on the 1st floor, in the Conference Hall of the Sapporo Convention Center. Breakout sessions take place in nearby rooms.

Minutes of the W3C Technical Plenary day are available.

  • 08:00-08:30: Registration
  • 08:30-08:35: Welcome - Jeff Jaffe
  • 08:35-09:15: Panel discussion on the future of the Internet and the Web. Moderator: Jeff Jaffe; panelists: Tim Berners-Lee, Vint Cerf, Jun Murai
  • 09:15-09:30: Web Platform Incubator Community Group - Yoav Weiss
  • 09:30-09:45: Web Platform Working Group - Adrian Bateman
  • 09:45-10:30: Coffee break + last additions to Breakout session ideas + Breakout preparation (Ian Jacobs)
  • 10:30-12:00: Breakouts (see grid below)
  • 12:00-13:30: Lunch
  • 13:30-14:30: Breakouts (see grid below)
  • 14:30-15:30: Breakouts (see grid below)
  • 15:30-16:00: Coffee break
  • 16:00-17:00: Breakouts (see grid below)
  • 18:30: Reception

Breakout Schedule (session grid)

[47 sessions]

Each room below lists its sessions in four slots: 10:30-12:00, 13:30-14:30, 14:30-15:30 and 16:00-17:00.

1F 101 (capacity 20)
  • 10:30-12:00: HTMLCue (Nigel Megitt) - minutes - summary
  • 13:30-14:30: Web-Based IoT Platform Federation (Oleg/Yosuke) - minutes
  • 14:30-15:30: Web Authentication and Hardware Security WG discussion (Wendy Seltzer) - minutes
  • 16:00-17:00: https-transitional: securing "http" links (TAG) - minutes

1F 102 (capacity 20)
  • 10:30-12:00: Interledger Protocol (Stefan Thomas, Evan Schwartz)
  • 13:30-14:30: How blockchain could change Web-based content distribution (Shigeru Fujimura, Hiroki Watanabe) - minutes - summary
  • 14:30-15:30: Web Payments Architecture (Ian Jacobs) - minutes
  • 16:00-17:00: Interledger Protocol (repeat) (Stefan Thomas, Evan Schwartz)

1F 103 (capacity 14)
  • 10:30-12:00: Generic Sensor API (Tobie Langel) - minutes
  • 13:30-14:30: Web GPIO I2C (Koichi Takagi) - minutes
  • 14:30-15:30: Spatial Data on the Web meets Web of Things (Kerry Taylor, Ed Parsons, Phil Archer and others from the SDW Working Group) - minutes
  • 16:00-17:00: EPUB Zero ebook format (Dave Cramer) - minutes

1F 104 (capacity 14)
  • 10:30-12:00: iTLDs & presentation of them (David Singer) - minutes
  • 13:30-14:30: Identity Credentials WG aka verifiable attribution (Manu Sporny) - presentation - minutes
  • 14:30-15:30: JavaScript accessibility APIs (Cynthia Shelly) - minutes
  • 16:00-17:00: Distributing Web Apps across Devices (Mark A. Foltz) - minutes - summary

1F 105 (capacity 20)
  • 10:30-12:00: Web-based Signage (Kiyoshi Tanaka, Shigeru Fujimura) - slides (including result) - minutes - summary
  • 13:30-14:30: Cross-Device Sync (François Daoust) - slides - minutes - summary
  • 14:30-15:30: Echidna (W3C's auto publication system) (Antonio) - slides - minutes
  • 16:00-17:00: W3C Web API (Vivien Lacourba) - minutes

1F 107 (capacity 36)
  • 10:30-12:00: Network Interactions (Dom) - minutes - summary
  • 13:30-14:30: AMP & HTML (Chris Wilson) - minutes
  • 14:30-15:30: WICG (Yoav Weiss)
  • 16:00-17:00: Future of HTML5 (chaals) - minutes

1F 108 (capacity 36)
  • 10:30-12:00: Social Web (AnnB, Amy Guy, Tantek) - wiki, photo, minutes
  • 13:30-14:30: Open Data with local community (Taisuke Fukuno) - minutes
  • 14:30-15:30: Device APIs (BT, NFC, USB, etc.) and privacy/permissions (Jeffrey Yasskin) - minutes
  • 16:00-17:00: Web of Things mapping ideas based on GW model (Kazuo Kajimoto and Yoshikazu Ishii) - minutes

2F 201 (capacity 20)
  • 10:30-12:00: /TR redesign (fantasai) - minutes
  • 13:30-14:30: Web Platform Testing (fantasai + gsnedders) - minutes
  • 14:30-15:30: Testing for various devices! (Fumitaka Watanabe) - minutes
  • 16:00-17:00: High Dynamic Range Video (Mark Watson) - minutes

2F 202 (capacity 20)
  • 10:30-12:00: FoxEye, video processing (Chiahung Tai and Tzuhao Kuo) - summary
  • 13:30-14:30: Location-Based service (Qing An) - minutes
  • 14:30-15:30: Secure Comm with local network devices (Mark Watson) - minutes
  • 16:00-17:00: W3C Developers (Guillaume, Coralie) [no attendance]

2F 203 (capacity 6)
  • 10:30-12:00: ATSC (TV) (Bill Foote)
  • 13:30-14:30: CSS for paged media (screen & paper) (Shinyu Murakami) - minutes
  • 14:30-15:30: Additional funding sources for W3C? (Ann Bassetti)
  • 16:00-17:00: Standards for Personal Assistants (Debbie Dahl) - minutes

2F 206 (capacity 36)
  • 10:30-12:00: (empty)
  • 13:30-14:30: ARIA Future Architecture (Hakkinen et al.) - minutes
  • 14:30-15:30: CSS validation in the browser (Takeharu Igari) - minutes
  • 16:00-17:00: Business side of Verticals (Alan Bird)

2F 207 (capacity 36)
  • 10:30-12:00: HTML Testing, the new community group for application-oriented testing (chaals, Mike, Judy Zhu) - minutes
  • 13:30-14:30: Re-decentralize the Web + net (Ira, Csarven, Andrei) - minutes
  • 14:30-15:30: Machine-readable Rights OLE/ODRL (Ira) - minutes
  • 16:00-17:00: Web Annotation (Rob Sanderson) / Egocentric Architecture (Benjamin Young) - minutes

1F Waiting Room 2 (capacity 10)
  • 10:30-12:00: (empty)
  • 13:30-14:30: Webex, how is it going? (Nigel Megitt, Ralph Swick) - minutes - summary
  • 14:30-15:30: DataVis: Data visualization (Chunming) - minutes - join DataVis CG
  • 16:00-17:00: Digital Marketing (Chad Hage) - minutes



Aggregated Summaries

Network Interactions

The Network Interactions breakout aimed at understanding whether and how application developers might obtain better information about, and control over, the network on which their service operates.

The breakout reviewed the outcomes of the recent GSMA/IAB MaRNEW workshop and looked at various cases where this additional interaction could be applied: WebRTC optimization, adapting network usage to the user's data allowance, and overall optimization of radio usage.

While there was an emerging consensus that it would be useful to have a way to signal whether latency or bandwidth was more important for a given network request (from either the app or the browser perspective), the details of the network infrastructure needed to make this happen were left to future discussions at the IETF, and its applicability to the Web's current TCP-based protocol stack was unclear.
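
No concrete API was proposed in the session; purely as an illustration, such a per-request signal might look like a hypothetical option on fetch() - the networkPreference name below is invented, and nothing like it exists in the Fetch API:

    // Purely illustrative: a hypothetical per-request hint of the kind
    // discussed in the breakout. "networkPreference" is an invented option;
    // the Fetch API defines no such field.
    fetch('/video/segment-42.mp4', { networkPreference: 'bandwidth' }); // bulk media: favour throughput
    fetch('/api/chat/send', { networkPreference: 'latency' });          // interactive: favour low delay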

The broader discussion of how and when network operators would want to accommodate more specific requests from the application layer for control of, or information about, their networks remained inconclusive on a way forward.

FoxEye, video processing

There were a lot of valuable inputs during the FoxEye session. Kaku filed all the issues on GitHub for tracking. Please feel free to join the open discussion. Let's make the Web friendlier for video processing and computer vision.

1. Decouple VideoProcessor from Workers (#30): This issue was suggested by Mathieu. I will make a new API design that decouples the processor from workers. I think issue #31 might not be necessary; I believe we don't need a canvas for the processor case.

2. Using worklets (#32): WebAudio will move from AudioWorker to AudioWorklet, so we might align with Audio and adopt worklets. But it's still an open question; I will implement on workers first anyway. Let's talk more about this on GitHub.

3. Processing offline context (#33): After talking with padenot, he suggested we might take WHATWG Streams as the carrier for the offline processing case. This is also an open question so far; let's talk more about it on GitHub. In the Gecko implementation, though, we might use MSG MediaStream to implement the prototype and verify the concept first.

4. Elaborate the backpressure handling (#34) and Security Considerations (#35): We will elaborate on both in the spec.

5. Re-sample the video frame rate (#36): A new use case we hadn't considered yet. We thought this might be useful in the offline case, so we will try to figure out how to handle it in offline processing.
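
For context, the main-thread, canvas-based pattern that FoxEye aims to replace looks roughly like this; it uses only standard APIs and is not the FoxEye VideoProcessor proposal itself:

    // Today's canvas-based workaround: copy each video frame to a canvas,
    // then read the pixels back for processing on the main thread.
    const video = document.querySelector('video');
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');

    function processFrame() {
      if (video.paused || video.ended) return;
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      ctx.drawImage(video, 0, 0);
      const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
      // ... run vision/processing code over frame.data (RGBA bytes) ...
      requestAnimationFrame(processFrame);
    }

    video.addEventListener('play', () => requestAnimationFrame(processFrame));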


Cross-device synchronization

The session explored cross-device synchronization scenarios, including shared video viewing, lip-sync use cases, distributed music playback, video walls, cross-device animations, etc. These scenarios require mechanisms to (1) synchronize the local clock with an external one, (2) distribute a timeline, and (3) harness the media playback or animation. A pure JavaScript-based approach only works to some extent; proper cross-device sync needs native browser support. The Timing Object spec defines an API to expose cross-device sync mechanisms to Web applications. Discussion revealed that harnessing the media playback is not trivial as media rendering is often handled by the hardware itself, and that loose synchronization where e.g. a device temporarily runs out-of-sync could improve UI responsiveness in some cases. Work on the spec will continue in the Multi-Device Timing Community Group. Interested parties are invited to join the group!
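
A minimal sketch of what the Timing Object API looks like from a Web application, based on the draft spec; the API may change, and since no browser ships it natively a polyfill such as the Community Group's timingsrc library is assumed:

    // Shared timeline: position/velocity propagate across devices.
    const timing = new TimingObject({ position: 0.0, velocity: 1.0 });
    const video = document.querySelector('video');

    // React to timeline changes (seek, pause, rate change) from any device.
    timing.addEventListener('change', () => {
      const v = timing.query(); // { position, velocity, acceleration, timestamp }
      if (Math.abs(video.currentTime - v.position) > 0.3) {
        video.currentTime = v.position; // coarse re-sync; real code would also trim playbackRate
      }
      if (v.velocity === 0) video.pause(); else video.play();
    });

    // Any participant may update the shared timeline; peers follow.
    timing.update({ velocity: 0 }); // pause playback everywhere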

How blockchain could change Web-based content distribution

Shigeru and Hiroki presented the mechanism of blockchain and its potential for Web-based content distribution. They also demonstrated content use control via blockchain, and then opened the discussion. The main topic was the business model, that is, what the incentive is to keep maintaining the blockchain in this case. The conclusion was that blockchain applications are a very new topic and that a Community Group would be the adequate venue, because better understanding is required first.

Requirements for Embedded Browsers needed by Web-based Signage

The session started with a presentation of the features of Web-based signage and its requirements for the browser. Although embedded browsers are useful as Web-based signage terminal devices, the signage services still have specific requirements. API ideas such as an auto-pilot API and a rich presentation API were shown, and the proper WGs where such APIs could be considered were discussed.
In addition, the results were provided to the Web-based Signage Business Group and fed into its review of a draft charter for the newly proposed Web-based Signage WG.

HTMLCue

This well-attended session discussed the idea of a new kind of text track cue that would allow any fragment of HTML+CSS to be used to modify the display of a target element in synchronisation with a media element's timeline. The idea originated with Opera and has also been discussed in the TTWG and WHATWG: a cue whose payload data is a fragment of HTML, whose default onenter() handler would attach that fragment to the target element (e.g. a video) to update the captions or subtitles, and whose onexit() handler would somehow remove or clear it. This approach is used in dash.js to present subtitles today, but the only browser-supported cue type is VTTCue, which has to be overloaded to do this, and the extra VTT style attributes are ignored. A cleaner solution would be a supported generic cue type that doesn't include an initial list of VTT-specific styles.
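
To make the current workaround concrete, here is a minimal sketch of the overloaded-VTTCue approach; the overlay element and cue payload are assumed for illustration:

    // Overload a "metadata" text track with VTT cues carrying HTML payloads.
    // A native HTMLCue would make the onenter/onexit steps the default behaviour.
    const video = document.querySelector('video');
    const overlay = document.getElementById('caption-area'); // assumed overlay element
    const track = video.addTextTrack('metadata'); // "metadata" tracks are never rendered natively
    track.mode = 'hidden'; // cue events still fire in this mode

    const cue = new VTTCue(5.0, 10.0, '<p class="caption">Hello, <b>world</b></p>');
    cue.onenter = () => { overlay.innerHTML = cue.text; }; // unsanitised - one of the risks discussed below
    cue.onexit = () => { overlay.innerHTML = ''; };
    track.addCue(cue);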

Previous discussions have suggested that this approach could have unintended consequences and that the content would need to be sanitised; for example, it would need to be clear that the changes do not create new navigation contexts, and there may be risks associated with running scripts embedded in the HTML or loading external resources such as images - those risks were not enumerated.

Different views were expressed, not all in agreement:

  • this can be done already using the current text track interface and VTTCue, by setting the text track's kind attribute to "metadata" and handling onenter() with bespoke JavaScript to attach the result of getCueAsHTML().
  • use of "metadata" tracks subverts the intent behind the label "metadata" when the purpose of the tracks is to display content.
  • a browser-native implementation not requiring JavaScript to attach the HTML payload to a target element would give browsers an opportunity to optimise display rendering to achieve the required cue timing, for example by pre-rendering.
  • for a JavaScript metadata-track-based solution, the browser may need to expose the user's preferences for showing or not showing subtitles so that non-native JavaScript implementations know what to do.
  • a native implementation could allow browsers to use default player controls: bespoke/custom subtitle handling is generally likely to go alongside custom player controls.
  • a generic solution would make it easier to use formats other than VTT. For example, it is easier to translate from TTML to HTML than from TTML to VTT; conversely, it should also be easy to implement VTT by translation to HTML. This would move the effort for browser implementors away from handling VTT and allow more agility in changing file formats, since file parsing and processing into HTML can be done in JavaScript.
  • to handle the risks, a sandbox approach could be used, similar to what is available for iframe - or indeed by simply using an iframe and its sandbox directly (see the sketch after this list)
  • browsers should not be expected to auto-load external resources like images from HTML attached in this way, because that could create a privacy issue, e.g. it would inform the image source server of where in the related video the user is.
  • but also in opposition to that: browsers should be expected to auto-load external resources attached in this way, since the origin of the source data can be considered 'trusted'.
  • we should not try to subset html into a 'safe' set but rather build up a required and safe set from a blank starting point - we tried the former approach before and it caused problems.
  • the idea of HTMLCue is a good one from an architectural perspective.
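
A sketch of the iframe-based sandboxing idea mentioned in the list above; the caption-area element and cue variable are assumed from the earlier sketch:

    // Render the cue payload in a sandboxed iframe so embedded scripts,
    // form submission and navigation are all blocked by default.
    const frame = document.createElement('iframe');
    frame.sandbox = '';      // empty sandbox list: most capabilities disabled
    frame.srcdoc = cue.text; // cue payload becomes the frame's document
    document.getElementById('caption-area').appendChild(frame);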

Two actions were noted:

  1. File a feature request to expose user preferences about subtitles to javascript
  2. Work on a prototype using the TextTrack kind="metadata" and highlight any limitations or issues discovered

Other next steps include summarising the HTMLCue proposal in a clear document.

Webex - how's it going?

This was a feedback-gathering session to understand the experience, particularly of working groups, since W3C moved from Zakim to Webex for audio calls.

A variety of issues were identified with the new system, some of which can be resolved through best practices, others of which Ralph offered to look at offline. However, there is unlikely to be a significant amount of work from Cisco to address problems with Webex, since they are working on a replacement.

Issues gathered covered:

  • Call initiation problems, including initiation requests not being met and accessibility problems with the user interface
  • Audio quality problems for example steady degradation over time for users in some locations
  • Participant monitoring - the ability to query who is making noise
  • Caller identification, especially when dialling in from a telephone (name identification is all that's needed, not number identification)
  • Speed of administration to handle issues when they arise during meetings
  • Calls being disconnected
  • Minimum technical requirements are onerous - there's a dependency on Java
  • Call administration - the need for a host to be present to initiate a call
  • Requirement to explicitly end a meeting
  • Best practice documentation - is it okay to use the whiteboard feature?

One available but not widely used mitigation for issues that would otherwise require staff input, such as administrative tasks, is that Chairs can have an MIT Webex account for the purpose of beginning and ending meetings or setting up ad hoc meetings for group business.

Distributing Web Applications Across Devices

There are a number of device-centric APIs in various stages of development, including the Presentation API, Web Bluetooth API, Web NFC API, and Sensors API. This session described potential use cases for Web applications that use nearby devices to distribute input, output, sensing and control, and how to improve interoperability among these APIs to make those applications possible.

The session discussed the potential for creating a new class of Web applications that can be distributed among multiple devices and user agents, instead of executing within the context of a single device/user agent. The motivation is the set of upcoming device-centric APIs mentioned above.

The wide ranging discussion touched on several topics, including:

  • For some of the scenarios in the presentation, there is a need to discover a remote input device and attach it to an existing browsing context. The Presentation API doesn't seem like the right vehicle, so another capability may be needed.
  • There was a lot of discussion about the "Exploded App" use case, where an application running on one user agent is "split", with output and input sent to alternative devices. Several alternatives were brainstormed for how this could be done.
  • There is one concept where the single DOM splits into two or more DOMs that communicate through messaging. This is more in line with the Presentation API (see the sketch after this list).
  • There is another concept where the DOM is kept on one device, but is styled in two different ways depending on the screen type. For example retaining the mobile view on a smartphone and a large screen view on a bigger display.
  • Finally, there is a third concept where the entire DOM is packaged and sent to another display. Some mechanism to keep the DOMs in sync would be necessary.
  • There was agreement that sharing the DOM between two user agents introduced a lot of complexity, and it may be simpler to create a separate document with a specific mechanism for synchronization.
  • In early iterations of CSS, there was a concept of styling the same DOM with alternative stylesheets, which could be applicable to this scenario.
  • The MediaScape project is developing technology for multi-document applications that would be relevant to this idea.
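
As a concrete illustration of the messaging-based approach above, a controller page using the draft Presentation API might look roughly like this; 'remote.html' and applyRemoteState() are assumed application-specific pieces, not part of the API:

    // "Split DOM + messaging": the controller and the presented page hold
    // separate DOMs and keep state in sync over a message channel.
    const request = new PresentationRequest(['remote.html']);

    request.start().then(connection => {
      connection.addEventListener('connect', () => {
        connection.send(JSON.stringify({ action: 'play', position: 0 }));
      });
      connection.addEventListener('message', event => {
        applyRemoteState(JSON.parse(event.data)); // app-defined helper (assumption)
      });
    });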

