W3C

– DRAFT –
Interop 2023

14 September 2022

Attendees

Present
dholbert, dom, fantasai, flackr, foolip, gregwhitworth, jgraham, kadirtopal, karlcow, leobalter, miketaylr, miriam, rachelandrew, rego_, tantek
Regrets
-
Chair
Philip_Jägenstedt, Chris_Harrelson
Scribe
dom, jgraham

Meeting minutes

foolip: <presentation> https://docs.google.com/presentation/d/1f3Z7KA1ztauPxUH4HV5vr8Qt4EC1q2kAe2DTnW2rA00/edit?usp=sharing&resourcekey=0-V0dL1h_XfEp1-NX06xCXpg

foolip: Interop 2023 will be formed in the image of Interop 2022

foolip: Evidence of web developer sentiment (e.g. via surveys) is used as input, as is usage data, e.g. from use counters. Bugs filed on browsers also indicate that an area is worth looking at.

Slideset: https://lists.w3.org/Archives/Public/www-archive/2022Sep/att-0005/Interop_2023_TPAC_breakout.pdf

[Slide 3]

foolip: Each participant can decide what's important to them, and then we require consensus, i.e. non-zero support and no objections

foolip: State of CSS was valuable input in 2022 and is running again soon

[Slide 4]

[Slide 5]

[Slide 6]

[Slide 7]

[Slide 8]

foolip: A summary of the focus areas from 2022: subgrid, new viewport units, color spaces & functions, and others.

[Slide 10]

foolip: Can see current scores for those.

[Slide 11]

[Slide 12]

foolip: 2023 will follow a similar shape. We have a detailed timeline. Proposals open tomorrow, with a one-month period for submitting them. Expect most to come from people in the standards space, but we're also open to external proposals. The second half of October will be spent making sure proposals are in good shape. We will decide which proposals have consensus in November; we might have to combine proposals to make a good set of focus areas. In December we make sure we have agreement on the exact tests. Next year we do the work of fixing things and execute on the metric.

[Slide 13]

foolip: How to make a proposal? There's a template on the GH repo you can use. You need an overall rationale for why to include the feature. It could be e.g. sites working around an issue, which shows it's causing a problem for web developers

[Slide 14]

[Slide 15]

foolip: We only intend to work on stuff that's already on the standards track. Expect it to be mostly WHATWG/W3C specs, but it could be other standards bodies. Non-specced things are out. Things that are in incubation are only in scope if they move onto the standards track.

foolip: Tests need to be selected.

[Slide 16]

foolip: After submitting, expect additional questions. Getting a proposal into good shape will require taking that feedback until the end of October. It's not useful to make a proposal that won't meet the bar of consensus. If there are lots of proposals, it might require additional prioritisation.

foolip: Now a discussion

heycam: Interop 2022 had points allocated to research. Are we still trying that?

foolip: Yes, it was omitted from the presentation, but there's also a template for that. We require more clarity upfront about what the investigation actually is. E.g. with viewport units for 2022 we didn't make a concrete task list for how scoring would work, and progress has been slow. We think we're at 50% for that one now, but haven't confirmed with the subteam to update the score.

heycam: So you also want proposals for research areas?

foolip: If we know which areas are causing the most problems for developers, and it's something that's not well specified then it's a good idea to make an investigation effort. Could be spec problems, or missing test infrastructure. Would like to be able to treat it as a focus area next time around.

emilio: Viewport units in particular we should work on in 2023, with a way to test the dynamic viewport, zoom, etc. We know what the behaviours are and where we want to end up, but we don't have the test infrastructure.

emilio: I hope to propose that.

heycam: Is that a testdriver problem?

karl: We discussed this. If there's no automation possible we can't make a focus area.

flackr: It was mentioned that we should only have things that are specced, but we have gaps in specs; are those OK?

foolip: Yes, that sounds like a good candidate for an investigation area. That's the case for the existing investigation efforts. In some cases there might be a mix of good tests and gaps. We don't have a specific plan for that scenario, but we can see which proposals we get.

gregwhitworth: What's the stage of the spec that we expect it to be in for interop? Also, is there a fork of this effort that's specifically about spec effort? E.g. for container queries we got a lot of requests for the feature; can we use the same approach?

chrishtr: e.g. we don't have resources for this year, but we want to do it next year?

gregwhitworth: Yes

gregwhitworth: e.g. scoped custom element registries. Could we leverage this to get the spec work prioritised in 2024?

chrishtr: Encouraging people to work on closing the gaps could be in scope for an investigation issue

jgraham: doing spec work is explicitly not in scope
… if you want something to be in scope for next year, you can use this to rally efforts towards an interop push
… this isn't a process to overtake WG priorities

gregwhitworth: It seems like features like subgrid and container queries also needed spec work.

plh: At the next breakout we'll talk about webdx, which might address this concern.

<foolip> Yes, and it had been shipping in Firefox for a few years already.

dom: That might be a way to help WGs prioritise work

heycam: By the time that subgrid and container queries were taken as focus areas they were in a good state.

foolip: To answer the question about spec maturity: for WHATWG everything is a Living Standard, so anything with a PR goes. For W3C we require at least a FPWD, so it's on the Recommendation track. Something only in a CG is out of scope.

foolip: New things are not in scope. Can be grey areas. If you have a proposal and you're not sure, please reach out.

foolip: I also share the urge to use a process like this for spec work, but there isn't consensus and there are risks to that. We don't want to commit to successfully speccing something

flackr: There's been a lot of tension about what's testable, e.g. mobile is hard to test. Are we going to emphasise development of features in test infra, or is manual testing in scope, or...?

jgraham: mobile testing is hard
… we focus on automated testing - we can't include manual tests
… the score is based on automated results from WPT
… test infrastructure is definitely in scope for investigation
… mobile testing needs specific hardware in CI
… we would welcome proposals to help solve that

<gregwhitworth> browserstack partnership?

<tantek> +1 fantasai nudging WGs to prioritize and publish CRs makes sense

<astearns> no minuting?

fantasai: I agree with the idea of focusing on things that are on the Rec track. It would be helpful if interop areas were brought to the attention of WG chairs, so that we can also progress specs to CR to signal that we believe the spec is in a good place. Having that priority input to the CSSWG would be welcome. On the topic of things without test infrastructure, it seems unfortunate to totally exclude these things. Some specs that are important are difficult to test in an automated way, e.g. media queries or scroll snapping. Those are important features. We might want to pick one of these. There is a manual test harness in the CSSWG that could help.

fantasai: Consider having at least one manually tested feature.

<gregwhitworth> q later

foolip: I agree that it would be unfortunate to totally exclude manual tests. I've discussed it with karlcow, but I don't think there's consensus yet. If there are specific features with lots of manual tests you could submit them and we'll see if we can work out a way to include them. If it's important to devs we should find a way.

emilio: For those kinds of features where we don't have good automated tests in wpt, we should prioritise getting ways of testing them into wpt. All of these features are tested internally in browsers; we don't land these features with only manual tests. Sometimes that requires internal APIs. Some of these are not too hard to get an API for. For media queries we could have overrides to force certain features. I think we should work on getting these tests into wpt.

<gregwhitworth> all I'm hearing emilio say is to standardize the render tree :P

<Zakim> fantasai, you wanted to react to emilio to respond

fantasai: I don't oppose wanting to automate more things, but some stuff is really hard to automate in a cross-platform way

fantasai: Some stuff, like plugging in a high-gamut monitor, seems hard to automate.

emilio: For everything browsers can test internally, we should be able to share tests.

emilio: Hardware interaction might be a case where it's not possible to write a test. But overriding the media query values is a feature we already have in e.g. devtools. So an API to toggle the media query would help.

jgraham: we also get strong requests from developers to be able to override e.g. hardware integration for testing
… e.g. in Puppeteer
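
For illustration, a minimal sketch of the kind of media-feature override emilio and jgraham describe, using Puppeteer's page.emulateMediaFeatures (an existing API); the URL and the chosen feature values are only placeholders:

    import puppeteer from 'puppeteer';

    // Sketch: force media features from automation instead of touching the OS.
    // Run as an ES module (top-level await).
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.emulateMediaFeatures([
      { name: 'prefers-color-scheme', value: 'dark' },
      { name: 'prefers-reduced-motion', value: 'reduce' },
    ]);
    await page.goto('https://example.com/'); // placeholder URL
    // The page now observes the emulated values through matchMedia().
    const isDark = await page.evaluate(
      () => matchMedia('(prefers-color-scheme: dark)').matches,
    );
    console.log(isDark); // true under the emulated media feature
    await browser.close();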

<fantasai> fantasai: But have you actually correctly implemented the feature, if you are never testing whether the hookup through the OS drivers actually works?

flackr: WebDriver injection points always involve some kind of emulation. But a lot of these things are cases we can test using that approach.

emilio: e.g. zooming scenario tests don't test at the OS level.

<fantasai> fantasai: That's fine for testing below that hookup, and also for regression testing, but at some point you need to test end to end

foolip: Let's have proposals and see what makes sense for the concrete proposals.

gregwhitworth: Is there a quantitative way to prioritise proposals for voting?

foolip: We will use things like use counters. Some survey data is quantitative. I would say the overall method isn't to produce a sorted list of proposals; we always have to apply human judgement. It's also consensus-based, so we don't have a list except what everyone can agree to put into the metric.

gregwhitworth: Awesome to see this effort.

chrishtr: Does anyone have a proposal in mind they'd like to ask about?

gregwhitworth: Things like the Reporting API; we have a list, but not sure if they all meet the criteria

tantek: High level question is which are in CR, which aren't?

tantek: Better chance to get proposals accepted in CR

gregwhitworth: Some of it doesn't seem likely to be for this effort.

<astearns> near the top of my list is P3 support in WebGL canvases, but that should have been part of the color topic for 2022 (where the WebGL tests were not included)

<discussion about dir pseudoclass>

<foolip> astearns: We have talked about how we'd expand existing areas, for cases like this.

gregwhitworth: Also declarative shadow dom

emilio: Interaction between :dir() pseudoclass and shadow dom is interesting.

chrishtr: This sounds like a grey area where a submission might motivate getting the spec work done.

bkardell: We have lots of tests, but need to resolve some spec issues. I would support it as a proposal.

<fantasai> https://github.com/whatwg/html/pull/7424
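
As a rough sketch of the interaction emilio raises, one can probe whether a node inside a shadow tree matches :dir() when the direction is set on the host; this does not assert which answer is correct, since that is the open spec question:

    // Hypothetical probe; browsers may currently disagree on the result.
    const host = document.createElement('div');
    host.setAttribute('dir', 'rtl');
    const root = host.attachShadow({ mode: 'open' });
    const inner = document.createElement('span');
    inner.textContent = 'example';
    root.append(inner);
    document.body.append(host);
    // Does directionality propagate across the shadow boundary for :dir()?
    console.log(inner.matches(':dir(rtl)'));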

chrishtr: For declarative shadow DOM there's only one browser implementation. Research to gather dev sentiment could be useful even if it doesn't end up as a proposal. Part of the point of the effort is to help us focus on things that matter. It also helps focus on getting things across the finish line.

miriam: My team has been working on CSS polyfills. There's a question about which features are useful for implementing polyfills. We don't want to have to reimplement the cascade or parsers. I don't know how the features that would help those cases fit in.

<miriam> specifically `@property` would make a big difference for polyfills
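
For context, a minimal sketch of the capability miriam points at: @property (or its JS equivalent, CSS.registerProperty) gives a custom property a type, initial value, and inheritance behaviour, so a polyfill can lean on the engine's cascade instead of reimplementing it. The property name here is made up:

    // Register a typed custom property (equivalent to an @property rule).
    // '--accent-hue' is a hypothetical name for illustration.
    CSS.registerProperty({
      name: '--accent-hue',
      syntax: '<number>',
      inherits: false,
      initialValue: '0',
    });
    // The engine now parses, animates, and cascades --accent-hue as a number.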

kadirtopal: Ad-hoc prioritisation sometimes works, but focusing on what developers really need is important. We want to actually get things into developers' hands. Going to talk more about this in the developer experience session.

heycam_: One area I think we might have interop problems with is SVG. People are sharing lists of bugs. That's quite a wide area and I don't know how to focus it more. Can we make a top level focus area that broad?

bkardell: We did some "bags" of things this year, e.g. forms. Would like to see one covering other embedded content: SVG & MathML.

heycam_: SVG coverage in wpt has been poor and I want to improve it, but it's been hard to prioritise.

https://wpt.fyi/results/svg?label=master&label=experimental&aligned shows ~4000 test assertions

chrishtr: If there are lots of tests, we could consider that an area. e.g. for forms we saw lots of tests, but also failures.

bkardell: Sometimes browsers get 80% but they're different 80%.

gsnedders2: SVG is inconsistent on how many tests there are for each area e.g. a lot of tests for path parsing, but some areas have little coverage.

foolip: If you think SVG is an issue file a proposal and we can figure out the details.

ydaniv: There's huge demand for SVG. Images are also a problem, e.g. features like the loading attribute. It's important to get this 100% across browsers. AVIF shipped in Safari, which could be a chance to get servers to start transcoding to AVIF. Bugs can hurt performance.

Minutes manually created (not a transcript), formatted by scribe.perl version 192 (Tue Jun 28 16:55:30 2022 UTC).
