W3C

– DRAFT –
Improving Web Advertising BG

19 January 2021

Attendees

Present
ajknox, apascoe, AramZS, blassey, bleparmentier, bmay, charlieharrison, dialtone, DiarmuidCriteo, dinesh, ErikAnderson, eriktaubeneck, GarrettJohnson, gendler, jonasz, jrobert, jrosewell, Karen_, kris_chapman, lbasdevant, marguin, Mike_Pisula_Xaxis, mjv, mserrate, pbannist, pl_mrcy, wbaker, weiler, wseltzer
Regrets
kleber
Chair
Wendy
Scribe
Karen

Meeting minutes

<seanbedford> +present

Wendy: Welcome, folks
… give people a moment to join
… Looking at agenda for today's meeting
… agenda curation and introductions
… roundup and updates on proposal status
… how to track and support proposals that may move out into standards track
… introduction to TEETAR from Arnaud
… and from Don Marti, publisher reporting use cases
… Let's see if we can move things today
… and any other business
… items they would like to see on this or future agenda that you want to introduce here
… any other business or agenda notes for today?
… We have regrets from Michael Kleber (google)
… and a proposal that he talk about plans around Turtledove, Sparrow and a range of related proposals
… make sure that comes back to the agenda
… Anyone new to the call?

Brendan: Hi, represent eyeo today

Sam: hi I'm from DuckDuckGo

Roundup/updates on proposal status, how to track and support

Wendy: welcome Brendan and Sam
… On the roundups and updates on where proposals are
… I wanted to raise this to the agenda
… as a question I had heard
… from a few people; how do we track things and how do we see things moving along onto standards track

<wseltzer> https://w3c.github.io/web-advertising/dashboard/

Wendy: or how to suggest that they move onto the standards track
… We have seen a lot of proposals in the repository
… I'm aiming to capture them all in the dashboard at this link

<wseltzer> https://github.com/w3c/web-advertising/

Wendy: and in the readme of the Github repository
… if you look at that, there are quite a number of different proposals that we've discussed
… if we want to see those move forward as web standards
… in W3C, that means proposing that they move to a chartered working group
… after some period of incubation
… and either move to an existing WG whose charter they fit
… or to a new working group
… with a new charter that would be proposed to the W3C membership; reviewed, and if approved, launched as a new group
… So we know there are some proposals currently incubating in the Web Platform Incubator CG
… some in Privacy CG
… others currently in individual repositories
… that might think of migrating to a more formal incubation
… I wanted to ask whether proponents or editors on any of these
… wanted to give us an update
… or in turn, whether those who have been watching and participating, had particular requests for update
… where we could make sure we call out to those who might be able to share more information

Brian: there is a large number of proposals outstanding
… that have dependencies on TD
… what is proposal to go onto standards track that would incorporate those other things?

Wendy: I'm hearing that both as a procedural question and a substantive question
… Procedurally, it's gather info into a scope
… that describes what it is we're trying to do
… and what the success criteria would be
… whose participation is needed
… to make this successful; what do we need to see there
… and present as a charter
… let me quickly find the guide

<wseltzer> https://www.w3.org/Guide/process/charter.html#creation

Wendy: W3C guidebook

Brad: There are two options
… one is to be adopted by an existing WG; and the other is to charter a new WG
… but there are other proposals that could be adopted by existing groups

Wendy: yes, thank you Brad

Wendy: In W3C there are 30+ Working Groups each with a scope of work
… if there is work closely related to an existing group, it would be appropriate to bring it there
… on the more substance side of what goes into that scope
… that is where we, in this group, can be helpful along with work in one of the incubation communities
… of pulling together the cluster of supporters of the work
… to figure out where is this going to happen
… who are the people, organizations, editors
… who will be interested in seeing this work move forward
… Ideally, as you have been discussing in issues and in calls
… is to start to see a cluster of individuals who are interested in moving the work forward
… And I think both the Privacy CG and the Web Platform Incubator CG
… have calls and ad hoc meetings
… where sub-groups can help to take that work on an identified common interest further forward
… Anyone can start the work to propose a charter and W3C Team are happy to help

Brian: thank you

Wendy: regarding TD specifically, Michael Kleber said he will raise the standards questions next week

Wendell: I would like to start off by setting up the end game here
… It's important given how long we have been working at a rapid cadence
… and find a way to get it into orbit or abort the orbit
… Some worries that this is such a big vision that it is not doable
… I want to thank you for teeing it up
… I want to make sure we have enough deliberation on it
… I am excited that Michael announced that the Google delegation will give us a plan
… We at Verizon Media are waiting to see what will actually become tangible
… there are a lot of proposals and theory, but there is only one builder
… I have not heard in this group that there are other browsers interested in finishing the proposals that Google has proposed
… It's incumbent upon the Google team to propose this technology
… I'm excited about Michael's paper, or notes
… the idea that there may actually be code running this in a year, that is exciting for us
… it takes about a year to get into market
… now is a good time
… Q3 and 4 is when revenue happens, so that is important timing
… I will cede the floor

Wendy: Standards in W3C are multiple interoperable implementations
… among the things we are looking at

<dinesh> +1 to Wendell's comments

Wendy: is are there multiple people interested in implementations
… and there is a space for experimental implementations to interest others who are building
… and we very much want to hear if there is potential for multiple implementations
… that will make for something that will become a technical W3C Recommendation

Aram: That matches up what I am hoping as well
… as we move this to standards; that may be what other browsers are waiting on
… why it's important to get it out of here
… and towards a place where implementation is being discussed and worked over

Wendy: Great; any other questions or comments at this point?
… If there are other proposals that people would like to hear about, please note here, or in email or Github so we can seek some follow-up there

Brad: One note
… I'm as eager as anyone to see this work in general progress to standardization and the WG phase
… Incubation is the right stage right now
… still being morphed and explored
… Incubation is right stage to be thinking about these things

Wendy: in some cases we might be looking for the right cadence of report back from the incubation
… so we can be among those gauging when to move forward
… this is one of several different groups
… talking
… Lots of different threads; want to make sure we are pulling back information to give input
… Incubation is an important phase in the process of W3C
… it is testing and experimentation and harmonizing potentially different ideas of what should be specified
… giving a chance for broad input, feasibility and interest
… so we've added that from the
… It's key to getting that testing
… and we neither want to rush things into standardization before they are ready, nor keep things in incubation for too long
… so that by the time it gets to the standards process, designs are done and it's at a yes/no vote
… Standards process in a working group is a real consideration of new inputs
… but it should not be starting from nothing
… I think incubation is the right place for lots of this work, but we also want to see when the egg is about to be hatched
… and give it right environment to continue its growth
… with that, we will move onto TEETAR

TEETAR

Basil: Arnaud could not make it
… I will make the presentation
… TEETAR is a proposal
… on how to test all of the proposals that are currently in discussion

<pl_mrcy> TEETAR Link: https://github.com/criteo/privacy/blob/main/TEETAR/README.md

Basil: I think it goes with the incubation and test
… for now
… as Wendy classified, two categories
… modules being...
… we can put into distinct categories
… we are working with users and now working with cohorts

<wseltzer> https://github.com/criteo/privacy/tree/main/TEETAR#module-by-module

Basil: Cohorts
… There is second module
… Bidding service location
… who is doing the bidding
… is it browser like TD
… or gatekeeper as we have proposed in Sparrow
… or Augury, dynamics
… where is the bidding happening
… a third one, close to second one is the ad generation
… same thing it happens on device on TD
… and on Sparrow
… and it is very clear in Dovekey
… ready to note
… last module that's important for this proposal
… is Reporting, Attribution and Machine Learning
… talking about aggregate reporting
… in Sparrow, in MURE
… all these proposals
… we feel that this should be tested as much as possible
… independently
… then work on the combination of this proposal and their input
… we think they should be tested
… if we test TD... it would be a huge amount of work
… complex, easy to make mistakes
… what if we cannot get performance, and preserve...
… maybe it's because of cohorts, or the bidding
… if we go into testing mode and test everything together
… it will be tough to test what is working or not
… testing module by module
… and for first test
… why do we have this preference
… First we want to test cohort module
… All the proposals I have seen do rely on cohorts
… It means we can discuss whether bidding service can be...
… we did not debate opportunity at user level
… but targeting at cohort level
… in all proposals, cohorts seems most likely
… They are even on this
… somehow the most @ part of the proposal, still a lot of questions
… what is the size of the cohorts; what is size of the information
… about display; so users are not exposed all the time to the same ads
… we do not know the impact of move from user-level to cohorts
… and with smaller publishers
… it is likely to have a bigger impact than on bigger publishers
… we want to work out what is working with cohorts
… there are other points
… that are still open
… we proposed to create cohorts built on cohorts, what we called meta-Interest Group
… bad name, but anyway
… Idea was to use that to target
… brand, maybe use or choose info from multiple web sites
… for instance, we got good feedback but not sure if it works
… again point is even on cohorts, the less controversial point, there are a lot of things to learn
… I feel
… this is my understanding
… maybe I'll be corrected by Chrome team
… Cohorts is the core of the proposal
… bidding is to prevent abuse
… if we were in a world where the threat model was one where everyone could be trusted to play by the rules
… we could stop; but there is a need for bidding service, reporting
… we feel that the real revolution on the ....the real changes is really we stop targeting user and start targeting cohorts
… This is the reason we are looking to start testing cohorts
… we think it's actually the big benefit
… from privacy standpoint; to have a more private but working
… wondering if I should let people ask questions now, or go into technical aspects of the spec
… no questions, so I will continue
… Just to explain what we propose on cohort testing
… what we propose is to ask the big SSPs
… who want to be part of this test
… to reserve small part of the audience, as in A-B
… for testing
… on cohorts
… whether some user will be targetable to DSP
… and actually doing this test
… would be managed on DSP site
… why we are proposing this is because it's the easiest way to do it at first
… we want it to be...have the minimum investment as soon as possible
… Cohorts will be on DSP
… as if they had not info on the user
… since it's a test, we don't really see why some people might cheat

<wseltzer> https://github.com/criteo/privacy/tree/main/TEETAR#the-technical-setup-for-cohorts-module-testing

Basil: this is why we think we start on the DSP side; the DSP answers with an ad, using only cohort information

<bleparmentier> https://github.com/criteo/privacy/blob/main/TEETAR/README.md

Basil: there is a big part to do on technical setup
… invite everyone to look on Github
… and see how exactly we propose this setup
… have SSP...only use cohort
… reporting is as it is today
… fully granular
… compute impact and how to bid on cohort
… in the open web and see what should be taken, changed, what is reasonable
… impact on the system given their sites
… are cohorts too big
… maybe go for a smaller cohort size; an idea we want to test
… no one is on the queue
… Either I've been unclear or too clear
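
As a rough illustration of the A/B split Basil describes (all names and the 5% traffic share are hypothetical, not part of the TEETAR spec), the DSP-side arm assignment and cohort-only feature filtering might be sketched as:

```python
import hashlib

TEST_TRAFFIC_SHARE = 0.05  # hypothetical: reserve a small share of auctions for the test arm

def in_cohort_test_arm(user_id: str, salt: str = "teetar-cohorts-v1") -> bool:
    """Deterministically assign a small, stable share of auctions to the test arm."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < TEST_TRAFFIC_SHARE

def features_for_bidding(bid_request: dict) -> dict:
    """In the test arm, bid on cohort-level signals only; drop user-level keys.

    The control arm bids as today, so reporting stays fully granular and the
    revenue impact of cohort-only bidding can be compared directly.
    """
    if in_cohort_test_arm(bid_request["user_id"]):
        return {k: v for k, v in bid_request.items()
                if k in ("cohort_id", "publisher", "placement")}
    return bid_request
```

The deterministic hash keeps a user in the same arm across auctions, which is what makes before/after revenue comparisons meaningful at small sample sizes.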

Wendy: Thank you, Basil

Don: I wanted to ask about the metrics
… that were chosen
… and I can understand the choice of revenue metric
… have you also thought about including a site engagement metric
… to show that cohort placed ads are not causing users to have shorter sessions or engagement with sites
… or that cohorts are having @

Basil: we proposed revenue
… user... we found it difficult to measure
… see if ad made them feel uncomfortable
… we are open to metrics such as this one
… happy to add it to the testing protocol
… if you would like to propose it

Wendy: we may have lost Don
… thank you for that answer

James: quick question on what sort of sample size would be needed
… good initiative to see success of these propositions

Basil: At first start with a very small one
… to have it working remotely well
… start with small sample size to check on clicks; then later measure sales, engagement on web site
… start with small one that we could measure with @
… as experiment goes by, as we build better cohort, go to a bigger sample
… and build more data

<dmarti> (thank you, reading notes but having intermittent Internet problems)

Basil: and have some time for actors to adjust
… it will take some time
… I would start small
… and then maybe ramp up once we start the thing

James: thank you
… sounds like there is an establish, refine, plan approach; interesting to see what sort of time frame
… thank you for explaining

Wendy: What input or engagement would you like to get from others on this call?

Basil: first, I would like to understand, who would like to run such a test
… and who feels starting with cohort testing is the wrong idea?
… we feel it's the best way to go forward
… but some people may oppose it
… like to hear any arguments against
… and if Google or SSPs interested to work on that
… on DSPs, who would be interested to run such a test
… Google Chrome team, like to get their feedback about starting with cohorts first
… do you feel this is right way to start testing, or start somewhere else
… and starting somewhere else is also a question for all the participants

Wendy: Anyone with feedback right now?
… thank you for putting this into a Github repository
… imagine you welcome issues and comments in the repo

Basil: we are happy to have feedback about this
… last point raised
… this will be a huge change to the industry
… we have only one year
… I think we will need to start testing quite soon
… and we have to start somewhere
… Please do say if you are interested, and if it's the wrong way to start the testing

Wendy: thank you for that proposal
… encourage others to followup with questions, comments, issues in the repo
… feel free to bring back to a future call if you have other items to raise

<dmarti> https://github.com/dmarti/in-browser-auction-publisher-issues/blob/main/reporting.md

Publisher reporting use cases

Wendy: Don, did you have sufficient Internet connection to speak about reporting

Don: we have wireless antennas on rooftops in high winds
… the previous session on publisher requirements was covering what publishers need to see from ad placement
… proposals

<wseltzer> https://github.com/dmarti/in-browser-auction-publisher-issues/blob/main/reporting.md

Don: and since that session, I've gone ahead to produce pull requests and put them into core documents on github
… Paul and I would like to go over the reporting use cases
… Paul has some intro

Paul Bannister, CafeMedia: can you hear me?

Paul: In terms of publisher reporting use cases, we focused our attention on impression reporting
… conversion measurement and back side of ads
… two interesting things here
… will matter a lot or not at all

Paul: Reason for that is because some of proposals for ad delivery
… allow for impression reporting immediately
… where TD does not allow for impressions to be recorded immediately
… that is a big separating point
… I'm interested to see Michael's notes on that
… if impressions can be reported instantaneously
… but if not recorded immediately and publishers have to go through a reporting API that will be more difficult
… second part is publishers need significant granularity
… delays and dimensions; noise injected
… those will cause a lot of issues for what publishers needs are
… which path will have impact on what we do
… let Don dig into the details

Don: The four use cases we are covering here
… think of them as being divided into the normal, contractual reporting
… and the reporting needed to resolve some kind of problem
… Paul mentioned need for reporting earnings, placements; need to report on revenue
… to those to whom you are contractually required
… that is an every day, normal requirement
… moving on to use case number two
… most of the problems that you might see in reporting
… or some of the problems in reporting are software problems
… could be everything from a mistake in your own JS on your site
… to a new version of a browser rolled out that is breaking the impression tracking on your site for whatever reason
… typical use case for ad ops is you are looking at your dashboard, a number falls off a cliff and you have to fix it
… make it clear where any kind of obfuscation or delay might be imposed
… we've got potential of imposing not just delay that is coded into the system
… but multiple instances of that delay
… might see impression tracking failed
… did a deploy, go back, deploy working code
… see if that fix worked
… if fix did not work, might need to revert fix
… you quickly see a delay of X minutes in reporting, and perhaps a 3X delay to do that actual cycle of
… notice the problem, correct the problem and see if that correction works,
… for sites with limited capacity or employees;
… we don't all have multi-browser testing capabilities with qualified teams
… Happy to take questions on the trouble shooting cycle
… Onto the next one
… Fairly simple day to day; it's A-B testing
… you'll get a variety of experiments with different layouts, diff code
… A-B has some of same latency requirements as trouble shooting
… not get to where it's supposed to be, but get it optimized
… same kinds of reporting issues associated with it
… Number four is kind of sensitive
… considering recent events
… the number that needs to be reported in category four is really simple; it's just the number zero
… that number needs to be reported very quickly
… use case goes like, publisher runs an ad
… for a particular advertiser that is not within the ad policy of that site
… could be ordinary out of policy activity, illegal, malware
… publisher gets report from concerned user or another advertiser
… what is this other ad we are running along side of
… publisher needs to say, yes, its out of policy, yes we fixed it
… and show fix within one news-rage cycle
… not well served by ad system if that reporting of having fixed problem is delayed
… or if their update to ad placement was incorrect and they continue to display the problem
… even if reporting comes back and says here's this number
… before we fixed was zero, after we fixed was zero
… that is still an important piece of reporting for sites to keep up their reputation and continue to be a viable place to get legit users
… and legit advertisers
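
The troubleshooting cycle Don describes turns on simple arithmetic: if aggregate reporting is delayed by X minutes, then noticing the drop costs one delay and verifying each fix attempt costs another, so the notice → fix → verify loop takes roughly 3X when the first fix fails. A minimal sketch (function name is illustrative, not from any proposal):

```python
def min_time_to_verified_fix(reporting_delay_min: float, fix_attempts: int = 1) -> float:
    """Lower bound on time to a verified fix under delayed reporting.

    One full reporting delay to notice the problem, plus one per fix
    attempt to see whether that attempt worked; a failed attempt (e.g. a
    revert) forces another full round.
    """
    return reporting_delay_min * (1 + fix_attempts)

# A 60-minute reporting delay with one failed fix plus a revert
# implies roughly 3 hours before the site is verifiably healthy.
```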

<jrosewell> dmarti: Thank you for highlighting the practical challenges the majority of smaller organisations face.

Don: happy to take other questions

Wendy: thanks, Don
… we have four minutes left
… any questions?
… I take it you are proposing these publisher use cases to help evaluate proposals we have seen and perhaps to spur other proposals
… looks like good input
… now I see people queuing up

Charlie: thanks for this document, it's pretty useful
… I agree that a lot of this only applicable when ad placement embeds sensitive cross-site info
… when the ad contains info from some third party

<AramZS> Should we push discussion of this to the top of next week's agenda? It seems like there is enough interest?

Charlie: otherwise you can use existing tools
… one question about A-B test experiments in particular
… whether that A-B test is sensitive or non-sensitive
… if a per-publisher experiment is tied to a user ID on one site and does not embed cross-site information with a global seed
… that seems like something that could be done, unless you are in the TD context
… if we need the A-B diversion to be consistent
… then we need to do something similar to TD
… can see diversion but cannot see what got rendered, so do something more like aggregate reporting
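
Charlie's point that a per-publisher diversion needn't embed cross-site state can be sketched as follows (function and seed names are hypothetical): seeding the hash with the publisher makes a user's group stable on one site while revealing nothing about their group on any other site.

```python
import hashlib

def ab_group(user_id: str, publisher: str, n_groups: int = 2) -> int:
    """Per-publisher A/B diversion.

    Because the hash is seeded with the publisher, the same user lands in
    independent groups on different sites; no global seed or cross-site
    identifier is required.
    """
    digest = hashlib.sha256(f"{publisher}:{user_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_groups
```

An advertiser-side experiment that must divert consistently across publishers would need a global seed, which is exactly the cross-site case Charlie flags as needing aggregate reporting instead.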

Don: main issue on sensitive advertising is which ad gets put in front of the user
… If there is a contextual placement of a problem ad
… typically publisher would have access to see, yes, I had an ad
… from puppykicker.com show up on my site and I can block that through normal methods
… question of sensitive placement
… when it happens inside a fenced frame that is exposed to reporting
… obfuscation or delay, then the publisher may find out about a problem placement from getting a screen shot from an angry user
… they need to resolve that based upon reporting at the time

Charlie: question more on A-B testing use case on a publisher site
… does user need to go into A group on multiple sites
… or fine if diversion is scoped per publisher

<angelina> most diversion from advertisers point of view is across publishers

Paul: good question, but may require more thinking on our side
… what things can be grouped; we can think about it some more

Angelina: just to add on

<AramZS> Gotta drop, would love to talk this through more next week if we can.

Angelina: this is great for the publisher side
… and makes me think about how to put things together
… the A-B testing across publishers

<pbannist> i have to drop also, thanks all

Angelina: making sure exposure is done consistently
… also think you should capture creative itself
… and understand what creative is being delivered
… sometimes an advertiser has an incorrect ad swapped in
… happens on either publisher or advertiser side
… that is important
… not to have creative served at wrong time or to wrong target
… or when a click tracker is served as an impression and an impression is served as a click
… that needs to be captured immediately
… if you are serving a million clicks, instead of a million impressions, that becomes a big financial issue

Don: swapping out of creative issue should be addressed...but see if document accurately reflects the creative swapping issue

Wendy: thank you for staying a few extra minutes
… I hear Aram asking to bring this back next time
… thank you and see you again on 26 January

<wseltzer> [adjourned]

Minutes manually created (not a transcript), formatted by scribe.perl version 127 (Wed Dec 30 17:39:58 2020 UTC).