2017-11-10-minutes

From Credibility Community Group

Agenda

Attendees

Present (around room to the left)
Sandro_Hawke, Manu_Vincent, Newton_Calegari, Aditya_Ranganathan, Reto_Gmür, Dan_Brickley, Wendy_Seltzer, Nick_Adams, Ed_Bice, An_Xiao_Mina, Dan_Whaley, Chris_Needham, Dave_Singer, Aviv_Ovadya, Jeff_Chang
Remote (in arrival order)
Jason_Chuang, Jenny_8_Lee, Dario_Taraborelli, Jon_Udell, Rebecca_Weiss, Sara_Terp, Tessa_Lyons, Amy_Zhang
Regrets
Chair
Sandro Hawke
Scribe
Dan Whaley


<axm-meedan> starting at 10a

<sandro> scribenick: dwhly

<danbri> https://www.w3.org/wiki/Credibility

Here and reporting for duty

<axm-meedan> hi

Welcome & Admin

First meeting of Credibility Community Group

scribe: Sandro will be running this meeting
... Many here were at session on Wed

<danbri> Weds notes (in agenda): https://www.w3.org/2017/11/08-cred-minutes

scribe: Some slides will be the same, some different
... Main difference in today's meeting is the opportunity for collaboration with platforms
... Towards the end of this session we'll get their input; the emphasis is dialogue

Ed: Distinctions... The Credibility Coalition was formed out of MisinfoCon
... we view the coalition as the key overarching org for this effort
... not abandoning previous momentum, keep existing participants engaged.
... elements of what we're doing are not purely technical
... user experience, etc.
... Will be an important part of success
... The CG will let us focus on key tech issues
... The CC will be around other things
... I also want to comment on common retort "Oh you want to create a truth standard for the Web"
... Beauty and genius of the web is that it's decentralized
... Reason why we are taking this on, Meedan as a founding org
... We've been working globally
... Truth is contextual
... Based on linguistic lenses
... Ideological lenses
... We want to build out framework for individuals, developers and others to contribute.
... we're not trying to build a truth standard
... Thrilled to be here.

Sandro: Looks like we have 9-10 remote people
... If you want to get on Queue type q+ here
... Say Scribe off if you want scribe to stop
... Lets move to intros

<Jenny8lee> (Can we see who is on the IRC channel?)

https://www.irccloud.com/pastebin/xJULHg2c/

<axm-meedan> LimeChat is a good app for the Mac folks

Introductions

<sandro> Sandro Hawke. MIT Technical Staff, W3C Staff. Decentralized systems, web development, semweb/linked data tech, W3C standards process. As society moves online, trust becomes increasingly important - I want to help social media be smart about trust.

Manu Vincent: From Climate Feedback, leading the Climate Feedback project

<reto> Reto: startup company https://factsmission.com/. Goal is to use linked data technologies to have more facts-oriented discussion. Not an authority (or federation) for truth but best practices similar to scientific discourse. An open source tool we're working on is twee-fi, which allows users to write ClaimReviews about tweets: https://twee.fi

Newton Calegari: NIC.br, working on a project related to fake news; we have general elections next year

<danbri> Intro: Dan Brickley. I work for Google (previously at W3C with Sandro & co). I run the schema.org project in collaboration with founders from other search engines (bing yahoo yandex) and a wider community,

<danbri> operating through W3C Community Groups and Github. Schema.org is super widely used in Web markup including News articles, organizations, TV/media. Last year we worked

<danbri> on a fact checking markup, and recently have been integrating indicators and improvements from the trust project.
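
The fact-checking markup danbri mentions is schema.org's ClaimReview type. Purely as an illustrative sketch (the URLs, names, and rating values below are placeholders, not from the meeting), a fact check published with ClaimReview in JSON-LD looks roughly like this:

```json
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://example.org/fact-checks/123",
  "author": { "@type": "Organization", "name": "Example Fact Checkers" },
  "datePublished": "2017-11-10",
  "claimReviewed": "Example claim under review",
  "itemReviewed": {
    "@type": "CreativeWork",
    "url": "https://example.com/original-article"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "worstRating": "1",
    "alternateName": "False"
  }
}
```

Search engines can crawl this markup to label a result as fact-checked; the claimReviewed and itemReviewed fields are what connect the review to the disputed claim and its source.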

<Aditya> Aditya Ranganathan. Berkeley Institute of Data Science, Public Editor. Helped develop a Scientific Critical Thinking Course at Berkeley. Interested in how citizen scientists can be used in the evaluation of news.

<SJ> SaraTerp, data scientist at AppNexus (online advertising exchange). Background in AI, have been working on belief analysis for a very long time and online credibility since I started crisismapping in 2010; got quite cross about the recent belief hacks in the US, have been working on potential mechanisms to track and push back on them since. In CredCo because I see it as a vital part of this system.

wseltzer: W3C strategy, hoping to help W3C be a good home for this community

NickAdams: Public Editor, we have a tool that formalizes this process

Ed: CEO at Meedan, been around since 2006. Collab software to make sense of global web.

<newton> Intro: Newton Calegari works for the Web Technologies Study Center at NIC.br, is working on a project which explores how fake news spread on the Web.

<NickAdams> Nick Adams, Ph.D. – Sociologist and Citizen Science Software Creator from UC Berkeley Institute for Data Science. Truth/trust are always social achievements. With Public Editor, we have a tool that formalizes this social process, allowing thousands of people to collectively read through and label the words and phrases within news articles, identifying mistakes and misinformation.

dwhly: CEO of Hypothesis; we develop annotation software that builds on the W3C Web Annotation model and can help provide scaffolding for credibility indicators.

<axm-meedan> I'm An Xiao Mina, director of product at Meedan and affiliate researcher at Berkman Klein Center for Internet and Society. My background is in design research, product and journalism and media studies. I've looked at misinformation and propaganda in China and have now turned a lot of my energies to the US. I'm interested in making sure we have a globally diverse and culturally specific process that understands the context in which credibility develops.

<dsinger> intro: dave singer, ac rep for apple, sees a societal problem but a difficult technical one, currently curious to see what can be done and where the group might go

Chris Needham from BBC: Here thinking about how they apply public service principles.

<cpn> Chris Needham, from BBC Research and Development. BBC is a news publisher on Web, TV and radio. News innovation groups, Public Service Internet project

Aviv: Been doing work in this area for a year and a half. At Columbia and Univ Mich, building tools.

JeffChang: Product manager at Google working on trust and credibility; has been involved in ClaimReview, worked w/ the Trust Project.

JasonC: Research scientist at Mozilla, HCI, NLP. Currently looking into online news and discovery.

Jenny8Lee: w/ Hacks/Hackers. Former NYT reporter, trying to make sure there's $ for this project

<JasonC> Jason Chuang. Research scientist at Mozilla. Background in computer science (HCI, ML, NLP). Previously, PhD dissertation on how people interpret and trust artificial intelligence algorithms. Currently looking into online content discovery and news engagement.

Dario: Dir of Research for WMF. Over last several years working on programmatic approaches for how wikimedians cite references. WikiCite and WikiData could be informative here.

<aviv> Aviv Ovadya. Doing work in misinfo space for over a year and a half, building off my background in computer science from MIT, as a researcher, engineer, and later product designer. Knight news innovation fellow at the tow center digital journalism at Columbia and directing a project at the new center for social media responsibility at university of Michigan. Building tools to track the reach of misinformation and junk news across various platforms.

<Amy_Zhang> Sorry all! My mic seems to not be working well. I am Amy Zhang, and a PhD researcher at MIT CSAIL. My expertise is in human-computer interaction, particularly how we can design interfaces to help end users use things like credibility signals to manage their social experiences and their information seeking online.

<judell> I'm Jon Udell, with Hypothesis, here representing the view (https://misinfocon.com/lets-use-the-annotated-web-to-coordinate-the-struggle-against-fake-news-1627ff4919a3) that open web annotation is the part of the tech stack we'll use to connect claims and ratings to specific statements in news stories.

judell: from Hypothesis, convinced that there is a common pattern that has to do w/ attaching claims and ratings to not just documents, but elements within those documents.

<judell> Why? To ensure that heterogeneous activities in the anti-info-disorder space are discoverable and interoperable.
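
The W3C Web Annotation model that judell and dwhly refer to lets such a review be anchored to a specific passage rather than a whole page. A minimal sketch (the URLs and quoted text are placeholders; the body simply points at a hypothetical fact-check resource):

```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "https://example.org/annotations/1",
  "type": "Annotation",
  "motivation": "assessing",
  "body": "https://example.org/fact-checks/123",
  "target": {
    "source": "https://example.com/original-article",
    "selector": {
      "type": "TextQuoteSelector",
      "exact": "the specific sentence being disputed",
      "prefix": "According to the report, ",
      "suffix": " and this was widely shared."
    }
  }
}
```

The TextQuoteSelector is what makes heterogeneous tools interoperable: any client that understands the model can locate the same sentence and display whatever claims or ratings are attached to it.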

Rebecca Weiss: Senior manager of data science at Mozilla

<rweiss> fun!

<rweiss> i'll come back on, but to continue the intro: we are interested in understanding how interventions influence browsing behaviors and understanding more about how people use the web

<dartar> I’m Dario Taraborelli (Wikimedia Foundation). You can learn more about WikiCite here: https://twitter.com/wikicite & https://meta.wikimedia.org/wiki/WikiCite

SJ: Data Scientist, background in AI and online credibility; I got cross about the last election. EU Theme project...

Tessa: Product Manager at Facebook. Trying to predict content that needs to be fact checked and send it to fact checkers. Trying to identify repeat abusers.

<axm-meedan> From Tessa: The remote text isn't working for me so I will post intro here. Tessa Lyons - Product Manager, News Feed Integrity - One of my team's projects is predicting content that should be fact checked, working with fact checkers to get that content reviewed, and using that signal to inform ranking and provide more context for people on Facebook. We're also interested in both first party and third party signals of integrity/credibility.

Sandro: Next item: An is going to go through slides on Cred Coalition.
... for 15 minutes

Credibility Coalition Progress

<sandro> https://bit.ly/credco_slides

axm-meedan: Cred Coalition started at Misinfocon
... out of a late night coffee session
... we recognized the need for a common set of indicators
... evolved into: an effort to define these standards
... We also had meetings in SF and NYC
... through this started to clarify goals
... got a Knight Prototype Fund grant

<ed-meedan> Ed Bice: CEO at Meedan, a hybrid organization that works on collaboration software for journalists (https://meedan.com/Check) and translators (https://meedan.com/Bridge). Following our work one year ago on Electionland using Check to crowdsource the curation and fact-checking of election day voting incidents and issues during US Presidential election, I authored a blog post calling for a common set of indicators that might enable a distributed community...

<ed-meedan> of journalists to collaborate remotely on the task of making sense of links, claims, and sources.

axm-meedan: Core goals for the fund: identify indicators and use cases / establish cooperation: working process and data model / generate and evaluate test data.
... want to test about 100 articles
... prioritizing indicators
... others in the group: Robyn Caplan, Renée DiResta, Vinny Green, Connie Moon, Evan Sandhaus, ...
... related projects: non-profit: Trust Project, Schema.org, News Quality Scoring Project, NewsTracker, Vubble. Commercial: Factmata, veracity.ai, NewsGuard
... Credibility Coalition is identifying stories and marking them up with automated, non-expert human, and expert human annotations.
... aggregating that test data and providing it to platforms.
... we've been looking at how ClaimReview has been expressed.
... Examples like how PolitiFact has been embedding these signals.
... Facebook has been showing disputed articles.
... Information Disorder from First Draft: three key terms: misinformation is incorrect content shared without intent to harm; disinformation is incorrect content shared with intent to deceive; malinformation is genuine content (e.g. leaks, online harassment) shared with intent to harm.
... Seven different types of mis and disinfo
... Satire, Misleading, Imposter, Fabricated, False connection, False context, Manipulated content

<danbri> (sorry I missed that last satire detail - the new trend was what exactly?)

axm-meedan: Matrix: helpful to think about intentions, such as passion, provocation, political influence, propaganda.
... Satire trend: Sites that might previously have been labeled as disinfo, relabeling themselves as Satire
... intention to game the system

Ed: use cases
... these are just a few use cases that we jammed on.
... in web search, as I am searching the web, want to know when a link result has been disputed.
... want to see both the target links and the fact-checking links that review them
... on newsfeed, as I'm viewing feed, want to see when item has been disputed, also view substance of dispute. or see range of opinion when present

<danbri> w.r.t. linking to disputes/responses, it might be worth considering the dynamics around 'response videos' in YouTube, https://youtube-creators.googleblog.com/2013/08/so-long-video-responsesnext-up-better.html

Ed: in browser, we can imagine how this info might be shown.
... want to see some indication
... Lastly, we're trying to evolve the citation. Historically it has signaled that the author has done the work to source a conclusion. In the hyperlinked era, we should be doing a better job of showing evidence that a first party or third party has done the work to verify.

axm-meedan: DRAFT indicators
... thinking about different structures, article structure, byline, publication, outbound references, inbound refs, metadata, revenue model.
... for Content: logic and reasoning, rhetoric, number of claims, journalistic rigor.
... for people: Author's reputation, Reader's behavior, how are they sharing.
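
The group had not adopted any data model at this point; purely as a hypothetical illustration of how a few of these draft indicators might be recorded for a test article (every field name and value below is invented for this sketch):

```json
{
  "article": "https://example.com/original-article",
  "indicators": {
    "structure": {
      "byline_present": true,
      "outbound_references": 4,
      "revenue_model": "advertising"
    },
    "content": {
      "number_of_claims": 7,
      "emotionally_charged_rhetoric": "high"
    },
    "people": {
      "author_reputation": "unknown"
    }
  },
  "annotated_by": "non-expert human"
}
```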

Platform Perspective

Tessa: I wanted to say thank you. Excited by work this group is doing.
... wanted to share context, thoughts on collaboration.

<NickAdams> re: article content (actual sentences and phrases): The PublicEditor team also imagines specific labels appearing in article content on the web.

Tessa: in terms of current work, using machine learning to identify articles that need to be fact checked.

<ed-meedan> tessa: "going to be doing more of the partnership work on misinformation and excited to work with this group."

Tessa: right now only partnering w/ orgs that are part of the International Fact-Checking Network
... Surfacing related articles
... using signals that we get from fact checking to ban advertisers
... for 2018, 5 themes:
... 1) internationalization: strong in US, not so much elsewhere.
... 2) photos: can be a vector

<ed-meedan> tessa: "FB working only with the IFCN members" - comment: btw, Alexios who heads that project is involved and supportive of CredCo

Tessa: 3) claims vs content: instead of just content, also looking at instances of claims
... 4) intent to cause harm
... 5) transparency, have aspirations to be more transparent.
... in terms of this group, want to do more partnerships.
... also think there's an opportunity to collaborate on things that are slowing us down.

<ed-meedan> tessa: 4. "how to define real world harm [that an article or claim is doing]- we'd like help with that."

Tessa: internationalization.
... apologies that my time is short today.

NickAdams: Thank you, Tessa. Want to flag claims vs content.
... There are other ways to suss out content errors that are signals of misinfo, other than claims.

Ed: To follow up, An's research is visual memes, which are critically important.

tessa: 4 types of misinfo videos: memes, ...?
... my scope is specifically newsfeed.

<axm-meedan> Photoshops out of context, screenshots

<ed-meedan> photoshops, memes, historic out of context, and screenshots (this last is an increasing trend).

Tessa: Agree with the comment that we don't want to be arbiters of truth, or of the indicators.

<danbri> (where in the room is the microphone? Sandro's machines)

dartar: +1 that we want to create open corpora.

<ed-meedan> "as little as the platforms want to be arbiters of truth, even less do they want to be the arbiters of the indicators of truth"

JeffChang: To offer our perspective. Agree w/ ed, don't want to be arbiters of truth. We want to empower users with context.
... context about publishers
... and about authors, articles
... 2 aspects: Algorithmically and UI wise.

<ed-meedan> Jeff: "we want to empower users with context"

JeffChang: In terms of UI, if we can prove the value of context to users, we can provide it in many ways.
... in terms of 1st party vs 3rd party, both can be useful.
... Trust Project is an example of 1st party. There is value in combining both.
... We talk to publishers a lot, sometimes we ask a lot of them in terms of what they should be doing.
... We want to point them to a single set of recommendations. Glad to see this work happening.

danbri: We keep talking about factchecking. One question at schema.org is whether we should start up a fact checking community group, interest from archive.org, etc.

<ed-meedan> +1

danbri: all these things could be separate from this group, or bring it to this group.

reto: I wanted to add to what Jeff said about not wanting to be the arbiter of truth. AI can help by suggesting that content probably has material that needs to be checked, but ultimately we need humans.

JeffChang: Want to be transparent about which indicators we're using, open for anyone to use. For the platforms they can have a unified stance: here's the schema of best practices we should be using, here are the things we expect high quality journalism to be using and expressing.
... So then hopefully you get less of a concern.

General Discussion

dsinger: thinking about the problem in light of work on content protection and security. In those two areas it's important to role-play black hats and white hats; you need devious minds.

<sandro> dsinger: we're going to have to have devious minds :-)

dsinger: At least in security, the victim is not an accomplice.
... "I know you posted that, but it's not true."

<SJ> Ooh - did someone say "devious mind"? I think I may have one of those...

<SJ> And in security, the victim's *computer* is often an accomplice...

NickAdams: That's just such a valuable insight. "Did you want the article to be true?" Leverage that need for confirmation bias.

Aviv: Want to second what danbri said. And observation about security world.

<dsinger> as we progress with work in this area, having people who think like “attackers” will be important (just like security work). It’s complicated by the problem that (unlike security), our victims are sometimes ‘willing victims’ — they WANT to believe these untrue stories

<SJ> I've been reaching out to the infosec community on this

<SJ> How do I get on the speaker queue?

<SJ> q

danbri: On the point about security, I have a slight concern about indicators. If they're definitive, then the system may adapt to them.

SJ: Infosec groups have similar concerns; there are definite correlations between classic incursions and misinfo ... ?

<ed-meedan> please share link to slides Sara

Ed: I think the data-gathering markup effort is an opportunity to see things we're missing. It will be useful training data. The way the platforms ingest these signals should be a black box; it can't be transparent.

Jeff: Transparency on the what not the how.

danbri: As soon as we say "these signals" publishers will adapt.

<SJ> Ed: those slides from last week's talk: https://www.slideshare.net/bodacea/online-misinformation-theyre-coming-for-our-brainz-now

NickAdams: It may be a problem at article level, but when we get to the claim level that's going to be harder to fight.

<ed-meedan> :) thx SJ

<SJ> - doing a more targeted one as the AppNexus engineering talk at the end of this month

NickAdams: It's going to take a few years before we get this right. Needs to be updated continuously.

<SJ> I also have a set on the equivalences between old-school AI methods and belief hacking. Now I'm not travelling all the time linking people up, I'll have time to write some of this stuff up

reto: I'm skeptical of this black box.... (missed the rest).

<SJ> It's a conflict, the same way that infosec people are in constant conflict. Expect it

aviv: You were pointing out that this is an arms race; there are hundreds of billions of $ in winning this arms race, and we need to be prepared for that.
... continuous training, evolution required.

<NickAdams> reto: the scientific method is a robust way to evaluate content. If we can break that down and into a light version of scientific review of micro-content, we may actually improve truth value over time

aviv: need to have an advisory, with recommendations.

Sandro: Only about 10 minutes left.

<SJ> My twop'th: this isn't an arms race, it's a new layer in infosec. Same rules, same problems, different arena

Sandro: Can we get a sense of the room? Something I've been curious about with this group
... is which of these various indicators matter.

manu: Some of these will be easy to get, some harder, like logic and reasoning.

sandro: we could see if we can get coordinators for these.

an: also keen to see if there are major categories that we're missing.

aviv: Re the solar system slide: there's a difference among indicators in terms of who controls them.

jeffchang: diff topic, one of my main asks is to clarify the Trust Project, fact checking coalition, and cred workshop: which indicators are owned by whom?

sandro: w3c tendency is to be the biggest umbrella

danbri: schema.org would be very receptive to including what this group comes up with.
... wikidata is another strong place to publish.

Ed: agree. would love to work w/ you dan.
... reached out to Sally at the Trust Project. The logical dividing point is that that project is around 1st party indicators, promoting publishers to include them.
... we're really looking at 3rd party.

sandro: we're hoping to look at everything.

NickAdams: Want a common pool of indicators, want to avoid duplication.

ed: Other important differentiators: we are looking at how many parties make assessments against the same referent. Starts to look like a GitHub for claims.
... It obviously doesn't make sense for multiple parties to assert first party claims.
... Trust Project is important to develop a brand that can be shown on sites.
... for us we're not focused on brand, looking for utility of indicators.

danbri: We take the same perspective on branding; it needs to be used on all sites, not just trusted ones.
... this is a hub where many communities can collab

rweiss: My main interest is in the collaborative efforts. Our incentives are to work towards a better web.
... happy to continue to participate.

Sandro: Let's adjourn formally.

<SJ> thank you scribe!

<dartar> thanks dwhly

<Zakim> danbri, you wanted to ask if there are events coming up over the next year to consider colocating meetings at

<sandro> Places over the next year people might be at?

<sandro> - Web Conference, Lyon

<sandro> - CredCon as a concept

<tantek> I saw a lot of mentions of "scientific *" as approaches to evaluating assertions / facts - and theories of what could work - did anyone mention Wikipedia and the methods they use?

<tantek> just saw the mentions of Wikimedia - very cool - will read up on the links

<sandro> Scribe: Dan Whaley


Summary of Action Items

Summary of Resolutions

[End of minutes]



Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2017/02/06 11:04:15 $

Then converted by "pandoc -f html -t mediawiki 10-cred-minutes.html" for posting here.