
Credible Web Community Group

The mission of the W3C Credible Web Community Group is to help shift the Web toward more trustworthy content without increasing censorship or social division. We want users to be able to tell when content is reliable, accurate, and shared in good faith, and to help them steer away from deceptive content. At the same time, we affirm the need for users to find the content they want and to interact freely in the communities they choose. To balance any conflict between these goals, we are committed to providing technologies which keep end-users in control of their Web experience.

The group's primary strategy involves data sharing on the Web, in the style of schema.org, using existing W3C data standards like JSON-LD. We believe significant progress toward our goals can be reached by properly specifying "credibility indicators", a vocabulary/schema for data about content and the surrounding ecosystem, which can help a person and/or machine decide whether a content item should be trusted.
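For a flavor of what this could look like in practice, here is a minimal sketch of page-embedded markup using only existing schema.org terms. To be clear, the group has not standardized any credibility vocabulary; all names and example.* URLs below are placeholders.

    <!-- Hypothetical illustration: existing schema.org terms, placeholder values -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsArticle",
      "headline": "Example headline",
      "datePublished": "2019-10-29",
      "publishingPrinciples": "https://news.example/standards",
      "publisher": {
        "@type": "NewsMediaOrganization",
        "name": "Example News",
        "correctionsPolicy": "https://news.example/corrections"
      }
    }
    </script>

A tool that understands terms like these could surface the disclosures to readers, or feed them into a machine assessment of whether to trust the article.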

Please see the group wiki for more details.

Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.

Final reports
  • Technological Approaches to Improving Credibility Assessment on the Web

Drafts
  • Reviewed Credibility Signals


Reviewed Credibility Signals

Last month, we decided to try writing a spec for some credibility signals. What observations can be made (perhaps by machine) that are likely to indicate whether you can trust something online? In the past, we've listed well over a hundred, but that turned out not to be very useful. With so many signals, we weren't able to say much about each one, and it wasn't clear to readers where to focus their attention. (See https://credweb.org/signals for some drafts.)

This time, we decided to focus on a small handful, so we could say something substantive about each one and include only the ones we actually expect to be useful. Personally, I was able to think of about 15 where I have some real confidence.

Today, based on discussion at five of our weekly meetings, we’ve released a draft with five signals, three of which are variations on a theme. This is intended as a starting point, and we’re very interested in feedback. What signals do you think are most promising? Are you involved in building products or services that could produce or use this kind of information? Is the format and style of this document useful? The draft includes some ways to contact us, but perhaps the best way is to join the group and introduce yourself.

Please take a look: Reviewed Credibility Signals.

Update: WikiCredCon, ClaimReview, IPTC

Last week we talked about ClaimReview, with a presentation from Chris Guess (group member and lead technologist at the Duke Reporters' Lab). See the notes; a video is available on request. ClaimReview continues to see wide adoption as a way for fact checkers to make their results available to platforms and other applications, and various improvements are in the works. There's now a high-level website about it at claimreviewproject.com.

This past long weekend, a few of us attended WikiCredCon, a credibility-focused instance of WikiConference North America.

I was fascinated to see more of what happens behind the scenes of the Wikipedia world, and I was surprised by how much difficult work is necessary to keep Wikipedia running. Perhaps most daunting from a credibility perspective is how hard it is to combat sock puppets and bots. There were many parallel tracks, so each of us could see only a small slice of the conference. Most sessions had extensive note-taking and even video recording, thanks to sponsors. Not all the video is online yet, and for now session notes are at the "etherpad" links from the session detail pages; I imagine those might move soon.

This week, group member Brendan Quinn (Managing Director of IPTC, the global standards body of the news media) will present and lead a discussion about their current work on credibility data interoperability. See you there!

Update: AMITT, Scores, ClaimReview

Last week, we talked about AMITT, an approach that extends infrastructure already deployed for coordinating resistance to information-security attacks so that it can also help counter misinformation attacks. Most of us aren't experts in that technology, so we didn't get into the details during the hour, but it was a good start. Folks are encouraged to follow up with the presenter (Sara-Jayne Terp) or the misinfosec group. There are lots of notes, details, links, and slides in the meeting record.

Today (the 50th anniversary of the internet), we talked about "credibility scores": numbers people sometimes assign to content or content providers to indicate their "credibility" (or something vaguely like credibility, depending on the system). I can't do justice to all the points of view presented (please read the meeting record for that), but I'll highlight a few that struck me:

  • There’s a difference between scoring the process and scoring the content produced. For a restaurant, there’s a kind of process review done by the health department, which gives one score, which is different from the reviews of how the food actually tastes.
  • There's another distinction between scoring done by professionals and experts (e.g., Michelin restaurant reviews) and scoring done by crowdsourcing (e.g., Yelp). Each is vulnerable to different kinds of manipulation.
  • Some folks are quite skeptical that any kind of scoring could ever be net helpful. Others see great potential in systematizing scoring, aiming for something like the "Mean Time Between Failures" numbers used to engineer some systems (e.g., airplanes) to be extremely reliable.
  • There is clearly a danger here, as the scoring systems would be a major target for attack, and their vulnerabilities could make the overall media system even more vulnerable to misinformation attacks. Even if they are highly secure, they might not be trusted anyway.

Aviv agreed to start drafting a document to focus discussion going forward.

Next week, we'll have a presentation and discussion on ClaimReview. This is the technology that enables fact checkers to get search engines (at least Google and Bing) to recognize fact checks and handle them specially. It is a clear success story of how standard data interchange can help against misinformation, but much remains to be done. I expect we'll hear more about plans and ideas for the future.
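For those who haven't seen it, a ClaimReview record is schema.org markup embedded in the fact checker's page. Here is a minimal illustrative example; all names, URLs, and ratings are placeholders.

    <!-- Illustrative only: placeholder names, URLs, and ratings -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "ClaimReview",
      "url": "https://factcheck.example/reviews/12345",
      "datePublished": "2019-10-22",
      "author": { "@type": "Organization", "name": "Example Fact Checks" },
      "claimReviewed": "The moon is made of green cheese.",
      "itemReviewed": {
        "@type": "Claim",
        "appearance": "https://news.example/cheese-story"
      },
      "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False"
      }
    }
    </script>

A search engine that recognizes the markup can then show the fact check alongside the claim wherever it appears.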

As usual, people are welcome to join the group, or just join for a meeting of special interest to them.

     — Sandro

Weekly meetings: JTI, AMITT, and beyond

Last week's meeting was about the Journalism Trust Initiative (JTI), the European effort to standardize quality processes for journalism. Thanks so much to JTI lead Olaf Steenfadt for joining us, along with Scott Yates. We have recorded video of the presentation and some of the discussion; as usual, this is available on request to group members but remains group confidential.

Some things discussed that particularly resonated for me:

  • I appreciated the point that, as with most industry self-regulation efforts, this is about consumer safety. Many industries face the problem of some members not living up to the standards most of the industry considers appropriate.
  • The word “trust” in JTI is both problematic and redundant, although it’s too difficult to change now. This is more about defining what is legitimate, real, high-quality journalism. All journalism is supposed to be trustworthy.
  • I still get confused on how a whitelist (like this) is anything other than the complement of a blacklist (which this is explicitly not). I’m still looking for a distinction that feels right to me.
  • There's no answer yet on how this data might be interchanged, or how it all might be verified and used in practice.
  • Even though we’re past the comment period, and JTI is about to be finalized in the standards process, work will continue, and there should be ongoing revisions in due course.

Lots more details in the Meeting Records, scribed in duplicate.

Tomorrow's meeting is about a plan to categorize misinformation attacks and allow data about them to be shared, potentially in real time. It's an extension of MITRE ATT&CK™ ("A knowledge base for describing behavior of cyber adversaries across their intrusion lifecycle") and is intended to be compatible with the Cyber Threat Intelligence data-exchange technologies STIX and TAXII.

[Image: AMITT Framework Navigator]

For more, see the AMITT Framework and/or come to tomorrow's meeting.
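To make the STIX connection concrete: STIX expresses adversary behavior as plain JSON objects, so a misinformation technique could in principle travel as a STIX 2.1 attack-pattern. The sketch below is my own invention, not taken from the AMITT catalog; the id, name, and external_id are placeholders.

    {
      "type": "attack-pattern",
      "spec_version": "2.1",
      "id": "attack-pattern--11111111-2222-4333-8444-555555555555",
      "created": "2019-10-21T00:00:00.000Z",
      "modified": "2019-10-21T00:00:00.000Z",
      "name": "Amplify a narrative with fabricated personas",
      "description": "Placeholder description of a misinformation technique.",
      "external_references": [
        { "source_name": "AMITT", "external_id": "T0000" }
      ]
    }

Existing TAXII servers could then distribute objects like this the same way they distribute conventional threat intelligence.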

Beyond tomorrow, for now I’ve scheduled four more meetings, continuing our Tuesday pattern (29 Oct, 5 Nov, 12 Nov, 19 Nov). I have IPTC penciled in for the 12th, and we have several other pending topics:

  • What data protocols and formats should NewsQA, JTI, etc be using for exchanging data?
  • How can we help manage these overlapping signals schemas?
  • Is there a good objective framework for measuring credibility? (We asked the question in last year's report. I recently had an idea I really like on this.)
  • Should we update and re-issue the report? Are there people who want to help?
  • What about credibility tools inside web browsers?
  • ClaimReview, data about fact checking
  • NewsQA part 2, looking at specific signals

If you'd like to present on or help organize any of these topics, please let me know. We could also run them as open discussions, without a presenter.

More about NewsQA and JTI

In retrospect, I should have described NewsQA before yesterday’s meeting, to frame the discussion and help people decide whether to attend. With that in mind, I want to say a few things about next week’s topic, JTI. Both projects use online data sharing and have real potential to combat disinformation, in very different ways.

First, here’s what I should have said about NewsQA:

  • The NewsQA project, run by Jeff Jarvis’s group at CUNY, is building a service that will aggregate signals of credibility, to be available at no cost in 1Q20. Access will likely have restrictions related to intent.
  • They’re currently working with about 100 credibility signals about domains, provided by various commercial and public data providers. They have data for about 12,000 domains serving news content, mostly in the US.
  • They would like wider review and potential standardization around those 100 signals. They also have experimental/research questions around the project for which they’d like community discussion and input.
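Purely as a hypothetical illustration (the signal names, providers, and record shape below are all invented, not NewsQA's actual format), an aggregated per-domain record might look something like this:

    {
      "domain": "news.example",
      "signals": [
        { "name": "has-corrections-policy", "value": true,
          "provider": "provider-a.example", "observed": "2019-10-01" },
        { "name": "domain-age-years", "value": 12,
          "provider": "provider-b.example", "observed": "2019-10-01" }
      ]
    }

Even a toy record like this raises the questions I come back to below: who defines the signal names, and how should consumers weigh values from different providers?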

The presentation included more about this, as well as plans for the future and open questions. You can read the notes of yesterday's meeting or see the slides for more detail, including the key architecture slide.

Looking forward, I’m thinking:

  • We should look over the list of 100 signals, try to align them with other signals folks are using, and make sure they're documented in a way others can use.
  • I'd like to understand the ecosystem around data providers and consumers. What's motivating each party now, and what do we expect in the future?
  • How secure are these signals against manipulation and misuse?
  • And then we have all the questions that came up during that meeting, still needing a lot more work before we have answers. (Like, “What is news?”)

Meanwhile, next week we'll be hearing from the Journalism Trust Initiative (JTI). My understanding of JTI:

  • The project is led by Reporters Without Borders (RSF), which has a strong reputation in fighting news censorship and bias.
  • Using an open standards process (under CEN), they’ve gathered a community and together written down a general consensus of how news organizations ought to behave. The idea is that if you follow these practices, you’re far more likely to be trustworthy. If you can show the world you’re following them, especially via some kind of certification, you probably ought to be trusted more by individuals and by systems.
  • There's a survey (start here) with about 200 questions covering all these rules and practices. Some of the questions ask whether you do a thing journalists are supposed to do, and others ask you to disclose information that journalists ought to make public.
  • There are still wide open questions about how the data from those 200 questions might be published, distributed, and certified.
  • The deadline for comments is 18 October, so now is the time! Issues around data transfer can (and will have to) be settled later.
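As a purely speculative sketch (JTI has not adopted any such format, per the previous bullet), schema.org already has properties for several of these disclosures, so an outlet's machine-readable self-description might eventually look something like this:

    <!-- Speculative sketch only; JTI has not defined a data format -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "NewsMediaOrganization",
      "name": "Example News",
      "url": "https://news.example/",
      "ownershipFundingInfo": "https://news.example/about/ownership",
      "correctionsPolicy": "https://news.example/corrections",
      "ethicsPolicy": "https://news.example/ethics"
    }
    </script>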

That’s the topic for next week’s meeting. We’re expecting several key people from JTI to attend. I hope to see many of you there.

Summer Status Update

Hey everyone,

First, I want to apologize for the long silence. In April, I circulated a survey for working in six subgroups, and response was good. As things turned out, however, I didn’t do the necessary follow-up. Worse, I didn’t post anything about any change of plans. I’m sorry if that caused you any difficulty.

Going forward, I’m hoping a few people will volunteer to help me organize these activities. If you’d be up for this, please reply directly to me and let’s figure something out. Also, I’ve left the survey open so you can edit your previous response or fill it out now if you never did: it’s here.

On September 17th, at the next W3C TPAC meeting, we’re scheduled to hold a face-to-face meeting of this group. It’s in Japan, though, and only four group members have registered, along with 13 observers. I’ve decided not to make the trip, so we’ll need a volunteer who is there to set up and run the meeting with remote participation, or I think we should cancel. The meeting is at 10:30am Japan time, which is probably okay for folks in Asia and the Americas, but quite difficult for Europe or Africa. Bottom line: if you’re going to be at TPAC and might be able to run things, please talk to me soon.

From responses to the scheduling survey, it looks like Tuesdays and Thursdays at 1pm US/Eastern would be good for nearly everybody. If that's changed, or you didn't fill out the survey before and want your availability taken into account, please do so now. (If Doodle won't let you edit your old entry because you didn't log in, just add another entry with your name.)

I’m thinking we’ll start having some subgroup meetings in 2-3 weeks, depending in part on responses to this post.

Thanks, and talk to you all soon,

— Sandro

Plans and Priorities for 2019

A quick update on where things are with Credibility at W3C. If you’re interested, please join the group (if you haven’t) then answer this survey on how you’d like to be involved. Newcomers are welcome, not just folks who were involved last year, and please help spread the word.

Things in the works for 2019:

  1. Evolve Credibility Signals into more of an open directory/database of credibility signal definitions, with filtering and data about adoption and research for each signal, when available.
  2. Document best practices for exchanging credibility data. Primarily technical (JSON-LD, CSV), but also legal and commercial aspects.
  3. Revise our draft report on credibility tech, maybe splitting it up into chunks people are more likely to read, and with different section editors.
  4. Have some general meetings, with presentations, to discuss various credibility-tech topics. This might include some of the signal provider companies or credibility tool projects.
  5. Document how credibility issues fit into larger Online Safety issues. I’d like a more specific and concrete handle on “First, Do No Harm”.
  6. Prototype a browser API which would support a market of credibility assessment modules, working together to protect the user in the browser. (See mockup.)

If you’re up for working on any of these topics, please fill in the survey. We’ll use that to help with meeting scheduling and general planning.

And of course, if you think the group should work on something not listed above, please reply on or off-list.

Thanks!

First Draft of Technological Approaches to Improving Credibility Assessment on the Web published by Credible Web Community Group

On 2018-10-11 the Credible Web Community Group published the first draft of the following specification: Technological Approaches to Improving Credibility Assessment on the Web.

Participants contribute material to this specification under the W3C Community Contributor License Agreement (CLA).

If you have any questions, please contact the group on their public list: public-credibility@w3.org. Learn more about the Credible Web Community Group.

Draft Report on Credibility Nearly Ready

News for people not closely following the group: we've put together a report summarizing what the group has discussed, framed as a guide for people considering technological interventions around credibility, with recommendations for areas to standardize. The plan is to publish this next week as a "Draft Community Group Report", get public comment, then do a "Final Community Group Report" by the end of the year. At the same time, we're looking at transitioning to doing more standards work.

Members of the group are strongly encouraged to review the report this week — Monday at the latest — and let us know if they support publication (as a draft) or see a need for changes first.

Also, small changes keep happening as people make comments, so keep an eye on the changelog at the end.

Report at: https://credweb.org/report/

Vote at: https://www.w3.org/2002/09/wbs/103073/report/