W3C

- DRAFT -

Sonar, a linting tool for the Web

08 Nov 2017

Attendees

Present
Regrets
Chair
Scribe
clmartin

Contents


<scribe> scribenick: clmartin

alrra: should we start?

clmartin: yes

alrra: hey everyone we're gonna get started
... we're gonna talk about sonar today. sonar started at Microsoft, but we're gonna get into why it was started
... we realized that doing web development today is hard. it's complicated to keep pace with everything. there are a lot of frameworks.
... it's impossible to keep track of everything that's changing. It's even worse as developers tend to search for quick answers, which may not be the best ones.
... There are a lot of tools; some are good, some are bad; some go deep while others just scratch the surface.
... Developers don't know about all the tools.
... We thought about this internally and created sonar. The goal was to tackle all of these problems
... The intent is to help developers regardless of skill level, educate them, and lastly get rid of obsolete patterns, and by doing so move the web forward
... from the start we wanted to be community driven with up to date best practices and flexible enough to be integrated with various developer workflows
... going to show some demos on how it works but there is a package you can install and also a live site scanner
... starts with a simple JSON config file, which can be generated via sonar --init
... it asks a few questions when you kick it off, but before that let's review the main components
... 3 main components: connectors, which handle connecting to browser-like things (chrome, edge, jsdom, etc.)
... rules, which are the tests that run
... from the start we made the decision not to re-invent the wheel, so to that end we integrate tools that already do the job (axe for a11y, ssllabs, etc.)
... for the rules there are 3 main principles: they should go deep and not just blindly check for things
... when it makes sense they should be configurable, and lastly they should be context aware
... going to show some examples
... like i said, you can run it from the command line after you install it
... i'll just run it, and sonar will detect that i don't have any configuration and ask me to choose a connector (i.e. which browser to connect to) and a formatter (formatters control how the results are presented)
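A rough sketch of the kind of config file this generates (the key and rule names here are illustrative and may not match the actual tool exactly):

    {
        "connector": {
            "name": "chrome"
        },
        "formatter": "summary",
        "rules": {
            "highest-available-document-mode": "error",
            "disown-opener": "warning"
        }
    }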
alrra: it opens the browser and starts analyzing the site. an example that touches on all the previous things - shows simple doc
... scans site using "sonar localhost"
... by default it will say something like "x-ua-compatible header is not specified" or "the meta tag is discouraged"
... the defaults are the strictest settings
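For context, this is the pattern being flagged; the meta tag form is the discouraged one, with the HTTP response header (X-UA-Compatible: ie=edge) as the preferred alternative:

    <!-- discouraged: declaring document mode via a meta tag in the page -->
    <meta http-equiv="x-ua-compatible" content="ie=edge">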
alrra: rules go really deep; they know what to test by default, but they are configurable
... documentation will explain edge cases and what the rules mean/do
... rules are context aware (i.e. you tell sonar which browsers you care about, e.g. chrome 50)
... when you re-run with the new browser targets it will say the meta tag isn't needed
... rules are smart enough to adapt to the browsers/browser-like things you're testing
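A sketch of how those browser targets could be expressed in the config, assuming browserslist-style queries (the property name is illustrative):

    {
        "connector": {
            "name": "chrome"
        },
        "browserslist": ["chrome >= 50"]
    }

With a target like this, a rule that only matters for older or different browsers can skip itself.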
alrra: going to scan meetup.com/sfhtml5
... in this case you can see it found an image where, even though the declared media type is JPEG, the image itself is actually a PNG
... downloads the file and shows it's actually a PNG
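This kind of mismatch is easy to verify by hand; the file command inspects the magic bytes instead of trusting the extension (photo.jpg is a placeholder name, and the output shape is illustrative):

    $ file photo.jpg
    photo.jpg: PNG image data, 800 x 600, 8-bit/color RGBA, non-interlaced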
alrra: rules are flexible/configurable; you can make them warnings or errors and pick the rules that are right for you/your team
... doing a scan on sonarwhal.com
... scanning example.com, in this case there are two meta tags that don't need to be there
... everything is documented, every test has full documentation including edge cases with links to the spec and community best practices
... there is core and then plugins; what makes sense for the entire community goes into core, otherwise it goes into a plugin
... want users to be in control of what they're doing
... Why all of this? We want to help developers evolve their site as the web platform evolves
... the idea is to get rid of obsolete patterns from the web/cleanup the web
... for the future we're adding more tests and more configurations, and starting to collect data to generate stats
... plan to integrate them with editors (vscode to start)
... the project is open source, donated it to the js foundation
... if you want to engage you should go to sonarwhal.com (scanner/documentation and the rest are there)
... github.com/sonarwhal for issues
... and twitter is where we talk about stuff
... we built this and want input from standards bodies on what problems they have and what issues they see developers having, and whether we can integrate those so we can test for them
... questions/feedback?

rbyers: what's the goal/objective?

alrra: we've seen users don't test in all browsers and the goal is to help automate that
... we saw about 17,000 scans in the first few days
... make it as easy as possible to test new websites
... the initial idea is that we know best what's happening behind the scenes so we can recommend what to do/not to do. the other reason is to help the community so the website works on all browsers including edge

rbyers: to help developers ensure that they're building great websites that work in all browsers
... does it do any perf metrics?

alrra: it integrates with cloudinary

jrossi: the original motivation was the old modern.ie compat scanner
... wanted something more modern
... something you could run locally, that's more of a tool vs a prescription

alrra: the modern.ie scanner was static, but this runs in a real browser, so it can do more things, like wait for a site to load
... for a11y we inject script and wait for it to run

rbyers: can you give some example of issues you've seen

alrra: if you have a link with target _blank, everyone else uses no referrer except Edge
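The usual mitigation is to set both rel values so that every browser honors at least one of them (a generic sketch, not taken from the demo):

    <!-- without rel, the opened page gets window.opener and, in most browsers, the referrer -->
    <a href="https://example.com" target="_blank" rel="noopener noreferrer">example</a>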

rbyers: can you call out JavaScript APIs that are called that don't work everywhere?

alrra: that's interesting; the idea in the future is to grab the WebIDL from all browsers and build a database
... since even caniuse is out of date

rbyers: it's still a hard problem to tell what api's the page is actually calling
... can you tell if they're doing a feature detect? we tried designs but haven't found anything that's useful/practical to build

sterns: could the content performance policy rules be rolled up into sonar?

yoav: I don't necessarily think the cpp set of rules would apply here. the target was to rein in 3rd parties, and this is mostly targeted at first-party content

alrra: not necessarily

yoav: but third parties vary

sterns: they have AMP rules

yoav: you could have cpp apply a similar set of rules saying your 3rd parties are awful

alrra: or you can ignore it; we've seen cases where you can't control 3rd parties, so it's easier for some developers to just ignore them
... we can integrate other parsers over time (TypeScript)
... and add best practices for things like TypeScript into sonar

sterns: one thing that might be interesting: if the site is using an @supports rule for a new css feature, scan the css outside of the @supports rule to see if there is fallback styling for browsers that don't have the feature
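A sketch of that pattern: the fallback lives outside the @supports block, and the new feature is gated inside it:

    .layout {
        /* fallback for browsers without grid support */
        display: flex;
    }

    @supports (display: grid) {
        .layout {
            display: grid;
        }
    }

The rule idea would be to flag selectors that appear inside @supports but have no fallback counterpart outside it.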

alrra: how do you create a 404 page that's correct

yoav: going back to the cpp idea, theoretically you could try to figure out what features people shouldn't be using and notify them
... also notify them when new alternatives have shipped so they can adopt new policies

rbyers: that's the plan for lighthouse, warn people when a new feature has shipped
... interesting tradeoff between static/dynamic
... for dynamic you can guess but increasingly we have to rely on telemetry

alrra: one idea is to have steps, like using webdriver, for more complicated scenarios
... we started scanning a lot of websites and we've seen devs fail on a lot of things
... for instance compression: we haven't seen anyone using Brotli
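Checking that by hand is a one-liner: advertise br in Accept-Encoding and look at the Content-Encoding of the response (example.com is a placeholder):

    # if the server supports Brotli, the response headers should include "Content-Encoding: br"
    curl -sI -H 'Accept-Encoding: br' https://example.com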

rbyers: one of the differences between this and lighthouse is that lighthouse has a score, that can be bad but it also motivates developers

alrra: we talked with lighthouse before releasing this, but we didn't think scoring was right
... a service worker makes sense, but maybe a manifest doesn't
... when you look at the scanner it will tell you where you fail and your competition passes
... we're tracking data over time and considering what to do with it, but we can also run scans ourselves instead of just developers doing scans
... in any case, we were sure we didn't want a scoring system

rbyers: it's great to see tools with multiple strategies to see how they're adopted

alrra: with lighthouse we know we'll overlap but we should agree when we do

rbyers: the other tradeoff is that lighthouse is intentionally designed to rely on internal hooks in chromium

alrra: rules can have connectors that only work with one browser

rbyers: in particular, could some of the things we added for lighthouse be used here?

alrra: yeah
... some rules will only work in certain browsers, as the capabilities they rely on are only exposed in certain browsers
... some tools have limitations; for instance with compression, to really verify it we need to get the raw bytes
... some things will be connector specific
... other questions or remarks?

: we have an internal tool for Edge where we can see code coverage of our DOM APIs (part of the DRT suite); curious whether other browser vendors are interested in turning on that feature in their browsers to get a scan of the web

rbyers: Chrome is interested in that; we did a crawl of 10k sites to prioritize which APIs to implement first; it was a one-off, hacky kind of thing. we would love to instrument APIs but are worried about the overhead

alrra: lighthouse sometimes recommends things that are Chrome-specific, but some browsers do things differently, so the idea was to come up with an abstraction that works in all browsers

Summary of Action Items

Summary of Resolutions

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.152 (CVS log)
$Date: 2017/11/09 00:22:18 $
