RE: Navigation Error Logging spec update

Partial answer to my own question about beacon, but it still feels a bit vague.

https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/Beacon/Overview.html

The sendBeacon<https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/Beacon/Overview.html#sendBeacon> method MUST asynchronously transmit data provided by the data<https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/Beacon/Overview.html#data-parameter> parameter to the resolved URL<http://www.w3.org/TR/html5/urls.html#resolve-a-url> provided by the url<https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/Beacon/Overview.html#url-parameter> parameter. The User Agent MUST use the POST HTTP method<http://tools.ietf.org/html/rfc2616#section-5.1.1> to fetch<http://www.w3.org/TR/html5/infrastructure.html#fetch> the url<https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/Beacon/Overview.html#url-parameter> for transmitting the data. All relevant cookie headers must be included in the request. User agents MUST honor the HTTP headers (including, in particular, redirects and HTTP cookie headers), but MUST ignore any entity bodies returned in the response. User agents MAY close the connection prematurely once they start receiving an entity body. The User Agent SHOULD transmit data at the earliest available opportunity. The User Agent SHOULD make a best effort attempt to eventually transmit the data.


If the logging URL and the 'real' site are on different base domains, then the likelihood of the cookies on the logging domain being particularly useful seems low. If I read "but MUST ignore any entity bodies returned in the response" correctly, then I can't set a cookie in response to a sendBeacon POST.
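For reference, a minimal sketch of what emitting such a beacon looks like from a page. The payload shape and field names here are illustrative, not anything the spec defines:

```javascript
// Hypothetical error-report payload; the field names are illustrative only.
function buildErrorReport(url, reason) {
  return JSON.stringify({ url: url, reason: reason, ts: Date.now() });
}

// In a browser context the page would then hand this to the UA, e.g.:
//   navigator.sendBeacon('http://contoso.com/log',
//                        buildErrorReport(location.href, 'dns_failure'));
// Per the quoted text, the UA POSTs the data and ignores any entity body
// returned in the response.
```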


And just a style question: Should the 'must' in this phrase be all caps, like the others?

All relevant cookie headers must be included

Aaron



From: Aaron Heady (BING AVAILABILITY)
Sent: Thursday, December 12, 2013 8:00 AM
To: 'Chase Douglas'
Cc: Arvind Jain; public-web-perf
Subject: RE: Navigation Error Logging spec update

Hmm, I was thinking about correlating back to server logs. I can't quite envision what you are describing. Can you elaborate a bit?

Your question did lead me to another open item: Cookies.

Will using Beacon or enableNavigationErrorReporting for NavigationErrorLogging send cookies? And if so, are they based on the logging URL (reportUrl) or the actual URL of the request that generated the error?

If I get a navigation error on http://example.com and my reportUrl is set to http://example.com/log, do I get cookies for example.com?

If I get a navigation error on http://example.com and my reportUrl is set to http://contoso.com/log, do I get no cookies? Or the expected cookies for contoso.com, or for example.com?



https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationErrorLogging/Overview.html



From: Chase Douglas [mailto:chase@newrelic.com]
Sent: Wednesday, December 11, 2013 5:31 PM
To: Aaron Heady (BING AVAILABILITY)
Cc: Arvind Jain; public-web-perf
Subject: Re: Navigation Error Logging spec update

I want to correlate with requests while still in the browser context doing javascript stuff, not from the server side.

On Wed, Dec 11, 2013 at 2:16 PM, Aaron Heady (BING AVAILABILITY) <aheady@microsoft.com<mailto:aheady@microsoft.com>> wrote:
The timestamp on the navigation error logs should be sequential. If your server logs are sequential, it ought to be possible to correlate, depending on how much data mining you want to do.

We tend to use a unique identifier per URL to help reduce this problem. Not on every page, but on everything you might want to be able to clearly correlate logs with, or across systems.
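The tagging approach above can be sketched as a small helper that stamps a unique id onto a URL before navigation, so the same id shows up in both the server logs and any client-side entries. The parameter name 'reqid' is an illustrative choice, not a convention from any spec:

```javascript
// Tag a URL with a unique per-request identifier so logs recorded on the
// server and entries observed in the browser can be joined on that id later.
// The query parameter name 'reqid' is hypothetical.
function tagUrlWithRequestId(url, id) {
  const u = new URL(url);
  u.searchParams.set('reqid', id);
  return u.toString();
}
```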


https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationErrorLogging/Overview.html
startTime attribute

The startTime attribute MUST return a DOMTimeStamp<http://www.w3.org/TR/DOM-Level-3-Core/core.html#Core-DOMTimeStamp> with the time immediately after the user agent finishes prompting to unload<http://www.w3.org/TR/html5/browsers.html#prompt-to-unload-a-document> the previous document while navigating<http://www.w3.org/TR/html5/browsers.html#navigate> to the document that resulted in an error.
And

4.4 Monotonic Clock

The value of the timing attributes must monotonically increase to ensure timing attributes are not skewed by adjustments to the system clock while recording error data. The difference between any two chronologically recorded timing attributes must never be negative.
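Since the spec guarantees monotonicity, correlating by chronological order only works if the recorded timestamps really are non-decreasing. A sanity check over a list of recorded timing values can be sketched like this (a hypothetical helper, not part of the spec):

```javascript
// Check the monotonic-clock requirement quoted above: the difference between
// any two chronologically recorded timing attributes must never be negative.
function isMonotonic(timestamps) {
  return timestamps.every((t, i) => i === 0 || t >= timestamps[i - 1]);
}
```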


Aaron



From: Chase Douglas [mailto:chase@newrelic.com<mailto:chase@newrelic.com>]
Sent: Wednesday, December 11, 2013 10:47 AM
To: Arvind Jain
Cc: Aaron Heady (BING AVAILABILITY); public-web-perf

Subject: Re: Navigation Error Logging spec update

One thing I've noticed in the timeline specs, so not just this spec, is that it is not easy to match up a timeline event entry with a specific request. If I hit the same endpoint twice, but one time it errors for one reason and the other time it errors for a different reason, I don't really know which failed why unless I track the chronological order of requests. Has there been any discussion about this issue?

On Mon, Dec 9, 2013 at 6:03 PM, Arvind Jain <arvind@google.com<mailto:arvind@google.com>> wrote:
I think we should not retry the logging fetch. I hope that addresses the DDOS issue.

Arvind

On Mon, Dec 9, 2013 at 10:06 AM, Aaron Heady (BING AVAILABILITY) <aheady@microsoft.com<mailto:aheady@microsoft.com>> wrote:
At some point we discussed how the UA should behave in a private browsing session. It should likely not log anything. Does that need to be called out in section 5, Privacy and Security?

Looking at enableNavigationErrorReporting: this looks like it could spiral out of control if both the content origin and the logging origin are hosted on the same infrastructure. For example:

A request to http://example.com results in an HTTP 500 because of a bug on the origin when it is under too much load (DDoS, internal capacity issue, etc.). The UA formats a NavigationErrorEntry and prepares to send it to http://example.com/logging. That in turn results in a 500 error and increases the load on the origin.

Should there be some back-off logic? If a logging fetch fails, don't try again for n*2 seconds, doubling as it continues to fail. If a logging fetch fails, does it retry at all? That is getting into the Beacon logic, but if we are going to allow some global set of UAs to automatically send logging fetches during errors, then we need some mechanism to limit how much further impact they could cause, or it is a DDoS waiting to happen.
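The doubling back-off described above can be sketched as a pure delay schedule: after the k-th consecutive failure, wait base * 2^k seconds before retrying the logging fetch. The base of 2 seconds and the 300-second cap are illustrative choices, not from any spec:

```javascript
// Sketch of a doubling back-off for failed logging fetches: after the k-th
// consecutive failure, wait base * 2^k seconds, capped so the delay cannot
// grow without bound. 'base' and 'capSeconds' are hypothetical defaults.
function nextBackoffSeconds(failureCount, base = 2, capSeconds = 300) {
  return Math.min(base * Math.pow(2, failureCount), capSeconds);
}
```

A UA applying something like this would at least stop hammering an origin that is already returning errors, which addresses part of the DDoS concern.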

Aaron


From: Arvind Jain [mailto:arvind@google.com<mailto:arvind@google.com>]
Sent: Sunday, December 8, 2013 11:32 AM
To: public-web-perf
Subject: Re: Navigation Error Logging spec update

Checking in.

Do folks have any comments on the draft?

Arvind

On Fri, Nov 29, 2013 at 6:25 PM, Arvind Jain <arvind@google.com<mailto:arvind@google.com>> wrote:
I added two methods to the interface to allow reporting of errors in real time to a report url as per ACTION-117 - Add method to allow ability to send to a third party url.

https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/NavigationErrorLogging/Overview.html

Please review and let me know if you have any concerns.

Arvind

Received on Thursday, 12 December 2013 16:25:37 UTC