Bad RDF Crawlers

Poorly behaving crawlers that target RDF-publishing websites are unfortunately a recurring problem.[1] This page is intended to hold a list of such crawlers, so that publishers can defend themselves by blocking them.

Best practices for web crawlers

Dereferencing is a privilege, not a right. Crawlers that don't use server resources considerately abuse that privilege, and that abuse has bad consequences for the Web as a whole.

A well-behaved crawler …

  • … uses reasonable limits for default crawling speed and re-crawling delay,
  • … obeys robots.txt,
  • … obeys crawling speed limitations in robots.txt (Crawl-Delay),
  • … identifies itself properly with the User-Agent HTTP request header, including contact information therein,
  • … avoids excessive re-crawling,
  • … respects HTTP caching headers such as Last-Modified and ETag when re-crawling, e.g. by issuing conditional requests with If-Modified-Since or If-None-Match (see the sketch below).

See Write Web Crawler for further guidelines.
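
The sketch below illustrates these practices in Python, using only the standard library. It is an illustrative sketch rather than recommended production code; the user-agent string, site URL and one-second fallback delay are placeholder assumptions.

  import time
  import urllib.error
  import urllib.request
  import urllib.robotparser

  # Placeholder identity: include a URL and contact address in the User-Agent.
  USER_AGENT = "ExampleBot/1.0 (+http://example.org/bot; mailto:bot@example.org)"
  SITE = "http://example.org"

  # Obey robots.txt, including its Crawl-delay directive if present.
  rp = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
  rp.read()
  delay = rp.crawl_delay(USER_AGENT) or 1.0   # conservative default delay

  etags = {}  # remembered validators, so re-crawls can be conditional

  def polite_fetch(url):
      """Fetch url politely; return its body, or None if skipped/unchanged."""
      if not rp.can_fetch(USER_AGENT, url):
          return None                            # disallowed by robots.txt
      headers = {"User-Agent": USER_AGENT}
      if url in etags:
          headers["If-None-Match"] = etags[url]  # only re-download if changed
      req = urllib.request.Request(url, headers=headers)
      body = None
      try:
          with urllib.request.urlopen(req) as resp:
              if "ETag" in resp.headers:
                  etags[url] = resp.headers["ETag"]
              body = resp.read()
      except urllib.error.HTTPError as e:
          if e.code != 304:                      # 304 Not Modified: cache is fine
              raise
      finally:
          time.sleep(delay)                      # throttle between requests
      return body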

Defensive measures

If you run large web servers, you may want to consider defensive measures against abuse and attacks.

On Apache web servers, mod_rewrite can be used to block bad crawlers based on their IP address or User-Agent string.
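
A minimal sketch of such a rule set (for Apache's mod_rewrite, e.g. in an .htaccess file) might look like the following; the user-agent pattern and the IP prefix are placeholders to be replaced with values from your own logs, not a recommended blocklist:

  # Refuse requests whose User-Agent or source IP matches a known-bad crawler.
  RewriteEngine On
  RewriteCond %{HTTP_USER_AGENT} BadBot [NC,OR]
  RewriteCond %{REMOTE_ADDR} ^192\.0\.2\.
  RewriteRule .* - [F,L]

The [F] flag makes Apache answer such requests with 403 Forbidden.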

There are several sites dedicated to collecting and sharing information about bad web crawlers in general (not RDF-specific).

Stronger defenses using WebID

The above measures have been around since the beginning of Web crawling, and they suffer from a number of problems:

  • IP addresses are very bad identifiers
    • they can be faked
    • a large number of users can sit behind a single IP address; in the early Web (1995-1998), for example, most addresses came through AOL proxies
  • headers can be faked or simply omitted
  • robots.txt works by convention only; it has no enforcement mechanism
    • robot writers need to know about it in the first place, which is not a given
    • not all publishers can edit their site's robots.txt, so it is not a very flexible mechanism for setting access control

These measures were adequate in a world where few people wrote robots and the computing power to run them was expensive. They are no longer appropriate in a world where every laptop has more RAM and CPU than the largest machines search engines ran on in 1996. What is required is strong, automatic access control that works in a distributed manner, and that in turn requires global authentication: otherwise a robot would have to find the login page of every web site and create itself a username and password there, which is clearly an impossible task.

Global authentication tied into Linked Data is enabled by FOAF+SSL, also known as WebID. Both HTTP and HTTPS resources can be protected this way:

  • HTTPS resources request client-side certificates according to the usual WebID protocol (a client-side sketch follows this list)
  • HTTP resources can use cookies and redirect clients to an HTTPS endpoint for authentication if the requestor has no cookie. If the client does not have a WebID-enabled certificate, OpenID or other methods of authentication can be used. Once authenticated, clients (and hence robots) can then be redirected to the HTTP resources and proceed as usual.
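
As a rough illustration of the HTTPS case from the client's side, a crawler written in Python might present its client-side certificate as sketched below. The certificate and key file names and the target URL are placeholders, and the actual WebID verification (dereferencing the URI in the certificate and comparing public keys) happens on the server, which is not shown here.

  import ssl
  import urllib.request

  # The certificate's subjectAltName carries the crawler's WebID URI; the
  # server dereferences that URI and checks the public key published there.
  ctx = ssl.create_default_context()
  ctx.load_cert_chain(certfile="crawler-webid.pem", keyfile="crawler-webid.key")

  req = urllib.request.Request(
      "https://example.org/protected/data.rdf",
      headers={"User-Agent": "ExampleBot/1.0 (+http://example.org/bot)"},
  )
  with urllib.request.urlopen(req, context=ctx) as resp:
      data = resp.read()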

The advantages of WebID are many:

  • Robots and crawlers can identify themselves as :Crawler in their WebID Profile document (ontology to be developed), and so get access to special resources more useful to robots, such as full dumps or RSS feeds (a sketch of such a profile follows this list).
  • Authentication is enforced automatically, so the writers of badly behaved robots will find out about it very quickly: they won't get access until they comply.
  • WebIDs are distributed and can preserve anonymity while enabling authentication. WebIDs can be self-generated and throw-away. There is no center of control.
  • Good WebID users can get better service over time, leading even anonymously-identified robots to pursue a strategy of long-term good behavior.
  • Getting a WebID is very easy, and most software libraries support client-side certificates, so it should take robot authors only a few hours' work to enable their crawlers for it.
  • Building WebID-enabled application servers is not that much work either.
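
As a sketch of the first point, a crawler's WebID Profile could be generated with rdflib roughly as follows. The Crawler class and all URIs here are invented, since (as noted above) the ontology is still to be developed.

  from rdflib import Graph, Literal, Namespace, RDF, URIRef
  from rdflib.namespace import FOAF

  EX = Namespace("http://example.org/crawler-ont#")    # placeholder ontology
  webid = URIRef("http://example.org/bot/profile#me")  # the crawler's WebID

  g = Graph()
  g.add((webid, RDF.type, FOAF.Agent))
  g.add((webid, RDF.type, EX.Crawler))                 # "I am a crawler"
  g.add((webid, FOAF.name, Literal("ExampleBot")))
  # The public key from the crawler's client certificate would also be
  # published here, so servers can verify the TLS handshake against it.

  print(g.serialize(format="turtle"))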

The WebID Incubator Group is very keen to work with robot writers and linked data publishers to help them WebID-enable their apps.

Incidents

To report a poorly behaving crawler, please provide at least the following information:

  • Date of incident:
  • What the crawler did wrong:
  • User agent string:
  • IP address range:
  • Access logs (if possible):

References

  1. Martin Hepp: "Think before you write Semantic Web crawlers", post to the public-lod mailing list, 21 June 2011