World Wide Web Consortium

Before the Commission on Child Online Protection

Testimony of Daniel J. Weitzner
Technology and Society Domain Leader

World Wide Web Consortium

San Jose, California USA
3 August 2000

Introduction

Mr. Chairman and distinguished Commissioners, my name is Daniel J. Weitzner, and I thank the Commission for the opportunity to testify before you on the pressing questions of online child protection that you face.

I am head of the World Wide Web Consortium's (W3C) Technology and Society Domain, responsible for development of technology standards that enable the Web to address social, legal, and public policy concerns. W3C, an international technology consortium made up of over 440 members throughout the world, including Africa, Asia-Pacific, Europe, the Middle East, North America, and South America, is responsible for setting the core technical standards for the World Wide Web. The best known results of our work are HTML and XML (the languages in which documents on the Web are written), as well as Style Sheets and multimedia and graphics formats such as the Synchronized Multimedia Integration Language (SMIL) and Scalable Vector Graphics (SVG). Our membership includes leading organizations from industry, academe, the user community, and public policy experts. W3C was founded in 1994 by Tim Berners-Lee, inventor of the Web, who serves as the Director of the Consortium.

In addition to my work at W3C, I hold a research appointment at MIT's Laboratory for Computer Science, teach Internet public policy at MIT, and serve on the Protocol Council of the Internet Corporation for Assigned Names and Numbers (ICANN) Protocol Supporting Organization.

Today I will touch on four major points:

  1. Current experience with various technology tools
  2. Lessons learned from the first round of technology
  3. Reflections on the future directions of Web technology that might address child protection issues
  4. Suggestions for making practical progress in the field

I am particularly pleased to be here as part of the ongoing dialogue between the Web technology community and public policy makers around the world. Despite the Web's astonishing growth, it remains very much a work in progress. We can and should shape the Web of tomorrow to meet not just the technological but also the social needs of the diverse communities around the world who depend on this new medium.

I. Experience with technology tools

Children's ability to access inappropriate material is a direct result of the revolutionary positive power of this new medium. The very same attributes that make the Web an extraordinarily open medium also make content considered unwanted or harmful much more easily accessible. For some time, technologists and policy makers around the world have recognized that the technology of the Web itself can be designed to address the problem of minors gaining access to such material. In this portion of my remarks I will offer my assessment of several categories of these technologies.

For the purpose of this discussion, I will consider three categories of content control technologies:

  1. user-controlled tools: third-party filters, etc.
  2. publisher-controlled tools: self-labeling, rating, and access control based on identity or age
  3. preference-based user-service protocols: preference managers, such as P3P

My testimony is not meant as a thorough inventory of all relevant technologies in these categories, or even a complete taxonomy. Others have documented the field well, and resources exist for finding up-to-date listings of tools. My remarks will be limited to the issue of material that might fall under the definition "harmful to minors" and will not address illegal material such as obscenity or child pornography. I will therefore direct my comments to technology tools which have a role in making certain content available to adults while restricting minors' access to the same content.

A. User Controlled Tools

By far the most widely used content control tools are those known as third-party filters, which enable users to block types of content they choose based on assessments made by third parties, typically the companies that provide the filtering software itself.

The success and advantages of such services are clear; I return to them in more detail in Part II below.

I note, with some personal disappointment, that while there are numerous third-party filtering packages, none of the most popular ones rely on the open standard for labeling and rating, W3C's Platform for Internet Content Selection (PICS).
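For those unfamiliar with PICS, a label is simply a machine-readable rating attached to a page that any filter can read. The fragment below is a rough sketch of how a site might embed a PICS 1.1 label in a page's HTML, using the RSACi vocabulary discussed below (the page address is hypothetical; n, s, v, and l are the RSACi nudity, sex, violence, and language categories, each rated from 0 to 4):

  <META http-equiv="PICS-Label" content='
    (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
     l gen true for "http://www.example.com/"
     r (n 0 s 0 v 0 l 0))'>

A user-side filter that trusts this rating service could read such a label and admit or block the page according to the thresholds a parent has set.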

Though these services are now in widespread use, they do have significant limitations. First and foremost, as currently configured, these tools are not self-initiating. They generally require that parents take some action to install software on the home PC or turn on the service option as presented by the ISP. Also worthy of concern, a number of critics have pointed out that particular filtering software is either under- or over-inclusive in its blocking, thus denying children access to sites for no good reason, or allowing access to sites that parents would expect to be blocked. From a global standpoint, there is a noticeable market failure in the availability of filtering and blocking products geared toward cultural perspectives other than those of North America and Europe. Furthermore, even those families who live in well-served regions have a very hard time making informed choices among the available products. Though some media outlets are beginning to provide reviews and evaluations of different services, there is little information available to enable parents to choose which service best fits their own values.

B. Publisher Controlled Labeling and Rating Systems

Some content control tools proposed to address objectionable content on the Web depend on actions taken by the authors or online publishers themselves. A variety of proposals have been offered to enable or require online publishers to label or otherwise identify the material that they put on the Web so that it can be blocked by users. The plan that came closest to success in this area was the RSACi system. Though such approaches, in which labels or ratings are attached to content at the source, are widely used in the traditional mass media (movie ratings, "harmful to minors" blinder racks, etc.), these techniques have proven uniformly ineffective so far on the Web.

Several factors seem to account for the failure of publisher-controlled content restrictions; I examine them in Part II below.

C. A New Breed of Preference Negotiation: The Platform for Privacy Preferences (P3P)

In addition to the main tools with which we are all familiar, a new hybrid of user- and publisher-controlled techniques is now being developed. To help address growing concerns about online privacy, W3C launched the Platform for Privacy Preferences (P3P) project to enable the development of a variety of tools and services that give users greater control over personal information and enhance trust between Web services and individual users.

P3P-enabled services will enhance user control by putting privacy policies where users can find them, presenting those policies in a form that users can understand, and, most importantly, enabling users to act more easily on what they see. For ecommerce services and other Web sites, P3P can be used to offer a seamless browsing experience for customers without leaving them guessing about privacy. Moreover, P3P will help ecommerce services develop comprehensive privacy solutions in the increasingly complex value chain that makes the commercial Web such a success. On today's Web, when a consumer buys a product or service from one Web site, completing the transaction may well involve numerous individual services linked together, each of which has some role in the ultimate delivery to the user and each of which has some responsibility for honoring the privacy preferences expressed by the user at the beginning of the transaction.

Setting the stage for such flexible combinations of services to be offered to users requires widespread agreement on standards, including the means of communicating from one service to another about how personal information should be handled. Standards have a vital role in the operation of the Web in general. The Web is not run by any single organization, but it enables people to share information around the world because everyone who operates a piece of the Web agrees to follow shared technical standards. Just as the HTML standard ensures that everyone who looks at a Web page will see it as the author intended, regardless of what computer or software is used, the P3P standard will enable every user and site operator on the Web to communicate in a common language about privacy.
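To make this common language concrete, the fragment below is a simplified sketch, based on the P3P working drafts, of the kind of machine-readable policy a site might publish (the site and policy details are illustrative, and particulars may change before the standard is final). It declares that the site collects only clickstream data, uses it only to complete the user's current activity, shares it with no other organizations, and retains it no longer than that purpose requires:

  <POLICY name="browsing"
          discuri="http://www.example.com/privacy.html">
    <STATEMENT>
      <PURPOSE><current/></PURPOSE>
      <RECIPIENT><ours/></RECIPIENT>
      <RETENTION><stated-purpose/></RETENTION>
      <DATA-GROUP>
        <DATA ref="#dynamic.clickstream"/>
      </DATA-GROUP>
    </STATEMENT>
  </POLICY>

A P3P-enabled browser can fetch such a policy and compare it against the user's stated preferences automatically, before any personal information changes hands.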

With the standard definition nearly complete, we are now entering the testing and implementation phase. Our last step in finalizing the design of the standard is to host a series of interoperability testing events, one in June and one in September. We are encouraged that a number of large Web software developers as well as innovative smaller services have committed to implementing P3P in their products. Following this testing phase, we will issue a final standard for the Web community, likely within the next year.

II. Lessons Learned

In the five years that the Web has been in widespread use around the world, we have already learned valuable lessons about which approaches to computer-assisted child protection work well and which do not.

A. Difficulties with publisher control

Experience with publisher-controlled solutions to content blocking or filtering has shown a number of problems associated with efforts to restrict access to content through actions taken by online publishers. The most significant hurdles are:

1. Difficulty in specifying exactly what should be blocked: A variety of technologies (labeling, identity verification, credit card verification) exist to automate the process of blocking access to content, but there is no method for automating the process of deciding which content should be restricted. When publishers seek to label content to enable user-side blocking, or to restrict access directly based on password, identity verification, or some assessment of user age, the publisher must have a precise means of identifying which content should be restricted or blocked. To date, we have not seen sufficient agreement on terms to enable clearly defined systems to emerge.

Among the publisher-side solutions recently proposed, a single global Top Level Domain (gTLD), such as .adult or .xxx, has the same problem of specificity. What content, exactly, would be allowed or required to be listed under this domain? What about content that is considered 'adult' in some jurisdictions but not others?

2. Lack of incentive/requirement to label leads to inadequate level of participation: Self-labeling systems to date have suffered from dramatic participation deficits. Even the most popular self-labeling system was unable to attract participation from even one-tenth of one percent of Web sites around the world. While some sites labeled out of a sense of good citizenship, or in order to show that they would act voluntarily before being legally required to do so, the vast majority of sites have seen no reason to self-label. With the continued explosion in the number of Web sites around the world, it seems unlikely that any significant number of sites will self-label without a clear legal or market requirement to do so.

3. Lack of global identity/age verification: Efforts to restrict access to content by means of age or identity verification have been hampered by the lack of a global, or even national, infrastructure for age or identity verification. While individual identification services do exist, no system has gained anything close to widespread acceptance, nor is any well enough integrated into the Web infrastructure to be of practical use for Web sites or users. Relying on identity or age verification by Web sites also creates potentially grave privacy risks for users, especially where those users are minors. When a Web site seeks to verify the age or identity of a user for the purpose of controlling access to content, that site will then also have access to this demographic information about the user for other purposes. Reputable sites would decline to use this information for any other purpose, but unscrupulous sites might well take advantage of the fact that this information is available. The immediate threat to all users' privacy rights from unwanted disclosure or use of age or name information is clear. Beyond this, technology which discloses the age, or mere minority, of a user could be used by criminals to target and victimize minors. This medicine would certainly be worse than the disease.

B. Benefit of user control

In sharp contrast to the weakness of publisher-controlled child protection technologies, user-controlled tools have been quite successful. There should be no doubt that these tools require considerable improvement, but indications are that continued technological advances, market incentives, and vigorous criticism of flaws in rating decisions or methodologies will yield ever more effective services to meet the diverse needs of Web users around the world. The main benefits are:

1. Proven effectiveness: User-controlled, third-party filters are widely available to Web users and function reasonably well to shield children from unwanted content. While not perfect, they screen out a significant amount of unwanted content. Most importantly, they can be deployed by users without having to rely on Web site operators at all. Though these services do place some burden on parents, there is significant benefit in putting the control in their hands.

2. Embracing diversity: By placing control in users' hands, we have the best chance of assuring that parents are able to make content choices consistent with their own values. When the control rests with a centralized authority or with the content provider itself, parents lose the ability to make clear decisions about what is available to their own children.

III. The Web of the Future: Can the Web be Zoned?

Recognizing the deficiencies in current approaches to content blocking, some have proposed that the Web should be zoned, keeping adult content in one 'place' and thereby enabling children to avoid it easily. Addressing the problem of "harmful to minors" content through zoning rules is common in the physical world, but it raises serious questions on the Web. Guided by the successes and failures of content control technologies to date, we may be able to achieve many of the goals of traditional zoning, but only by following the decentralizing lessons of Web technology.

Traditional zoning approaches to content seek to place all instances of a particular type of material or activity in a clearly marked zone or area. Just as businesses may not locate storefronts on a city block zoned for residential purposes, and just as a strip club may not be located in many predefined areas of a municipality, adult content would be 'located' in a particular area of the Web. Such location would likely be indicated either by a content label or by registering the Web site in a particular Top Level Domain.

These zoning approaches suffer from many of the same problems that plague the general category of publisher-controlled content screening: first, it will be necessary to define with precision what content is to be included in or excluded from a particular zone. And second, the zones will be insufficiently flexible to accommodate the diversity of laws and cultural norms on the Web. Just as different legal jurisdictions differ on the legality of a particular kind of content, they will also differ on what material belongs in which zone. Yet to zone the Web, a single, global definition of each zone appears necessary, especially if the zones are to be demarcated by global Top Level Domains (gTLDs).

Many of the goals of the zoning approach may be achievable, however, if we take the lessons of decentralization to heart. First, we must accept that the activity we seek to control (accessing objectionable content) occurs as much at the user end as at the provider end of the transaction. Second, since many users from many legal jurisdictions will access the same piece of content, zones must be user-defined, not publisher-defined. New preference-based technologies such as P3P may provide some help in allowing users to select those sites which meet their definition of 'wrong for my child' or 'acceptable for my 15 year old,' as sketched below. Such solutions will require the support of content providers, but will first and foremost depend on giving users choice.
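To illustrate, a user-defined 'zone' might take the form of a machine-readable preference that the browser enforces, in the spirit of P3P's preference-exchange work. The fragment below is purely hypothetical; none of these element names exists in any current standard, and the rating categories are borrowed from the RSACi vocabulary discussed earlier:

  <!-- hypothetical syntax: one family's definition of
       'acceptable for my 15 year old' -->
  <content-preferences user="my-15-year-old">
    <accept label-service="http://www.rsac.org/ratingsv01.html">
      <max category="v" value="2"/>  <!-- some violence allowed -->
      <max category="l" value="2"/>  <!-- some strong language allowed -->
      <max category="s" value="0"/>  <!-- no sexual content -->
      <max category="n" value="0"/>  <!-- no nudity -->
    </accept>
    <otherwise behavior="block"/>
  </content-preferences>

The essential point is that the definition of the zone lives with the family, not with the publisher or any single global authority, so two households in different jurisdictions can draw the boundary in different places.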

IV. Conclusion

Centralized approaches, focused on publisher behavior, have so far failed. Experimentation may still be worthwhile, but care should be taken not to go against the grain of the well-understood dynamics of the Web. Globally accessible information systems historically failed because of their reliance on centralized solutions, until the decentralized Web came along.

Decentralized approaches, focused on users and a diversity of third parties, have succeeded. They hold the greatest promise for the future because of their:

  1. Global scalability: The Web has grown by relying on many services operating independently, based on common standards. This is the best way to meet the needs of the growing number of users on the Web.
  2. Flexibility to accommodate diversity of values and cultures: Only decentralized, user-centered services can hope to meet the extraordinary cultural, moral and legal diversity that characterizes the Web.
  3. Ability to leverage entrepreneurial innovation: The best way to solve the difficult technology problems associated with child protection tools is to unleash the spirit of innovation that has built the Web. Such innovation happens in a decentralized environment, not through negotiation of global agreements on legal matters.

I thank the Commission again for the opportunity to appear before you on these important issues, and I stand ready to continue this dialogue as your deliberations proceed.