PICS Debate, September 1998




Date: Sep 19, 1998 (Sat, 1:13:41)

To: link@www.anu.edu.au

From: rene@pobox.com (Irene Graham)

Subject: Re: Censoring the Internet with PICS



On Thu, 17 Sep 1998 20:25:50 -0400 "Joseph M. Reagle Jr." <reagle@RPCP.MIT.EDU>
wrote:

By way of background, for Linkers unfamiliar with the PICS debate, Joseph is
W3C's Public Policy Analyst. He and I have discussed PICS on prior occasions.

>At 09:27 PM 9/17/98 +1000, Colin Richardson wrote:
> >Caroline Kruger. Censoring the Internet with PICS: An Australian
> >stakeholder
> >analysis (Research Report No 5)
>
>Interesting report. It's unfortunate that it is in PDF format (Will it be in
>HTML at some point?)!

Perhaps they don't want it to be able to be rated and blocked by
PICS-facilitated systems. Afaik, although HTML documents can be PICS-labelled,
PDF documents cannot be. Is that correct? If it is not correct, where may I find
information on how to label PDF documents, please. I've never seen anything
about that on the PICS site.

> Otherwise, its a balanced treatment of the topic but
>it does not include a reference to the following,

True, it doesn't. Hardly surprising given that most of the research for the
report had apparently been done prior to June 98. Unless one kept checking the
W3C PICS site to see if the PICS folk had decided to make any more
statements/announcements, *after* they'd stated that development of PICS had
ceased (approx Dec 97), one would not know about this June 98 statement. If the
PICS developers want the world to know what they are subsequently claiming, I'd
recommend they distribute their statements to appropriate lists and newsgroups.
Not a word has been mentioned about this Note on any of the relevant lists etc.
that I'm on, including ones that various signatories to the "Note" are aware of
and/or on. One could surmise that they didn't seriously want it to become widely
known.

>particularly in the context of Roger's comments.
>
>Statement on the Intent and Use of PICS:
>Using PICS Well
>W3C NOTE 01-June-1998
>http://www.w3.org/TR/NOTE-PICS-Statement

What comments of Roger's, precisely, are you referring to Joseph? Perhaps this:

"A major problem with this technology is that PICS enables censorship
regimes to be put in place, an attractive proposition for an authoritarian
government. Mr. Clarke says that despite repeated requests from him, W3C had
refused to distance themselves from this, preferring instead to offer this
technology as value neutral, without taking into account its effects."

While you draw attention to a W3C Note, that does not address the point Roger
raised. W3C Notes are *not* W3C recommendations or policy. In fact, the
particular Note you refer to explicitly states:

"This document describes the intent of PICS development and recommends
guidelines regarding the responsible use of PICS technology...It has no official
W3C standing."

Repeat: No official W3C standing. Even if it were a W3C recommendation, those are
unenforceable. W3C developed a technology that makes the Web censor friendly, and
the best they can now do is claim that that's not what they intended to do.
However, W3C isn't even prepared to say that. We merely see a "Note" signed by
some of those involved in the development of PICS, attempting to distance
themselves from increasing, world-wide, criticism of PICS; to disclaim
responsibility for developing a system which enables censorship regimes to be
put in place, on the ground that they didn't intend it to be used in that way.
It's too late. The technology exists, and neither the PICS developers nor W3C
have control over how it's used.

The Note emphasises that the re-statement of principles therein is that of the
"original 22+ organizations that proposed the PICS Specifications" in late 1995.
However, as anyone who's followed the PICS debate knows, as far back as 11
September 1995, the PICS Technical Charter stated:

"Our schemes will permit filtering either at an end-user's PC, or
somewhere in the network."

"Somewhere in the network" does not suggest that the PICS scheme was originally
intended to solely empower end users of the Internet to control what they
themselves access and this became particularly evident with the approval by W3C
of the PICSRules specification in Dec 97.

Furthermore, by mid 1996, if not before, Jim Miller, Co-Chair of PICS at W3C from
the outset, was being quoted as follows:

-"The 'veiled threat' of the US Communications Decency Act and similar
laws in Australia and other countries would force Web page creators to
rate their own content, he said. 'It's going to happen and the
publishers are going to resist it as long as they can, but they'll have
to realise that they must rate their content or face prosecution.' "
(Aust. Financial Review, 28/6/96)

So much for voluntary rating.

-" 'It's sort of my nightmare that every country would adopt PICS but
each would decide to have its own rating system,' he said."
(Aust. Financial Review, 8/7/96)

So much for a multiplicity of rating systems.

- " '...the thrilling thing about PICS is that it changes the whole
debate,' he said. In the past the debate was about censorship and
turning the whole thing off - now we can talk about how to regulate a
fully operational Internet, and the whole landscape has changed."
(Aust. Financial Review, 23/7/96)

Indeed it has. Thanks to W3C and the PICS developers.

By October 1996, Jim Miller and Paul Resnick had amended their original article
"PICS: Internet Access Controls Without Censorship" to include additional uses
of PICS:

"Governments may want to restrict reception of materials that are legal
in other countries but not in their own."

"Governments may also mandate country-specific vocabularies."

Original article: http://www.w3.org/PICS/iacwc.htm
Revised article: http://www.w3.org/PICS/iacwcv2.htm

It is far far too late for the developers of PICS to now expect that re-stating
their alleged original intent will make the slightest bit of difference to how
PICS can and may be used.

The W3C Note is notable for its lack of signatories nearly four months after its
release. A mere ten. Furthermore, two of them represent a company that sells
PICS-compatible proxy servers and promotes them as being able to filter material
at the proxy instead of the browser, stating that "This provides a consistent and
focused filtering capability and removes it from the user's control." (according
to an announcement of 9/12/97). Another signatory represents a company that
distinguished itself in July 97 by proposing (US) legislation that would enable
any parent who felt their child was harmed by "negligent" publishing to sue
publishers who fail to rate or mis-rate material. Parents would not be required
to prove actual harm, only that the material could "reasonably" be required to
have had a warning label or a more restrictive label.

At least some of the signatories of the recent W3C Note are not, in my view,
credible.

For those unfamiliar with the PICS debate, further background is available at:

The Net Labelling Delusion
http://rene.efa.org.au/liberty/label.html

Global Internet Liberty Campaign (GILC) submission to W3C re PICSRules (12/97)
http://www.gilc.org/speech/ratings/gilc-pics-submission.html


Irene


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Irene Graham, Brisbane, Queensland, Australia. PGP key on h/page.
Burning Issues: <http://www.pobox.com/~rene/>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Date: Sep 19, 1998 (Sat, 11:21:26)

To: rene@pobox.com (Irene Graham)

From: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au


In-Reply-To: <36093dd3.9087711@mail.logicworld.com.au>
References: <3.0.5.32.19980917202550.00bc4530@rpcp.mit.edu>

At 01:13 AM 9/19/98 +1000, Irene Graham wrote:
>On Thu, 17 Sep 1998 20:25:50 -0400 "Joseph M. Reagle Jr." <reagle@RPCP.MIT.EDU>
>wrote:
>By way of background, for Linkers unfamiliar with the PICS debate, Joseph is
>W3C's Public Policy Analyst. He and I have discussed PICS on prior occasions.

But I haven't really participated in that "official" capacity on this list, more as an academic. Also, by way of background, I'm actually on sabbatical from MIT; I'm currently a Resident Fellow at the Harvard Law School [1], so I am definitely not speaking in any W3C capacity here.

[1] http://cyber.harvard.edu

>Perhaps they don't want it to be able to be rated and blocked by
>PICS-facilitated systems. Afaik, although HTML documents can be PICS-labelled,
>PDF documents cannot be. Is that correct? If it is not correct, where may I find
>information on how to label PDF documents, please. I've never seen anything
>about that on the PICS site.

This is rather hostile!? I'm not asking that it be in an open (nonproprietary and smaller) format to extend the PICS architecture! <lol> Wow.

Regardless, of course PICS can apply to PDF. Any meta-data system worth its salt must be able to specify a referent, which in PICS's case is anything identified by a URL. PICS, by way of protocol sleight of hand, can be embedded in HTML where the implicit referent is that document. But it is supposed to be served in the HTTP stream, or at a label bureau. The PICS sleight of hand is to include it in the HTML as http-equiv, which means, "pretend this is in the HTTP header."
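[For readers who haven't seen the mechanism: a minimal sketch of the http-equiv embedding described above. The page URL is a placeholder and the RSACi-style rating values are purely illustrative; the exact label syntax should be checked against the W3C PICS label specification.]

    <META http-equiv="PICS-Label" content='
      (PICS-1.1 "http://www.rsac.org/ratingsv01.html" l gen true
       comment "illustrative label only"
       for "http://www.example.com/page.html"
       on "1998.09.19T01:13-0400"
       r (n 0 s 0 v 0 l 0))'>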

Otherwise, to respond to your points in brief. The people that worked on PICS have made _some_ effort to distance themselves from poor uses of the PICS protocol. That was my point.

From an organizational point of view, I'd posit that the W3C standards are not enforceable beyond the good will of those wishing to extend such work on their technical merits. Furthermore, the W3C as a body has no strong mechanism of issuing policy statements, a NOTE with signatories is the best they have at the moment. Yes, I (when I worked on this in a W3C capacity) did not widely publicize this, nor did I push to get many signatories. I had been interested in doing something like this for a long time, but by the beginning of the year had my hands quite full with P3P, so I only cast it as far as the PICS-interest group and moved on.

Finally, I'm not responsible nor accountable for the statements of Jim Miller or Paul Resnick in the press or other venue. I may agree with some of their statements, their views (and my own) may have changed over time, and I even disagree with some of their statements -- this is not an uncommon position to be in with respect to one's colleagues. I understand that as a Staff (Jim) and as co-creators (Jim and Paul) their statements strongly reflect upon PICS and the W3C. However, the most "official" (though generic) voicing of the W3C's statement on public policy is [2]. I believe the W3C is accountable for this statement. I think the most cogent exposition of views on how PICS is used -- and one I feel responsibility for -- is in the statement [3]. Including:

- No single rating system and service can perfectly meet the needs
of all the communities on the web.
- The decision to self-label should be at the discretion of content
creators and publishers.

[2] http://www.w3.org/Policy/statement.html
[3] http://www.w3.org/TR/NOTE-PICS-Statement



Date: Sep 21, 1998 (Mon, 13:6:36)

To: rene@pobox.com (Irene Graham)

From: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au


In-Reply-To: <36084111.1393203@mail.logicworld.com.au>
References: <3.0.5.32.19980917202550.00bc4530@rpcp.mit.edu> <3.0.5.32.19980919112126.009fec80@rpcp.mit.edu>

At 12:53 AM 9/22/98 +1000, Irene Graham wrote:

I am on leave from any "W3C spokesperson" role -- my point was, I never played that role in this forum regardless. You should speak to Danny Weitzner <djw@w3.org> if you wish to speak to someone in that position.


>Yes I realise this. I should have made it clear that I was referring to it being

>able to be voluntary-mandatory PICS rated by the content provider (eg. coerced

>by ISPs coerced by government). While there are no tools enabling content

>providers to PICS label their PDF documents, such coercion is not possible,

>unless of course PDF format is outlawed. :-)


I believe there is label bureau software out there.


>Interesting. Self labelling enabled by a sleight of hand.


I was using that in a technical sense. <smile> HTTP-equiv is a technical sleight of hand in a sense, telling the HTML client, "pretend this was in the HTTP."


>of thing a number of times now, from reasonably reliable sources, that PICS

>labels were/are primarily intended to be served in the HTTP stream, or at a

>label bureau, i.e. beyond the control of most content providers to specify the

>rating label applicable to their content. Unsurprising actually.


The spec is very clear about where they may reside: in the content (through HTTP-equiv), in the HTTP headers, or at a label bureau.
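[Schematically, the HTTP path just mentioned: a server, or a label bureau queried about a URL, returns the label in a response header rather than in the page itself. A rough sketch only; the request side of the exchange and the exact header layout are as defined in the PICS label-distribution spec, and the URLs are placeholders.]

    HTTP/1.0 200 OK
    Content-Type: text/html
    PICS-Label: (PICS-1.1 "http://www.rsac.org/ratingsv01.html" l
      for "http://www.example.com/page.html" r (n 0 s 0 v 0 l 0))

    ...document body follows...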


>In short, PICS was designed and optimised for third party censorship, and by a

>sleight of hand, the developers ensured they could attempt to sell it to the Net

>community as something benign. To date, three years later, the vast majority of

>Net content providers have shown themselves not to be so easily conned.


I believe meta-data often works best when it is not embedded in the content. Through http-equiv, one can also stick it in the HTML file itself. Though they are related, there is a distinction between meta-data transport and the question of "self-labeling" vs. "3rd party labeling." The one is "how I learn," the other is "who is speaking."


>In my view, PICS has failed in one of its key design goals - that of

>enabling/encouraging third party groups to set up rating systems and labelling

>bureaus suitable for their needs.


The technology enabled self rating and 3rd party rating. Self rating was seen as rather nice by some, and it is an instance of a technology that imposes strong interoperability requirements between clients and services. Something I've realized is that there is not a lot of incentive for 3rd party raters to adopt PICS. For an org that is going to go out and rate the whole Web, PICS doesn't buy you a whole lot. In fact, you probably want to keep your ratings proprietary and closed. What the interoperability buys you in the 3rd party scenario is the ability to switch or use multiple 3rd parties. However, people seem fairly happy using a single proprietary selection/filtration tool, with an encrypted set of ratings from that single source. If you are a proprietary third party supplier, you may want to "lock in" folks to keep them from moving to another service.


>By enabling self-labelling in order to try to

>sell it to the Net community, the entire focus became centred on means of

>forcing content providers to self-rate their content - rather than on content

>providers being free to speak without pressure to self-censor and others being

>able to choose what they would or wouldn't read.



That is the strategy some governments could choose, particularly if they have precedent in other regulatory venues. (Food labelling, blocking the naughty bits of magazines on public display, motion picture rating, TV program ratings, etc.) In the US, some of the proposals have been that some 3rd party system must be available to the user. I think the latter approach is much more likely to be successful in keeping children away from content that may be offensive and much less problematic from a free speech point of view -- though problems do develop if they are required, or if there isn't much transparency in the options and choices made by the 3rd party.


My current belief on how to best achieve a "family friendly" Web space is (in order of importance)


1. 3rd party children's spaces (like that offered on AOL).

2. 3rd party white lists (like that offered by the ALA, or what you could do in PICSRules).

3. selecting appropriate content when labelled and trusted.


Note that the trade-off here, of the white list approach (and again, PICS/PICSRules allows you to easily change or use multiple white lists), is that you are being confined to homogenized sets of content. It would be nice to be able to go to any content and have it filtered only on the merits of what you care about ("offensive content" in this case), rather than on other hidden criteria a creator of a white list may use. But this is predicated on most/everything being labelled, which is fraught with difficulties. It's not a predicate of P3P. The deployment model is a little different and scales well. The predicate there is: it'd be nice for services to declare their privacy practices, and there may even be government pressure to do so, but the real hook is, "if you want information from a user, you must inform them why, how, etc." There's a carrot in there.


> Basically, imo, the PICS

>developers were too focussed on the CDA and the US 1st A to see how PICS could

>be hijacked by authoritarian governments. While it would be impossible for

>government authorised entities to rate the Net (even all content in their own

>jurisdiction) as third parties, it is not impossible for them to mandate

>self-rating with penalties for misrating or failing to rate, or similarly, to

>coerce ISPs to coerce content providers to self-rate. And that, essentially, is

>the main cause of the wide-spread opposition to PICS.


I understand -- though I believe it is possible for governments to rate, to censor, to require everyone to go through a central proxy. One of the most arbitrary and capricious regulatory models is to pass a law and selectively enforce it with grotesque penalties. It can be quite effective and works in Asia. That wouldn't work in Australia, so that government is pursuing a policy more likely to appeal to its constituency. That approach might not work in the US, and it has its own process of push-and-shove, passing constitutional muster, and appealing to the citizenry.


>That is not to say that absent the ability to self-rate, there would be no

>opposition to PICS. If the anticipated third party rating and labelling bureaus

>had or do eventuate, the opposition would likely be on similar grounds as that

>towards the likes of Cybersitter et al.


Yep. Homogenized content with nontransparent selection/filtration.


>Precisely. They just develop and recommend technology that makes the Net censor

>friendly and then say: not our responsibility how it's used.


[forgive me for going off on a tangent here, but I wanted to get some of my thoughts on this stuff down <smile>]


Not exactly. The blind spot in the concept of "neutral technology" is that the policy is already set by legitimate political (hopefully democratic) processes. The technology can then implement that policy. For instance, if Australia -- through "legitimate" democratic processes -- chose mandatory self-labelling, and the technology is used that way, who am I to challenge that?


However, if you were to predicate that the political process is corrupt or non-representational, that the government is exceeding its authority and breaching inalienable civil rights, the technology is not neutral. It is not neutral because it is _part_ of the process which determines the eventual policy. For instance, one could take the position that speech is an absolute and unhindered right, (no restriction regardless of safety, obscenity, hate-speech, libel, etc.) Furthermore, one may act completely on principle -- rather than pragmatism. In such an instance, I can hardly see PICS as being appealing. PICS, as developed, seems to be in that fuzzy space where some restrictions on content seem appropriate, particularly for children. While its intent was "self-empowering" and decentralized, the nature of the technology does not make it immune from centralized control -- though what it offers over closed systems in such situations is questionable. One might also be a pragmatist, and believe that in a certain context (when the CDA was active) this path was an improvement over the present situation or likely future course. However, in a different context, the path it provides may be more problematic given where you stand on the spectrum. There's a lot of decision paths here by which a person would make a decision as to whether they support PICS.


- Assumptions about society and civil rights.

- Assessment of present day situation and likely future paths.

- Preference in choosing "principle" vs. "pragmatic" strategy.


Regardless, I believe the statement is more akin to "here's some technology, with the intent to make the world, in a given context, a better place. This organization is incapable of challenging the legitimacy of political processes in determining their policies. So, we generally try to make the technology as flexible as possible w/ an acknowledged bias towards enabling decentralized and individualistic policy setting." An interesting thought is to posit that the Internet challenged the legitimacy of political processes, either accidentally (by designing for technical efficiency) or purposefully. However, was this "right"? I am of the belief that a beneficent/wise tyrant is preferable to democracy, but the likelihood of the first is low, so I'll go with the second. Where preference aggregation and governance have taken place on the Net, has the process been better than that offered in the real world, or are some of us just happier with the result, a beneficent oligarchy?


Otherwise, where you need not deliberate or aggregate, the Internet governance mechanism is your clickstream. Do you care about privacy? Where did you click, who did you give your cookie to? Is child porn wrong? It is accessible on the Net and most porn sites are orientated towards the "barely legal" context. The problem is what happens when deliberative processes must aggregate preferences? In some situations society routinely enforces norms of the majority on a minority. In a political context, we've carved out niches where this shouldn't happen with "civil rights." The Internet, as designed, also creates a niche of things that are difficult to do. The niche carved out on the political side is continually being argued about and redefined. The niche on the technology side is influencing the civil rights side -- and vice versa. The big tension here seems to be related to:


1. The Internet governance structure does not match the political governance structure.

3. There is more than one political governing structure, each of which is constantly in tension within itself. There only seems to be one Internet structure. Which political structure maps to the Internet?

2. The Internet structure has the capability to influence the policy structure.

4. The political structure may also influence the Internet structure. This upsets some because the Internet structure has been good to their position so far.


What is at issue here are the concepts of "rights" (I've had conversations with some folks on this), political legitimacy, and deliberative processes. All of which I wish to think further about at Harvard. But, given this complexity, my simple philosophy when I was at the W3C was that a responsible technology should do two things:


1. Allow other processes (hopefully democratic) the most freedom in determining which policies should be in play. Yes, I could specify a technology which only appeals to my sense of right -- I could be the beneficent tyrant I'm sure! However, this doesn't seem very pluralistic or responsible. *

2. Allow multiple policies to co-exist.


* And as a pragmatic individual in the process, as far as possible, try to bias things in a way that I'm personally happy with. In the end, on balance, to do good to my own principles and interests.


>There's no indication that the views of those others have changed, least not as

>at Nov/Dec 1997/PICSRules, and there's been no public discussion since then,

>afaik. But then, as I've said before, most of the PICS proponents aren't willing

>to discuss the issues publicly with the Net.plebs.



Well, I try to, though I can only spend so much time on mailing list debates. <smile>



Date: Sep 21, 1998 (Mon, 20:2:10)

To: reagle@rpcp.mit.edu (Joseph M. Reagle Jr.)

From: Stanton McCandlish <mech@eff.org>

Subject: Re: Censoring the Internet with PICS

Cc: bigthoughts@cyber.law.harvard.edu, mech@eff.org, barlow@eff.org,
    rene@pobox.com, link@www.anu.edu.au, djw@w3.org



I'm copying the other parties to the discussion that were mentioned in the
text but not copied on your forward, since I'd like them to see my
response as well.

Joseph M. Reagle Jr. typed:

> Here's a recent thread on PICS <. But within it, I got to ramble on
> about some of my thoughts on "neutral technology" that I'd like to speak
> to some folks about and maybe formalize.

Good. I've been wanting to have something like this discussion with you
(or more accurately with someone from W3C willing to listen) for some time
(again.)

> From an organizational point of view, I'd posit that the W3C standards
> are not enforceable beyond the good will of those wishing to extend such
> work on their technical merits.

I beg to differ. They are, at least in a way that has social impact,
enforceable by the ill will of those wishing to abuse the tools for actual
censorship, as we've seen attempted in Australia, and newly in Singapore.
Ultimately, they may not work, but they do have the immediate and palpable
effect of cowing local ISPs, chilling free speech of users in those areas,
etc.

> Furthermore, the W3C as a body has no
> strong mechanism of issuing policy statements, a NOTE with signatories is
> the best they have at the moment.

That's weak. ANY mechanism would have been something, and even with NO
such mechanism W3C certainly had the capability to design PICS in such a
way that it was difficult if not impossible to abuse as we are seeing it
abused (or seeing attempts, at any rate.) Yet W3C resisted doing so with
all of its collective might, despite a united concern from essentially
everyone with an opinion on the topic. I find this irresponsible and
reprehensible.

> Yes, I (when I worked on this in a W3C
> capacity) did not widely publicize this

That's a shame.

> Finally, I'm not responsible nor accountable for the statements of Jim
> Miller or Paul Resnick in the press or other venue.

Understood, but one needn't be an apologist for them either.

> That is the strategy some governments could choose, particularly if they
> have precedent in other regulatory venues. (Food labelling, blocking the
> naughty bits of magazines on public display, motion picture rating, TV
> program ratings, etc.)

Shooting people in the head for daring to question the government,
torturing their children in front of their eyes for speaking out
against totalitarianism, that kind of thing. (Those are real examples
from recent media reports, no hypotheticals).

One of W3C's biggest examples of seemingly intentional myopia is what I
call the Fallacy of the Not-So-Bad Dictatorship, in which it consistently
pooh-poohs the concerns of the civil liberties and civil rights
communities by resorting to the pretense that governments with no
restrictions on what they do to their citizens aren't really all THAT bad,
that they deserve a "choice", an "option" to use PICS, as currently
instituted, wisely or evilly. By this fallacious reasoning, providing
them tools to do worse isn't such a bad thing, and certainly worth the
supposed gain for "parental empowerment" (or more honestly "government
off of industry leaders asses") in the comfy West.

PICS, like the MPAA movie ratings and the Comics Code Authority, is a
text-book example of the govt. scaring the industry into instituting
"voluntary" censorship the govt. could never get away with directly
mandating.

> Not exactly. The blind spot in the concept of "neutral technology" is
> that the policy is already set by legitimate political (hopefully
> democratic) processes. The technology can then implement that policy.
> For instance, if Australia -- through "legitimate" democratic processes
> -- chose mandatory self-labelling, and the technology is used that
> way, who am I to challenge that?
>
>
> However, if you were to predicate that the political process is corrupt
> or non-representational, that the government is exceeding its authority
> and breaching inalienable civil rights, the technology is not neutral.

All of this is precisely what we and others have been telling W3C for,
what, 4 years now? If this is so clear to you, why is it not clear to
your (sometime) employer? Is this no-brainer so hard to get thru their
heads? The political process *IS* corrupt and non-representational over
a whole hell of a lot of the surface of this increasingly wired planet.

Every time I've asked anyone at W3C about what they were doing, I got back
an answer that can be summarized as "PICS is just technology, and technology
is neutral. We aren't responsible for what bad things people abuse our
neutral technology for." Kind of an anti-Nobel attitude that is
shockingly ignorant and irresponsible. It's almost as outdated as
Lamarckianism, for chrissakes.

> It
> is not neutral because it is _part_ of the process which determines the
> eventual policy. For instance, one could take the position that speech is
> an absolute and unhindered right, (no restriction regardless of safety,
> obscenity, hate-speech, libel, etc.)

This is essentially a straw man, though it suffers more acutely from what I
call the Fallacy of the Free Speech Absolutist. The fact is that there
*are no* free speech absolutists taking the position you prop up here
(aside from a very small handful of lunatics). Even hard-core anarchists
do not believe in this absolute form of free speech with no
responsibility (in their case, they simply call for private sector
accountability, e.g. in contracting, or with the more recent idea of
reputation markets, etc. I have little favor for their viewpoint, I
simply give it as probably the most extreme example. "Regular"
libertarians, and non-libertarian anti-authoritarian free speech adherents
are far less extreme, and even "mainstream" libertarians believe in
government enforcement, when necessary, of responsibility for harms caused
by irresponsible speech and action.)

> While its intent was "self-empowering" and
> decentralized, the nature of the technology does not make it immune from
> centralized control

Well, aside from the fact that many of us consider it highly questionable
whether that was in fact the intent, the question remains, why has W3C to
date refused to publicly acknowledge that PICS DOES lend itself to
centralized control, and even vehemently denied this clear fact?

> One might also be a pragmatist, and believe
> that in a certain context (when the CDA was active) this path was an
> improvement over the present situation or likely future course.

I don't buy this argument. No one in their right mind believed the CDA
would withstand constitutional scrutiny. I don't think even Bruce Taylor
believed that. He simply wanted to cause a ruckus (and get paid for it).
He frequently lied, blatantly, in public about what the CDA said (and I
know he knew the facts), because trying to defend what the CDA actually
did and said was impossible. Ditto for Cathy Cleaver, and the legislative
sponsors of the CDA - and their CDA-II counterparts right now.

> However,
> in a different context, the path it provides may be more problematic
> given where you stand on the spectrum. There's a lot of decision paths
> here by which a person would make a decision as to whether they support
> PICS.
>
>
> - Assumptions about society and civil rights.
>
> - Assessment of present day situation and likely future paths.
>
> - Preference in choosing "principle" vs. "pragmatic" strategy.

So why has W3C never acknowledged this (as far as I've ever seen) or
dealt with any of the concerns raised by those with somewhat or very
different viewpoints? PICS was essentially produced in a vacuum. NOTHING
was done, at all, period, to assuage the concerns raised by those with
"assumptions about society and civil rights". More often than not we were
simply told that W3C had faith that the product would be used well,
despite the clear possibility of the opposite, and plans in Australia and
the UK already afoot to abuse PICS (and RSAC-I more specifically).

> Regardless, I believe the statement is more akin to "here's some
> technology, with the intent to make the world, in a given context, a
> better place. This organization is incapable of challenging the
> legitimacy of political processes in determining their policies.

That is not true. EFF does it all the time. It is a very simple matter
to read the UN Declaration of Human Rights, and come to an informed
decision on whether or not the govt. of, say, Burma or Singapore, adheres
to those principles. If they don't, then PICS should not have been
designed in a way that lent itself to abuse by them. You don't need to
start a revolutionary war to make the "challenge", you can do it in
pragmatic terms, by simply saying, "this government is doing wrong, and we
do not trust them with this tool, so we are going to redesign it so it
cannot be (at least cannot easily be) abused by them." That's called
social responsibility, something W3C seems to completely lack, at least
when it comes to PICS.

> So, we
> generally try to make the technology as flexible as possible w/ an
> acknowledged bias towards enabling decentralized and individualistic
> policy setting."

I do not buy this for an instant. The "white paper" or "article" or
whatever one wants to call it that introduced PICS to the world was very
carefully and selectively edited, after publication, to specifically
include, in phrasing that can be construed as quite encouraging, the
(ab)use of PICS by governments to censor their entire citizenry.

Years later this still enrages me. Not only was the initial publication
effectively a lie, it was only corrected sub rosa, and as far as I can see
it was done so that W3C could say "We hear the civil liberties concerns,
and don't care about them" rather than taking on the real work of
retooling to prevent those fears from coming true. W3C effectively didn't
even take a "our technology is neutral" position, but one of "our
technology not only can be abused, hell, we specifically designed it to be
abused because some governments expressed interest in doing so"!

> An interesting thought is to posit that the Internet
> challenged the legitimacy of political processes, either accidentally (by
> designing for technical efficiency) or purposefully.

True, but of borderline relevance. In many places all over the world,
that legitimacy is, and has been for decades in many cases, already
challenged, internationally, by NGOs and other governments and coalitions
thereof. You would have had to have lived in a sealed box to not know
this. So why did W3C pretend this was not true, pretend that it had no
basis on which "challenge" the bad-acting governments by not handing them
a tool ready-made for nationwide censorship?

> However, was this
> "right"? I am of the belief that a beneficient/wise tyrant is preferable
> to democracy, but the likelihood of the first is unlikely, so I'll go
> with the secodn.

While I don't agree with you, I don't see the relevance. If, as you admit,
we aren't going to get the beneficent tyrant, let's stick to reality:
democracy or fascism. That's it.

> Is child porn
> wrong? It is accessible on the Net and most porn sites are orientated
> towards the "barely legal" context.

This is a fallacious argument and a non-sequitur, since what legit porn
sites provide has nothing to do with availability of content that is
generally illegal everywhere. If it's "barely legal", it's as legal
as legal gets. Something is either legal or it isn't. "Barely legal" is a
marketing phrase, nothing more. NB: The problem of child porn is a
problem of underenforcement, not of not enough laws or not enough tools.
I'm unaware of any modern country in which child porn (at least with
pre-pubescents - post-pubescent age of consent varies from jurisdiction to
jurisdiction) is legal. This really has no connection to the core
conflict, which is about content that is legal, in one place, but Damning
the Speaker to the Wrath of Allah or whatever, in another.

> The problem is what happens when
> deliberative processes must aggregate preferences? In some situations
> society routinely enforces norms of the majority on a minority. In a
> political context, we've carved out niches where this shouldn't happen
> with "civil rights."

Niches that W3C has conveniently "forgotten".

> The Internet, as designed, also creates a niche of
> things that are difficult to do. The niche carved out on the political
> side is continually being argued about and redefined. The niche on the
> technology side is influencing the civil rights side -- and vice versa.
> The big tension here seems to be related to:
>
>
> 1. The Internet governance structure does not match the political
> governance structure.

This point is actually subsumed by the ones below it (there is no "the"
polit. gov. struc., but many of them, and they don't map to the Net, as
you observe. But anyway...)

> 3. There is more than one political governing structure, each of which
> is constantly in tension within itself.

More to the point, with each other. They do not map to the Net, because
they are contradictory and localized, and the Net is largely homogenous
(in the ways that are relevant here) and increasingly global.

> There only seems to be one
> Internet structure. Which political structure maps to the Internet?

This is a very simple question with a very simple answer: That meatspace
jurisdiction with the least-restrictive governing structure becomes the
default "map" for Internet governance, since even if all other countries
make it a capital offence to post a picture of a naked boobie on the Net,
the freer jurisdiction acts as content haven for the rest of the world.
"Duh". From what I can tell that jurisdiction may be the Netherlands,
though there are some smaller island nations that might be even less
restrictive, since Holland does occasionally try to enforce some of the
wacky European "hate speech" laws, kind of half-heartedly (I haven't
really tried to discover the most-free jurisdiction, at least in freedom
of expression & publication terms, though that would be an interesting
exercise).

> 2. The Internet structure has the capability to influence the policy
> structure.
>
> 4. The political structure may also influence the Internet structure.

Right.

> This upsets some because the Internet structure has been good to their
> position so far.

It upsets many for a whole raft of other, less cynical, reasons, some of
which are that the Net actually helps breed freedom (it brings a new
avenue of free speech into a censored area, at least until the local
regime catches up), and as another example, the mapping of one censorious
jurisdiction's policies onto the Net censors the rest of the world, in at
least two ways (by denying speakers outside that jurisdiction part of
their audience, and by denying recipients outside the censorious area the
right to read what those within it have to say.) It's very convenient to
fall back on bullshit "multiculturalism" arguments to gloss over this, but
that argument simply doesn't work when it comes to a global medium. The
bare fact of the matter is that either the entire world is going to simply
have to live with (at least increased availability of, if not being
subjected to others') free expression, whether they like it or not, or
most of the world is going to slip one step at a time toward increasingly
fascist modes of governance, as one jurisdiction after another takes
censorship actions, and inspires others to do the same. That won't stop
Net content of course, since as long as there is a haven, the content will
remain available. It's really quite futile in the long run. In the short
run, though, you end up with a net increase in human rights violations all
over the place, and even increased levels of censorship in jurisdictions
that have heretofore been at least fairly liberal on the issue (e.g.
Australia and the UK, and the US for that matter, though we get these laws
knocked down pretty quickly due to the First Amendment.)

> What is at issue here are the concepts of "rights" (I've had conversations
> with some folks on this),

I would hope so!

> political legitimacy, and deliberative
> processes. All of which I wish to think further about at Harvard.

Funny, but I have a pretty clear grasp of this stuff sitting at a desk in
San Francisco. Your brain isn't at Harvard, it's in your head. I wish the
"powers that be" at W3C would realize the same about themselves and stop
pretending they don't have the necessary facts and influence to do the
right thing.

> But,
> given this complexity, my simple philosophy when I was at the W3C was
> that a responsible technology should do two things:
>
>
> 1. Allow other processes (hopefully democratic) the most freedom in
> determining which policies should be in play. Yes, I could specify a
> technology which only appeals to my sense of right -- I could be the
> beneficent tyrant I'm sure! However, this doesn't seem very pluralistic
> or responsible. *
>
> 2. Allow multiple policies to co-exist.
>
>
> * And as a pragmatic individual in the process, as far as possible, try
> to bias things in a way that I'm personally happy with. In the end, on
> balance, to do good to my own principles and interests.

I think both of these points are sheer folly. You're falling for the
fallacious "multicultural diversity" argument, and ignoring
long-established human rights basics, such as the concept that *no*
government can legitimately take away human rights. As for multiple
policies, it's the same fallacy again, in different form. It's like
giving dictatorships a "choice" between treating their citizens as humans
or as cattle. This choice does not legitimately exist, and it is the
height of "politically correct" self-delusion to to convince oneself that
one is being pluralistic and sensitive by offering evil fascist regimes
the "choice" to obtain and abuse technology to violate the rights of their
citizens, perhaps with *fatal* results (e.g. "we catch you trying to
bypass The People's Proxy, and you will be executed"). It could happen.
Imprisonment, torture, gulaging, sneaky reprisals, and other forms of
punishment would of course be more likely, but let's just stare the horror
in the face: It is quite conceivable that PICS is going to literally kill
people if the damned thing ever really gets airborne. The ability to
communicate online without monitoring and without censorship is already
saving lives around the world, as underground human rights workers have
attested when thanking Phil Zimmermann for PGP.

[Irene Graham quoted here:]
> > afaik. But then, as I've said before, most of the PICS proponents
> > aren't willing to discuss the issues publicly with the Net.plebs.

Exactly.

> Well, I try to, though I can only spend so much time on mailing list
> debates. <smile>

That's cute, but it's not responsive to the very serious criticism and
challenge here. Hell, you aren't even speaking on behalf of W3C here.

--
Stanton McCandlish mech@eff.org http://www.eff.org/~mech
Program Director, Electronic Frontier Foundation
voice: +1 415 436 9333 x105 fax: +1 415 436 9333
PGPfone: 204.253.162.21


Date: Sep 22, 1998 (Tue, 0:53:33)

To: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

From: rene@pobox.com (Irene Graham)

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au



On Sat, 19 Sep 1998 11:21:26 -0400 "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>
wrote:
[...]
> >By way of background, for Linkers unfamiliar with the PICS debate, Joseph
> >is W3C's Public Policy Analyst. He and I have discussed PICS on prior
> >occasions.
>
>But I haven't really participated in that "official" capacity on this list,
>more as an academic. Also, by way of background, I'm actually on sabbatical
>from MIT; I'm currently a Resident Fellow at the Harvard Law School [1], so
>I am definitely not speaking in any W3C capacity here.

I wasn't suggesting you were, but that I think your position as W3C staff is
relevant, in that you can hardly be considered to be an independent commentator.
It is you, after all, who has responded on behalf of W3C in discussions about
PICS on other mailing lists and to GILC re their submission to W3C. I trust you
realise it's difficult for other people to know what hat you're wearing from
time to time. Insofar as your being on sabbatical from MIT is concerned, if you
are also on sabbatical from W3C (which is not part of MIT, afaik) might I
suggest you ask the relevant W3C webmaster to update the staff page to that
effect.

[...]
> >Perhaps they don't want it to be able to be rated and blocked by
> >PICS-facilitated systems. Afaik, although HTML documents can be
> >PICS-labelled, PDF documents cannot be. Is that correct? If it is not correct,
> >where may I find information on how to label PDF documents, please.
> >I've never seen anything about that on the PICS site.
>
>This is rather hostile!? I'm not asking that it be in an open
>(nonproprietary and smaller) format to extend the PICS architecture! <lol>
>Wow.

I agree. Wow. Ascribing such a motive to you would be ridiculous. I'm surprised
and sorry you'd interpret that from my remarks.

My point was that I've often thought that governmental enthusiasm for PICS and
self-rating is quite misplaced in view of the increasing popularity of PDF
documents (even PICS enthusiasts, like the ABA, are now using that format) and
given that content providers cannot self-rate such documents at present, nor as
far as I know are there any indications that they will be able to do so in the
foreseeable future, if ever. I thought you may know something more about this
than I do (and be willing to impart any such information); nothing more, nothing
less.

>Regardless, of course PICS can apply to PDF. Any meta-data system worth its
>salt must be able to specify a referent, which in PICS's case is anything
>identified by a URL.

Yes I realise this. I should have made it clear that I was referring to it being
able to be voluntary-mandatory PICS rated by the content provider (eg. coerced
by ISPs coerced by government). While there are no tools enabling content
providers to PICS label their PDF documents, such coercion is not possible,
unless of course PDF format is outlawed. :-)

>PICS, by way of protocol sleight of hand, can be
>embedded in HTML where the implicit referent is that document. But it is
>supposed to be served in the HTTP stream, or at a label bureau. The PICS
>sleight of hand is to include it in the HTML as http-equiv, which means,
>"pretend this is in the HTTP header."

Interesting. Self labelling enabled by a sleight of hand. I've heard this kind
of thing a number of times now, from reasonably reliable sources, that PICS
labels were/are primarily intended to be served in the HTTP stream, or at a
label bureau, i.e. beyond the control of most content providers to specify the
rating label applicable to their content. Unsurprising actually.

Self-labelling was never likely to be considered reliable and effective by those
who want to control what others say and read. So, in view of the threat of CDA
1, PICS primarily had to provide a means for those people to rate, label and
censor other people's speech. However, at that time especially, PICS had to be
sold to the Net community as much as to the legislators and the masses.
Self-labelling was far more likely to appeal to the Net community than having
their speech rated and censored by others.

In short, PICS was designed and optimised for third party censorship, and by a
sleight of hand, the developers ensured they could attempt to sell it to the Net
community as something benign. To date, three years later, the vast majority of
Net content providers have shown themselves not to be so easily conned.

In my view, PICS has failed in one of its key design goals - that of
enabling/encouraging third party groups to set up rating systems and labelling
bureaus suitable for their needs. By enabling self-labelling in order to try to
sell it to the Net community, the entire focus became centred on means of
forcing content providers to self-rate their content - rather than on content
providers being free to speak without pressure to self-censor and others being
able to choose what they would or wouldn't read. Basically, imo, the PICS
developers were too focussed on the CDA and the US 1st A to see how PICS could
be hijacked by authoritarian governments. While it would be impossible for
government authorised entities to rate the Net (even all content in their own
jurisdiction) as third parties, it is not impossible for them to mandate
self-rating with penalties for misrating or failing to rate, or similarly, to
coerce ISPs to coerce content providers to self-rate. And that, essentially, is
the main cause of the wide-spread opposition to PICS.

That is not to say that absent the ability to self-rate, there would be no
opposition to PICS. If the anticipated third party rating and labelling bureaus
had or do eventuate, the opposition would likely be on similar grounds as that
towards the likes of Cybersitter et al.

>Otherwise, to respond to your points in brief. The people that worked on
>PICS have made _some_ effort to distance themselves from poor uses of the
>PICS protocol. That was my point.

Yes, and mine was that some of those who signed the Note have been involved with
developments, and/or made statements, which are not in compliance with the
"Using PICS Well" note. Imo, this detracts from its overall credibility, and
their attempting to distance themselves is far too late. The damage has been
done.

>From an organizational point of view, I'd posit that the W3C standards are
>not enforceable beyond the good will of those wishing to extend such work on
>their technical merits. Furthermore, the W3C as a body has no strong
>mechanism of issuing policy statements, a NOTE with signatories is the best
>they have at the moment.

Precisely. They just develop and recommend technology that makes the Net censor
friendly and then say: not our responsibility how it's used.

>Yes, I (when I worked on this in a W3C capacity)
>did not widely publicize this, nor did I push to get many signatories. I had
>been interested in doing something like this for a long time,

I didn't realise you had anything to do with it. The idea, in principle, was
commendable; it's just a pity it has no force, etc.

>but by the
>beginning of the year had my hands quite full with P3P, so I only cast it as
>far as the PICS-interest group and moved on.

Well, one might have hoped that would bring more signatories, but it's not a
means of telling the general public, since participation in that mailing list is
not open to the public.

>Finally, I'm not responsible nor accountable for the statements of Jim
>Miller or Paul Resnick in the press or other venue.

I've never remotely suggested you are.

>I may agree with some of
>their statements, their views (and my own) may have changed over time,

There's no indication that the views of those others have changed, least not as
at Nov/Dec 1997/PICSRules, and there's been no public discussion since then,
afaik. But then, as I've said before, most of the PICS proponents aren't willing
to discuss the issues publicly with the Net.plebs.

>and I
>even disagree with some of their statements -- this is not an uncommon
>position to be in with respect to one's colleagues. I understand that as a
>Staff (Jim) and as co-creators (Jim and Paul) their statements strongly
>reflect upon PICS and the W3C. However, the most "official" (though generic)
>voicing of the W3C's statement on public policy is [2]. I believe the W3C is
>accountable for this statement.

Hmm... that's the stuff about realising the full potential of the Web. I'd refer
you to the GILC submission and TBL's response, if you weren't already well aware
of those.

>I think the most cogent exposition of views
>on how PICS is used -- and one I feel responsibility for -- is in the
>statement [3]. Including:
>
> - No single rating system and service can perfectly meet the needs
> of all the communities on the web.
> - The decision to self-label should be at the discretion of content
> creators and publishers.

> [2] http://www.w3.org/Policy/statement.html
> [3] http://www.w3.org/TR/NOTE-PICS-Statement

Yes, sounds wonderful. Trouble is, it's overridden by [2], which makes clear that
W3C is in the business of telling governments "what is technically possible; how
effectively the technology can meet policy requirements". Miller (and probably
others) has apparently done wonders in that regard in Australia.

Regards
Irene



Date: Sep 22, 1998 (Tue, 20:14:1)

To: rene@pobox.com (Irene Graham)

From: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au, Stanton McCandlish <mech@eff.org>


In-Reply-To: <360eb460.22223502@mail.logicworld.com.au>
References: <3.0.5.32.19980917202550.00bc4530@rpcp.mit.edu> <3.0.5.32.19980919112126.009fec80@rpcp.mit.edu> <3.0.5.32.19980921130637.009e0970@rpcp.mit.edu>

At 12:45 AM 9/23/98 +1000, Irene Graham wrote:
>PDF documents. If you are suggesting that most ordinary content providers could
>set up and run their own label bureau in order to self-label their PDF documents
>(and that there would be some point in their doing that) then I think you are
>being ridiculous.

This is what I believe. This is the way it should be done. Placing meta-data in content itself can be problematic 1) if the data at the URI is likely to change or 2) in properly discovering the meta-data. For instance, does a tree leaf have all the meta-data of its parent and root nodes applied to it, in which case one must always climb the tree to understand all data associated with it? If I wish to say that the whole tree is X, except node A, it's much easier to make two entries in a database, rather than go through the whole site embedding the meta-data in every node/leaf. In P3P there is a compromise of sorts, where within the HTML you can provide a link to an XML/RDF file, but cannot embed it in the content itself.
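[A sketch of the "link, don't embed" compromise just described: the page carries only a pointer to an external RDF/XML statement. The rel value and file name are hypothetical, not actual P3P syntax, which should be taken from the P3P working drafts.]

    <HTML>
    <HEAD>
    <TITLE>Example page</TITLE>
    <!-- pointer to the site's machine-readable privacy statement -->
    <LINK rel="privacy-policy" type="text/xml"
          href="http://www.example.com/policy.xml">
    </HEAD>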

Furthermore, I believe in the future we will move towards content (HTML/XML) and presentation (CSS/XSL) being stored in databases. Results returned from a GET to a URI will be dynamically generated based on negotiation with the client. For instance, the client might signal it's a B&W, low bandwidth, high latency (wireless) client in Cambridge: please send me the local movie listings. All of this happens by way of "meta-data." We already see this implemented in many advanced sites.
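[A rough illustration of that kind of negotiation. Accept and Accept-Language are standard HTTP/1.1 content negotiation headers; the X-Display and X-Link-Speed capability hints are invented for this sketch and are not part of any spec.]

    GET /listings/cambridge HTTP/1.1
    Host: movies.example.com
    Accept: text/html, text/plain;q=0.8
    Accept-Language: en
    X-Display: monochrome
    X-Link-Speed: low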

>This is one of things that exasperates me about discussions with PICS advocates.
>They frequently seek to avoid addressing the issue raised, which in this
>instance was that if governments mandate self-rating and labelling now, PDF
>could become very popular amongst those not desirous of self-censoring in accord
>with someone else's values because there are no tools now available to enable
>them to self-rate PDF documents.

I agree. Furthermore, I've made the point that PICSRules does potentially enable governments to promulgate regulations, but not in the way most people were ranting about [1,2].

[1] http://www.w3.org/Talks/9803-DCSB/slide15-0.htm
[2] http://www.w3.org/People/Reagle/papers/tprc98_SP_submission.html (I had to withdraw this for a lack of time, though I may return to the topic later this fall.)

>>I believe meta-data often works best when it is not embedded in the
>>content.
>
>How often? Justification? Please explain.

I tried to explain above.

>I really must look into how RDF / P3P / whatever is implemented. I'd assumed,
>until just now, that web sites would express their privacy policies in meta-data
>embedded in the page.

It's something I've discouraged, and we have a compromise as described above.

> However, I now wonder if it's intended that web site
>operators will input this info to some third party meta-data database (and
>perhaps have to pay to do so)? Similar with RDF enabled Dublin Core categories.
>Is this info to be embedded in the page, or is it anticipated that the meta-data
>will be distributed by third parties?

P3P -- as one of the first applications of RDF/XML -- defined its own transport mechanisms. RDF/XML communities should specify this generically, though they haven't yet. In WebDAV, the XML is fetched as a property of the resource at the URI.
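
To make the WebDAV remark concrete, here is a minimal sketch of fetching meta-data as a property of a resource with a PROPFIND request; the host, path, and the "pics-label" property name are invented for illustration, not defined by any spec:

    # Sketch: ask a WebDAV server for one property of a resource (illustrative names only).
    import http.client

    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<D:propfind xmlns:D="DAV:" xmlns:x="http://example.org/hypothetical-ns">'
        '<D:prop><x:pics-label/></D:prop>'
        '</D:propfind>'
    )

    conn = http.client.HTTPConnection("dav.example.org")
    conn.request("PROPFIND", "/reports/report5.pdf", body=body,
                 headers={"Depth": "0", "Content-Type": "text/xml"})
    response = conn.getresponse()
    print(response.status)          # 207 Multi-Status if the server supports properties
    print(response.read().decode())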

-- More (100%) --

Date: Sep 22, 1998 (Tue, 22:20:9)

To: Stanton McCandlish <mech@eff.org>

From: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

Subject: Re: Censoring the Internet with PICS

Cc: bigthoughts@cyber.law.harvard.edu, mech@eff.org, barlow@eff.org, rene@pobox.com, link@www.anu.edu.au, djw@w3.org


In-Reply-To: <199809220302.UAA08903@eff.org>
References: <3.0.5.32.19980921131033.00972980@rpcp.mit.edu> from "Joseph M. Reagle Jr." at Sep 21, 98 01:10:33 pm>
X-Eudora-Signature: <MIT>
X-Persona: <RPCP>

At 08:02 PM 9/21/98 -0700, Stanton McCandlish wrote:
>> From an organizational point of view, I'd posit that the W3C standards
>> are not enforceable beyond the good will of those wishing to extend such
>> work on their technical merits.
>
>I beg to differ. They are, at least in a way that has social impact,
>enforceable by the ill will of those wishing to abuse the tools for actual
>censorship, as we've seen attempted in Australia, and newly in Singapore.
>Ultimately, they may not work, but they do have the immediate and palpable
>effect of cowing local ISPs, chilling free speech of users in those areas,
>etc.

Well, be it good or ill, the way I've described the adoption of such technology is that its adoption is dependent on users', developers', and markets' desire to use the technology. The market can/does include regulators, perhaps with ill will.

>> Furthermore, the W3C as a body has no
>> strong mechanism of issuing policy statements, a NOTE with signatories is
>> the best they have at the moment.
>
>That's weak. ANY mechanism would have been something, and even with NO
>such mechanism W3C certainly had the capability to design PICS in such
>a way that it was difficult if not impossible to abuse as we are seeing it
>abused (or seeing attempts, at any rate.)

I would disagree with this latter part regarding its design, but perhaps. The only other alternatives I've seen were Olsen's 18-year-old IP bit (which I dislike immensely), and Lessig's adult-cert, which I find equally problematic as PICS and worse for privacy.

>One of W3C's biggest examples of seemingly intentional myopia is what I
>call the Fallacy of the Not-So-Bad Dictatorship, in which it consistently
>pooh-poohs the concerns of the civil liberties and civil rights
>communities by resorting to the pretense that governments with no
>restrictions on what they do to their citizens aren't really all THAT bad,
>that they deserve a "choice", an "option" to use PICS, as currently
>instituted, wisely or evilly. By this fallacious reasoning, providing
>them tools to do worse isn't such a bad thing, and certainly worth the
>supposed gain for "parental empowerment" (or more honestly "government
>off of industry leaders asses") in the comfy West.

First, I don't think anyone has tortured children with PICS. Second, I believe there is a substantive issue -- call it moral/cultural subjectivity -- that states some cultures may have norms that legitimately differ from others. This characteristic is reflected in the "community standards" prong of the Miller obscenity test, for instance. In light of differing norms, one approach is to say that technology should be able to "support multiple" options. The question is what options the technology supports, either purposefully or intrinsically. (I believe meta-data systems have the properties PICS has and there is little one can do about it. The reason I pushed the "Using PICS Well" document was to at least outline the intentional/purposeful uses.) Regardless, is there a deterministic/solid way of determining which poles [1] are wrong and a way to design accordingly? You've responded yes, yes, it's trivial, but I've not been able to find any objective universal measure, particularly since nearly all political contention is associated with differing norms regarding rights.

[1] http://www.w3.org/Talks/9803-DCSB/slide14-0.htm

Any technical artifact has external characteristics associated with its context (use): what other resources it requires, what its impact is on its environment, which policies are easily supported by or co-exist with its existence. Some artifacts are more closely coupled to a specific context, others are more general -- "friction free." As enunciated in the slide above, it seems one has two options:

a. specify one option and fight over that policy.
b. support multiple options and allow legitimate processes to resolve the policy.

Now, with respect to PICS, one could say the W3C pursued B, and unfortunately, one of those supported options is centralized control. As I've stated elsewhere, I at least made a pragmatic decision on the likelihood of that happening, whether it would be achieved through other means regardless, and the context of other ills it seemed to remedy at the time. So one of the poles out there is one I'm generally uncomfortable with (but I'm still confronted with a subjectivity issue regardless), but that didn't seem all that likely. However, I believe this policy makes sense given the global nature of the Web, the differing norms/regulations, and the general principle that it is better to design for multiple options than for a single option that the particular architect wants to force on others. I place this philosophy in a personal context in that I make a pragmatic decision, and look at the range of poles and see that, in reality, that evil-extreme pole out there (even accounting for cultural subjectivity) is outweighed by the other poles of its application. But in general, I think it's a good idea.

You could almost picture this as a seesaw, with a fulcrum set at a point and various weights based on a person's assumptions.

Now, the actual deployment and adoption of such a technology/standard is dependent:

on users', developers', and markets' desire to use the technology.
The market can/does include regulators, perhaps with ill will.

It's a very legitimate question to ask, "OK, who deployed and is using this? Are 'evil' governments running away with the ball?" If the answer is yes, how an org. like the W3C would "retract" it is hard to say.

>PICS, like the RIAA movie ratings and the Comics Code Authority is a
>text-book example of the govt. scaring the industry into instituting
>"voluntary" censorship the govt. could never get away with directly
>mandating.

Yes, it's an interesting regulatory tactic (that I call "herding" [2]) that is not restricted to the domain of content control.

[2] http://www.w3.org/Talks/980922-MIT6805-princ/slide7-0.html

>Every time I've asked anyone at W3C about what they were doing, I got back
>an answer that can be summarized "PICS is just technology, and technology
>is neutral. We aren't responsible for what bad things people abuse our
>neutral technology for." Kind of an anti-Nobel attitude that is
>shockingly ignorant and irresponsible. It's almost as outdated as
>Lamarckianism, for chrissakes.

Any time I've used the term "neutral" it's not to say that technology has no impact on external policies, but that it should be as "policy friction free" as possible within the realm of policies that fall within an acceptable range given common norms. (What this range is, is the big political question.)

>This is essentially a straw man, though it suffers more acutely from what I
>call the Fallacy of the Free Speech Absolutist.

It's not a straw man but an extreme example to test a general principle. Consider the case of Germany outlawing Nazi literature. I personally would find this abhorrent in my own, US, context. However, I do not feel comfortable in saying that what the German state has done is necessarily wrong. The cool thing about the Net -- to me -- is not that it's "unregulatable," but that one can create/join one's governing mechanism of choice. This is why I like user-empowered decentralized social protocols. I can select my choice of law and norms. I'll bozo-filter those that I don't like, and select the German Privacy Law and US 1st Amendment, thank you very much. This is the "vote with your clickstream and configuration" principle. However, there are instances where societies have found they need to regulate the private conduct of a minority of individuals. (Both in good ways, 1) outlawing "bad things" and 2) creating rights to protect the minority from the majority, and in bad ways.) Now we are into the position of determining the legitimacy of rights, deliberative processes/governance, and such. Something -- I think -- the technology should punt on if possible.

The question is to what degree can real space governance impose itself upon cyberspace governance using social protocols, particularly when its policies are not held by a majority of its users? Is there any situation in which real space governments have the right to regulate on-line conduct? While I admit PICS could be used poorly, my continued support of it is predicated on my belief that mechanisms of filtration/selection/recommendation/reputation are critical to on-line communities and governance. (Aside from my beliefs on responsible design and my fuzziness on cultural subjectivity.) I want to filter, select and block, I do every day. Governments could co-opt these mechanisms, but I am convinced that people can still get around the mandatory use of such technologies and that such tools are unavoidable as the Web deploys. Consequently, you are better off building a general technology that "supports multiple policies" intended for decentralized self-empowering use -- though it could be detrimental to a cause in a particular situation -- than "tyrannical" protocols.

>Well, aside from the fact that many of us consider it highly questionable
>whether that was in fact the intent, the question remains, why has W3C to
>date refused to publicly acknowledge that PICS DOES lend itself to
>centralized control, and even vehemently denied this clear fact?

As you know, I think its explicitly addressed in the PICS FAQ.

__

Could governments encourage or impose receiver-based controls? Does PICS make it easier or harder for governments to do so?

Yes. A government could try to assume any or all of the six roles described above, although some controls might be harder than others to enforce. As described below, governments could assume some of these roles even without PICS, while other roles would be harder to assume if PICS had not been introduced.

__

>So why has W3C never acknowledged this (as far as I've ever seen) or
>dealt with any of the concerns raised by those with somewhat or very
>different viewpoints? PICS was essentially produced in a vacuum. NOTHING
>was done, at all, period, to assuage the concerns raised by those with
>"assumptions about society and civil rights". More often than not we were
>simply told that W3C had faith that the product would be used well,
>despite the clear possibility of the opposite, and plans in Australia and
>the UK already afoot to abuse PICS (and RSAC-I more specifically).

I'm not speaking for the W3C. Even when I was -- on rare occasions -- I was not in any position to make any sort of statement on behalf of the W3C, because its structure/process is orientated towards the members making technical recommendations, not policy statements, retractions, or arguments. It was my own personal belief that any such representation by myself on this topic would be inappropriate. The W3C does not even have a technical deprecation mechanism for its standards presently. (What happens to HTML 3.2 10 years from now?) It certainly has no mechanism for political/policy issues, and such a mechanism would cause more problems than it would solve, IMHO.

Plus, you can sort of call a spade a spade. If PICS is poorly used in Australia or the UK in your opinion, then PICS is bad. I'm not saying a particular implementation in a given context is necessarily. I personally would object to any mandatory self-labelling requirement. Same with P3P. I want good things to come of it, and a lot of people are working such that it will. But if it's just a better mechanism for extracting data from users that is ultimately harmful to them, then P3P isn't a good thing, IMHO. I'm not defending either technology, really, but rather the technical philosophy that one should create decentralized, self-empowered, multiple-option, selected community-governance technology when possible.

>> Regardless, I believe the statement is more akin to "here's some
>> technology, with the intent to make the world, in a given context, a
>> better place. This organization, is incapable of challenging the
>> legitimacy of political processes in determining their policies.
>
>That is not true. EFF does it all the time.

That is the EFF.

>to read the UN Declaration of Human Rights, and come to an informed
>decision on whether or not the govt. of, say, Burma or Singapore, adheres
>to those principles.

While you spend a great deal of time and words on this issue -- good/thoughtful words -- I'm not convinced of this case. When I read the UN Declaration of Human Rights, and based on other folks I've spoken to about its creation, the fuzziness and country exceptions within it don't provide the objective/universal metric I'm looking for.

>designed in a way that lended itself to abuse by them. You don't need to
>start a revolutionary war to make the "challenge", you can do it in
>pragmatic terms, by simply saying, "this government is doing wrong, and we
>do not trust them with this tool, so we are going to redesign it so it
>cannot be (at least cannot easily be) abused by them." That's called
>social resonsibility, something W3C seems to completely lack, at least
>when it comes to PICS.

If someone came forward with something (much W3C work is based on Member submissions or pre-existing work) that solved these problems (more likely to be adopted/deployed, satisfying social concerns so we don't have to fight the CDA N battles, and less likely to be "abused"), the W3C could consider it as a work item as it would anything else. I'd certainly push for it if I were there.

>Years later this still enrages me. Not only was the initial publication
>effectively a lie, it was only corrected sub rosa, and as far as I can see
>it was done so that W3C could say "We hear the civil liberties concerns,
>and don't care about them" rather than taking on the real work of
>retooling to prevent those fears from coming true.

I believe you have to take this up with the editor of the article.

>> The problem is what happens when
>> deliberative proccesses must aggregate preferences? In some situations
>> society routinely enforces norms of the majority on a minority. In a
>> political context, we've carved out niches where this shouldn't happen
>> with "civil rights."
>
>Niches that W3C has conveniently "forgotten".

The definition of those niches is a political process that should be left in that venue.

>Funny, but I have a pretty clear grasp of this stuff sitting at a desk in
>San Francisco. You brain isn't at Harvard, it's in your head.

No, it means I have some more time to think about it when it's at Harvard. <s>

-- More (100%) --

Date: Sep 23, 1998 (Wed, 13:46:29)

To: reagle@rpcp.mit.edu (Joseph M. Reagle Jr.)

From: "Danny Yee" <danny@staff.cs.usyd.edu.au>

Subject: Re: Censoring the Internet with PICS

Cc: mech@eff.org, bigthoughts@cyber.law.harvard.edu, barlow@eff.org, rene@pobox.com, link@www.anu.edu.au, djw@w3.org



> At 08:02 PM 9/21/98 -0700, Stanton McCandlish wrote:
> >I beg to differ. They are, at least in a way that has social impact,
> >enforceable by the ill will of those wishing to abuse the tools for actual
> >censorship, as we've seen attempted in Australia, and newly in Singapore.
> >Ultimately, they may not work, but they do have the immediate and palpable
> >effect of cowing local ISPs, chilling free speech of users in those areas,
> >etc.

Joseph Reagle rambles:
> Well, be it good or ill, the way I've described the adoption of such
> technology is that its adoption is dependent on users', developers', and
> markets' desire to use the technology. The market can/does include
> regulators, perhaps with ill will.

This rambling about markets and regulators is just euphemistic.
Singapore is making RSACi mandatory. This is a straightforward case of
state censorship. It has nothing whatsoever to do with markets.

> Plus, you can sort of call a spade a spade. If PICS is poorly used in
> Australia or the UK in your opinion, then PICS is bad. I'm not saying a
> particular implementation in a given context is necessarily.

But it's not a matter of some implementations being bad. The *only*
PICS-based system with any serious deployment is RSACi. And RSACi is a
total joke, or would be if people and countries didn't keep advocating the
damn thing. (I'm still waiting for the ABA to retract their support for
it, though that might reduce the number of conferences they get invited to
and lose them the awards from obscure German foundations.) And Safesurf
isn't much better, even if I could find any sites that actually used it.

I see nothing to suggest that other PICS-based systems will ever have
any success -- and the more of them there are, the more they will compete
for the limited interest of web site developers, anyway.

In fact I have not seen *any* uses of PICS for anything valuable.
Surely, when you can't point to A SINGLE POSITIVE USE FOR YOUR PRODUCT,
doesn't that suggest there's something fundamentally wrong with it?

> If someone came forward with something (much W3C work is based on Member
> submissions or pre-existing work) that solved these problems (more likely to
> be adopted/deployed, satisfy social concerns so we don't have to fight the
> CDA N battles, and less likely to be "abused") the W3C could consider it as
> a work item as it may anything else. I'd certainly push for it if I was
> there.

But there are *no* problems for which PICS is a solution. There may
be problems for which it _appears_ to be a solution (especially with
the amount of misinformation that RSAC throws around), but that is a
different matter. And no, I'm not forgetting that there are people out
there who want some very weird things -- I just don't think there is
*anyone* who, if they really understood what it was doing, would want to
filter using RSACi, with a totally bizarre set of criteria applied to
less than 1% of the content on the Web. Much as I loathe them myself,
non-PICS filterware programs are a much more effective solution for
people who absolutely insist on keeping their children blinkered.

The W3C should scrap any further work on PICS. There are far more
valuable things for them to be working on.

Danny.


-- More (100%) --

Date: Sep 23, 1998 (Wed, 0:45:32)

To: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

From: rene@pobox.com (Irene Graham)

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au, Stanton McCandlish <mech@eff.org>



On Mon, 21 Sep 1998 13:06:37 -0400 "Joseph M. Reagle Jr." <reagle@RPCP.MIT.EDU>
wrote:

>At 12:53 AM 9/22/98 +1000, Irene Graham wrote:
>
>I am on leave from any "W3C spokesperson" role -- my point was, I never
>played that role in this forum regardless.

Understood. However, when you promulgate W3C-newspeak, in whatever role, you'll
have to forgive my remembering, and drawing attention to, your (past and
presumably future) role with W3C.

>You should speak to Danny
>Weitzner < if you wish to speak to someone in that
>position.

What I wish is for W3C people, and those of the PICS developers who are not W3C
staff, to come out from hiding in their hallowed halls and discuss, publicly and
in an official capacity, issues relative to this "neutral" (as defined by W3C
folk) technology, which is known as "Platform for Internet Content Selection"
but which in fact doesn't, and likely never will, enable anyone to *select*
anything. There is little likely benefit in my discussing PICS with Danny
privately - my concerns are not unknown to him, from discussion on other lists,
prior to his replacing Jim Miller at W3C. It is perhaps to the detriment of
understanding my position in regard to PICS that I have respected the privacy
of personal email discussions with other people associated with its development
and/or associated implementations.

> >Yes I realise this. I should have made it clear that I was referring to
> >it being able to be voluntary-mandatory PICS rated by the content provider (eg.
> >coerced by ISPs coerced by government). While there are no tools enabling
> >content providers to PICS label their PDF documents, such coercion is not
> >possible, unless of course PDF format is outlawed. :-)
>
>I believe there is label bureau software out there.

What does the availability of label bureau software have to do with it? The mere
availability of software to set up and provide third party rating services/
labelling bureaus does not enable ordinary content providers to self-label their
PDF documents. If you are suggesting that most ordinary content providers could
set up and run their own label bureau in order to self-label their PDF documents
(and that there would be some point in their doing that) then I think you are
being ridiculous. The only other way I can see to interpret your remark is that
you are saying governments, ISPs or others can set up a label bureau (using
software that is somewhere out there) and then governments or ISPs can mandate
that content providers input their self-ratings to that label database.
Certainly this is possible. However, if and when it ever happens, then "tools
enabling content providers to PICS label their PDF documents" will be available.
In the meantime, they are not, which was my point.

This is one of the things that exasperates me about discussions with PICS advocates.
They frequently seek to avoid addressing the issue raised, which in this
instance was that if governments mandate self-rating and labelling now, PDF
could become very popular amongst those not desirous of self-censoring in accord
with someone else's values because there are no tools now available to enable
them to self-rate PDF documents.
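
For readers unfamiliar with the mechanics being argued over: because a label cannot be embedded in a PDF, it could only travel out-of-band, e.g. in an HTTP response header or via a label bureau. A minimal sketch of the header route follows; the rating-service URL and values are invented, and the exact PICS-1.1 syntax is only approximated:

    # Sketch: serve a PDF with its label carried in an HTTP response header
    # rather than embedded in the document (invented label, approximate syntax).
    from http.server import BaseHTTPRequestHandler, HTTPServer

    FAKE_LABEL = ('(PICS-1.1 "http://ratings.example.org/v1.0" '
                  'labels ratings (violence 0 sex 0))')

    class PDFHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with open("report5.pdf", "rb") as f:
                data = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/pdf")
            self.send_header("PICS-Label", FAKE_LABEL)  # the label rides in the header
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), PDFHandler).serve_forever()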

> >Interesting. Self labelling enabled by a sleight of hand.
>
>I was using that in a technical sense. HTTP-equiv is a technical
>sleight of hand in a sense, telling the HTML client, "pretend this was in
>the HTTP."

Yes, it was enabled in the HTTP. The pretend was necessary to sell it to the Net
community.

> >of thing a number of times now, from reasonably reliable sources, that
> >PICS labels were/are primarily intended to be served in the HTTP stream, or
> >at a label bureau, i.e. beyond the control of most content provider to
> >specify the rating label applicable to their content. Unsurprising actually.
>
>The spec. is very clear where they may reside, in the content (through
>HTTP-equiv), the HTTP headers, or a label bureau.

True, but the spec. was not available until long after the initial sales pitch
to the Net community. In any case, the specs do not readily lend themselves to
easy comprehension by the majority of Net content providers, nor are most
apparently interested in reading anything more than, at most, the PR propaganda.
When the PICSRules draft spec. was released, there were many people who thought
that the third party censorship ability was some new development. I grant W3C
one thing: their PR machine is clever.
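
To make the three placements in the quoted passage concrete, here is a minimal sketch; the label text and bureau are invented, and the PICS-1.1 syntax should be treated as approximate:

    # The three places a label may live, per the spec as quoted above.
    LABEL = '(PICS-1.1 "http://ratings.example.org/v1.0" labels ratings (violence 0 sex 0))'

    # 1. Embedded in the HTML itself via http-equiv ("pretend this was in the HTTP"):
    html_page = f"""<html><head>
      <meta http-equiv="PICS-Label" content='{LABEL}'>
      <title>Example page</title>
    </head><body>...</body></html>"""

    # 2. Sent as a real HTTP response header by the server:
    http_headers = {"Content-Type": "text/html", "PICS-Label": LABEL}

    # 3. Served by a third-party label bureau, keyed by URL (hypothetical bureau):
    bureau = {"http://example.org/page.html": LABEL}
    print(bureau.get("http://example.org/page.html"))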

> >In short, PICS was designed and optimised for third party censorship,
> >and by a sleight of hand, the developers ensured they could attempt to sell it
> >to the Net community as something benign. To date, three years later, the vast
> >majority of Net content providers have shown themselves not to be so easily
> >conned.
>
>I believe meta-data often works best when it is not embedded in the
>content.

How often? Justification? Please explain.

>Through http-equiv, one can also stick it in the HTML file
>itself. Though they are related, there are distinctions between meta-data
>transport and the case: "self-labeling" vs. "3rd party labeling." The one
>is "how I learn," the other is "who is speaking."

What you seem to be saying here is that it is (often, you believe)
technologically more efficient to have PICS censorship labels distributed in
HTTP headers. This may be true, but it certainly sounds like technology
controlling policy and society. Basically, if we want efficient transport of
meta-data, we apparently must accept that it's best to enable other people to
rate, label and censor our speech, because that's a technologically more
efficient means of transporting "preference" data. Cold comfort, Joseph, and I
don't for one minute believe this technical aspect had the slightest thing to do
with PICS being developed so as to provide technological assistance to third
parties desirous of censoring what others say. That was the primary intent from
the outset, before the first PR release in Sept/Oct 1995.

I really must look into how RDF / P3P / whatever is implemented. I'd assumed,
until just now, that web sites would express their privacy policies in meta-data
embedded in the page. However, I now wonder if it's intended that web site
operators will input this info to some third party meta-data database (and
perhaps have to pay to do so)? Similar with RDF enabled Dublin Core categories.
Is this info to be embedded in the page, or is it anticipated that the meta-data
will be distributed by third parties?

> >In my view, PICS has failed in one of its key design goals - that of
> >enabling/encouraging third party groups to set up rating systems and
> >labelling bureaus suitable for their needs.
>
>The technology enabled self rating and 3rd party rating. Self rating was
>seen as rather nice by some, and it is an instance of a technology that
>requires strong interoperability requirements between clients and
>services. Something I've realized is that there is not a lot of incentive
>for 3rd party raters to adopt PICS. For an org that is going to go out
>and rate the whole Web, PICS doesn't buy you a whole lot. In fact, you
>probably want to keep your ratings proprietary and closed. What the
>interoperability buys you in the 3rd party scenario is the ability to
>switch or use multiple 3rd parties. However, people seem fairly happy
>using a single proprietary selection/filtration tool, with an encrypted
>set of ratings from that single source. If you are a proprietary third
>party supplier, you may want to "lock-in" folks from moving to another
>service.

Yes, pity the PICS developers didn't do some market research prior to rushing
off in a panic that the 1st A wouldn't hold up under pressure.

I'll refrain, at least for the moment, from commenting on the remainder of your
remarks. I have little of value to add to Stanton's comments.

Irene

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Irene Graham, Brisbane, Queensland, Australia. PGP key on h/page.
Burning Issues: <http://www.pobox.com/~rene/>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-- More (100%) --

Date: Sep 23, 1998 (Wed, 17:51:46)

To: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

From: mech@eff.org (Stanton McCandlish)

Subject: Re: Censoring the Internet with PICS

Cc: bigthoughts@cyber.law.harvard.edu, mech@eff.org, barlow@eff.org, rene@pobox.com, link@www.anu.edu.au, djw@w3.org



Apologies for the length of this. You know how it goes.

At 6:20 PM -0800 9/22/98, Joseph M. Reagle Jr. wrote:
> At 08:02 PM 9/21/98 -0700, Stanton McCandlish wrote:
> >> From an organizational point of view, I'd posit that the W3C standards
> >> are not enforceable beyond the good will of those wishing to extend such
> >> work on their technical merits.
> >
> >I beg to differ. They are, at least in a way that has social impact,
> >enforceable by the ill will of those wishing to abuse the tools for actual
> >censorship, as we've seen attempted in Australia, and newly in Singapore.
> >Ultimately, they may not work, but they do have the immediate and palpable
> >effect of cowing local ISPs, chilling free speech of users in those areas,
> >etc.
>
> Well, be it good or ill, the way I've described the adoption of such
> technology is that its adoption is dependent on users', developers', and
> markets' desire to use the technology. The market can/does include
> regulators, perhaps with ill will.

Regulators are not a market in any useful sense. It's like calling a
sociopath who would choke people to death with a hamburger part of the
market of McDonalds. They are not part of the market per se, they are
simply people who *infiltrate* the market to obtain and then abuse the
product or service. McDonalds, if they knew that this person would do this
with their burger, would probably decline to provide them with it.
Continuing this (admittedly offbeat) analogy, PICS is like a hamburger
specifically designed to be used for choking someone to death, and W3C has
done the equivalent of publishing "how to kill someone with our burger"
instructions, and encouraged sociopaths to use it. McDonalds does not
actively market to sociopaths, but W3C is actively marketing PICS to
governments.

> >> Furthermore, the W3C as a body has no
> >> strong mechanism of issuing policy statements, a NOTE with signatories is
> >> the best they have at the moment.
> >
> >That's weak. ANY mechanism would have been something, and even with NO
> >such mechanism W3C certainly had the capability to design PICS in such
> >a way that it was difficult if not impossible to abuse as we are seeing it
> >abused (or seeing attempts, at any rate.)
>
> I would disagree with this latter part regarding its design, but perhaps.
> The only other alternatives I've seen was Olsen's 18-year old IP bit (which
> I dislike immensely), and Lessig's adult-cert, which I find equally
> problematic as PICS and worse for privacy.

The simplest solution would be to make the technology non-scaleable above
the individual-user level, which is what we've been saying for years. I
think I even predate you in participation in this debate. One of the
coauthors of the original PICS plan was an EFF board member named Rob
Glaser, now head of Progressive Networks. Back then (ca. 1994), PICS (then
called IHPEG) was *explicitly* intended to work only at the user level, but when the
plan was turned over to W3C, this was radically altered, pretty much behind
everyone's back.

> >One of W3C's biggest examples of seemingly intentional myopia is what I
> >call the Fallacy of the Not-So-Bad Dictatorship, in which it consistently
> >pooh-poohs the concerns of the civil liberties and civil rights
> >communities by resorting to the pretense that governments with no
> >restrictions on what they do to their citizens aren't really all THAT bad,
> >that they deserve a "choice", an "option" to use PICS, as currently
> >instituted, wisely or evilly. By this fallacious reasoning, providing
> >them tools to do worse isn't such a bad thing, and certainly worth the
> >supposed gain for "parental empowerment" (or more honestly "government
> >off of industry leaders asses") in the comfy West.
>
> First, I don't think anyone has tortured children with PICS.

You are totally missing the point. GOVERNMENTS DO THIS (not with PICS, but
otherwise). They do it every day, on several continents. About the only
avenues of pressure available to end extrajudicial killings, disappearings,
political imprisonment and other human rights violations are:

1) pressure from other governments (this is very spotty, and basically
bends to the whim of economic give-and-take, as per US/P.R. of China trade
agreements, etc.)

2) external news relating human rights issues to the oppressed population
in these areas (the reason behind the peacetime operations of Radio Free
Europe
and the US Information Agency, and an observed effect, more recently, of
the availability of the Internet).

3) internal news from the oppressed areas reaching the rest of the world.

PICS in the hands of regimes in these areas will greatly harm 2, and
somewhat harm 3.

That's aside from any *direct* human rights violations (beyond censorship,
of course) that result from imposition of PICS, such as political
imprisonment for those caught net.publishing something their government
doesn't like. You say no one will be tortured with PICS. I challenge your
prediction. No one will be beaten over the head with a printout of PICS
documentation, but people's lives will be ruined by abuse of that tool.

> Second, I
> believe there is a substantive issue -- call it moral/cultural subjectivity
> -- that states some cultures may have norms that legitimately differ from
> others.

This is the "multiculturalism" fallacy again. One more time: However much
this may be true, it is an internationally accepted given that the "right"
of a culture to do unto its own citizens as it will is sharply limited by
human rights accords. PICS not only will, but is specifically designed to,
help bad-acting governments violate human rights.

> This characteristic is reflected in the "community standards" prong
> of the Miller obscenity tests for instance.

This is true, but has zero relevance here. If, say, Kentucky decided to
ban all pictures of nude humans, this would *not* be upheld as OK under
"community standards". There are limits, but governments that W3C is
pandering to do not respect those limits, and the tool itself actually
undermines efforts to get them to do so. PICS is a *double* blow to human
rights in oppressed areas.

> In light of differing norms, one
> approach is to say that technology should be able to "support multiple"
> options.

Not if any of those options violate human rights. Why is this so hard for
you to understand? It's like releasing a product, say for cleaning
carpets, that in the wrong hands could poison 50,000,000 people. IT JUST
DOESN'T MATTER if it's good at cleaning carpets - you don't release the
product. It would be socially irresponsible to do so.

> The questions is what options does the technology support, either
> purposefully or intrinsically? (I believe meta-data systems have the
> properties PICS has and there is little one can do about it.

Well, gosh, look who's leading the development of metadata systems - W3C!
Are we surprised? Maybe I'm just less cynical than you are. I find it
impossible to believe that a metadata system could not be devised that
avoids these negative effects, or at bare minimum makes governmental
misuse so difficult to pull off as to not be worth bothering
with. I have a lot of faith in human ingenuity. And I believe W3C is
basically collectively lazy and simply uncaring of the negative effects of
what they produce.

> The reason I
> pushed the "Using PICS Well" document was to at least outline the
> intentional/purposeful uses.)

That's like "how to use our genocidal carpet cleaning product nicely".

> Regardless, is there a deterministic/solid way
> of determining which poles [1] are wrong and a way to design accordingly?
> You've responded yes, yes, its trivial, but I've not been able to find any
> objective universal measure, particularly since nearly all political
> contention is associated with differing norms regarding rights.

Use the UN Declaration as a starting point. Let's not be silly.

>
> [1] http://www.w3.org/Talks/9803-DCSB/slide14-0.htm
>
> Any technical artifact has external characteristics associated with its
> context (use): what other resources it requires, what its impact is on its
> environment, which policies are easily supported by or co-exist with its
> existence. Some artifacts are more closely coupled to a specific context,
> others are more general -- "friction free." As enunciated in the slide
> above, it seems one has two options:
>
> a. specify one option and fight over that policy.
> b. support multiple options and allow legitimate processes to resolve the
> policy.

Try c. support multiple options within limits, and front-load the system
with one policy in a way that the fight becomes moot.

The ironic thing is that exactly process c has characterized almost all
Internet development up to this point. An exception might be the DNS, but
most of the rest of it was *designed* to resist monopolization, censorship,
surveillance, physical attack, and other centralization weaknesses. W3C
is the *only* Internet standards body I've ever encountered that does not
follow process c. Given that all the rest of them that I know of are made
up of members of the general public rather than consisting of an exclusive
members-only industry good ol' boys club, this is perhaps not surprising.
If you are at a loss for an example, I draw your attention to IPNG, IPv6
and IPSec, our next-generation Internet protocols, all of which are being
front-loaded with strong end-to-end encryption, *specifically in defiance*
of US and some other governments' anti-privacy and anti-security policies.
Every other Net standards body has no problem deciding that certain things
are right and wrong, and building their standards accordingly. W3C stands
alone in casting out Net specs that not only undermine human rights but
*are intended to do so*.

>
> Now, with respect to PICS, one could say the W3C pursued B, and
> unfortunately, one of those supported options is centralized control.

That's exactly what one would say, and it was clearly a bad decision from a
human rights point of view (which I would hope anyone who is not a big fan
of fascism would hold).

> As
> I've stated elsewhere, I at least made a pragmatic decision on the
> likelihood of that happening, whether it would be achieved through other
> means regardless, and the context of other ills it seemed to remedy at the
> time.

Fine, but you (and more to the real point, W3C) made this decision largely
in a vacuum, and in direct contradiction to almost all other expert opinion
(e.g. on human rights matters and the negative effect PICS could have, on
the likelihood of the CDA withstanding constitutional scrutiny, etc.)
Making a decision is fine, but such decisions need to reflect reality, and
at the very least consider prevailing informed opinion, and be grounded in
a substantive debate over any differences of opinion. That debate never
happened. W3C effectively just ignored everyone else, and did what it was
going to do, with no regard for the fallout. All I can say is that I'm
overjoyed that PICS looks to be flightless, though I'm very concerned about
the W3C metadata spec, which may have stronger wings and which includes
everything that was bad about PICS. Time will tell, I guess.

> So one of the poles out there is one I'm generally uncomfortable with
> (but I'm still confronted with a subjectivity issue regardless), but that
> didn't seem all that likely. However, I believe this policy makes sense
> given the global nature of the Web, the differing norms/regulations,

It does not, because many of those norms/regulations are illegitimate in
international law (though woefully under-enforced). The 'debate', such as
it is, is almost identical to that over mandatory clitoridectomy and
other female genital mutilation. It was long tolerated as "cultural
diversity", but the UN recently condemned the practice, as have human
rights groups for years, and countries where it takes place have cracked
down on such practices. You having doubts about whether government
censorship is OK or not doesn't affect the general human rights (and UN)
view that it is not OK. The question has essentially already been settled
except in your own head.

> and the
> general principle that it is better to design for multiple options than for
> a single option that the particular architect wants to force on others.

You're contradicting yourself. You just said that PICS was only
"enforceable" inasmuch as entities willfully chose to adopt it. If this is
true, then you have a social responsibility to produce a system that
supports, or at the very least does no harm to, human rights. Cf. below, you
say:
> Now, the actual deployment and adoption of such a technology/standard is
> dependent:
>
> on users', developers', and markets' desire to use the technology.
> The market can/does include regulators, perhaps with ill will.

The architect *cannot* force it on others, and you know it. All the
architect can do is refuse to offer options the architect knows are easily
abusable, e.g. for human rights violations. This is not brain surgery,
Joseph, just basic logic.

> I
> place this philosophy in a personal context in that I make a pragmatic
> decision, and look at the range of poles and see that, in reality, that
> evil-extreme pole out there (even accounting for cultural subjectivity) is
> outweighed by the other poles of its application. But in general, I think
> its a good idea.

That's an opinion that, as it has been applied by W3C, has never been
subjected to any debate or scrutiny as to its veracity. It's like K-Mart
selling a toy that people say is dangerous, but K-Mart insisting that it's
their opinion that it isn't dangerous, they're going to keep on selling it,
and that's that. There's a fundamental disconnection between W3C's policy-
and decision-making processes, and any form of reality check or other
checks-and-balances. Even public outrage has no effect whatsoever (that
is, even basic market-pressure forces don't sway W3C at all). It simply
forges ahead, willy-nilly, with whatever plan it has settled on, regardless
of concerns about and opposition to those plans. In other words, W3C is
behaving like a headstrong 5-year-old brat that MUST have its way,
regardless of whether that way is right.

You seem genuinely perplexed that people like myself and Ms. Graham are
pissed as hell at W3C. Well, this is the reason. W3C refuses steadfastly to
examine its own actions and proposals, to have a public dialog about the
proposals and their known or likely effects, or to make any changes based
on constructive criticism.

> You could almost picture this as a sea saw. With a fulcrum set at a point,
> and various weights based on a person's assumptions.

There is more to the debate than assumptions. Some of the other factors
include facts, laws and treaties, probabilities, etc.

> Now, the actual deployment and adoption of such a technology/standard is
> dependent:
>
> on users', developers', and markets' desire to use the technology.
> The market can/does include regulators, perhaps with ill will.

There is no "perhaps" about it. You are yet again falling prey to the
Fallacy of the Not-So-Bad Dictatorship.

> Its a very legitimate question to ask, "ok, who deployed and is using this?
> Are 'evil' governments running away with the ball." If the answer is yes,
> how would an org. like the W3C "retract" it is hard to say.

You've missed the boat. The question was "OK, who is likely to deploy
something like this and use it, and will evil governments run away with the
ball, before we actually release any specs." The answers were crystal clear
in 1994, when IHPEG rejected the idea of making the technology scaleable.
W3C reversed that decision *for no good reason that is not greatly
outweighed by the dangers* when it took the project over and renamed it
PICS.

At this point, yes, it is hard to see how to undo the damage, and more to
the point, undo those aspects of PICS and metadata that are likely to cause
more harm in the future when the scheme really catches on. I'm not even
trying to address this issue here, which is a big one. I'm just trying to
get you and W3C to FESS UP and stop trying to pretend you did the world a
favor. Admit some fault, accept some criticism, stop burying your head in
the sand so you can't hear.

> >PICS, like the RIAA movie ratings and the Comics Code Authority is a
> >text-book example of the govt. scaring the industry into instituting
> >"voluntary" censorship the govt. could never get away with directly
> >mandating.
>
> Yes, its an interesting regulatory tactic (that I call "herding" [2]) that
> is not restricted to the domain of content control.

I'm glad you have a name for it. Why did you (or rather W3C) decide to
participate in it, and why will W3C not admit to doing so?

> [2] http://www.w3.org/Talks/980922-MIT6805-princ/slide7-0.html
>
> >Every time I've asked anyone at W3C about what they were doing, I got back
> >an answer that can be summarized "PICS is just technology, and technology
> >is neutral. We aren't responsible for what bad things people abuse our
> >neutral technology for." Kind of an anti-Nobel attitude that is
> >shockingly ignorant and irresponsible. It's almost as outdated as
> >Lamarckianism, for chrissakes.
>
> Any time I've used the term "neutral" its not to say that technology has no
> impact on external policies, but that it should be as "policy friction free"
> within the realm of policies that fall within an acceptable range given
> common norms -- if possible. (What this range is, is the big political
> question.)

It's the big political question W3C has consistently tried to duck, day in
and day out. "We aren't a policy organization". "W3C as a body has no
strong mechanism of issuing policy statements..." "We are just
technologists; we leave the political issues to other organizations." I've
been told all of these things by you or other W3C people in person or in
e-mail. I've never once gotten any substantive response about any of these
issues.

> >This is essentially a straw man, though it suffers more acutely from what I
> >call the Fallacy of the Free Speech Absolutist.
>
> Its not a straw man but an extreme example to test a general principle.

It is a straw man - you have propped up an argument to attack that your
opponent is not making. That is the definition (in short form) of the
Fallacy of the Straw Man Argument.

> Consider the case of Germany outlawing Nazi literature. I personally would
> find this abhorrent in my, a US context. However, I do not feel comfortable
> in saying that what the German state has done is wrong necessarily.

That is because you have an inadequate understanding of human rights law.
Germany and several other European countries have gone WAY too far in
enforcing the anti-genocide clauses of the UN Declaration to the detriment
of the freedom of expression clauses. The problem is that it is
economically and politically infeasible for the US or any other country to
challenge them on this, and the policy makers that could possibly do so
simply do not care. It doesn't change the observable facts of the matter.
The UN Declaration does not guarantee free speech "except if you are saying
racist things". The anti-genocide and anti-racism clauses are severable,
and do not supersede the freedom of expression clause.

> The cool
> thing about the Net -- to me -- is not that its "unregulatable," but that
> one can create/join one's governing mechanism of choice.

That's fine and dandy, but not germane. If you live under a hostile
government you largely do not have this option, and will not have it at all
if PICS, PICSRules, and metadata enable that government to censor you even further.

> This is why I like
> user empowered decentralized social protocols.

Me too. Unfortunately, PICS isn't one.

> I can select my choice of law
> and norms. I'll select bozo filter those that I don't like and the German
> Privacy Law and US 1st Amendment, thank you very much. This is the "vote
> with your clickstream and configuration principle." However, their are
> instances, where societies have found they need to regulate the private
> conduct of a minority of individuals. (Both in good ways, 1) outlawing "bad
> things" and 2) creating rights to protect the minority from the majority,
> and in bad ways.) Now we are into the position of determining the
> legitimacy of rights, deliberative processes/governance, and such. Something
> -- I think -- the technology should punt on if possible.

I agree, but you've missed the point again. That debate has been ongoing
for decades, and has resulted in many already-settled givens. All W3C had
to do was stick to those givens. It did not even have to get into the
debate. Instead, W3C pretended the debate did not exist or had not settled
anything at all, that all human rights issues were simply a big open
question, and ergo ignorable. This was a mistake, and W3C needs to admit
that mistake, and revise PICS and its progeny to reflect a better
understanding, to undo as much as possible what has been done that is
harmful or likely to be harmful.

> The question is to what degree can real space governance impose itself upon
> cyberspace governance using social protocols, particularly when its policies
> are not held by a majority of its users? Is there any situation in which
> real space governments have the right to regulate on-line conduct?

What PICS does is take it out of the realm of rights and into
practicalities, by making it *easy* for governments to do it. If it's not
easy, most of them won't try, and the issue would simply never arise
(except in things like legal challenges to stupid unenforceable mandates,
etc.) W3C's given bad-acting governments the *means* to impose real-space
governance upon cyberspace to a large degree, and if they do so in
violation of human rights accords, very little will ever be done about it,
because those that can do anything about it (e.g. the US Administration and
other big players in the UN), really don't give a damn.

> While I
> admit PICS could be used poorly, my continued support of it is predicated on
> my belief that mechanisms of filtration/selection/recommendation/reputation
> are critical to on-line communities and governance.

This reasoning is also fallacious. What you say may be true, and your
belief is your belief. But this belief does not mean that you have to
support any particular proposal that *claims* to achieve the goal you
believe in, nor one that might actually do so, but at the cost of other
things that matter. It's like supporting an anti-spam bill that would
criminalize a lot of non-spam, or one that would not actually solve the
spamming problem, but is just "intended" to. It's NOT the thought that
counts when it comes to public policy.

> (Aside from my beliefs
> on responsible design and my fuzziness on cultural subjectivity.) I want to
> filter, select and block, I do every day. Governments could co-opt these
> mechanisms, but I am convinced that people can still get around the
> mandatory use of such technologies

How? Lay out your "How to Subvert Your Murderous Fascist Government's
Censorship Regime in the Privacy of Your Own Bedroom" scheme. I'd really
like to see it. Who are you to be *gambling* with other people's lives and
freedoms on the off chance of making idle Western Democracy net.addicts'
lives slightly more convenient?

> and that such tools are unavoidable as
> the Web deploys.

Certainly, but they *do not* have to be designed in a way that makes them
easy to use for government censorship. I repeat, it is clear from the
edited PICS "white paper" (or "article" or whatever term we want to give
it) that PICS is *marketed directly at fascist governments for use as
nation-wide censorship tools*. The line that W3C considered the possibility
of said censorship and rejected it as unlikely, not very serious a matter,
or outweighed by positive benefits is pure bullshit. The opposite is true:
W3C knows it will happen, and *wants* it to happen. W3C very clearly
*wants* fascist governments to adopt and use its PICS system, because this
will help propagate that system and make it more widely adopted. Yes, this
is a direct and serious accusation. But W3C's own documents are quite clear
on the matter:

From "PICS: Internet Access Controls Without Censorship", Paul Resnick &
James Miller of W3C. (Original version appeared in Communications of
the ACM, 1996, vol. 39(10), pp. 87-93.):

[Emphasis added by ***bracketing with asterisks***.]

[begin excerpts]

Flexible Blocking

Not everyone needs to block reception of the same materials. Parents
may not wish to expose their children to sexual or violent images.
Businesses may want to prevent their employees from visiting
recreational sites during hours of peak network usage. ***Governments
may want to restrict reception of materials that are legal in other
countries but not in their own***. The "off" button (or disconnecting
from the entire Net) is too crude: there should be some way to block
only the inappropriate material. Appropriateness, however, is neither
an objective nor a universal measure. It depends on at least three
factors:

1.The supervisor: parenting styles differ, as do philosophies of
management ***and government***.

[...]

PICS does not specify how parents or other supervisors [***including
governments***] set
configuration rules. One possibility is...for organizations
and on-line services to provide preconfigured sets of selection rules.
For example, an on-line service might team up with UNICEF to offer
"Internet for kids" and "Internet for teens" packages, containing not
only preconfigured selection rules, but also a default home page
provided by UNICEF.

[Obviously, if one govt. organization such as UNICEF could do this, so
could any, in any government.]

[...]

Labels can be retrieved in various ways. Some clients might choose to
request labels each time a user tries to access a document. Others
might cache frequently requested labels or download a large set from a
label bureau and keep a local database, to minimize delays while
labels are retrieved.

[Obviously a government agency might serve a label bureau, with use of that
bureau mandated in law.]

[...]

PICS specifies very little about how to run a labeling service, beyond
the format of the service description and the labels. Rating services
must make the following choices: 1. The labeling vocabulary. A common
set of dimensions would make publishers' self-labels more useful to
consumers but cultural divergence may make it difficult to arrive at a
single set of dimensions. ***Governments may also mandate
country-specific vocabularies.*** Third party labelers are likely to use
a wide range of other dimensions.

[...]

PICS provides a labeling infrastructure for the Internet. It is
values-neutral: it can accommodate any set of labeling dimensions, and
any criteria for assigning labels.

[Tacit admission that PICS can be used for censorship, despite the
disingenuous name of the article, and the even more disingenuous initial
printing of it with no such references to government use. They were added
in quietly to the archived copy *after* publication. The only reason I can
think of for doing this is so that the Internet community in general would
never notice, but any govt. censorship-happy regulators doing research on
how to censor the Net would find this paper and get the bright idea to use
PICS.]

[...]

Any PICS-compatible software can interpret labels from any source,
because each source provides a machine-readable description of its
labeling dimensions. ***Around the world, governments are considering
restrictions on on-line content. Since children differ, contexts of
use differ, and values differ, blanket restrictions on distribution
can never meet everyone's needs. Selection software can meet diverse
needs, by blocking reception, and labels are the raw materials for
implementing context-specific selection criteria.***

[Direct proof, but only in a little-seen document, that PICS is explicitly
intended for use as a government censorship tool, whatever other uses it
might have.]

[end excerpts.]
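
To make the label-retrieval passage in the excerpt concrete, a minimal sketch of the "cache frequently requested labels" option; the bureau URL and query form are invented, and a real PICS bureau query would differ:

    # Sketch: a client that asks a label bureau only on a cache miss, then filters locally.
    from urllib.request import urlopen
    from urllib.parse import quote

    BUREAU = "http://bureau.example.org/labels"   # hypothetical label bureau
    _cache = {}

    def get_label(url):
        """Return the label for url, contacting the bureau only on a cache miss."""
        if url not in _cache:
            with urlopen(BUREAU + "?u=" + quote(url, safe="")) as resp:
                _cache[url] = resp.read().decode()
        return _cache[url]

    def blocked(url, allow):
        """Client-side decision: block anything whose label fails the caller's rule."""
        return not allow(get_label(url))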


> Consequently, you are better off building a general
> technology that "supports multiple policies" intended for decentralized
> self-empowering use -- though it could be detrimental to a cause in a
> particular situation -- than "tyrannical" protocols.

This is really naive. Even a child could see the flaws in this reasoning. A
"general" multiple-choice technology in which one of the choices is
censorship is blatantly obviously worse than a fascists-only technology,
because the fascists-only technology will never get adopted by anyone but a
handful of fascists, while the general-use technology might be (and in the
case of PICS, is being) built into basic Net tools that everyone uses,
whether they live under dictatorships or not. This has a doubly bad
effect, because it 1) makes the fascist government's "job" really easy, and
2) makes it easy for comparatively democratic governments to introduce
censorship where they did not have the means or impetus to do so before.
That last isn't necessarily (or necessarily not) a problem in the US, but
in countries with a largely democratic history but no constitutional
protection for free speech, it will be extremely problematic.

PICS *is* the devil, and cannot help but be the devil because of the way it
is designed.

> >Well, aside from the fact that many of us consider it highly questionable
> >whether that was in fact the intent, the question remains, why has W3C to
> >date refused to publicly acknowledge that PICS DOES lend itself to
> >centralized control, and even vehemently denied this clear fact?
>
> As you know, I think it's explicitly addressed in the PICS FAQ.
>
> __
>
> Could governments encourage or impose receiver-based controls? Does PICS
> make it easier or harder for governments to do so?
>
> Yes. A government could try to assume any or all of the six roles described
> above, although some controls might be harder than others to enforce. As
> described below, governments could assume some of these roles even without
> PICS, while other roles would be harder to assume if PICS had not been
> introduced.

It is nice to see that at least this much is being admitted. Why isn't this
more prominent, and why does W3C react defensively when asked about this
problem, rather than openly discussing it? More to the heart of the
matter, why did W3C engineer PICS so that a govt. could assume any such
roles? I'd have to think long and hard to be certain that there is any way
to devise a system in which a govt. could not assume one single role of the
6, but I'm fairly confident that the system could have been devised to make
it impossible or impractical for the more dangerous roles to be so assumed
(at worst, by eliminating those roles from the system, with the sacrifice
of some degree of functionality. Not every possible functionality must be
built into every protocol. It would be awfully "convenient" for example if
telnet had provisions for a password-free way to access your shell account
in case you forgot your password, but the security considerations outweigh
the benefit of that convenience.)

> >So why has W3C never acknowledged this (as far as I've ever seen) or
> >dealt with any of the concerns raised by those with somewhat or very
> >different viewpoints? PICS was essentially produced in a vacuum. NOTHING
> >was done, at all, period, to assuage the concerns raised by those with
> >"assumptions about society and civil rights". More often than not we were
> >simply told that W3C had faith that the product would be used well,
> >despite the clear possibility of the opposite, and plans in Australia and
> >the UK already afoot to abuse PICS (and RSAC-I more specifically).
>
> I'm not speaking for the W3C. Even when I was -- on rare instances -- I was
> not in any position to make any sort of statement on behalf of the W3C
> because its structure/process is orientated towards the members making a
> technical recommendation; not policy statements, retractions, or arguments.
> It was my own personal belief that any such representation by myself on this
> topic would be inappropriate. The W3C does not even have a technical
> deprecation mechanism for its standards presently. (What happens to HTML3.2
> 10 years from now?) It certainly has no mechanism for political/policy
> issues and would cause more problems than solve IMHO.

Then it needs to have that mechanism! Come on, man. This is really basic
stuff. It's like telling the judge that you "have no mechanism" for not
getting violent when you drink. If an entity does not "have a mechanism"
for being responsible for what it is doing, then it had damn well better
stop doing what it is doing.
Incidentally, if W3C has no policy mechanism, how is it that Daniel
Weitzner (and at least one person before him) is working as a paid policy
staff person for W3C?

I do acknowledge that W3C has after all admitted that PICS can be abused,
but it has not:

a) admitted that PICS was *intended* for governments to use against their
citizens (indeed W3C reps have done the opposite, and claimed repeatedly
that it was not);
b) ever discussed or debated the policy, political, social and human rights
implications of PICS with the civil liberties (or any other) community;
c) made any modifications to the substance and direction of PICS in
response to such concerns (on the contrary - W3C has if anything dug in its
heels and at least indirectly *encouraged* such abuse by adding the
suggestion of government use to introductory documents about PICS!)

> Plus, you can sort of call a spade a spade. If PICS is poorly used in
> Australia or the UK in your opinion, then PICS is bad. I'm not saying a
> particular implementation in a given context is necessarily. I personally
> would object to any mandatory self labelling requirement. Same with P3P. I
> want good things to come of it, and a lot of people are working such that it
> will.

A lot of far more powerful people are working such that bad things will
come of it, and this effect was easily predictable (was in fact loudly
predicted) several years ago, but fell on entirely deaf ears at W3C. And
that's being charitable. As I say, it is clear to me that W3C actually
hopes PICS will be used for censorship, and suggests the idea in its own
"white paper".

> But if it's just a better mechanism for extracting data from users
> that is ultimately harmful to them, then P3P isn't a good thing IMHO. I'm
> not defending either technology really. Rather the technical philosophy that
> one should create decentralized, self-empowered, multiple-option, selected
> community governance technology when possible.

This comparison, and your final sentence above drawing a conclusion based
on it, don't work. It's apples and oranges. P3P is designed to allow a user
to willingly and knowingly enter into a negotiation with another party,
fully informed and consenting (or refusing to consent). PICS is designed
to censor a user, period (yes, it does have some potential and largely
incidental other uses, the same way a hammer can be used as a paperweight
or a weapon - it's not relevant to the analysis of *purpose* and *most
likely use*). There is no negotiation, there is no consent, probably no
informing of the victim in many implementations, no knowing and willing
decision on the part of the user/victim.

The technologies are simply not comparable in any way that is meaningful to
this debate.

> >> Regardless, I believe the statement is more akin to "here's some
> >> technology, with the intent to make the world, in a given context, a
> >> better place. This organization is incapable of challenging the
> >> legitimacy of political processes in determining their policies.
> >
> >That is not true. EFF does it all the time.
>
> That is the EFF.

EFF is an organization with policy staff. W3C is an organization with
policy staff. If we can make policy decisions and take principled
positions, so can W3C. Your more analogous counterparts, such as trade
associations (if you want to look at W3C's makeup) and IETF, IAB, etc. (if
you want to look at W3C's ostensible purpose) have no trouble making such
decisions and taking such positions either. If W3C is genuinely incapable
of doing so, then W3C is completely FUBAR and needs to be fixed.

> >to read the UN Declaration of Human Rights, and come to an informed
> >decision on whether or not the govt. of, say, Burma or Singapore, adheres
> >to those principles.
>
> While you spend a great deal of time and words on this issue --
> good/thoughtful words -- I'm not convinced of this case. When I read the UN
> Declaration of Human Rights, and based on other folks I've spoken to given
> its creation, the fuzziness and country exceptions within don't provide that
> objective/universal metric I'm looking for.

Is this really in your or W3C's hands? You've said repeatedly that you are
simply a technical, non-policy, organization, so the smart thing to do here
would be to have a dialog with civil liberties and human rights groups,
whose bailiwick this IS, and defer to their judgement on the issue, just
like IETF, trade associations, etc., probably would, and in other areas do
on a regular basis. You can't have your cake and eat it, too. Either W3C is a
competent legal/policy machine, in which case it should have done the right
thing on its own, or it is not, in which case it should have sought,
considered and accepted in most or all cases expert advice from
organizations that ARE competent in this area of law and policy.

> >designed in a way that lent itself to abuse by them. You don't need to
> >start a revolutionary war to make the "challenge", you can do it in
> >pragmatic terms, by simply saying, "this government is doing wrong, and we
> >do not trust them with this tool, so we are going to redesign it so it
> >cannot be (at least cannot easily be) abused by them." That's called
> >social responsibility, something W3C seems to completely lack, at least
> >when it comes to PICS.
>
> If someone came forward with something (much W3C work is based on Member
> submissions or pre-existing work) that solved these problems (more likely to
> be adopted/deployed, satisfy social concerns so we don't have to fight the
> CDA N battles, and less likely to be "abused") the W3C could consider it as
> a work item as it may anything else. I'd certainly push for it if I was
> there.

This is also weak. This is like saying "until someone gives me an
alternative to drinking and driving that I like, I'm going to keep on
drinking and driving, dammit." You do not need a better alternative to
stop developing something that is going to do bad things. You just stop.
Go do something else. The fact of the matter is, W3C has never listened to
any civil liberties concerns, and its only responses to date have amounted
to "we don't care" or "we don't believe you" [despite being evidently
incompetent to actually make such a determination, since they are so
woefully incapable of making any policy decisions!]. Did W3C ever *ask* the
civil liberties community to come up with revisions to PICS or to work with
W3C on fixing PICS's problems in this area? Nope. It rebuffed every
suggestion and every attempt at even basic dialog (other than your own
willingness to talk to me, and debate a little about this in public, which
is certainly better than nothing).


> >> The problem is what happens when
> >> deliberative processes must aggregate preferences? In some situations
> >> society routinely enforces norms of the majority on a minority. In a
> >> political context, we've carved out niches where this shouldn't happen
> >> with "civil rights."
> >
> >Niches that W3C has conveniently "forgotten".
>
> The definition of those niches is a political process, that should be left
> in that venue.

You misunderstand me. W3C may or may not need to get actually involved in
those processes, but it cannot pretend they do not exist, nor should it
fail to think about their ramifications and how they will relate to the
effects of what W3C decides to go forward with. But that seems to be
precisely what happened. The end result is a proposal that isn't actually
doing anything useful at all, but which has convinced censorship-minded
lawmakers all over the world, from DC to Singapore, Westminster to
Canberra, that Internet censorship is viable after all, and must be tried
and tried again until it works. PICS is, not to put too fine a point on
it, a screwup of astounding proportions. I really hope that W3C's metadata
work doesn't fall down the same rathole, but I don't have high hopes.

I don't blame this on you, Joseph, but I don't understand why you defend
PICS and W3C so much. It's quite a mystery to me.

> >Funny, but I have a pretty clear grasp of this stuff sitting at a desk in
> >San Francisco. Your brain isn't at Harvard, it's in your head.
>
> No, it means I have some more time to think about it when it's at Harvard. <s>

Alright, I admit I was being a smartass. :)


--
Stanton McCandlish mech@eff.org http://www.eff.org/~mech
Program Director, Electronic Frontier Foundation
voice: +1 415 436 9333 x105 fax: +1 415 436 9333 ICQ: 16631335
PGPfone: 204.253.162.21 ICQ Pager: http://wwp.mirabilis.com/16631335#pager


Date: Sep 26, 1998 (Sat, 15:32:55)

To: mech@eff.org (Stanton McCandlish)

From: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

Subject: Re: Censoring the Internet with PICS

Cc: mech@eff.org, barlow@eff.org, rene@pobox.com, link@www.anu.edu.au, djw@w3.org



[This will be my last response for the time being, I didn't have time to respond to all the points.]

At 05:51 PM 9/23/98 -0800, Stanton McCandlish wrote:
>The simplest solution would be to make the technology non-scaleable above
>the individual-user level, which is what we've been saying for years. I
>think I even predate you in participation in this debate.

Probably, and this is one of the things that makes these conversations difficult for me because I cannot substantively speak to the technical design or process issues of PICS. (Though I do own some of the responsibility for W3C's subsequent policy activity (or lack thereof) and policy position.)

However, even if I was there then, people still have a tendency to remember things differently. <smile>

Regardless, your characterization of W3C purposefully changing the design from a decentralized to a centralized one for political purposes is not something I'm aware of. Furthermore, based on my understanding of the technology, it is not readily apparent to me how you would do otherwise. As I mentioned to someone else, there isn't much difference between PICS and XML. Yes, XML uses pointy brackets and PICS uses parentheses, but you can accomplish with XML _exactly_ what you accomplish with PICS. The difference, in this context, is basically that PICS was purposefully placed in an obscenity context, that it was a meta-data format prior to XML, and that the people who worked on PICS came more from a LISP background, so there are "("'s and value pairs.

In fact, if we are worried about a ubiquitous meta-data format that can be embedded in HTML, and used to rate content, XML is likely to be _much_ more widely deployed. I could do the RSACi XML DTD in 2 minutes. The point is, it can be used for other things, just as PICS could, but PICS itself never caught on for that. Why? Because better things were in the works that were being developed independently or that PICS influenced. The technology evolves and merges. Migrate the meta-data syntax to other work on a simple SGML (XML), formalize/extend the data model (RDF), extend the concept of using schema-URIs/referents and XML-namespaces.
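
To make the syntax point concrete, here is the same kind of assertion both ways. The PICS version is roughly what a 1.1 label looks like (from memory, so possibly not letter-perfect against the spec), and the XML version is a made-up illustration rather than any published RSACi DTD; the URLs, element names and attribute names are all invented:

  (PICS-1.1 "http://ratings.example.org/rsaci-like/"
   labels for "http://www.example.com/page.html"
   ratings (n 0 s 0 v 1 l 0))

  <!ELEMENT label (rating+)>
  <!ATTLIST label for CDATA #REQUIRED service CDATA #REQUIRED>
  <!ELEMENT rating EMPTY>
  <!ATTLIST rating category (n|s|v|l) #REQUIRED value CDATA #REQUIRED>

  <label for="http://www.example.com/page.html"
         service="http://ratings.example.org/rsaci-like/">
    <rating category="v" value="1"/>
  </label>

Same assertion, different brackets.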

So one of my points is that I believe this ability of vertical scalability is nearly intrinsic to metadata. If you provide someone a mechanism to make an assertion -- something that I think is really important -- governments could make assertions or compel people to make assertions.

>You are totally missing the point. GOVERNMENTS DO THIS (not with PICS, but
>otherwise).

I do not disagree. If you look at the historical statistics, I believe you are more likely to die at the hands of your own government than that of a criminal or a foreign army: two justifications by which governments extend their powers.

> 2) external news relating human rights issues to the oppressed population...
>3) internal news from the oppressed areas reaching the rest of the world.
>
>PICS in the hands of regimes in these areas will greatly harm 2, and
>somewhat harm 3.

Here I would argue by way of the Fallacy-of-Technology-That-Is-No-Worse. A country can just as easily block an external site through non-PICS mechanisms. A country could go after an internal Web site regardless of whether the site used PICS or not. I am not saying PICS could not allow governments greater capabilities to control content over its populace (such as mandatory self labeling), just not in these cases. [see below]

>> Second, I
>> believe there is a substantive issue -- call it moral/cultural subjectivity
>> -- that states some cultures may have norms that legitimately differ from
>> others.
>
>This is the "multiculturalism" fallacy again. One more time: However much
>this may be true, it is an internationally accepted given that the "right"
>of a culture to do unto its own citizens as it will is sharply limited by
>human rights accords. PICS not only will, but is specifically designed to,
>help bad-acting governments violate human rights.

I can only assert that my original attraction to PICS support (and generally meta-data, digital signatures, rules languages, etc.) was as a mechanism for decentralized grassroots content selection, filtration, collaboration, and community forming. I was very much into these social mechanisms in my S.M. Thesis [1], basically stemming from my interest in cypherpunk/crypto-anarchy, where crypto and digital signatures (crypto applied to metadata) allow for alternative markets (reputation and otherwise) and deliberative processes to form within cyberspace. Politically, it also appealed to me in the context of the CDA, as a convenient concept and an alternative to centralized government controls. PICS does little (technology-no-worse) as a means of centralized technical control. However, it can be effective as a means of decentralized control -- with hopefully decentralized policies.

While everyone was focussed on preventing centralized policies from being promulgated by centralized technical controls, it turns out that governments can promulgate a singular policy through decentralized mechanisms! This happened in the course of promoting PICS as an alternative to centralized technical controls. (I appreciate from the civil liberties point of view, such interests would be better served if governments continued pushing ineffectual centralized controls.)

[1] http://web.mit.edu/reagle/www/commerce/thesis/thesis.html

I realized this while working on P3P a while ago, where Europeans were concerned that European privacy rights might wither away on the Net (least common denominator), without centralized control. I had little difficulty when talking about P3P as a decentralized technology stating, "a user could use a European privacy preference setting." I, of course, believe this should be up to the user, but if a government wanted to mandate it, they could try, and it'd work better than them trying to block US sites, probably. Larry Lessig asked a question related to this at the last TPRC when Dr. Cranor and I presented our social protocol paper [2]. The mechanisms were similar, but I had much less of a problem with governments pushing privacy than content restrictions. The reaction to PICSRules clarified this for me. Most criticism focussed on PICSRules being used on centralized government servers, which was off target. My concern was decentralized PICSRules files being mandated with a central policy. However, I think users could always add a new rule if the capability is there. For the capability not to be there, the government would have to coerce client-side software makers to support only their policy. I can't imagine this happening in the realm of content control, and it's hard to say with respect to privacy.

[2] http://www.w3.org/People/Reagle/papers/tprc97/tprc-f2m3.html

I gave a talk about this to the Digital Commerce Society of Boston in March [3], planned writing about it for TPRC [4] -- but had to withdraw because I had zilch time this summer -- and planned on getting around to it as part of a paper on social protocols and meta-data design while at Harvard.

[3] http://www.w3.org/Talks/9803-DCSB/all.htm
[4] http://www.w3.org/People/Reagle/papers/tprc98_SP_submission.html

>Not if any of those options violate human rights. Why is this so hard for
>you to understand? It's like releasing a product, say for cleaning
>carpets, that in the wrong hands could poison 50,000,000 people. IT JUST
>DOESN'T MATTER if it's good at cleaning carpets - you don't release the
>product. It would be socially irresponsible to do so.

I cannot conceive of not having metadata, filtration, and reputation tools in the near future. I support these tools/capabilities because they allow consensual, self initiated Web policies/communities to form. I acknowledge they could be used by governments in ways that I am happy to condemn. I also believe while centralized control can coerce particular policies, that if those policies fall far afield from users' actual desires and actions, those policies will be extremely weak. How weak as a function of how far afield is partly a matter of the political/deliberative process and characteristics of network technology.

>> Regardless, is there a deterministic/solid way
>> of determining which poles [1] are wrong and a way to design accordingly?
>> You've responded yes, yes, its trivial, but I've not been able to find any
>> objective universal measure, particularly since nearly all political
>> contention is associated with differing norms regarding rights.
>
>Use the UN Declaration as a starting point. Let's not be silly.

I've tried, but haven't found it yet to be as useful as I'd like.

>At this point, yes, it is hard to see how to undo the damage, and more to
>the point, undo those aspects of PICS and metadata that are likely to cause
>more harm in the future when the scheme really catches on. I'm not even
>trying to address this issue here, which is a big one. I'm just trying to
>get you and W3C to FESS UP and stop trying to pretend you did the world a
>favor. Admit some fault, accept some criticism, stop burying your head in
>the sand so you can't hear.

PICS has _not_ been applied as I would have liked nor intended. To date, I'd also be hard pressed to find an instance by which someone's speech, hitherto suppressed, is now free because of PICS. There are also instances in which the application of PICS causes me concern. As I stated before, the balance seems to be:

A. PICS probably _has_ prevented rather Draconian policies from being instantiated, whereas such policies would be less likely to be implementable.
B. PICS probably _has_ made possible much more "reasonable" policies from being instantiated, whereas such policies are much more likely to be implementable.

>> >PICS, like the RIAA movie ratings and the Comics Code Authority is a
>> >text-book example of the govt. scaring the industry into instituting
>> >"voluntary" censorship the govt. could never get away with directly
>> >mandating.
>>
>> Yes, its an interesting regulatory tactic (that I call "herding" [2]) that
>> is not restricted to the domain of content control.
>
>I'm glad you have a name for it. Why did you (or rather W3C) decide to
>participate in it, and why will W3C not admit to doing so?

I came to understand the nature of this strategy by working at the W3C. Also, I don't believe the W3C participated in it purposefully. There's also a pragmatic issue: if you are going to be either driven or herded, you are better off being herded. The principled mule may plop down and say, "I refuse to move."

>> Consider the case of Germany outlawing Nazi literature. I personally would
>> find this abhorrent in my, a US context. However, I do not feel comfortable
>> in saying that what the German state has done is wrong necessarily.
>
>That is because you have an inadequate understanding of human rights law.

Perhaps.

>The UN Declaration does not guarantee free speech "except if you are saying
>racist things". The anti-genocide and anti-racism clauses are severable,
>and do not supercede the freedom of expression clause.

I need to study your earlier comments and the document and its history further.

>> The question is to what degree can real space governance impose itself upon
>> cyberspace governance using social protocols, particularly when its policies
>> are not held by a majority of its users? Is there any situation in which
>> real space governments have the right to regulate on-line conduct?
>
>What PICS does is take it out of the realm of rights and into
>practicalities, by making it *easy* for governments to do it. If it's not
>easy, most of them won't try, and the issue would simply never arise.

It makes it easier for everyone, including governments.

>This reasoning is also fallacious. What you say may be true, and your
>belief is your belief. But this belief does not mean that you have to
>support any particular proposal that *claims* to achieve the goal you
>believe in, nor one that might actually do so, but at the cost of other
>things that matter.

Like speech, yes. I am not sure how to remedy the situation by which you want to enable assertions and decision making, but not allow a particular entity to do so. I won't go into the whole IETF issue, but PICS _is_ decentralized. Larry Lessig's proposal required greater centralized infrastructure/authority than PICS to operate. However, a centralized authority can attempt to promote a centralized policy over a decentralized architecture. The government could pass similar laws for IETF technology if it felt it had the legitimacy and an interest to do so. It's tried with crypto (escrow), and it's been unsuccessful because it does not have a very strong mandate outside of law enforcement, the technology isn't there yet, and it's really hard to do. A government could pass a law requiring everyone within its network to use UDP.

>How? Lay out your "How to Subvert Your Murderous Fascist Government's
>Censorship Regime in the Privacy of Your Own Bedroom" scheme. I'd really
>like to see it. Who are you to be *gambling* with other people's lives and
>freedoms on the off chance of making idle Western Democracy net.addicts
>lives slightly more convenient?

The ability to use and control information is going to have to be used responsibly.

>W3C knows it will happen, and *wants* it to happen. W3C very clearly
>*wants* fascist governments to adopt and use its PICS system, because this
>will help propagate that system and make it more widely adopted. Yes, this
>is a direct and serious accusation. But W3C's own documents are quite clear
>on the matter:

[A reminder that I have no W3C hat on.] As I've said before, the W3C has no apparent mechanism for issuing policy statements. Insofar as you are willing to read the comments from its creators and a staff member as W3C policy, so be it. You also have [5] for consideration.

[5] http://www.w3.org/TR/NOTE-PICS-Statement

_My_ interest was in self empowering tools and I never wanted "fascist" governments to adopt PICS. I think metadata is inescapable. The degree to which various real world power/deliberative structures are able to influence cyberspace deliberative structures is unclear. However, by having metadata capabilities (social mechanisms, making statements and automatable decisions) it appears you increase everyone's capability, including the real space power structures. I believe the degree to which this is a legitimate or illegitimate process with respect to the design of decentralized technology should be external to the design of the technology. For instance, I do not believe the W3C necessarily acts unethically with respect to providing a decentralized/self-empowering privacy technology, but one that does not preclude European privacy regulators from extending their privacy policies into cyberspace. The technology to date, probably does not satisfy them with respect to their ability to do this, but it does not preclude them. The same could be said of content.

An intrinsic part of a self-empowering technology is that users can act on their own behalf. However, not all users will bother to read the fine print or be able to act on their own behalf all the time. A parent doesn't want to have to investigate every Web site their kid might go to. Nor read the privacy policy of every site they visit. (In some instances, you can design technology so you aren't dependent on the assertion of the other party, like anonymous cash, anonymizers and such. But sometimes you will want to have an assurance from the other party.) You have two alternatives: rely upon the government to uniformly set all policies, or create mechanisms by which the technology supports mechanisms of individual control, but can support mechanisms of assertions, deferral, trust, and reputation. Governments could also try to relate or extend power through these mechanisms. Granted.

>Then it needs to have that mechanism! Come on, man. This is really basic
>stuff. It's like telling the judge that you "have no mechanism" for not
>getting violent when you drink. If an entity does not "have a mechanism"
>for being responsible for what it is doing, then it had damn well better
>stop doing what it is doing.

Fair enough.

>This is also weak. This is like saying "until someone gives me an
>alternative to drinking and driving that I like, I'm going to keep on
>drinking and driving, dammit." You do not need a better alternative to
>stop developing something that is going to do bad things. You just stop.
>Go do something else.

I did, I wanted to apply PICS type stuff to privacy. <smile>

>I don't blame this on you, Joseph, but I don't understand why you defend
>PICS and W3C so much. It's quite a mystery to me.

I'm not necessarily defending PICS or the W3C. I'm trying to clarify a number of issues. If I'm defending anything, I'm defending the necessity of meta-data and a rationale of supporting mechanisms of self determination, which have an acknowledged side effect of being able to be abstracted or even co-opted, PICS being the best example of a poor situation in which this may happen.


Date: Sep 26, 1998 (Sat, 17:18:33)

To: rene@pobox.com (Irene Graham)

From: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au, Stanton McCandlish <mech@eff.org>



[This will be my last response for the time being, I tried to focus on the technical issues for completeness.]

At 12:40 AM 9/27/98 +1000, Irene Graham wrote:
>Unfortunately, I have considerable difficulty understanding either: (a) the
>concept; or (b) what you mean - I'm not sure which is the case. Firstly, if
>every content provider runs their own label bureau in order to self-label
>their material, does this not mean that end-users who use PICS-compatible
>software would need to have it configured to query millions of
>pre-specified label bureaus?

They would run the label bureau. It could be implicitly queried every time and return the result in the HTTP header (basically what happens when you do it in the headers) or queried by way of a convention (if no label in content or HTTP headers, ask for it at this port/location).
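
For illustration, the in-the-headers case looks roughly like this; the host names and rating values are invented, and the exact header layout is from memory rather than quoted from the spec:

  HTTP/1.0 200 OK
  Content-Type: text/html
  PICS-Label: (PICS-1.1 "http://ratings.example.org/v1.0/"
    labels for "http://www.example.com/page.html"
    ratings (v 2 s 0))

  <HTML>...the document itself...</HTML>

The separate-bureau case is the same label, just fetched from another location instead of arriving with the page.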

> I've not seen any implementation of PICS that
>enables this, nor do I see how it would be workable or practical from an
>end-user point of view.

There are a number of server side label bureaus (the PICS page lists some). You are right that they aren't widespread!

>Secondly, I'd question the practicality of content
>providers having to set up and run a label bureau just to label their own
>pages. Even *if* it is possible for them to run a label bureau from a
>subdirectory of their ISP's domain, there are obviously cost (in purchasing
>label bureau software, etc) and time issues.

Yes, there is this trade off.

>Advocating this means of self-labelling seems to me to be advocating making
>it even more difficult and costly, potentially impossible, for ordinary
>people to publish content on the Web (if labelling becomes a pre-requisite
>to speaking) or for them to make content available to a wide audience (if
>PICS-facilitated blocking software ever becomes widely used - unlabelled
>content is blocked). It seems quite contrary to what was claimed to be one
>of the intents of PICS development - to provide content providers with an
>easy means of voluntarily self-rating their content.

Both mechanisms are supported. I believe the maintenance of meta-data distributed throughout the content tree it relates to is actually often more expensive if the meta-data is at all likely to change. Use of meta-data is something that may appear more costly up front, but as the cost of managing data itself increases (total cost as the scale of data increases), meta-data will decrease the overall cost of data maintenance. This is why we see the massive trend towards use of databases on the Web.

>>Placing meta-data
>>in content itself can be problematic 1) if the data at the URI is likely to
>>change
>
>If the data (content) at the URI is changed, the meta-data (label) embedded
>in the content can be changed at the same time, can it not?

Yes.

> If not, it
>seems reasonable to assume that the data is being changed by something
>other than a human. Technology in general is not sufficiently advanced for
>something other than a human to decide on and specify new censorship
>ratings, nor will it ever be. Technology cannot, and never will be able to,
>read the minds of the censors. Hence, insofar as PICS is concerned, your
>(1) is irrelevant.

I don't quite follow, but consider the inline expansion or aggregation of information. Team calendars will be generated from the XML-encoded individual calendars of all the team members. Or say I change the icon on all of my pages to something racy: I now have to go and change all the HTML pages it was on. Whereas, if the server served the icon with the icon's own meta-data, I only have to change one thing in the database rather than go edit every page it sat on. The inline expansion of distinct Web resources that are likely to change does get tricky if the meta-data isn't properly served.

>>or 2) in properly discovering the meta-data. For instance, does a
>>tree leaf have all the meta-data in its parents roots applied to it,
>
>Obviously (if we are talking about a file in a subdirectory and PICS
>labels, respectively) it shouldn't unless specifically so intended. I
>*think* we are in agreement on that.

But that can be very costly. For every page you touch, you have to climb the tree. Yes, you could cache the results of your wandering, but even this cost is more than people are willing to spend on some pieces of meta-data.
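
Concretely, with an invented path just to show the cost: to be sure which labels apply to

  http://www.example.com/a/b/c/page.html

whoever resolves the labels (the client, a proxy, or the server itself) may have to check the page itself and then each of

  /a/b/c/  /a/b/  /a/  /

before knowing whether some generic or overriding label higher up changes the answer.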

>>which case one must always climb the tree to understand all data associated
>>with it? If I wish to say that the whole tree is X, except node A, its much
>>easier to make two entries in a database, rather than go through the whole
>>site embedding the meta-data in every node/leaf.
>
>I agree, but you've built another strawman so you can then knock it down.
>You're saying, or at least implying, that the only way node A could have a
>different PICS label to the rest of the tree (using embedded labels) is to
>label every single page on the site.

I'm not building any straw man to knock down because I'm not arguing. I'm telling you my opinions regarding the difficulties of meta-data discovery. <smile>

>Have you read the PICS spec relative to embedding labels?

Of course, this document was created in the context of these problems. You'll note that in the end they recommend my point, "That is, we suggest that each server that dispenses documents run a label bureau at the distinguished filename PICSLabels. The main idea behind this proposal is to make servers, rather than client, be responsible for resolving defaults and overrides."
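
Concretely, and with an invented host name: under that convention, a client wanting the labels for

  http://www.example.com/a/b/c/page.html

would ask that server's own bureau at the distinguished location (something like http://www.example.com/PICSLabels), and it is the bureau, not the client, that works out which generic, default, or override labels apply to the page. (I won't try to reproduce the bureau query syntax from memory here.)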

>It addresses this
>specific issue, and does not require either labelling every single page or
>"always climb[ing] the tree".

The implementation of generic, default, and override labels has been problematic and is confusing in my opinion. It really satisfies maybe 50% of the issues, the easy ones where one person wants to apply a simple label to static content in a fairly homogeneous site, but otherwise is fairly tricky. There was a similar source of contention with P3P, which made 2 improvements in my opinion:

1. meta-data semantics are never embedded in the HTML itself, though it can include a link to a resource that includes the semantics. ("I have a privacy policy, but go see it at X.") This is for the mom and pop who aren't doing any data collection, but want to make a privacy statement on their site, which is run by someone else's server.
2. in instances of multiple or alternative proposals throughout a tree, the client and server explicitly say which proposals they think they are operating under through the agreementID.

However, even given this, P3P runs into difficulties. Does a client then have to search in both the content and the headers? What if a service sends it in the header, but the client only looks for links to meta-data in the content?

>PICS is technically flawed as a result of this. It makes it impossible for
>any content provider who chooses to self-rate their content by embedding
>labels in accordance with the spec, to know what will happen when someone
>tries to access their web pages. It depends on whether the end-user's
>software developer has complied with the recommended specs, or the "hints",
>or has done something else again.

Yep, the spec is somewhat weak because of this and issues related to relative URLs. That's the way of standards and technology.

>Why? If the view of these developers is justified, why haven't the W3C
>recommended specs been amended accordingly? I contend that it's because if
>W3C ever made patently clear how PICS implementations are actually intended
>to operate there would be even less people saying self-rating with PICS
>facilitated systems is a reasonable idea, less (apparent but ineffective)
>take up of self-rating and even more opposition to it being mandated.

Again, I would question your implications of motivation. My understanding is that, from a technical point of view, some things should be normatively clarified in the spec (not just notes and hints). However, such work on PICS is a low priority given the other work underway.

>At the stage at which W3C has stopped development of PICS, we have a
>situation where if a government mandates self-labelling, and a content
>provider rates their entire site, in accord with the W3C recommendation
>(and RSAC's instructions), with the highest (most restrictive rating) for
>simplicity's sake, people trying to access a page on the site other than
>the home page will get a message saying the page has not been labelled. It
>would be nice to think that the authorities acting on complaints would be
>likely to readily understand that the page *is* labelled, it's just that
>the technology does not work as allegedly intended, nor as advertised.

Given the lack of clarity in the spec regarding this and relative URLs, I'd agree that basing laws on the present implementations of PICS 1.1 is problematic.

>In the PICS specs, there is no compromise. It was clearly intended that
>content providers could provide self-ratings without embedding PICS
>compliant meta-data in *every* page. Furthermore, frankly, if I was
>inclined to self-rate I'd consider it one hell of a lot simpler to amend a
>label embedded in the page I was amending, than having to update some other
>file as well.

It depends on the heterogeneity of your pages and your desire to meet that level of granularity in your meta-data. For privacy applications, it isn't likely it would serve anyone's interest to have a different privacy policy page for every page. You'd probably have two: a generic clickstream-cookie type proposal, and a proposal for pages where you actually solicit information. And if you get this sophisticated, you are better off going through headers IMHO.
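
For anyone following along who has not seen one, an embedded label sits in the HEAD of the page as something roughly like this (invented URLs and values, and possibly not letter-perfect against the spec):

  <META http-equiv="PICS-Label" content='
   (PICS-1.1 "http://ratings.example.org/v1.0/"
    labels for "http://www.example.com/page.html"
    ratings (v 2 s 0))'>

The header and label-bureau alternatives carry the same label outside the page, which is the trade-off being discussed above.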

>> >This is one of things that exasperates me about discussions with PICS
>> >advocates. They frequently seek to avoid addressing the issue raised, which in this
>> >instance was that if governments mandate self-rating and labelling now, PDF
>> >could become very popular amongst those not desirous of self-censoring in
>> >accord with someone elses' values because there are no tools now available to
>> >enable them to self-rate PDF documents.
>>
>>I agree.
>
>Oh, good. Pity you didn't say that in the first place instead of implying
>it's possible to self-label PDF documents now because you "believe there is
>label bureau software out there."

I think there was a misunderstanding here, perhaps on my part. You certainly could self label your own content, but not through embedding PICS labels in PDF. (I make a distinction between the who and the how.)

>>Furthermore, I've made the point that PICSRules does potentially
>>enable governments to promulgate regulations,
>
>I'm pleased to hear that you've come to understand that, particularly given
>that about 9 months ago, in response to my and other peoples' criticism of
>PICSRules, you said:
>"With respect to the contention that PICSRules will make it
>easier for governments to censor their populace. I disagree
>[...]
>PICSRules doesn't make [PICS] any easier to scale for
>governments, the only people it gives more power to are
>smaller organizations and individuals in exchanging
>preferences."
>http://www.w3.org/People/Reagle/stuff/PICSRules_Response.html

I believe this was in the context of government controlled services using PICSRules. I believe somewhere in that discussion I stated that a globalized rating service concerned me. But I was not convinced interoperable rules files exacerbated the problem, though it obviously didn't lessen it.

From my experience in P3P and the potential ability of governments to promulgate policies through decentralized preference files, I publicly stated this was a concern by March at least, though not in this forum.

>Insofar as what people were "ranting" about 9 months ago, I refer you back
>to what I said then about PICSRules, proxy servers and global rating
>systems: http://rene.efa.org.au/liberty/picsgrate.html
>Perhaps if you read that again now that you've come to realise that
>PICSRules does indeed present "true danger", you might also realise that
>what you recollect to have been "ranting" was not and is not unwarranted.

At the time, I agreed with your concerns about mandatory self labeling in Australia:
"I think the global rating system is of concern in your context"

Given the passing of 9 months, and upon rereading your analysis, I think the concerns raised were very merited, and I apologize if I dismissed them as "ranting."


Date: Sep 26, 1998 (Sat, 17:17:11)

To: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

From: mech@eff.org (Stanton McCandlish)

Subject: Re: Censoring the Internet with PICS

Cc: barlow@eff.org, rene@pobox.com, djw@w3.org



(I'm leaving the link list off of this, since I'm not on that list, and
they are probably tired of seeing me copy this stuff over to there.
Actually, I had no idea it was a list until Irene told me.)

At 12:32 PM -0700 9/26/98, Joseph M. Reagle Jr. wrote:

> In fact, if we are worried about a ubiquitous meta-data format that can be
> embedded in HTML, and used to rate content, XML is likely to be _much_ more
> widely deployed. I could do the RSACi XML DTD in 2 minutes. The point is, it
> can be used for other things, just as PICS could, but PICS itself never
> caught on for that. Why? Because better things were in the works that were
> being developed independently or that PICS influenced.

I'll differ with you on this point too - PICS flopped as a general metadata
spec because it wasn't marketed as such; it was marketed as a censorship
and parental empowerment tool. I'm less concerned with XML being abused as
a censorship tool, since it's not being marketed as one. Policymakers
generally don't know a thing about technical specs that are not really,
really obvious. XML as a censorship tool isn't very obvious, so most of
them will probably never get the idea to mandate that it be used that way,
unless W3C goes around mucking with the text of white papers on XML, the
way they did with PICS, to specifically include references to govt.
censorship being enabled by it.

> The technology
> evolves and merges. Migrate the meta-data syntax to other work on a simple
> SGML (XML), formalize/extend the data model (RDF), extend the concept of
> using schema-URIs/referents and XML-namespaces.

Right. And you get a complex uber-geek system that only engineers will know
what to do with. That's as it should be. I shudder at the thought of XML
being simple enough for the average US legislator to understand.

> So one of my points is that I believe this ability of vertical scalability
> is nearly intrinsic to metadata. If you provide someone a mechanism to make
> an assertion -- something that I think is really important -- governments
> could make assertions or compel people to make assertions.

PICS did not have to be defined as nearly so expansive a meta-data system.
If it had been kept simpler, the problem would not, I believe, have ever
come up.

> > 2) external news relating human rights issues to the oppressed
> population...
> >3) internal news from the oppressed areas reaching the rest of the world.
> >
> >PICS in the hands of regimes in these areas will greatly harm 2, and
> >somewhat harm 3.
>
> Here I would argue by way of the Fallacy-of-Technology-That-Is-No-Worse. A
> country can just as easily block an external site through non-PICS
> mechanisms. A country could go after an internal Web site regardless of
> whether the site used PICS or not. I am not saying PICS could not allow
> governments greater capabilities to control content over its populace (such
> as mandatory self labeling), just not in these cases. [see below]

The problem with this analysis from my perspective is the "just as easily"
clause. It would not in fact be just as easy. If the govt. is going to
shut down servers on a site-by-site basis or run a nation-wide proxy that
blocks material critical of the govt., as Singapore is trying, they have a
lot of work and expense in store for them, and the likelihood of success of
the effort is really pretty low. It costs none of this to simply issue a
mandate that people label content or that ISPs direct users' web traffic
thru one or more label bureaus that the govt. approves of, if the govt. is
handed a convenient system like PICS and that system is widely adopted,
e.g. in web browsers and other user-end software, or ISP-end 'ware
(filtering routers, etc.). Likewise, the Coats/Istook legislation to
mandate that libraries install filtering software would not exist w/o the
existence of said filters. If you build the tool, regulators will (ab)use
it. If you don't build a tool they can abuse, they have to try half-assedly
to make their own tool, or give up. I'd rather contend with a joke of a
censorship tool created by some random government people, than one created
by an international consortium of technical industry partners who actually
know what they are doing. I DO have some concerns about XML's
capabilities, but it seems general enough to me to be most likely to be
adopted for other purposes, unless/until W3C "pulls a PICS" and starts
telling governments "hey, you can use this to censor your wretched
peasants, hint-hint!"

> I cannot conceive of not having metadata, filtration, and reputation tools
> in the near future.

Same here, but there is a fundamental difference between a general and more
nearly neutral system like XML, and a narrower system specifically intended
and marketed for abuse (even if also intended for non-abusive uses).

> I support these tools/capabilities because they allow
> consensual, self initiated Web policies/communities to form. I acknowledge
> they could be used by governments in ways that I am happy to condemn.

The problem with PICS was that the actual producer of the spec, W3C as an
entity, was *not* happy to condemn the abuse, but directly encouraged it,
as I demonstrated by quoting from the PICS article/whitepaper/whatever.

> I
> also believe while centralized control can coerce particular policies, that
> if those policies fall far afield from users' actual desires and actions,
> those policies will be extremely weak. How weak as a function of how far
> afield is partly a matter of the political/deliberative process and
> characteristics of network technology.

Right, and that's why I'm happy to see XML supplanting PICS. It has so many
more *obvious* non-censorship uses, and is *not* obviously being marketed
as a censorship tool, so the public and developer and market expectation
for XML is not going to fall down the same rathole that PICS did, and few
if any policymakers should get the idea of trying to use it for censorship.
One would hope. My real hope is that we simply get more and better US and
other court rulings that protect free expression on the Net even less
equivocally than the CDA case did. If enough jurisdictions are
censorship-free, then the rest of the Net ultimately has little choice but
to go along, since the freer jurisdictions serve as a data haven (compare
online gambling - all the proposed US laws against it are simply going to
be a joke, since online casinos in Monaco and Aruba are NOT going to stop
their operations. Provided the tools that actually implement XML at the ISP
and user level don't actually enable centralized censorious mandates, the
USG is simply going to lose on this one.)

> PICS has _not_ been applied as I would have liked nor intended.

I know, and as I said, I don't blame you. But PICS *is* being applied (or,
there are governments wracking their brains to figure out how to force it
to be applied) in bad ways, and W3C directly advocated that PICS be used
this way.
THIS is what I've wanted W3C to fess up to, to accept some guilt and
responsibility for. Singapore, Australia, the EU people wanting to mandate
RSACi, etc., etc., did not just wake up one morning with the idea of using
PICS and PICS label bureaus for censorship, out of the blue. They got them
from W3C's own exhortations to use them in such a manner.

> To date, I'd
> also be hard pressed to find an instance by which someone's speech, hitherto
> suppressed, is now free because of PICS. There are also instances in which
> the application of PICS causes me concern. As I stated before, the balance
> seems to be:
>
> A. PICS probably _has_ prevented rather Draconian policies from being
> instantiated, whereas such policies would be less likely to be implementable.
> B. PICS probably _has_ made possible much more "reasonable" policies from
> being instantiated, whereas such policies are much more likely to be
> implementable.

This is generally what pisses off civil libertarians, since
hard-to-implement but really-bad policies are both good "poster children"
for public outrage and court sympathy, and are easy to avoid or knock down.
The CDA is a great example. Stupid, unenforceable, vague, overbroad,
obviously unconstitutional, and just all-around badly written. The
"harmful-to-minors"-based CDA-II is much more problematic. It *might*
actually withstand constitutional scrutiny if the court doesn't really
think hard about the matter. Viewed from outside the process, it does seem
to be more "reasonable", but if upheld, we lose much with that "reasonable"
policy that we gained by having an objectively worse "unreasonable" CDA to
blow holes in. PICS raised the same problem. We're likely to win the
current few rounds of the filters-in-libraries debates and cases, because
the existing tools, well, kind of suck ass. If PICS were to be more
broadly adopted, future rounds might not go so well, if it can be shown
that some label bureau really does only block material that a court would
likely consider obscene. We need the short term win, and need it in
sufficiently broad terms that a narrower version of the censorship regs
won't stand up either. We *might* have that with the CDA ruling (esp. the
court saying that the Net deserved "at least" as much protection as print),
but this remains to be seen. It's basically a stalling game. The longer the
censorious aspects of metadata remain at bay, the longer we have to prepare
the legal field to make sure that the abuse isn't viable (here, anyway, but
the US is clearly leading the issue of Internet content and free speech,
globally. A few sinister governments will continue doing everything they
can to censor, but I'd rather a free US be pressuring them to lighten up,
than a questionably free US taking censorship hints from Burma or Malaysia.)

> >> >PICS, like the RIAA movie ratings and the Comics Code Authority is a
> >> >text-book example of the govt. scaring the industry into instituting
> >> >"voluntary" censorship the govt. could never get away with directly
> >> >mandating.
> >>
> >> Yes, its an interesting regulatory tactic (that I call "herding" [2]) that
> >> is not restricted to the domain of content control.
> >
> >I'm glad you have a name for it. Why did you (or rather W3C) decide to
> >participate in it, and why will W3C not admit to doing so?
>
> I came to understand the nature of this strategy by working at the W3C.
> Also, I don't believe the W3C participated in it purposefully.

Of course they did. PICS was created specifically because of fears of govt.
regulation over content, and rushed out in a mood of fear. You've said as
much yourself. I was there in 1994, and our then-boardmember Rob Glaser
tried to convince the EFF board that IHPEG/PICS was a necessary evil (with
a silver lining, in the form of parental control over what their children
see) to answer upcoming government censorship. The rest of the org did not
buy that argument, and guess what? We were right. Glaser left the board,
we nuked the CDA in court, and PICS migrated to the W3C corral, and look
what happened - W3C's own intro document to the PICS concept plays right
into the hands of "herding" by encouraging govt. use of PICS for
censorship. W3C participated in the herding process twice over, and the
entire genesis of the proposal lies in knuckling under to herding.

> _My_ interest was in self empowering tools and I never wanted "fascist"
> governments to adopt PICS. I think metadata is inescapable. The degree to
> which various real world power/deliberative structures are able to influence
> cyberspace deliberative structures is unclear. However, by having metadata
> capabilities (social mechanisms, making statements and automatable
> decisions) it appears you increase everyone's capability, including the
> real space power structures. I believe the degree to which this is a
> legitimate or illegitimate process with respect to the design of
> decentralized technology should be external to the design of the technology.

I'm not aware of any way to design technology by such a head-in-the-sand
method, without regard to the societal implications of what you are doing,
that will not irresponsibly produce negative unintended consequences. In
some cases we have governments set technical design rules (e.g. emissions
standards and other environmental concerns), or control the design itself
(development of the atom bomb). In areas where the principal danger is
govt. abuse, it is upon the private sector to "govern" the development. If
W3C finds itself unable or unwilling to wrestle with the ethical issues,
then *they should have yielded to those so willing*. It's really just a
shame that W3C is so closed and industry-dominated an organization. PICS
would never have left the drawing board had it been controlled by IETF
instead. You may be right that ultimately it doesn't matter in some
senses, since a general meta-data spec like XML can do anything PICS could
do. But a standards body made up of interested members of the technically
competent public would never have advanced a tool specifically created for
censorship, and the positive effect this would have had on the debate could
have been very valuable. We are still contending with censorship bills and
other issues, globally, that at least in part can be traced to regulators'
and legislators' (dim) understanding of PICS and what is or isn't possible
with it. Had PICS never left the drafting board, many of these censors
would be at a loss, or at least at more of a loss, as to what to do about
Net content. W3C's social irresponsibility cost us a lot. This isn't even
the only time it has done so. We're pretty close to getting new EU-model
problematic privacy mandates, because TRUSTe and similar private-sector
efforts aren't making enough headway. *Part* (by no means all, but
definitely part) of that problem is P3. Many industry players who would
have participated in TRUSTe or BBBOnline or whatever have now declined to
do so and instead are holding out for P3. I was on a lot of the (surprise!
CDT-led) P3P initial planning conference calls, and the level of hostility
to TRUSTe (then eTRUST) was really shocking. It was clear to me and
then-president of EFF Lori Fena that many of the players' principle motive
in pushing P3 (then P3P) was simply to undermine TRUSTe and EFF. It was the
same old internicine jealousies and bickering. If P3 were in IETF, or
anyone else's, hands this likely would not have happened. The problem is
that W3C has a seeming habit of purposefully monkeywrenching anything that
gets in the way of the interests of the corporations that control W3C, even
if doing so harms the public/user interest. The ironic thing of course is
that P3 and TRUSTe are fully compatible with and complementary to
eachother, technically, just at one point somewhat conflicting politically.

> For instance, I do not believe the W3C necessarily acts unethically with
> respect to providing a decentralized/self-empowering privacy technology, but
> one that does not preclude European privacy regulators from extending their
> privacy policies into cyberspace. The technology to date probably does not
> satisfy them with respect to their ability to do this, but it does not
> preclude them. The same could be said of content.

I think there's a clear difference, even if you are an engineer with a dim
sense of socio-politics, between a govt. use that protects a right, and a
govt. abuse that takes away a right.

> An intrinsic part of a self-empowering technology is that users can act on
> their own behalf. However, not all users will bother to read the fine print
> or be able to act on their own behalf all the time. A parent doesn't want to
> have to investigate every Web site their kid might go to. Nor read the
> privacy policy of every site they visit. (In some instances, you can design
> technology so you aren't dependent on the assertion of the other party,
> like anonymous cash, anonymizers and such. But sometimes you will want to
> have an assurance from the other party.) You have two alternatives: rely
> upon the government to uniformly set all policies, or create technology
> that supports mechanisms of individual control but can also support
> mechanisms of assertion, deferral, trust, and reputation.
> Governments could also try to relate or extend power through these
> mechanisms. Granted.

What I'm saying is that designers of these systems should be doing
whatever they can to make it especially difficult for governments to abuse
such systems, even if the price of that protection is some inefficiency or
complexity introduced into the system. I'm not sure exactly what those
things are at this point, frankly.

But the simple fact is that not all, by any means, of the people working
on Net specs take your "pure technical design with no social
considerations" view. As I pointed out, the IETF and other people working
on next-generation TCP/IP are very explicitly including end-to-end
encryption, and are doing so in direct defiance of government demands, even
going so far as to very carefully evade export laws by parcelling out the
design work so that the crypto portions are created overseas, and so on.
They are aware of and involved in the policy debate, and are purposely
doing things that undermine govt. abuse of the Net for surveillance and
censorship. If anything, W3C is highly anomalous in that it is NOT working
to keep the Net as free and open as possible. Incidentally, it is precisely
this habit of industry-controlled consortia and cartels of fucking the
public for the bottom line that has EFF and "The Boston Group" enraged at
the IANA/NSI "New IANA" Bylaws, and has us demanding the inclusion of
specific due process and freedom of expression clauses in the Bylaws. (See
our front web page for more info on that.) The problem is not unique to
W3C, but is characteristic of the kind of organization W3C is when such
organizations are not constrained by their charters from fucking the
public.

> I'm not necessarily defending PICS nor the W3C. I'm trying to clarify a
> number of issues. If I'm defending anything, I'm defending the necessity of
> meta-data and a rationale of supporting mechanisms of self-determination,
> which have an acknowledged side effect of being able to be abstracted or
> even co-opted, PICS being the best example of a poor situation in which this
> may happen.

OK! I think we understand each other a little better now. I guess my view
could be summed up as "Fine, and I agree that meta-data is necessary and
coming whether necessary or not. PICS was the wrong way to do it, because
it frontloaded the entire system with a censorship meme, and was geared to
censor more efficiently than it was geared to do anything else, which is
unlikely to be true of XML. I hope."


--
Stanton McCandlish mech@eff.org http://www.eff.org/~mech
Program Director, Electronic Frontier Foundation
voice: +1 415 436 9333 x105 fax: +1 415 436 9333 ICQ: 16631335
PGPfone: 204.253.162.21 ICQ Pager: http://wwp.mirabilis.com/16631335#pager


Date: Sep 27, 1998 (Sun, 0:40:10)

To: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

From: rene@pobox.com (Irene Graham)

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au, Stanton McCandlish <mech@eff.org>



[Note to Linkers: I apologise that some of the following gets into
technical aspects of PICS. However, I think the matter of whether or not
PICS (facilitated systems) operates as advertised and is fit for purpose is
relevant to consideration of whether it will ever be widely voluntarily
used by content providers (and thus be a "solution") and to the merits or
otherwise of mandating that content providers self-rate. I hope I've
succeeded in expressing the following in a manner (generally)
comprehensible to those of you interested in the PICS debate, whether or
not you're already familiar with the technical aspects.]

On Tue, 22 Sep 1998 20:14:01 -0400 "Joseph M. Reagle Jr."
<reagle@RPCP.MIT.EDU> wrote:

>At 12:45 AM 9/23/98 +1000, Irene Graham wrote:
> >PDF documents. If you are suggesting that most ordinary content
> >providers could set up and run their own label bureau in order to
> >self-label their PDF documents (and that there would be some point
> >in their doing that) then I think you are being ridiculous.
>
>This is what I believe. This is the way it should be done.

Unfortunately, I have considerable difficulty understanding either: (a) the
concept; or (b) what you mean - I'm not sure which is the case. Firstly, if
every content provider runs their own label bureau in order to self-label
their material, does this not mean that end-users who use PICS-compatible
software would need to have it configured to query millions of
pre-specified label bureaus? I've not seen any implementation of PICS that
enables this, nor do I see how it would be workable or practical from an
end-user point of view. Secondly, I'd question the practicality of content
providers having to set up and run a label bureau just to label their own
pages. Even *if* it is possible for them to run a label bureau from a
subdirectory of their ISP's domain, there are obviously cost (in purchasing
label bureau software, etc) and time issues.

Advocating this means of self-labelling seems to me to be advocating making
it even more difficult and costly, potentially impossible, for ordinary
people to publish content on the Web (if labelling becomes a pre-requisite
to speaking) or for them to make content available to a wide audience (if
PICS-facilitated blocking software ever becomes widely used - unlabelled
content is blocked). It seems quite contrary to what was claimed to be one
of the intents of PICS development - to provide content providers with an
easy means of voluntarily self-rating their content.

>Placing meta-data
>in content itself can be problematic 1) if the data at the URI is likely to
>change

If the data (content) at the URI is changed, the meta-data (label) embedded
in the content can be changed at the same time, can it not? If not, it
seems reasonable to assume that the data is being changed by something
other than a human. Technology in general is not sufficiently advanced for
something other than a human to decide on and specify new censorship
ratings, nor will it ever be. Technology cannot, and never will be able to,
read the minds of the censors. Hence, insofar as PICS is concerned, your
(1) is irrelevant.

>or 2) in properly discovering the meta-data. For instance, does a
>tree leaf have all the meta-data in its parents roots applied to it,

Obviously (if we are talking about a file in a subdirectory and PICS
labels, respectively) it shouldn't unless specifically so intended. I
*think* we are in agreement on that.

>in
>which case one must always climb the tree to understand all data associated
>with it? If I wish to say that the whole tree is X, except node A, its much
>easier to make two entries in a database, rather than go through the whole
>site embedding the meta-data in every node/leaf.

I agree, but you've built another strawman so you can then knock it down.
You're saying, or at least implying, that the only way node A could have a
different PICS label to the rest of the tree (using embedded labels) is to
label every single page on the site.

Have you read the PICS spec relative to embedding labels? It addresses this
specific issue, and requires neither labelling every single page nor
"always climb[ing] the tree". Part of the W3C PICS sales pitch to the Net
community was to tell content providers that they wouldn't need to
self-label every one of their web pages - that it would only be necessary
to embed a generic label in the default/home page of the tree and another
in the default page of node A (i.e. two entries, just as in the database
scenario you mention). See the W3C-recommended PICS spec at
http://www.w3.org/TR/REC-PICS-labels :

"... if a client is interested in a label for the document
"http://www.greatdocs.com/foo/bar/bat.htm", it can first check
whether the document has a specific label embedded in it. If
not, the client can ask for the document
"http://www.greatdocs.com/foo/bar/". The server sends back the
home document for foo/bar, which may be foo/bar/index.html,
foo/bar/home.html, or something else, depending on the server.
If that document contains an embedded generic label, then the
client may interpret it as applying to the document bat.htm.
If the client does not find a generic label there, it may
check further up the hierarchy, in
"http://www.greatdoc.com/foo/" or even at
"http://www.greatdocs.com/".

Web site operators who wish to provide specific labels for
their html documents are encouraged to embed them in the
documents. Those who wish to provide generic labels for their
sites or subparts of their sites are encouraged to include
them in the home documents at as many levels of the document
naming hierarchy as they think are appropriate. "

Quite clear, isn't it? No need to "embed the meta-data in every node/leaf".
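
To make that concrete, here is a rough sketch of what those two entries
might look like as embedded labels. The rating service, category letters and
values are purely illustrative (I've borrowed the RSACi-style form commonly
shown in PICS examples), so the REC-PICS-labels spec remains the
authoritative source for the exact syntax. A generic label in the home
document of the tree:

    <!-- generic label intended to cover the whole greatdocs.com tree -->
    <META http-equiv="PICS-Label" content='
      (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
       l gen true for "http://www.greatdocs.com/"
       r (n 0 s 0 v 0 l 0))'>

and a second generic label in the home document of node A (say foo/bar/),
overriding the first for that branch only:

    <!-- generic label intended to cover only the foo/bar/ branch -->
    <META http-equiv="PICS-Label" content='
      (PICS-1.1 "http://www.rsac.org/ratingsv01.html"
       l gen true for "http://www.greatdocs.com/foo/bar/"
       r (n 0 s 2 v 0 l 1))'>

Two entries, and - according to the spec excerpt above - a compliant client
walking up the tree would apply the appropriate one to every unlabelled page
beneath it.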

However, there is a problem. Subsequent to the specs being *recommended* by
W3C, some people have decided they don't agree with the recommendation, or
perhaps they never did. Whatever the case, the best means of implementing
PICS is clearly a subject of contention even amongst the PICS developers -
apparently some were intent on designing a censorship tool, others on
designing a Net user empowerment tool. It appears that W3C currently backs
the former faction, but doesn't want to make that too readily obvious to
the casual observer.

On the W3C PICS home page one sees a link, misleadingly named "Hints on
self-labeling", to http://www.w3.org/PICS/defaults.html. However, this is
not, in fact, hints on self-labelling, but instructions to developers of
PICS-compatible software telling them to make their software do something
other than what is specified in the W3C-recommended PICS spec. It states:

"2. Client programs should be able to quickly find the most
specific label for a particular URL. In particular, they
should:
[...]
- Not have to walk up the directory tree asking for all the
documents along the way. This idea of clients walking the
tree looking for embedded labels was suggested in one of the
PICS specs [1] and has quite rightly been avoided by all
client implementations that we know of.
[...]
If a client downloads a page located at the URL
<protocol>://<server>/dir>/<fname> but there is no label
embedded in the HTML document retrieved from that address, we
recommend that the client direct the following label bureau
query to <server>: GET /PICSLabels?u="<protocol>://
<server>/<dir>/<fname> "&s="<service-id>" HTTP/1.0 "

[1] The "one of the PICS specs" referred to above is in fact *the* PICS
spec recommended by W3C.
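
In other words, plugging the spec's own example URL into the above template
(and using the RSACi service-id purely for illustration), a client that
fetches bat.htm and finds no label embedded in that particular page is
expected to turn around and ask the greatdocs.com server:

    GET /PICSLabels?u="http://www.greatdocs.com/foo/bar/bat.htm"&s="http://www.rsac.org/ratingsv01.html" HTTP/1.0

a query that can only be answered if that server is itself running label
bureau software holding a label for that exact URL. The generic label
embedded in the foo/bar/ home document is never consulted.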

These hints/defaults clearly contradict the PICS spec recommendation. They
require ordinary content providers to label every single page, *unless*
they have the means of getting their labels placed in a third-party label
bureau - specifically, one run by their ISP, *if* ISPs ever commonly run
and maintain such a thing. I frequently get the impression W3C-related
people have lost sight of (and consideration for) the ordinary people who
make up the bulk of the Net's content providers. W3C appears focussed on
industry/commercial providers - those able to run their own web servers and
related label bureaus - that is, its members.

PICS is technically flawed as a result of this. It makes it impossible for
any content provider who chooses to self-rate their content by embedding
labels in accordance with the spec to know what will happen when someone
tries to access their web pages. It depends on whether the end-user's
software developer has complied with the recommended specs, or the "hints",
or has done something else again. The last time I checked, a considerable
number of people who'd self-rated with RSACi using the generic (default)
labels promoted by W3C and RSACi, etc., had (obviously inadvertently)
succeeded in blocking their entire site other than their default/home page,
because end-user software (e.g. MSIE 3 and probably 4) doesn't comply with
the PICS spec.

Well, perhaps "flawed" is not entirely the right word. The PICS critics can
be excused for holding the view that PICS was never actually intended to
enable content providers to easily self-rate. The specs and publicity told
them it would be easy to do so by embedding labels, without having to rate
every "tree leaf", but no PICS implementation has complied with the specs
(afaik, and according to the authors of the "hints") and the developers are
promoting something other than what the specs say.

Why? If the view of these developers is justified, why haven't the
W3C-recommended specs been amended accordingly? I contend that it's because
if W3C ever made patently clear how PICS implementations are actually
intended to operate, there would be even fewer people saying self-rating
with PICS-facilitated systems is a reasonable idea, less (apparent but
ineffective) take-up of self-rating, and even more opposition to it being
mandated.

At the stage at which W3C has stopped development of PICS, we have a
situation where, if a government mandates self-labelling and a content
provider rates their entire site - in accord with the W3C recommendation
(and RSAC's instructions) - with the highest (most restrictive) rating for
simplicity's sake, people trying to access any page on the site other than
the home page will get a message saying the page has not been labelled. It
would be nice to think that the authorities acting on complaints would
readily understand that the page *is* labelled; it's just that the
technology does not work as allegedly intended, nor as advertised.

>In P3P there is a
>compromise of sorts, where within the HTML you can provide a link to an
>XML/RDF file, but cannot embed it in the content itself.

In the PICS specs, there is no compromise. It was clearly intended that
content providers could provide self-ratings without embedding
PICS-compliant meta-data in *every* page. Furthermore, frankly, if I were
inclined to self-rate I'd consider it one hell of a lot simpler to amend a
label embedded in the page I was amending than to have to update some other
file as well.

P3P is a different matter, as I understand it. It's unlikely, I'd have
thought, that changing the contents of a page would have any effect
whatsoever on privacy policies of the site as specified in a separate file.
Not so with PICS labelling.

>Furthermore, I believe in the future, we will move towards content
>(HTML/XML) and presentation (CSS/XSS) being stored in databases. Results
>returned from a GET to a URI will be dynamically generated based on
>negotiation with the client. For instance, the client might signal it's a
>B&W, low bandwidth, high latency (wireless) client in Cambridge, please send
>me the local movie listings. All of this happens by way of "meta-data." We
>already see this implemented in many advanced sites.

So? As I said above, technology cannot read the minds of censors. Even
humans have significant difficulty doing this (as anyone who's read the
Australian censorship/classification "guidelines" will know). How is
dynamically generated content to be rated? It needs to be rated before it
can be labelled.

> >This is one of things that exasperates me about discussions with PICS
> >advocates. They frequently seek to avoid addressing the issue raised, which in this
> >instance was that if governments mandate self-rating and labelling now, PDF
> >could become very popular amongst those not desirous of self-censoring in
> >accord with someone else's values because there are no tools now available to
> >enable them to self-rate PDF documents.
>
>I agree.

Oh, good. Pity you didn't say that in the first place instead of implying
it's possible to self-label PDF documents now because you "believe there is
label bureau software out there."

>Furthermore, I've made the point that PICSRules does potentially
>enable governments to promulgate regulations,

I'm pleased to hear that you've come to understand that, particularly given
that about 9 months ago, in response to my and other people's criticism of
PICSRules, you said:
"With respect to the contention that PICSRules will make it
easier for governments to censor their populace. I disagree
[...]
PICSRules doesn't make [PICS] any easier to scale for
governments, the only people it gives more power to are
smaller organizations and individuals in exchanging
preferences."
http://www.w3.org/People/Reagle/stuff/PICSRules_Response.html

>but not in the way most people were ranting about [1,2].

Perhaps in another 9 months, you'll also understand what "most people" were
"ranting about".

>[1] http://www.w3.org/Talks/9803-DCSB/slide15-0.htm

That seems to have little relevance. Did you mean slide 16, which says:

"In the PICSRules debate, critics missed the true danger:
regulations on the UI. As you promote configuration and
preference expression to the UI (good things), governments may
shift their strategy from infrastructure to UI.
- servers will not use PICSRules, but clients will - that was
the point. [1]
- easy to require a content or privacy configuration be
available or activated in an extensible configuration
architecture. [2]"

Point [1] is utter nonsense. The *very first* implementation of PICSRules
was in IBM's Web Traffic Express proxy server. Servers *will* use
PICSRules. Who is the editor of the PICSRules spec? None other than Martin
Presler-Marshall of IBM. IBM's (PICSRules-compliant) proxy was announced
before the PICSRules spec had even received formal W3C approval. As I said
before, though I didn't name IBM, the advertising for IBM's proxy says it
"provides a consistent and focused filtering capability and removes it from
the user's control." (IBM NR 9/12/97).

Insofar as Point [2] is concerned, I can only assume you are way behind in
considering the issues relative to PICS. The potential for
government-mandated configuration of software (or manufacturer-set defaults
which may be difficult for users to know of, or to figure out how to
change) was first raised a long, long time ago.

Furthermore, to mimic the mantra of the PICS advocates: "governments don't
need PICSRules to mandate a content configuration be available or activated
in an extensible configuration architecture, they could already do that
with PICS".

Insofar as what people were "ranting" about 9 months ago is concerned, I
refer you back to what I said then about PICSRules, proxy servers and
global rating systems: http://rene.efa.org.au/liberty/picsgrate.html
Perhaps if you read that again now that you've come to realise that
PICSRules does indeed present "true danger", you might also realise that
what you recollect to have been "ranting" was not and is not unwarranted.

All in all, I'm increasingly of the view that PICS was probably, initially,
short for "Platform for Internet Censorship Systems". Renaming it to
"Platform for Internet Content Selection" made it more saleable, but is
less factual.

Irene

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Irene Graham, Brisbane, Queensland, Australia. PGP key on h/page.
The Net Labelling Delusion: <http://www.pobox.com/~rene/liberty/label.html>
"...PICS-type systems...might have to be enforced" Peter Webb, ABA, June 96
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Date: Sep 30, 1998 (Wed, 22:57:56)

To: "Joseph M. Reagle Jr." <reagle@rpcp.mit.edu>

From: rene@pobox.com (Irene Graham)

Subject: Re: Censoring the Internet with PICS

Cc: link@www.anu.edu.au, Stanton McCandlish <mech@eff.org>



On Sat, 26 Sep 1998 17:18:34 -0400 "Joseph M. Reagle Jr."
<reagle@rpcp.mit.edu> wrote:

>[This will be my last response for the time being, I tried to focus on the
>technical issues for completeness.]

OK, but it isn't complete yet :-)

>At 12:40 AM 9/27/98 +1000, Irene Graham wrote:
> >Unfortunately, I have considerable difficulty understanding either: (a) the
> >concept; or (b) what you mean - I'm not sure which is the case. Firstly, if
> >every content provider runs their own label bureau in order to self-label
> >their material, does this not mean that end-users who use PICS-compatible
> >software would need to have it configured to query millions of
> >pre-specified label bureaus?
>
>They would run the label bureau.

That doesn't seem to answer my question. However, from subsequent comments
and further investigation, it now seems that you and I were talking about
two different types of "label bureaus".

I was talking about a "label bureau" in the only sense I'd heard of (or at
least understood) it before, i.e. a third-party (or similar) label bureau
where browsers have to be pre-configured to ask for labels from a
pre-specified label server located (generally) somewhere unrelated to the
location of the requested web page.

However, you, I gather, are talking about the masses of content providers
running their own web server which could, if designed to be
PICS-compatible, include a facility to set up and maintain a "label
bureau". Such a web server could consult its label bureau to locate the
relevant label and deliver it along with the content, as separate items, in
the same HTTP response. (I had not understood this before.)
Whilst it thus may in theory be possible for the masses of content
providers to "run the label bureau", it's way beyond their means. Not only
can few afford a permanent connection to the Net to run a web server, but
most are simply not sufficiently technically literate to do so. For the
foreseeable future, the notion that they can and will is a pipe dream.

[...]
>There are a number of server side label bureaus (the PICS page lists some).
>You are right that they aren't widespread!

The PICS site lists two PICS compatible web servers, IBM's and W3C's. Odd
that no others have been added to this list since Jan 1996 given that the
maintainers of these lists agreed to be fair/add updates promptly.
Obviously any other developers aren't interested in their product being
freely advertised in the most obvious place to look for info on PICS
compatible products. I'd be most interested to know of other such products.

[...]
> >Advocating this means of self-labelling seems to me to be advocating making
> >it even more difficult and costly, potentially impossible, for ordinary
> >people to publish content on the Web (if labelling becomes a pre-requisite
> >to speaking) or for them to make content available to a wide audience (if
> >PICS-facilitated blocking software ever becomes widely used - unlabelled
> >content is blocked). It seems quite contrary to what was claimed to be one
> >of the intents of PICS development - to provide content providers with an
> >easy means of voluntarily self-rating their content.
>
>Both mechanisms are supported.

Supported in the PICS spec, yes, but not in reality. It was decided at the
first PICS developers workshop (June 96) that client/browser software would
not look up the hierarchy for generic embedded (self) labels.
http://www.w3.org/PICS/picsdev-wkshp1.html
Nevertheless, six months later the W3C news release announcing that the
PICS spec had W3C's highest stamp of approval and was stable etc still
claimed that content providers could embed generic labels.
http://www.w3.org/Press/PICS-fact.html. Of course, they can. However,
browser-type software doesn't take the slightest bit of notice, apparently
because the PICS developers had previously decided it shouldn't/wouldn't be
designed to do so.

No doubt the W3C has a quite plausible explanation for why the PICS spec
says something other than what the PICS developers decided months
previously. However, whatever it is, it makes no difference to the fact
that governments and anyone else who read the W3C PICS publicity, the
non-technical stuff, are still told that PICS is "an easy-to-use labelling
platform".

One might question whether self-labelling was ever seriously intended to be
easy for the masses, given that the PICS Technical Committee Charter, dated
the same day as the initial press release (11 Sep 95), says:

"Where possible, we would also like to achieve the following
secondary goals:
[...]
- make it easy for end-users to produce and distribute new labels.
[...]
Whenever a secondary goal interferes with speedy progress on the
group's primary charter, the secondary goal will be deferred."
http://www.w3.org/pub/WWW/PICS/TechCharter.html

It currently seems that "deferred" was a euphemism for "abandoned".

[...]
> > If not, it
> >seems reasonable to assume that the data is being changed by something
> >other than a human. Technology in general is not sufficiently advanced for
> >something other than a human to decide on and specify new censorship
> >ratings, nor will it ever be. Technology cannot, and never will be able to,
> >read the minds of the censors. Hence, insofar as PICS is concerned, your
> >(1) is irrelevant.
>
>I don't quite follow, but consider the inline expansion or aggregations of
>information. Team calendars will be generated from the XML encoded
>individual calendars of all the team members. Or if I change the icon on all
>of my pages to something racy. I now have to go and change all the HTML
>pages it was on. Whereas, if the server served the icon with the icon's own
>meta-data, I only have to change one thing in the database rather than go
>edit every page it sat on. The inline expansion of distinct Web resources,
>likely to change, does get tricky if the meta-data isn't properly served.

OK, again we were talking about somewhat different aspects. I agree with
your comments about ease of content provision, but in the context of
censorship/PICS labelling something/someone has to rate/classify the
resultant output. Here in Oz whether speech gets banned or not, or what
classification/rating applies to it, depends not only on the text and
images themselves, but also on such things as where in a publication/on a
page an image or text is located, how big it is, what impact on the viewer
it has, what the context is, what other text accompanies it, etc etc etc.
If the (Aust Federal/State) governments continue with their stated intent
to have off-line censorship laws apply to on-line content then each page
potentially needs to be looked at to decide how it should be
classified/rated, or even if it may be legally distributed.

I'm quite baffled as to how this is to be achieved for dynamically and/or
automatically generated content. None of the PICS advocates seem to have
considered this, let alone proposed a solution. I very much doubt that
governments enthused about self-rating have either.

[...]
> >PICS is technically flawed as a result of this. It makes it impossible for
> >any content provider who chooses to self-rate their content by embedding
> >labels in accordance with the spec, to know what will happen when someone
> >tries to access their web pages. It depends on whether the end-user's
> >software developer has complied with the recommended specs, or the "hints",
> >or has done something else again.
>
>Yep, the spec is somewhat weak because of this and issues related to
>relative URLs. That's the way of standards and technology.

I'd suggest that the following (from an article by Paul Resnick and Jim
Miller, co-chairs of the PICS Technical Committee, in Wired, August 1996)
is also relevant:

"PICS may be the silver lining in the content regulation cloud.
There had been proposals for an even more general infrastructure than PICS
to handle metadata ..., but industry consensus was a rainbow on the
horizon. Moral crusades are a dangerous way to conduct industrial policy,
though; the next cloud may bring only rain."

Almost visionary, given that in the following month mandatory rating was
first announced, in the UK.

Irene

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Irene Graham, Brisbane, Queensland, Australia. PGP key on h/page.
The Net Labelling Delusion: <http://www.pobox.com/~rene/liberty/label.html>
"...PICS-type systems...might have to be enforced" Peter Webb, ABA, June 96
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~