Re: General Exception for Essential Purpose

Another way of analysing the situation is to impose the following model:

1. There is a set of abstract structures and semantics, accompanied by
text, auditory/graphical data and executable code, which together
constitute an interactive application, for example a web site with
interactive features. There is parallelism in the system wherever, as in
the case of graphical/auditory material and its text equivalent,
automatic conversion from one form into the other is infeasible.

All of these materials are stored on a server and constitute a data model.
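To make this concrete, here is a rough TypeScript sketch of such a data
model. All names are hypothetical, invented purely for illustration;
nothing here is prescribed by the guidelines. The point is the
parallelism: the text equivalent is authored alongside the
graphical/auditory material precisely because it cannot be derived
automatically.

    // Hypothetical sketch: a server-side data model in which graphical
    // or auditory material is stored alongside a hand-authored text
    // equivalent, because automatic conversion between the two forms
    // is infeasible.

    interface TextEquivalent {
      summary: string;           // short label, e.g. for an image
      longDescription?: string;  // fuller prose equivalent, where needed
    }

    interface MediaResource {
      uri: string;                 // location of the graphical/auditory data
      mediaType: string;           // e.g. "image/png", "audio/ogg"
      equivalent: TextEquivalent;  // the parallel form, authored by a human
    }

    interface InteractiveApplication {
      structure: string;        // semantically rich markup (e.g. XML)
      media: MediaResource[];   // parallel graphical/auditory materials
      behaviour: string[];      // references to executable code
    }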

Downstream processing (in the server itself, in a proxy, in a user agent,
or any combination thereof) converts these materials into, effectively, a
user interface on the ultimate input/output device.
Input flows upward, through the necessary abstractions (link activations,
form submissions, database queries, or user interface events), to
whichever component can handle it or can generate a higher-level
abstraction that is forwarded further along the chain. Note that the data
model may be reflected down the chain, so to speak, into the ultimate
user agent or into a proxy server.
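A minimal sketch of that upward flow, again with names I have invented
for illustration: each link in the chain either consumes an input event
or lifts it to a higher-level abstraction and forwards it onward.

    // Hypothetical sketch: input events flow up a chain of processing
    // links. Each link either handles an event or lifts it to a
    // higher-level abstraction and forwards it to the next link.

    type InputEvent =
      | { kind: "uiEvent"; detail: string }  // e.g. a key press or click
      | { kind: "linkActivation"; target: string }
      | { kind: "formSubmission"; fields: Record<string, string> }
      | { kind: "databaseQuery"; query: string };

    interface ChainLink {
      // Return null if the event was handled here; otherwise return the
      // (possibly higher-level) event to forward to the next link.
      process(event: InputEvent): InputEvent | null;
    }

    function propagate(chain: ChainLink[], event: InputEvent): void {
      let current: InputEvent | null = event;
      for (const link of chain) {
        if (current === null) return;  // an earlier link handled it
        current = link.process(current);
      }
    }

    // Example: a user-agent link that lifts a raw UI event into a link
    // activation for a link further up the chain to handle.
    const userAgentLink: ChainLink = {
      process(e) {
        if (e.kind === "uiEvent" && e.detail === "Enter") {
          return { kind: "linkActivation", target: "#current-focus" };
        }
        return e;
      },
    };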

Given the nature of the web as it is currently evolving, each aspect of
the processing that produces a final rendering of output, and of the
handling of input, can be divided among the various links in the chain:
from the server, optionally through proxy servers (which may or may not
be running on the user's computer), through transformation processes, and
finally to the rendered output, with the flow reversed for input.
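Hypothetically, one might model that division as a pipeline of stages,
each of which could be sited at any link in the chain (the names and the
Site type below are mine, not anything in the guidelines):

    // Hypothetical sketch: output processing as a pipeline of stages,
    // each of which may be sited at any link in the chain. The rendered
    // result is just the composition of the stages; it does not depend
    // on where along the chain each stage happened to run.

    type Site = "server" | "proxy" | "userAgent";

    interface ProcessingStage {
      name: string;                    // e.g. "apply style rules"
      site: Site;                      // where this stage runs
      apply(content: string): string;  // one step toward the rendering
    }

    function renderOutput(stages: ProcessingStage[], source: string): string {
      return stages.reduce((content, stage) => stage.apply(content), source);
    }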

The 2.0 guidelines do not specify where each aspect of such processing
should take place. Thus multiple interfaces can be constructed by the
content creator, by carrying out appropriate processing of the
semantically and structurally rich source to generate (or constrain) the
final rendering, and by delegating the handling of input to the user
agent through relevant abstractions. Alternatively, much of this work can
be delegated directly to a proxy server or to a user agent. Thus,

1. The content designer can offer choices as to how much input/output
processing, and thereby user interface construction, is performed under
his/her control. This leads to multiple interfaces.

2. These options should include a choice whereby the semantically rich
data model is sent, so far as possible, further down the chain of
processing. This allows the user agent (or a proxy server or other
software operating under the user's control, in response to preferences
that cannot be satisfied by the interfaces, if any, made directly
available by the content designer) to carry out the user interface
construction, for instance via style rules; a sketch follows this list.
Input should in this case be handled through high-level,
device-independent abstractions.
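Here is the promised sketch of option 2, with hypothetical names
throughout (a real user agent would consult CSS or XSLT style sheets
rather than a hard-coded table): the user agent selects style rules
according to the user's preferences and applies them to the semantically
rich model sent down the chain.

    // Hypothetical sketch of option 2: the user agent constructs the
    // interface by applying style rules chosen from the user's
    // preferences to the semantically rich model.

    interface UserPreferences {
      modality: "visual" | "auditory" | "braille";
    }

    interface ModelElement {
      role: string;  // semantic role, e.g. "heading", "navigation"
      text: string;  // content, or a text equivalent
    }

    type StyleRule = (element: ModelElement) => string;

    // A hard-coded table stands in for real style sheets here.
    const rules: Record<UserPreferences["modality"], StyleRule> = {
      visual: (e) => `<div class="${e.role}">${e.text}</div>`,
      auditory: (e) => `[speak as ${e.role}] ${e.text}`,
      braille: (e) => `${e.role.toUpperCase()}: ${e.text}`,
    };

    function constructInterface(
      model: ModelElement[],
      prefs: UserPreferences,
    ): string {
      return model.map(rules[prefs.modality]).join("\n");
    }

    // The same model yields a different interface per user preference:
    // constructInterface(model, { modality: "braille" }) versus
    // constructInterface(model, { modality: "visual" }).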

Option 2 is what Kynn has referred to as a "backup scenario". What I fail
to grasp is why, in principle, the resultant interface, generated from
high-level abstractions through software, must be qualitatively inferior
to a custom-designed interface (for a specific modality or output device)
made available by the content developer. Thus I wouldn't regard scenario
2 as in any way a second-rate solution, and it is quite possible to
dispense with scenario 1 entirely by leaving user interface construction
outside the author's control; but the guidelines should not restrict
content developers in deciding whether they will offer 0, 1, 2 or more
interfaces in addition to their high-level markup and semantics, their
equivalents, etc.

Note: the usual disclaimer applies; these are personal opinions.

Received on Saturday, 28 October 2000 21:40:36 UTC