RE: Conversation about "Web Applications Architecture" additional background for TAG discussion

I have TAG action-517 "figure out what to say about scalability of access...", but I think the TAG's primary focus in the past has been on things like the load of DTD and URI fetches that W3C suffers, or third-party deep linking to images on web sites not prepared for the hit load on non-monetized content.

Your comments are also relevant, in that those conversations haven't given much priority to the needs of rural ISPs trying to support low-bandwidth links.

However, I think the place to take your concerns at this time is the IETF HyBi working group (http://tools.ietf.org/wg/hybi/), which is specifically working on WebSocket and its follow-on. For example, as far as I can see, their requirements document (http://tools.ietf.org/html/draft-ietf-hybi-websocket-requirements) has no scalability requirements, nor any discussion of the always-on bidirectional connections and other issues you note.

Getting the working group's requirements document to note the issues you raise as "important considerations" would be a first step toward making sure the working group addresses the issues you're concerned about.

Larry
--
http://larry.masinter.net


-----Original Message-----
From: Eric J. Bowman [mailto:eric@bisonsystems.net] 
Sent: Monday, February 21, 2011 12:48 AM
To: Larry Masinter
Cc: www-tag@w3.org; Mark Nottingham
Subject: Re: Conversation about "Web Applications Architecture" additional background for TAG discussion

Thanks for posting that.  I'm concerned that the nature of the deployed
infrastructure is getting overlooked in this debate.  My decade-old
desktop firewall/router is an 8"x4"x1.5" box with a piddly CPU
running embedded Linux, with DNS and squid caches.  I've flashed it
once, to patch BIND.

How large and expensive will its successor need to be, and how often
will it need updating, to participate in this new architecture that
requires javascript and eschews end-to-end protocols?  Even if I
thought the existing architecture needed replacing, I'd want the new
architecture to be scalable using $100 caching appliances.

I've been configuring SOHO LANs to share DNS/HTTP using caches and/or
proxies, to save scarce and expensive rural bandwidth, since the
mid-'90s.  I'm hardly alone in doing so (and not just in rural SOHO
settings).  What happens when all the browsers that used to be
configured to share resources like this no longer can?  This regression
in scalability concerns me at the ISP level.
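
To be concrete about what I mean, the whole setup is a few lines of
configuration on the box.  A minimal sketch (squid-style directives
from memory; exact names and defaults vary by version, and something
like dnsmasq or BIND handles the DNS side):

    # /etc/squid/squid.conf -- shared HTTP cache for a small LAN (sketch)
    http_port 3128
    cache_mem 16 MB                            # tiny box, tiny memory cache
    cache_dir ufs /var/cache/squid 512 16 256  # 512 MB on-disk cache
    acl localnet src 192.168.1.0/24            # the LAN behind the router
    http_access allow localnet
    http_access deny all

Point every browser on the LAN at port 3128, and a page fetched once
over the slow link gets served locally to everybody else.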

I'll let you IETF folks worry about the macro implications I can't
speak to, but I can speak to the local impact -- here where the Cisco
2501 I purchased in 1994 is still routing IP traffic for the third
successor ISP to the one I used to own/operate (running an ISP is what
led to my interest in Web architecture, more so than developing
websites).

Broadband isn't a pretty picture out here -- a few popular destinations
adopting WebSockets would require my ISP to make systemic upgrades to
keep their antenna sites from being swamped as non-closing connections
become the rule instead of the exception.  Will they need four antenna
sites to cover the same number of customers in an area previously
served by one?  It isn't just a matter of plugging in more backbone to
handle the increased bandwidth, which is cheap; new antenna sites cost
beaucoup bucks.
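
The behavior I'm worried about shows up in just a few lines of client
code.  A sketch against the draft WebSocket API (the endpoint is made
up; any popular destination would do):

    // One open tab = one TCP connection that never closes.
    var ws = new WebSocket("ws://example.com/feed");  // hypothetical endpoint
    ws.onmessage = function (event) {
      // The server pushes data whenever it likes; none of this is cacheable.
      console.log(event.data);
    };
    // No request/response cycle, no Cache-Control, nothing for a shared
    // proxy to do.  Multiply by every tab on every machine behind one
    // antenna site.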

Maybe these WebSockets destinations will be faster, but not for me
unless my access costs go up just so my ISP can hold performance
steady at existing levels.  Or I pack up and move 20 miles, into
Comcast territory.  :-(

BitTorrent's always-uploading behavior is a problem that's limited to
BitTorrent users; extending this behavior to everyone's browsers, all
the time, would be devastating to wireless networks, whose antenna
sites are upstream chokepoints.  I've followed the discussions about
long polling, but I haven't seen it debated in terms of whether the
deployed infrastructure can even handle its proliferation.
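
For comparison, here's roughly what long polling looks like on the
client (an XMLHttpRequest sketch; the URL and handlers are
placeholders).  Each connection does eventually close, but it is
reopened immediately, so the upstream link sees a connection held open
essentially forever:

    var lastSeen = 0;
    function handle(text) { /* update the page, advance lastSeen */ }

    function poll() {
      var xhr = new XMLHttpRequest();
      // The server holds this request open until it has something to say.
      xhr.open("GET", "/updates?since=" + lastSeen, true);  // hypothetical URL
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
          if (xhr.status === 200) { handle(xhr.responseText); }
          poll();  // reconnect right away; the link is never really idle
        }
      };
      xhr.send();
    }
    poll();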

My rural-ISP background and knowledge lead me to question whether
those pushing to abandon the existing Web architecture have ever wanted
for bandwidth, or understand how difficult their vision of the Web will
be to implement outside first-world metropolitan areas.  How much more
expensive will standard-issue Web/DNS accelerators become if they need
to be kept up to date with the latest popular javascript libraries in
order to function?  Will small ISPs be able to compete, or will these
new costs drive them from the market, or will a shift to noncacheable
traffic destroy the feasibility of their business models outright?

Wireless broadband in particular is engineered around the assumption
that protocols are end-to-end, with connections that close -- the
technology is incompatible with bidirectional protocols at scale.
There are one or two proprietary solutions to the problem, but ISPs
shouldn't be required to be early adopters of those systems just to
stay in business.  The big players will be able to, of course, but I'm
a huge advocate of competition in the ISP market (if we had more of
that in America, we wouldn't need to legislate net neutrality).

>
> I'm not sure that I want to argue against it, but I do think it needs
> to be openly discussed and understood by the communities.
> 

I'm barely tech-savvy enough to raise the issues I have; I'd just feel
better knowing that the ability of the deployed infrastructure to
handle this new architectural direction is being considered, instead of
the focus staying on the browser-vendor / website-developer issues it
raises.
Will this architectural shift put independent ISPs out of business,
further consolidating the market under the control of monopolistic
conglomerates who don't hesitate to engage in anticompetitive behavior
or gouge their customers?  Surely that isn't better for the community as
a whole.

-Eric

Larry Masinter wrote:
>
> I thought I would send out a summary of conversation with Mark
> Nottingham about "Web Applications" for additional background for our
> conversation about the IETF/IAB plenary in late March.
> 
> ===============
> 
> Mark:
> 
> For me, the crux of the issue in 'Web Applications Architecture' is
> that the browser vendors are "dumbing down" the Web to a much
> lower-level interface, whether it be getting information onto screens
> (Canvas) or onto the wire (WebSockets). 
> 
> AIUI this is because they see such interfaces as having greater
> interop and therefore less QA headache, and they have an expectation
> that 3rd party javascript libraries (as mobile code) will build on
> top of these low-level interfaces to provide additional
> functionality, rather than standards.
> 
> As such, this is a titanic shift; Open Source is taking the place of
> Open Standards. Unintended reuse is probably the biggest potential
> casualty, in that you can no longer count on any semantics in the
> protocol or format; that's all buried in a library which you have to
> recognise and then extract information from. 
> 
> It's also a big change for vendors, whether they be big companies,
> startups or open source projects; what remains to be seen is how
> they'll adapt (or not). For example, while Google today has the
> resources to spider a severely fractured Web, it didn't at the start,
> because it was a beneficiary of the semantics in HTTP and HTML, and
> the simpler nature of the Web then.
> 
> It's obviously also a big change for the W3C and IETF, one that
> perhaps we've walked into without fully thinking through the
> repercussions. I'm not sure that I want to argue against it, but I do
> think it needs to be openly discussed and understood by the
> communities.
> 
> ============
> 
> Larry:
> 
>  I'm not sure the problem Mark talks about hasn't always been with us
> -- AOL instant messaging, Flash, the web itself grew up because
> people could just ship implementations (plugins or installable apps)
> that used TCP and UDP and deploy them.  Maybe this is a scale issue?
> Maybe the new design pattern we want to encourage is just "please
> support unintended reuse by making URIs for things that are
> reproducible session state", or "in addition to your UI, always
> provide a net-accessible API to support reuse"?
> 
> ============
> 
> Mark:
> 
> I think there's a lot of truth in what you suggest, but it's a bit
> more. In the past, someone who wanted to get one of these
> applications deployed needed to convince someone to install some
> software. Now you can deploy increasingly capable software without
> realising it, just by clicking a link. 
> 
> WebSockets doesn't use URIs meaningfully, and it allows you to
> construct bespoke protocols. I guess what I'm concerned about is that
> these new efforts seem to be regressing; rather than building on top
> of the well-understood and shared semantics of HTTP, HTML and URIs,
> they're encouraging developers to re-invent them by providing much
> lower-level APIs. That makes things easier for those with a QA focus,
> but I wonder if it's better for the community as a whole.
> 
> 
