
TPAC 2007: Session 3: Future Formats: HTML5 and XHTML2




Steve Bratt: Janet Daily just reminded me that there were some bloggers who were going to be blogging about this meeting, but many of you out there are bloggers and we certainly welcome you all to do that, let people know what we're talking about in here, tell it like you see it, and again it's just another part of our efforts to improve our outreach to the outside community, to the people outside of this room.

>> It's my pleasure to introduce Al Gilman, who is going to be leading a panel of experts, hopefully with contributions from you in the audience, on the subject of future formats: HTML5 and XHTML2.

Al Gilman Thank you very much, Steve. HTML is at the center of the Web; it has been the glue of the user's experience since the Web exploded in its big bang. The role of leading the Web to its highest potential is use it or lose it: if we don't work on these things and maintain them, we will lose the position of leading the Web. In fact W3C is very busily engaged in bringing new and improved functionality to the user experience in a variety of ways centered around HTML, so we're talking about the work that is going on in the HTML Working Group and the XHTML 2 Working Group, some new capabilities that are being integrated, and the growth in capabilities.

Al Gilman So, here, I have Anne van Kesteren (Opera Software), Henri Sivonen (Mozilla Foundation) and Richard Schwerdtfeger (IBM). I also have regrets from Rotan Hanrahan (Mobileaware Ltd); he had to beg off at the last minute because of a corporate emergency, he's OK, but he's not here. I'm not going to answer questions for Rotan, but I will take two minutes to focus on this challenge. ... Anne, two minutes?

Anne So Henri has all the good points about why we love HTML5, so I will try to say something about why we started. In 2004 there was a Web Applications Workshop held by the W3C, and I found a quote in the logs there where Tantek Çelik said we want to do HTML4 errata, and Steven Pemberton replied that if you want that done you have to do it, so we did.

We created the WHATWG, and tried working out how HTML actually worked in browsers, and also spec'd new features that made it work for Web applications, etcetera.

In 2006, the end of 2006, W3C approached the browser vendors again, and the response was agreement, and a new HTML Working Group was formed, and we are currently working on this HTML5 spec ourselves, together with the WHATWG.

This is where we are today.

Al Gilman Thank you Anne. Rich, two minutes.

Rich About four years ago we started on a journey to address Web 2.0 accessibility, and back then it was called XHTML accessibility. We've created some cross-cutting technologies that we started with the XHTML 2 Working Group, and we built a plan that would allow us to add semantics to Web content and make it fully interoperable with assistive technologies, these are things like screen readers etcetera, and we tied all this to business value. One of the things I think people should look at is what we did in the accessibility work, and how we built a community around this to make one of the most successful accessibility efforts in as long as I've been in the business. We did the technical work on what works in today's and tomorrow's markup, we evangelized it, we participated in multiple standards bodies, we collaborated with browsers, developers, content providers and assistive technology vendors to produce an end-to-end solution, and we built critical mass. It's probably one of the greatest advances in accessibility in the last ten years; it's basically bringing the accessibility and usability of the desktop to the Web. There's a lot of mainstream uses of this technology, both in our middleware and in the client, so accessibility is a good barometer of problems and successes. One of the things we talk about with both HTML and XHTML: both groups are doing excellent work, I think both groups can learn from each other, and I'd like to see the two groups merged, and build a common strategy going forward, because one group looks at what I need to do in the browsers today, and the other group is looking at what I need to do tomorrow, and how it actually ties in with the enterprise, and I think that's probably one of the differences between the two groups. I can tell you that working with both groups has been an outstanding experience; they have some wonderful people. Thank you.

Al Gilman Thank you Rich. .... Henri?

Henri I'm going to say a few points about why I think the HTML5 effort is interesting and might actually work. First, this is a multivendor effort; the top browser vendors are on the Working Group, so there's a real chance of interoperability when the spec is well-defined. Second, the spec documents how to consume Web content both new and old, and that lowers the cost of consuming Web content, and the benefits of lowering that cost go beyond developing browsers. I've been working on an HTML5 parser in Java, and Anne and others have been working on an HTML5 parser in Python, and we're already seeing non-browser applications picking up these libraries and doing things with HTML where previously it was too complicated and costly to develop parsing for HTML. People who were in the middle, not in the browser, just took HTML, used it as a blob, and then put it in another pipe [?], and now they can actually do interesting stuff with it. The third point is that HTML5 protects investment in HTML and the Web, and not just in browser implementations but in deployed Web apps. If you have a lot of HTML already out there in a Web app, and then you want a new feature, because people want new features, we should provide those features in HTML in order to keep HTML competitive. When you can incrementally start deploying a new feature without having to rewrite your Web app, HTML5 protects the investment in the existing Web app code, and this also helps (we hope) with the adoption of HTML5, since it's able to plug in to this existing ecosystem and network of tools.
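
[Editor's note: the kind of non-browser reuse Henri describes can be sketched in a few lines of Python. This is not one of the HTML5 parsers he mentions — the standard library's tolerant `html.parser` stands in for them, and the `LinkExtractor` class and its input are invented for illustration — but it shows the shape of the idea: pulling structured data out of sloppy real-world markup instead of treating it as a blob.]

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, even in sloppy markup.
    (Invented example class; not part of any HTML5 parsing library.)"""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

extractor = LinkExtractor()
# Unquoted attribute values and missing end tags: still recoverable.
extractor.feed("<ul><li><a href=/home>Home<li><a href='/about'>About")
print(extractor.links)  # ['/home', '/about']
```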

Al Gilman Thank you Henri. Rotan Hanrahan chairs the Device Description Working Group.

Rotan WAP 1 was sort of a scandalous failure, and the second-generation approach involves knowing what you're sending to, and this is at the margin where One Web does not equate with one look and feel. They're applying a separation of content and presentation that's industrial strength; this is a corner case on the commercially active Web for adaptation of the presentation and input event bindings and things like that. That group likes to look at what's in the content management system, have it engineered as data. We have existing practice that there's the Webtop that reaches people, and that's in HTML, which has a low level of orthodoxy, and XML is used widely in the enterprise back office for integrating data laterally across an organization, but that's a case where you know the processor that's going to process it, whereas you go across the public Internet to the Webtop, and you take the processor the user has... The people who are adapting for the mobile context like things that are strictly XML to work from, so they would like to build a content repository and serve from that, and then they'll serve to whatever the device likes, but they like to do their source in XML, which is partly related to how we have two groups.

Al Gilman At this point we're ready to take questions. OK, let me try a question. I want to ask the panel, do you think there's benefit to more orthodoxy in the public Web, the Web that goes to people, and is there a way to get there? Does anyone want to take a swing at that?


[audience laughter]

Henri More orthodoxy?

Al Gilman I'm saying: the Web as she is spoke in practice versus the Web as she is spec'd. In W3C there are real systems that run with finite error rates, but we seem to have a rather large error rate.

Henri The specs should meet the real world, so that if you take the spec and implement the spec you can actually consume the content that's out there.

Al Gilman Well, this sounds like the robustness principle, that you should be strict in what you emit and lax in what you accept. That could turn into `just accept anything', but then it's hard to interpret semantics out of the markup.

Anne That's what HTML5 does, at least for the text/html MIME type: it accepts an arbitrary bytestring and converts it into a document, so you always get an end result. Whether the result is what you desire, I guess that varies, but if you write it as per the authoring guidelines in the spec you will get what you desire, and if you make a mistake and all browsers follow the spec, then at least all browsers will display your page and interpret it in the same way, so at least semantics don't get lost between clients.
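
[Editor's note: a minimal illustration of Anne's point, using Python's standard-library `html.parser` as a stand-in — it is tolerant, but it does not implement the HTML5 algorithm, and the `EventLogger` class is an invented example. Malformed input still produces a result, and the same input always produces the same result; the HTML5 parsing algorithm specifies that kind of agreement between *different* implementations, not just between runs of one parser.]

```python
from html.parser import HTMLParser

class EventLogger(HTMLParser):
    """Records every token the parser emits, conforming input or not.
    (Invented example class for illustration.)"""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

    def handle_data(self, data):
        self.events.append(("data", data))

broken = "<p>Hello <b>world</i>"   # unclosed <p> and <b>, stray </i>
first, second = EventLogger(), EventLogger()
first.feed(broken)
second.feed(broken)
# No exception is raised, and two runs agree on the result.
print(first.events == second.events)  # True
```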

Rotan [?] I think part of the problem is that most browsers don't follow all the specs. Some browsers will implement parts of the spec, and will also implement JavaScript in different ways, and there's a big cost to Web application developers who need to get consistency across the browsers.

Al Gilman OK. So, yes, that was a leading question, because one of the value-adds of the HTML Working Group developing an HTML product is that they're working on describing more closely how the browser parses HTML as it receives it, including recovery behaviours that we haven't messed with before.

Anne Actually CSS has done that for years.

Al Gilman Alright. CSS has done it fine. OK.

Anne I do agree that in general specifications don't do error handling at all, and that is quite bad.

Al Gilman It's within the workplan of HTML5 to do better on that scale?

Anne Yes, so part of the HTML5 effort is actually fixing that hole in HTML4, that it doesn't define error handling or the parsing rules for HTML, and that also goes for earlier versions of HTML, which didn't do that either.

Al Gilman Do I have a question from the audience?

Anne If the audience could participate, it would be nice.

Ian Jacobs (editor of the HTML4 Specification, with Dave Raggett and Arnaud Le Hors) So this is not a defensive question at all; I'm curious about the error handling part of the specification. That seems very useful, to reach an agreement on error handling, but I'm wondering if the error handling approaches work across different applications and different application scenarios, and what the HTML5 spec plans to do about covering lots of different kinds of error handling.

Henri Well, with HTML already, when it's parsed, different implementations always do something, so it's not a question of whether you can have error handling or not; something already happens, always, and this is about defining how that happens so that different implementations can interoperate even in the case where the input isn't conforming, which is usually the case.

Ian Jacobs Let me refine the question. There are non-browser scenarios where HTML is used, so are those being taken into account? I guess the question is, is it possible to describe error handling in a generic fashion for a wide spectrum of applications? Like a printer or...

Anne Yes, so, the specification is defined to closely match existing browsers, but if you take an HTML5 parsing library and plug it into another kind of application, and the library implements the HTML5 parsing algorithm, then you get the HTML5 error handling in that other kind of application.

Henri As to whether it's portable, the HTML5 spec for instance does allow different behaviour in validators, which don't have to do the full error recovery because they can [stop] after a few errors, for instance, because they don't have to apply the whole algorithm, but in general for clients trying to get the semantics out of the document they have to follow the whole algorithm.

Al Gilman Another question from this mic?

Henry Thompson (from the University of Edinburgh) The description of the process as error recovery implies that there is a non-errorful starting point, but perhaps I just haven't done a very good job of reading the spec; the spec as I read it in its current state doesn't define a language and then a set of error recoveries for when you don't get something in the language. It describes a process, and I can't derive from the process what's considered to be correct and what are considered to be errors. And in terms of the applicability of Postel's Law, and our guidance to authors, where do I find the definition of HTML5 without errors? It seems to me that's a very important part of what we do, to try to say `this is the language, and here's something that's beyond, separate from, in addition to the language itself: a set of blessed error recovery strategies'. It doesn't feel like that's the way the spec is written today, but am I just missing something?

Anne There are two layers: the parsing layer, which consists of the tokenization and the tree construction, and then there's the higher layer that operates on top of the parse tree, or the DOM, that's constructed by the parser, and for the higher layer there are, throughout the spec, normative statements about what the DOM must be like to be conforming.

Henri To say this a bit more simply, there are two sections, one of them is called writing and one is called parsing.

Anne I'm coming to that in a moment. So then the lower layer, how to convert a stream of bytes into a DOM tree, is defined as an algorithm which has certain steps where it says, this is a parse error, so there's a functional definition that if you walk the algorithm here, an error happens. But it's not very useful for authors to have to walk the parsing algorithm, so there's also a section on writing that defines what the character stream that you can feed to the parser must look like, but due to the complex nature of the parsing algorithm, that section is written for spec lawyers who still don't want to walk the algorithm, and for actual authors that section is probably too hard to read. So for practical purposes there needs to be a tutorial that doesn't actually say what all the dark corners are, but gives a story that if you write this, this is within the conforming stuff. The actual definition of what's conforming on the parsing level isn't something that a casual author would want to read, so you'd want to read a tutorial that tells you part of the story, and if you stay within that part of the story, you're still within the story.
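
[Editor's note: the two layers Anne describes — tokenization feeding tree construction, with parse errors noted but recovered from — can be caricatured in a few dozen lines. Everything here (the regex, the function names, the dict-based tree) is a drastically simplified assumption for illustration; the real HTML5 algorithm has dozens of tokenizer states and insertion modes.]

```python
import re

# Layer 1: tokenization -- split the character stream into tag and text tokens.
# (Toy regex; the real tokenizer is a large state machine.)
TOKEN = re.compile(r"<(/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>|([^<]+)")

def tokenize(source):
    for close, name, text in TOKEN.findall(source):
        if text:
            yield ("text", text)
        else:
            yield ("end" if close else "start", name.lower())

# Layer 2: tree construction -- build nested dicts from the token stream,
# recording "this is a parse error" but recovering, as the spec's algorithm does.
def build_tree(tokens):
    root = {"name": "#document", "children": []}
    stack = [root]
    errors = []
    for token in tokens:
        if token[0] == "start":
            node = {"name": token[1], "children": []}
            stack[-1]["children"].append(node)
            stack.append(node)
        elif token[0] == "end":
            if stack[-1]["name"] == token[1]:
                stack.pop()
            else:
                errors.append(f"unexpected </{token[1]}>")  # parse error, recover
        else:
            stack[-1]["children"].append({"name": "#text", "data": token[1]})
    return root, errors

tree, errors = build_tree(tokenize("<p>Hello <b>world</i></b></p>"))
print(errors)  # the stray </i> is reported, but parsing continues
```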

Henry Thompson That's very helpful and I'll look forward to that tutorial. Could you clarify just one last thing, which is whether the section you described as the writers' section which says, the following byte sequences are processed, does that include those which are processed via error states, or only those which are processed without any error states?

Henri The writing section only tells things that don't invoke errors when fed to the algorithm.

Henry Thompson Thank you.

Al Gilman Next question.

Glenn Adams, representing Samsung. So I believe you said that orthodoxy may be viewed as conservative in what you transmit and liberal in what you receive, and I wonder if that dictum has been largely responsible, in the latter part, liberal in what you receive, for the current state of affairs, because browsers traditionally have been liberal, and that allows for many types of behaviours and a lack of interoperability, which is what's trying to be addressed here. I wonder, if we changed that dictum, then what would the resulting effect be?

Henri How do you propose we change that?

Glenn Adams I was asking a question to you.

Henri So my answer is that I don't think we can change that, and that therefore we need to specify it the way we did. The reason is that there's a lot of deployed content out there that relies on the error handling. The story is that initially HTML was developed and it defined what is a conforming document, and then people wrote what is not a conforming document, and they did that because [browsers] were not doing strictly what the spec told, but had a more liberal parser already, and then people started relying on the specifics of those parsers, and then one dominant parser came out, and everyone started coding to the specifics of that parser, and the minor players in the game started reverse-engineering the more dominant parser, which is why it's important to specify the error handling rules in the spec, so that all players do the same thing, from the side [start?]. And I guess there's a separate question of whether the language should be strict, like if you feed it a byte string that doesn't conform to the spec, whether in that case the error-handling rules should be that you [stop] loading the document and don't show anything, which seems negative from the user's point of view.

Al Gilman I'd like to comment on that briefly, in the sense that XML started out with a fall-over-on-first-error rule, and it's my understanding that in the Web services world, even in the back office, there's a push to relax that and mark things as must-understand and must-recognise and whatever, and accept things that have holes in them. Certainly the public Web is successful because it connects zillions of people with zillions of people, so it has absorbed characteristics of natural language, which is that there's lots of redundancy and lots of errors.

Glenn Adams So I guess in some way we're a victim of our own success, because our liberality in reception and the lack of error-handling specification allowed proliferation to occur, but now we're having a backlash from that, which is that not all receivers are equal, and therefore not all behaviours are equal. So it's a difficult path to find a balance between the two, and I'm just wondering, if at a higher level we go back and review that dictum, whether looking at it in different ways will help us in any way.

Al Gilman I understand. But we've tried to reform it by basing it on something that required parse-tree perfection, and this didn't get a lot of uptake, so now we're trying to do it by the process of making a public document of the sponginess of the spec, if you'll pardon my language.

Al Gilman Question from over here?

Rich The question I have is, the dominant HTML parser, did they provide all their error correction techniques or are you still reverse engineering?

Henri Reverse engineering. From all four [sic] browsers, not just the most dominant one, because we don't want to break real-world content. There are basically two paths of HTML error recovery: one sometimes results in a graph and the other always results in a tree, and given that CSS and the DOM are tree-based, we went for that one, but tried to emulate as much of the one that was graph-based as possible without making the other graph-based too.
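
[Editor's note: the tree-versus-graph choice Henri describes can be hinted at with a toy recovery routine. Mis-nested tags like `<b><i></b></i>` suggest overlapping, graph-like structure; a tree-based recovery instead closes the intervening elements and reopens them. This sketch is an invented simplification — the real HTML5 "adoption agency" algorithm is far more involved.]

```python
def recover(tokens):
    """Turn a possibly mis-nested token stream into well-nested output.

    Toy sketch: when an end tag closes an element that is not on top of
    the stack, implicitly close the intervening elements first, then
    reopen them afterwards -- so the result is always a tree.
    """
    out, stack = [], []
    for kind, name in tokens:
        if kind == "start":
            stack.append(name)
            out.append(f"<{name}>")
        elif kind == "end" and name in stack:
            reopen = []
            while stack[-1] != name:          # close intervening elements...
                top = stack.pop()
                reopen.append(top)
                out.append(f"</{top}>")
            stack.pop()
            out.append(f"</{name}>")
            for n in reversed(reopen):        # ...then reopen them
                stack.append(n)
                out.append(f"<{n}>")
        # end tags with no matching start tag are parse errors; ignored here
    while stack:                              # close anything left open
        out.append(f"</{stack.pop()}>")
    return "".join(out)

# <b><i></b></i> becomes a proper tree: the <i> is split in two.
print(recover([("start", "b"), ("start", "i"), ("end", "b"), ("end", "i")]))
# <b><i></i></b><i></i>
```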

Rich I see a strong analogy between what's being done to come up with a public statement close to consensus in error recovery and the way that the WAI-ARIA work has taken the accessibility APIs: it doesn't exactly match any of them, but the logical model of ARIA 1 is very much a consensus, a dotted line, which is approximately what's common across APIs. It's engineering reality, but it's a step forward in terms of making spec and practice closer.

Al Gilman Question from over here...

Linda Grant, Hewlett-Packard[?] Do you see the browsers of the future supporting both HTML5 and XHTML2, and if not, what implications do you draw from that for the bifurcation of effort by the W3C?

Anne So as current things stand with the latest XHTML2 WD it would be impossible for us [Opera] to implement both, and given that we have a vested interest in HTML5, I guess we go for that one. [On the implications...] HTML5 will be used by browser vendors on the Web mostly, and XHTML2 will be used in vertical markets, because apparently there's interest in it, otherwise there would not be a Working Group, I hope.

Rich I don't see XHTML2 being implemented in the mainstream browsers; however, I think this bifurcation is a real problem. I think there's a lot of innovation going on in XHTML2, I think the two groups should be merged, and the concepts that are coming out of XHTML2, even if it takes several, five, six, seven years, should be looked at and incorporated into future versions of HTML. In particular, some of the things they are doing, like the incorporation of XML Events, reduce the amount of JavaScript that's in everything, and that reduces footprints, so there's business value-add for things like that. On other things being considered within XHTML2, they haven't looked as much at how to get this to transition in the browser, but on the other hand they're also looking at the enterprise and at middleware, and let's face it, in the middleware we're not going to take content in HTML fragments and try to glue it all together; it doesn't work that way, there are just too many problems with it, so what you're seeing is a lot of middleware being done in XML-based markup and then producing XHTML on the client. So I think from a strategy perspective you should be looking not just at the browser and the consumer market but at the whole end-to-end piece as part of the strategy: what do I need to do in the middleware and what do I need to do in the client. As we go towards more Web 2.0 applications there's going to be a bigger emphasis on the middleware. Your average developer's not going to throw together Facebook in front of the users, so there's a lot to be learned from both groups: what has to be delivered in the browsers today, and how do we get there tomorrow to make things a lot easier? There's no question that what people have to do today to support the different browsers is very expensive, different levels of CSS, different levels of JavaScript support; how can we make that easier, more declarative, over time?
I think one of the problems we've had with XHTML2 is that we haven't asked, as we did for ARIA, how do we get what we want from today's browsers and then migrate to where we want to go; we basically have to address both areas.

Henri Currently what I'm seeing is that the top browser vendors are participating in the HTML Working Group and have already implemented older versions of HTML as practised, and I don't see browser vendors on the XHTML2 Working Group, so I don't have a reason to believe that an XHTML2 implementation in the same browsers would be on the way. And, as Anne pointed out, there's a technical issue: let's suppose that browsers implemented those two side by side. The way XHTML2 redefines stuff in the XHTML1 namespace as of the latest Working Draft, it would be impossible to implement it in a way that when you do a createElement in the DOM you get the right interfaces, because you can't use a version switch for createElement and createElementNS and stuff, so to support the little XHTML1 content that there is out there, browsers really can't implement XHTML2 as of the latest Working Draft when it's in the same namespace.

Daniel Glazman (Disruptive Innovations) First I would like to congratulate the HTML Working Group, because it's the first time in history that we care about the editing side; that's tremendously important. I'd like to urge browser vendors and editor vendors to have editors ready for HTML5 when the browsers are ready for the market. HTML5 is much more complex than HTML4. It's much more powerful, but more complex. We need WYSIWYG editing tools. Thank you.

Steven Pemberton (co-chair of the XHTML2 Working Group) I personally think this panel pits the two technologies against each other in the wrong sort of way, and I really don't think it's an either/or. Of course HTML has got to exist as it is. But I was on the group that ended up producing Python, and this was in the 80s, and I have to tell you that we got a lot of pushback from people who thought we were completely crazy, and in those days doing interpreted languages was completely nuts. But we didn't think C should go [away]; we were trying to address new problems that we saw coming up, and we thought of ways of solving those problems in the future, not today, and in a sense XHTML2 is doing the same sort of thing. It's saying, what are the problems that we need to solve, and how can we do this better? So things like single authoring, so that you don't have to write ten versions of your site for ten different sorts of device; or accessibility, so that you don't have to have guidelines for accessibility, so that the accessibility comes out of the box; how do you improve the user experience without having to program in JavaScript; division of tasks within the company, so that you can give different parts of the task to different people and they don't have to communicate except over a fairly thin pipeline, as it were; and I could go on and on. So this is the area that XHTML2 came out of: how could we solve some of the larger problems, while HTML still exists? I see HTML as the assembly language of the Web that other specifications can talk to. An example of that happening is that eBay UK already uses XHTML2, but then transforms down to whatever the particular device needs.
XForms, which is part of XHTML2, did the same sort of analysis, and I have a number of data points of how XForms has done its job, but I'll just give you one, that's a company that creates very large walk-in devices that have very complicated user interfaces; traditionally they needed 30 people over 5 years to implement the user interface. As an experiment they tried XForms and did it with 10 people in one year, so that's more than an order of magnitude saving in cost. These are some of the benefits you can get by going up one level, which is what Python did, so I don't think the two need to be in conflict, and I think they can exist side by side perfectly well, because they're addressing different problems, and I don't see why that should be a problem.

Henri You mentioned different devices, accessibility, improvements without JavaScript, and task division. I have here two portable devices, and they run three independent implementations of the interoperable Web stack, so the way I see the right way forward for doing Web stuff on different devices is putting better browsers on the different devices instead of using the server side to serve different stuff to different browsers. And HTML5 is improving the built-in accessibility by providing elements such as progress, so an application developer who wants a progress bar can get a progress bar by saying that this is a progress element, and then the built-in semantics of that element can be mapped to the accessibility API and then exposed as a progress indicator. Moreover, Web Forms 2.0 improves the ability of authors to do smarter Web forms without having to do as much scripting as before. As for the task division, it seems to me like the holy grail of enterprise workflows, and when I look at, let's say, Java development environments where the stack is designed to separate concerns and tasks, still in practice people end up crossing the task lines, and the architecture that's supposed to separate the concerns actually adds to the complication, so I think that while task division might be nice in theory, it's really, really hard to do in practice, especially considering the constraints of working with content in legacy browsers.
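
[Editor's note: the mapping Henri describes, from an element's built-in semantics to an accessibility-API role, can be caricatured as a simple lookup. The progress-to-progressbar pairing reflects his example; the other table entries, the function name, and the fallback value are illustrative assumptions only, not any specification's actual mapping.]

```python
# Toy lookup from built-in element semantics to accessibility-API roles.
# Only progress -> progressbar comes from the discussion above; the rest
# of the table and the "generic" fallback are invented for illustration.
BUILTIN_ROLES = {
    "progress": "progressbar",
    "button": "button",
    "a": "link",
    "h1": "heading",
}

def exposed_role(tag_name: str) -> str:
    """Role a client could expose for an element, with no author-side ARIA."""
    return BUILTIN_ROLES.get(tag_name, "generic")

print(exposed_role("progress"))  # progressbar
```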

FIXME got to 43 minutes into the stream.