

Web History Community Group

This group gathers people interested in the history of the World Wide Web: how it was invented, what was out there that made it possible, and what happened in its early years. Our main goal is to collect and preserve valuable information (software, documents, testimonials) before it is lost. This group will not produce specifications.


Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.


‘First’ Pirate Bay Server on Permanent Display in Computer Museum

An (arguably) important part of Web history exhibited in a museum. From the article:

The Pirate Bay is one of the best known file-sharing brands and in less than a decade the site has well-earned its place in computer history. The Computer Museum in Linköping has a section dedicated to 50 years of file-sharing and one of the top pieces is one of the first servers used by The Pirate Bay. According to the museum The Pirate Bay has become a contemporary historical phenomenon and the server signifies “a revolution that begun in a dark grey metal box under a bed.”

“Vague but exciting…”

CERN archivist Anita Hollier points me to the “World Wide Web (archives)” section of the CERN document server. It lists many documents that I’m sure would be of great interest to this group, but not many of them are available (yet). However, there is an early version of Tim’s seminal paper: “Information management – a proposal” (PDF). There are other versions of it linked from the “original proposal of the WWW” page at W3C, but not as nice as this one, which is scanned from a paper printout and includes handwritten annotations, like “Vague but exciting…”, “Nice idea” (about links), “This is like e-mail, but who will implement and port all the user agents?” or “But it complicates life for the busy user”. Anita (whom I warmly thank) tells me they’re from Mike Sendall, Tim’s boss at the time. How right he was.


Goodbye Minitel and CEEFAX

Two public pre-WWW information systems were recently shut down: France Télécom’s Minitel, after 30 years in operation, and the BBC’s Ceefax, after 38 years.

Both looked very much the same, but the comparison ends there. Ceefax implemented “links” as references to page numbers that the user would type on their TV remote. Minitel had a more complex linking structure, programmatically specified, with no URLs or ways to link to other “sites”.

What’s particularly interesting, looking back at Minitel, is the fact that it had something the Web never had: a payment system. Services could charge access to their content, proportional to the time you spent. The user would pay through their phone bill and France Télécom would give some of that payment back to the service.

Whether it’s a good thing that the Web never had an integrated billing mechanism is debatable. Imagine the outcry if ISPs started charging for access to some websites by the minute. On the other hand, the app stores show that paid content can coexist with good free content, and a billing system would let many small entrepreneurs quickly earn the funds to scale up their projects and keep innovating. Right now, those entrepreneurs turn to apps precisely because apps provide an easy way to get paid for the services they design.

It will be interesting to see what happens when Google and Mozilla release their web app platforms with integrated billing systems. We may end up with the best of both worlds: the general web remains an open platform offering the opportunity to tinker and hack, while web apps offer the possibility of business models that encourage innovation.

Web History Timeline Project

John Allsopp of Web Directions and Style Master fame was the first guest on The Web Behind, and one of the very first things he did was to announce his Web History Timeline Project.  It’s a beautiful visualization brought to life by John’s expert curation and timeline.js.  As John says:

The goal is to bring together some of the most important milestones in the history of the web, whether they’re

  • the publication of seminal articles and books;
  • the publication of important standards and RFCs;
  • the release of important software (browsers, servers, tools, libraries);
  • significant events, such as the founding of the W3C.

The best part is that anyone can contribute milestones using a simple web form that John’s made available in his announcement post. Be sure to give it a look and add anything you think is missing!

New Podcast: “The Web Behind”

I’m really excited to announce that I’ll be launching a new podcast series called “The Web Behind” with Jen Simmons of The Web Ahead—in fact, The Web Behind will be an in-stream subset of The Web Ahead.  We talk about it briefly on The Web Ahead #34, which was released earlier this afternoon.

On The Web Behind, we’ll be interviewing people who were involved in the evolution of the web from its early days, getting their perspectives on why certain things did or didn’t happen and how those outcomes have affected the web’s development.  Many guests will be people who have since left the web field, or who have vastly different roles now than they did 10-15 years ago.  Our first guest will be John Allsopp, and we’re scheduled to record live at 2300 UTC on Thursday, September 20th, with the resulting podcast episode available shortly thereafter.  We plan to have a new Web Behind episode about once every other week, schedules permitting.

One of my primary goals is to augment the work of the Web History CG with the personal stories and perspectives of the people who witnessed and quite often influenced the web and web design and development.  In many ways our conceptual model is, though we aren’t (yet) planning to create a standalone site like that.  Maybe one day!  First we need to build up a library of interviews.

We have our first few guests lined up and many more on a “wish list”, but I would love to hear suggestions from the Community Group regarding who we should have on.  I’ll say right away that I hope to one day have both Sir Tim and Robert Cailliau as guests, but we’re very much interested in hearing other names.  We want to bring forward voices who are unfamiliar to current web professionals, and would be thrilled to have on guests unfamiliar even to us.

I’m really looking forward to hearing what our guests have to tell us, and I hope you’ll join us!


Remy Sharp just posted to Google Plus about how he came up with the term “polyfill”. It is fairly obvious, but probably something worth keeping a record of; you all know how rumours can change things!

Original post:

Where polyfill came from / on coining the term

It was when I was writing Introducing HTML5 back in 2009. I was sat in a coffeeshop (as you do) thinking I wanted a word that meant “replicate an API using JavaScript (or Flash or whatever) if the browser doesn’t have it natively”.

Shim, to me, meant a piece of code that you could add that would fix some functionality, but it would most often have its own API. I wanted something you could drop in and it would silently work (remember the old shim.gif? that required you to actually insert the image to fix empty `td` cells – I wanted something that did that for me automatically).

I knew what I was after wasn’t progressive enhancement because the baseline that I was working to required JavaScript and the latest technology. So that existing term didn’t work for me.

I also knew that it wasn’t graceful degradation, because without the native functionality and without JavaScript (assuming your polyfill uses JavaScript), it wouldn’t work at all.

So I wanted a word that was simple to say, and could conjure up a vague idea of what this thing would do. Polyfill just kind of came to me, and it fitted my requirements. Poly meaning it could be solved using any number of techniques – it wasn’t limited to just being done using JavaScript – and fill would fill the hole in the browser where the technology needed to be. It also didn’t imply “old browser” (because we need to polyfill new browsers too).

Also for me, the product Polyfilla (spackling in the US) is a paste that can be put in to walls to cover cracks and holes. I really liked that idea of visualising how we’re fixing the browser. Once the wall is flat, you can paint as you please or wallpaper to your heart’s content.

I had some feedback that the “word should be changed” but it’s more that the community at the time needed a word, like we needed Ajax, HTML5, Web 2.0 – something to hang our ideas off. Regardless of whether the word is a perfect fit or not, it’s proven it has legs and developers and designers understand the concepts.

I intentionally never really pushed the term out there; I just dropped it in a few key places (most notably the book), and I think it was when +Paul Irish gave a presentation some (many?) months later, directly referencing the term polyfill, that it really got a large amount of exposure (I think this was also helped by the addition of the Modernizr HTML5 shims & polyfills page).
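The pattern Remy describes can be sketched in a few lines of JavaScript; the choice of `Array.prototype.includes` here is mine, purely for illustration:

```javascript
// A polyfill in miniature: detect the hole, then fill it.
// The guard means it only runs if the browser lacks the native API,
// so native implementations are never overridden.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (searchElement, fromIndex) {
    // Delegate to indexOf for the common case.
    // (A production polyfill would also handle NaN and sparse arrays.)
    return this.indexOf(searchElement, fromIndex || 0) !== -1;
  };
}

// Calling code never knows (or cares) whether the method is native
// or polyfilled -- that is the "drop in and it silently works" property.
console.log([1, 2, 3].includes(2)); // true
```

This is what distinguishes a polyfill from a shim in Remy’s sense: no new API surface of its own, just the standard one, filled in where it is missing.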

First WWW flyer

I have what I think is the first A4 leaflet produced by Tim, which my wife brought back from Hypertext ’91 in San Antonio, Texas (a conference she attended as a Digital engineer working on their own hypertext system of the time, called Memex, integrated into DECwindows).

It has on one side a variation of the “Web bus” drawing, and on the other side the text of the Usenet announcement from August 1991.

I put a scan online.

Archiving Format

I was poking around in search of good ways to grab pages for preservation, since I feel like “Save as Web Archive” in browsers isn’t really a good long-term solution.  Is there anything better than or otherwise preferable to Web Curator?

(Added 7 June 2012 7:45pm) — The reason I ask about Web Curator is that it appears to save content in the WARC format, which—according to the answers on this Stack Exchange post—is the preferred format for archiving (static) web content.  Is grabbing stuff with wget sufficient?  Should we go with WARC regardless of tool, or is it too much/not enough/not right for us?
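For what it’s worth, recent versions of wget (1.14 and later) can write WARC directly while crawling, which would make it sufficient on its own for static content. A minimal sketch, with example.com standing in for the site to be archived:

```shell
# Crawl a site and capture the full HTTP request/response traffic
# into a WARC file (example-site.warc.gz), with a CDX index for lookup.
wget --recursive --level=inf \
     --page-requisites \
     --warc-file=example-site \
     --warc-cdx \
     "https://example.com/"
```

The WARC file is the canonical capture; the mirrored directory tree wget also writes is just a side effect and can be discarded.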