Web Editing Working Group

Presenter: Johannes Wilm, Alex Keng and Anupam Snigdha
Duration: 6 min

Have you ever tried to edit text on the Web and it did not work as smoothly as you would like? The mission of the Web Editing WG is to explore limitations in existing browser primitives, to provide use cases for new APIs, and to suggest solutions for text editing.

All demos



I've been trying to write this letter in this web phone forever.

Ugh, why doesn't it work the way I want it to?

Why do I have to redo it all the time?

No, no, that's not what I wanted!

No, not like that!

Ugh, I'll try again.

Well, why does it always have to take this long?

It really, really shouldn't be this difficult.

Ugh. Yeah.

Have you ever tried to edit text on the web and it didn't work as smoothly as you would have liked?

The mission of the Web Editing Working Group is to explore limitations in existing browser primitives, to provide use cases for new APIs, and to suggest solutions, either by way of standardization of existing behaviors or by the introduction of new APIs relevant for text editing.

The goal is to facilitate the creation of fully-featured editing systems, as well as small editors, using JavaScript.

Now let me have some of our members demonstrate how we are fulfilling our mission.

Hi, this is Alex from Microsoft HQ. So one of the APIs we are working on is EditContext.

EditContext is a new API that enables web-based editors to have full control of the DOM while handling the user's text input.

So traditionally, to get text input, web-based editors would need to put a contenteditable element or a textarea into the DOM.

And while it's getting text input from the user, the DOM may get modified, which may cause undesirable results for the web-based editor.

But with EditContext, text input is now decoupled from the DOM.

So a web-based editor can get text input directly through EditContext, update its model, and then update the DOM however it wants.
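As a hedged sketch of that flow, assuming a browser that implements the EditContext draft API (the model shape, the canvas element, and the render step are illustrative, not part of the API):

```javascript
// App-owned text model, fully decoupled from the DOM.
const model = { text: "", selectionStart: 0, selectionEnd: 0 };

// Pure helper: splice an incoming text update into the model.
function applyTextUpdate(model, { text, updateRangeStart, updateRangeEnd }) {
  model.text =
    model.text.slice(0, updateRangeStart) +
    text +
    model.text.slice(updateRangeEnd);
  model.selectionStart = model.selectionEnd = updateRangeStart + text.length;
  return model;
}

if (typeof EditContext !== "undefined") {
  const editContext = new EditContext();
  // Associate the context with the focusable element that should
  // receive text input -- no contenteditable or textarea needed.
  document.querySelector("canvas").editContext = editContext;
  editContext.addEventListener("textupdate", (e) => {
    applyTextUpdate(model, e);
    // Re-render from the model however the editor wants.
  });
}
```

The key point is that the browser delivers text and composition updates as events, while the editor alone decides how and when the DOM (or canvas) changes.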

So one example is to type on a canvas element.

So here with EditContext, we don't need to use a hidden textarea, or a hidden contenteditable div, to get text input.

We can directly get text input through EditContext and draw the text on the canvas element.

For example, here I can type some Japanese.

(Alex speaks in Japanese) Here we can see that the composition window is correctly displayed, and then I can press the space bar to enter phrase mode.

Notice that here we are showing correct IME integration.

We can see that there are three phrases in the sentence. I can press left and right to move between these phrases, and the active phrase is displayed with a thick underline. This is not possible without EditContext.
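The phrase (clause) information in this demo comes from EditContext's textformatupdate event, per the draft spec. A hedged sketch follows; `drawUnderline` is a hypothetical rendering helper, and the exact underline value strings may vary by browser, so the "thick means active" rule below is an assumption:

```javascript
// Pure helper: pick the active IME phrase, assuming the IME marks it
// with a thick underline (an assumption; check the values you receive).
function findActivePhrase(formats) {
  return (
    formats.find(
      (f) => String(f.underlineThickness).toLowerCase() === "thick"
    ) || null
  );
}

if (typeof EditContext !== "undefined") {
  const editContext = new EditContext();
  editContext.addEventListener("textformatupdate", (e) => {
    for (const f of e.getTextFormats()) {
      // One entry per IME phrase (clause) in the composition.
      drawUnderline(f.rangeStart, f.rangeEnd, f.underlineStyle, f.underlineThickness);
    }
  });
}
```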


Hello everyone.

I'm Anupam.

I'm on the Edge Team at Microsoft.

So today I'm going to demo the VirtualKeyboard API, which allows authors to control the visibility of the keyboard explicitly, and also allows them to adapt the layout of the webpage to the virtual keyboard.

So, first I'm going to show how the authors can adapt the layout of the web page to the virtual keyboard.

On a dual-screen device this is more prominent, because the virtual keyboard resize affects the layout of the page and occludes certain portions of the screen.

So when the user taps on an edit control and the keyboard pops up, the visual viewport resize happens, and on the left screen, where the keyboard is not shown, you see a blank space right under the content.

That's a lot of wasted space, and the UX can be improved. Here we use the new VirtualKeyboard API: when the virtual keyboard comes up, authors selectively reposition parts of the webpage, and the portions on the left side of the screen remain unchanged.

That actually looks like a pretty good user experience, as you can see on this slide.

So how can we achieve that behavior?

Well, just set the 'overlaysContent' flag to true on navigator.virtualKeyboard, and that opts out of the default resize of the visual viewport.

The VirtualKeyboard API also provides CSS environment variables that give the keyboard-inset values.

We can use these values to position elements.
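A hedged sketch of both pieces, assuming a browser with the VirtualKeyboard API; the `.editor` selector is illustrative, and in plain CSS the same geometry is available through the `env(keyboard-inset-*)` variables:

```javascript
// Pure helper: bottom padding needed to keep content clear of the
// keyboard overlay, given the keyboard's bounding rect.
function bottomInset(boundingRect) {
  return boundingRect ? Math.max(0, boundingRect.height) : 0;
}

if (typeof navigator !== "undefined" && "virtualKeyboard" in navigator) {
  // Opt out of the default visual viewport resize; the page will
  // position content around the keyboard itself.
  navigator.virtualKeyboard.overlaysContent = true;
  navigator.virtualKeyboard.addEventListener("geometrychange", (e) => {
    const pad = bottomInset(e.target.boundingRect);
    document.querySelector(".editor").style.paddingBottom = pad + "px";
  });
}
```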

The second part of the VirtualKeyboard API relates to explicit control of the visibility of the virtual keyboard.

So you can achieve that using the show() and hide() methods on navigator.virtualKeyboard.
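A hedged sketch of a keyboard-icon toggle built on those methods, assuming VirtualKeyboard API support; pairing this with the `virtualkeyboardpolicy="manual"` attribute on the editable element keeps focus from auto-raising the keyboard, and the toggle bookkeeping here is illustrative:

```javascript
// Pure helper: next visibility state for a toggle button.
function nextVisibility(visible) {
  return !visible;
}

let keyboardVisible = false;

// Tap handler for an on-page keyboard icon.
function onKeyboardIconTap() {
  keyboardVisible = nextVisibility(keyboardVisible);
  if (typeof navigator !== "undefined" && "virtualKeyboard" in navigator) {
    if (keyboardVisible) navigator.virtualKeyboard.show();
    else navigator.virtualKeyboard.hide();
  }
  return keyboardVisible;
}
```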

So, I'll show you a demo of how you can use these APIs to provide a better user experience.

So, this web page doesn't use the VirtualKeyboard API or the virtualkeyboardpolicy attribute at all.

So, when I triple-tap to select a paragraph at the bottom of the webpage and try to edit it, you can see I'm not able to select the paragraph, because the keyboard takes up that part of the screen.

If I switch to a different page that uses the VirtualKeyboard API, with a new keyboard icon over here that controls the visibility of the keyboard, I'm able to triple-tap and select the paragraph.

If I want to edit that content, I can tap on the keyboard icon, which shows the virtual keyboard, and then edit the paragraph.

And if I want to dismiss it, I can tap on the keyboard icon again, and that dismisses the keyboard.

That's all I have for the VirtualKeyboard API demo. Thank you for watching.

We have been working on the standardization of features relevant for text editing for more than seven years.

Initially, in the form of a task force.

And since June 2021, in the form of a Working Group. If you would like to join us, participate in our monthly calls or meetings at TPAC, or discuss via GitHub or our mailing list, we would be pleased to welcome you to our Working Group.

You can find our main GitHub repository on

There you'll also find details about our mailing list as well as details about how to participate in our monthly calls.




