Delegated Ink Trails

Facilitator: Mario Bianucci

Delegated Ink Trail Presentation Paradigm: Prototype status and learnings from web developer feedback.






Introducing myself: I am Mario Bianucci, and I'm here to talk about delegated ink trails today.

The link on the first slide and the slides should be available on the TPAC website.

I can also share a link, or maybe Zoo here can share a link in the chat if anybody else wants to look.

The link leads to the explainer, if you're interested in reading more about it afterwards.

So for a general overview, what is delegated ink trails?

It's got goals from both a user perspective and a developer perspective.

From a user perspective, we just want to reduce latency when inking on the web as much as possible, because right now native apps on some platforms are really able to reduce latency and provide a good experience, but the web has been left behind and still offers a pretty poor experience.

So we want to try to improve that as much as possible. And from a developer perspective, we want it to be totally painless to use.

We really want to make sure that it's not up for debate whether it's worth the time and effort to implement this for the gains the user may or may not see; we want using it to be a no-brainer.

It's very easy.

And as part of that, we want to make sure that it requires absolutely minimal changes to the app's rendering pipeline.

We wanted it to be just a single call, and then the browser will take care of it.

There's no additional time spent making sure things are happening in the right order or are set up correctly.

Just one small change, no big changes to the rendering pipeline.

And you get a much better inking experience on the web.

So why do we want to do that?

Well, as you may have noticed remote working and remote learning have both grown absolutely tremendously in the past year.

I mean, they were already on the upswing prior to 2020, and then of course the pandemic happened.

And so remote working and learning have grown more than anybody could have predicted.

In fact, Microsoft Teams has seen an increase of over 450% in daily active users from November of last year to October of this year.

So it's grown tremendously, and I'm certain that's also true of things like Slack, the Office suite, the Google Drive suite, and every other collaboration tool.

And good collaboration tools are critical for remote learning and remote working, because if the tools are giving people a negative experience, or just aren't very helpful, it's going to make people a lot less productive and give everybody a bad time trying to use them.

So we really want to make sure that the collaboration tools are working well and are providing the best possible experience for the users.

And part of that is that low latency when inking is very important, because how often do you think it would be way easier to just draw a diagram? For teachers, drawing out multiplication tables or something like that. Just drawing things out can communicate ideas and explain things much better than words, or than trying to go into something like Notepad or Office to sketch your ideas.

Just being able to quickly draw something can help a ton.

And as I said, the native apps oftentimes already have very good inking latency, but so many of these collaboration platforms are web based or have web based components.

And they are just lacking in a good inking experience because that just doesn't currently exist on the web or it doesn't exist as easily as we would like it to.

So you may be thinking: there are existing enhancements for inking on the web right now. There are a couple, but we weren't particularly thrilled with them.

We didn't feel like any of them quite hit exactly what we were looking for.

So one potential option was just using overlays.

However, we felt that overlays are kind of unnecessarily difficult.

Like they have a lot of requirements for the developers.

They have to be the topmost object on the page.

They can't be obstructed by, you know, rounded corners on a screen.

They have to be either transparent or opaque, very specific requirements there.

And then from the user side, the user also has to make sure that they have the correct OS, the drivers are up to date, the GPU is working.

It's a very brittle way of trying to reduce latency.

And it's a lot of work for the developers that doesn't actually guarantee that the user will get an improved experience.

So we thought we could improve upon that.

Another option would be desynchronized canvas, however, desynchronized canvas doesn't quite work with the typical way that inking works on the web right now.

If you're not familiar, the way that inking typically works is that while the user has their finger or pen on the touchscreen, apps typically do what's known as wet rendering, where they just draw to a transparent canvas on the screen.

And then whenever the user lifts their finger or pen, the app does what's known as drying the ink to the webpage, which is where it draws the stroke via SVG, CSS, HTML, or whatever, to the webpage itself.

So in order to have a good experience in that model of inking, you want to make sure that the wet ink is removed on the exact same frame that the dry ink is placed down, because if they exist at the same time, there may be some artifacting due to sub-pixel differences between the two.

And if there's a frame where neither exists, that's a really negative experience because then you have flickering in the ink and you absolutely want to avoid that.

So we thought we could improve that aspect as well.
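As a hedged illustration of the wet/dry model just described (the names here are invented for illustration and are not part of any API), the "dry" step might convert the collected wet points into an SVG path that the app commits to the page:

```javascript
// Illustrative sketch of the wet/dry inking model, with hypothetical names.

// During "wet" rendering, pointer points are collected and drawn each frame
// onto a transparent canvas layered over the page.
const wetPoints = [];

function onPointerMove(evt) {
  wetPoints.push({ x: evt.offsetX, y: evt.offsetY });
}

// "Drying" the ink: turn the collected points into an SVG path that the app
// inserts into the page itself. To avoid flicker or sub-pixel artifacts, the
// wet canvas must be cleared in the same frame this path is inserted.
function pointsToSvgPath(points) {
  if (points.length === 0) return "";
  const [first, ...rest] = points;
  return `M ${first.x} ${first.y}` +
         rest.map(p => ` L ${p.x} ${p.y}`).join("");
}
```

The key constraint is in the comment: the wet canvas has to be cleared in the very frame the dry path appears, or you get either double-drawn ink or a flicker.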

And then finally, you could just predict points as well.

And that's definitely a valid option.

However, it causes some issues, because the predicted points may not be accurate, and you'd definitely prefer to have them as accurate as possible so the user has a good experience. You can improve accuracy by predicting fewer points, but then you're not improving latency as much as you otherwise could.

So we thought we could improve upon prediction as well.
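For a concrete sense of what point prediction involves, here is a minimal sketch using linear extrapolation. This is just one simple strategy, not the predictor any browser actually uses:

```javascript
// Predict where the pointer will be `dt` milliseconds in the future, given
// the last two real points (each with x, y, and a timestamp t in ms).
// Predicting further ahead hides more latency but risks larger errors,
// which is exactly the accuracy/latency trade-off described above.
function predictPoint(prev, last, dt) {
  const elapsed = last.t - prev.t;
  if (elapsed <= 0) return { x: last.x, y: last.y }; // no velocity info
  const vx = (last.x - prev.x) / elapsed;  // velocity in px/ms
  const vy = (last.y - prev.y) / elapsed;
  return { x: last.x + vx * dt, y: last.y + vy * dt };
}
```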

So this is the current model of how inking happens through the browser right now: the OS receives input from the touchscreen and feeds it into the web browser, where, in Chromium, it goes into the browser process.

This would be a UI thread or something similar in other browsers. The browser process does a little bit of processing on the input and then gives it to the render process, which is essentially where the app does whatever it needs to do with it: draw the stroke and anything else.

And then the render process hands the stroke off to the GPU process, which actually draws it; the display hardware displays it, and then we see it.

This works well; however, oftentimes the OS is giving the browser process a lot more points than the browser process is able to give to the render process, just because there's a lot going on in the render process, so it's slower.

So we wanted to take advantage of the fact that the browser process knows about a lot more points than are actually being put onto the screen.

And so the new way this works with delegated ink trails: the points get forwarded from the browser process to the render process as before, but whenever the developer calls the API, the render process tells the browser process, hey, we want to use this API, so start forwarding points to the GPU process as you receive them, and we'll meet up there.

And whenever that API is called, it's given a trusted pointer event for the last point drawn as part of the stroke, plus a little description of the stroke.

So like color and width and things like that.

And that's packed up into a little metadata that's shipped off to the GPU process with everything else.

Meanwhile, the browser process is sending along any new points it receives. When everything arrives at the GPU process, it draws the stroke as it normally would have.

And then, after everything is done being drawn, it also draws a trail connecting that final point the render process provided to all of the points that the browser process had.

So the end result is a trail of real points connected onto the stroke that looks exactly like the stroke; it's 100% accurate to what the user did, and it improves the latency.
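To make that flow concrete, here is a toy simulation of which points end up in the trail. The names are illustrative, and the real logic lives inside the browser's GPU process, not in page script:

```javascript
// `allPoints` is every point the browser process has received from the OS;
// `lastRenderedIndex` is the index of the final point the render process
// managed to draw this frame (the point handed to the API as metadata).
// The GPU process then draws a trail from that point through every newer
// point, so the extra points become visible ink instead of pure latency.
function delegatedTrail(allPoints, lastRenderedIndex) {
  return allPoints.slice(lastRenderedIndex);
}
```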

And then on top of this, we can also do a little bit of prediction as well in order to improve it even more.

And we don't have to do a ton of prediction in order to see some improvements.

And so we can ensure that prediction is accurate.

In terms of further improvement, the OS Compositor introduces a frame of latency itself.

This is oftentimes just to compose the screen, basically, but an OS Compositor could theoretically support this feature itself and draw the trail on the app's behalf.

So the browser would just forward these points to the OS Compositor and the browser wouldn't have to do any drawing itself.

This has the benefit that the OS Compositor can wait much, much longer than the browser can before drawing the trail, allowing it to receive many more real points.

And so it can improve the latency much further than the browser would be able to itself.

And then you can also predict on top of that to improve it even more, so you can get a lot more improvement from doing this on the OS Compositor end.

To give you a better idea of what this might look like to a user, because it can be kind of foggy just hearing about it: imagine this is a stroke that a user is drawing, and the black point at the dotted line is the final point that the render process received from the browser process.

And then the browser process might receive the two points to the right of the dotted line.

So the render process would draw that stroke on the canvas today, but those last two points wouldn't get drawn; they would just be latency that the user experiences. However, especially with the OS version, those two points that the browser received could be shipped down to the OS Compositor.

And then it can draw that frame at the last minute before flipping onto the screen for the user to see and improve the latency dramatically.

Today in Chromium, a polyfill version of this feature is available.

It's drawn via Skia, which is a drawing stack within Chromium.

It's done as a proof of concept to show that this feature definitely does work.

We are seeing improved inking latency with this and it's also showing that it can be done using exclusively the drawing stack within the browser.

It does not have to be using an OS API to see improvements.

And it serves as a proof of correctness: the trail matches exactly how the stroke looks, and it's correctly removed each frame, so whenever it's on the screen you can't even notice that the trail is there, other than the improved latency.

So you're not getting any artifacting from a trail and a stroke appearing at the same time.

It looks great.

And on top of that, we've gotten a lot of positive feedback from internal customers, including PowerPoint teams and Whiteboard.

They're beginning to test with it, and hopefully will start doing some trials with it within the next month or so.

And they've been providing a lot of positive feedback about how easy it is to use and how generally painless it is to make the one single API call and get the improved latency.

But they are very excited for potential OS Compositor improvements because they're definitely significant.

As far as delegated ink trails via the OS goes, there is upcoming Windows 10 API support for this.

I can't give any timelines for when that's going to be available, but we are testing it internally.

And it's looking very promising.

We're seeing very significant improvements whenever using the OS support for this feature.

And so we're optimistic that if this feature starts gaining traction, other OSes will start providing similar support, and we'll be able to bring inking latency on the web down significantly.

This is the proposed WebIDL, to give you a better feel for what the feature might actually look like. As you can see, there's an ink interface on the navigator that you request the presenter from. It takes in a type, which is currently just going to be a delegated ink trail.

The point of taking a type like that is to provide future extensibility.

We're thinking it might be able to expand to something like shaders in order to provide more interesting trails. It also takes in a presentation area, which is just the element that the trail should appear on, defaulting to the viewport if one is not provided.

And then you have the ink trail style dictionary which is just the description of what the trail should look like.

It takes in the color and the diameter or width of the trail.

And then the actual API itself, updateInkTrailStartPoint, is on the presenter.

And it takes in a trusted pointer event and the description of the trail, the ink trail style.
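Putting that together, a hedged usage sketch might look like the following. The names follow the talk (navigator.ink, requestPresenter, updateInkTrailStartPoint, and an ink trail style with color and diameter), but the exact signatures could change as the proposal evolves:

```javascript
// Hedged usage sketch of the proposed API; not a definitive implementation.
async function enableInkTrail(canvas) {
  // Feature-detect so the page still works without the API.
  if (typeof navigator === "undefined" || !("ink" in navigator)) return;

  const presenter = await navigator.ink.requestPresenter(
      "delegated-ink-trail",  // type, kept open for future extensibility
      canvas);                // presentationArea (defaults to the viewport)

  canvas.addEventListener("pointermove", evt => {
    // ...the app draws its own wet ink here, exactly as before...
    // Then one extra call: hand the browser the last rendered point and the
    // trail's look, and the browser draws the low-latency trail from there.
    presenter.updateInkTrailStartPoint(evt, makeTrailStyle("blue", 4));
  });
}

// Helper building the ink trail style dictionary described in the talk.
function makeTrailStyle(color, diameter) {
  return { color, diameter };
}
```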

As far as the road ahead goes, we want to get a consensus on the API.

Internal partners have found it to be very useful.

We've really liked what we've been seeing so far.

So we think it's a great API and we're interested to hear what others think of it.

We want to make sure that the current shape is ideal for all common cases.

It's worked well for us and for internal partners, but that's not to say that there aren't other ways of doing inking that this one won't work for at all.

So we want to make sure that we cover our bases there.

And then, are there any potential issues with it? We tried to cover everything that we could think of.

But there certainly could be other things that we overlooked, and we want to make sure we tackle them before it gets too far along.

And then: is there support for taking it out of incubation?

You know, we really want to take this on the road to standardization.

We think it's going to be useful because you know it doesn't seem likely that remote work and remote learning are going to go away anytime soon.

They're probably only going to continue growing.

And so we really want to get ahead of it and try to make inking on the web as good as it can be and try to match or get very close to matching the native application experience when inking.

So we want to know, is there support for standardizing it?

And with that, that's pretty much all I've got, I believe.

So I'm happy to take any questions or feedback or have any discussions about the feature.


