W3C

Enabling new types of web user experiences

Scott Jenson, a well-known UX designer and creative director specializing in mobile phones and consumer electronics, was involved in the W3C Closing the Gap task force, and shares here some of his thoughts on what mobile needs from the Web to enable new user experiences.

Abstract

The W3C is working on multiple documents aimed at moving the core web application platform forward, the primary one being the Core Mobile Web Platform. There are also the Closing the Gap task force, the Web and Mobile Interest Group, and numerous technical interest groups on dozens of specific topics, from DOM performance to device hardware APIs and many more.

These efforts are excellent and clearly important. What’s missing from this discussion is a framing document, something that attempts to motivate at least part of a broader picture. Is performance the only issue? Why do we need these varied technical proposals? Are they adding up to a coherent whole? This short document is treading into opinionated waters, not to create a definitive list of projects but to attempt some guiding principles to help motivate these and future projects.

Introduction

In the court of public opinion, the war between native apps and web apps appears to be over. Even though the web world is valiantly and consistently improving the web platform, the world seems to have moved on, embracing and rewarding native apps.

But this is only an apparent victory. The world which native won is itself changing. When there was a single dominant device, the downsides of native were minimal. But as that single device grew to multiple platforms, each with multiple devices and screen sizes, the burden of writing a native app has steadily increased.

In addition, what an app is attempting to accomplish is also changing. We’re no longer just flinging birds or vignetting square photos. We’re entering a world where nearly every device, from movie posters to microwaves, can become ‘smart’ and wish to interact with you. Even further, the number of ‘screens’ in our life is growing as well, encouraging new interaction patterns across more than a single fixed screen. We are on the cusp of significant experimentation and new interaction models.

A single definition of a web app is tricky: any attempt to articulate where a web document ends and a web app starts is such a nuanced continuum that it is ultimately not a useful exercise. In fact, the very concept of a web app is changing so rapidly that it’s hard to understand how all of the various pieces under consideration in the W3C fit together as a cohesive whole. We are so focused on how to build web apps that we are ignoring how they will be experienced. However, what should be categorically and emphatically stated is that web apps are NOT, and should never be, just ‘web ports’ of iPhone apps; that aims far too low in aspiration.

This document is an attempt to break out of this iPhone myopia and discuss how interactive web pages (for lack of a better term) should be experienced. The W3C is exploring numerous technical capabilities; by calling out some of these new directions, this document hopes to coordinate existing activities and encourage new ones. The following list is certainly partial and will change over time, but hopefully it encourages further discussion:

Web UX Style 1: Native App replacement

The first interactive style is one which wants to emulate a native app. Even though I disparaged this just two paragraphs above, it is the dominant model today and one which the market currently understands and accepts. This style encapsulates a web page into a package that looks and behaves just like any normal native app. In the user’s eyes, packaged web apps should look and feel nearly identical to native. The web in this case is an implementation technology, and the technical plumbing shouldn’t show. As the user concept of a native app is well explored, a web equivalent has several clear requirements. If a web app is appropriately provisioned and vetted, it should be allowed to:

integrate in the list of apps directly available to the user
This means that it should show up on that OS’s home screen and be launched just like any other native app.
install itself from any web page
As apps can now be installed from the browser, it would make sense to allow web apps to do the same. This implies that the user experience of installing a web app should not be just saving a bookmark. Apps should be able to have an ‘install link’ on any web page that starts the safe process of downloading the app package to the device.
integrate into the OS task-switcher
If the user switches from a web app to a native app, such as the contacts manager, they shouldn’t have to switch to the web browser and then switch to the web app. That’s too complicated and confusing. A user should switch between running web apps just as they do with native apps.
have a full screen experience
Web apps should be able to start in full screen, just as native apps do.
open links to external pages in a separate browser
A web app will most likely be focused on a core user experience and will switch internally between its own pages. However, it should be able to offer links that, when selected, ‘escape’ to the browser.
enforce the “singleton” rule
If the user chooses the web app on the home screen a second time, it should just open up to the currently running instance, not launch a new one.
operate in the background
This is likely dependent on the native OS, but if at all possible, a web app should be allowed to run in the background. This becomes more important as apps want to process activities, such as location updates, in the background.
generate system level notifications
This is just integration with the OS, like many other potential services. It is especially important for apps that can run in the background and need to inform the user that something important has changed.

There is likely much more to add to this list but the goal is fairly simple: if we are content to mimic native apps, web apps should behave much like native apps on the host device. The user shouldn’t have to navigate to a ‘sub OS’ on their device to launch a separate class of applications, with their own rules.
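As a thought experiment, the ‘singleton’ rule above can be modelled in a few lines of JavaScript. Nothing here is a real browser or OS API; the LaunchBroker class and its launch method are hypothetical names used purely to illustrate the behaviour: a second launch of the same web app re-focuses the running instance rather than creating a new one.

```javascript
// Hypothetical sketch of the "singleton" rule: launching an already
// running web app focuses the existing instance instead of spawning
// a second one. LaunchBroker is an invented name, not a real API.
class LaunchBroker {
  constructor() {
    this.running = new Map(); // app URL -> running instance
  }
  launch(appUrl) {
    const existing = this.running.get(appUrl);
    if (existing) {
      existing.focused = true; // re-focus, don't relaunch
      return existing;
    }
    const instance = { url: appUrl, focused: true };
    this.running.set(appUrl, instance);
    return instance;
  }
}

const broker = new LaunchBroker();
const first = broker.launch("https://app.example/mail");
const second = broker.launch("https://app.example/mail");
console.log(first === second); // true: same instance, not a new launch
console.log(broker.running.size); // 1
```

The same broker shape could also host the full-screen and task-switcher integration points, but those genuinely depend on the host OS.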

Web UX Style 2: On demand interaction

Native apps are a snapshot in time. They exist as frozen functionality that is searched for, downloaded, installed, organized, updated and eventually deleted, all by user initiated actions. As the number of smart devices grows, from smart movie posters, to smart museum displays, to smart doorknobs, to even smart toasters, all sorts of devices will be sprouting the potential to interact with the user. The sheer user effort involved in managing native apps will become overwhelming, even if a single OS dominates. If you believe in Moore’s law in any way, it’s clear that the number of devices that require interactivity will grow astronomically. Having a ‘user cached native app’ for each smart device, over time, is a mathematical absurdity. No one is seriously arguing that EVERY web page in the world should be a native app so why would we expect this of smart devices?

This is where the web’s ability to offer interactivity with the click of a link turns into an amazing super power, one that saves us from this complexity. It offers interactivity for nearly zero effort on the user’s part. This is something native apps can’t hope to achieve and is a clear differentiator in the struggle between native and web apps.

Today the web link and the URL bar are the only ways to summon a web page. However, we’re starting to see how mobile handsets and their ever-expanding set of hardware sensors are expanding this capability. QR codes, as clunky as they are, were the first to offer this ‘outside of the browser’ approach to launching web pages. NFC is very similar, trading proximity for a much quicker experience. This will likely continue through new technologies such as Bluetooth 4 and Wifi Direct.

This ability to ‘summon’ a web page without typing anything at all needs to be explored in future W3C standards and not be left solely to handset makers. A system where smart devices could broadcast a URL so nearby mobile devices could offer it would unlock an interactive ecosystem, effectively carrying the web server model from cyberspace into the real world. Any device could contain a link to a web page offering the ability to control that device. This linking of the real world to the web could be a very transformative and unique capability of web apps. In order for this to take place, the W3C should encourage a wireless discovery service.

There should be a multi-protocol standard for smart devices to broadcast a URL. Wifi, Wifi Direct, and Bluetooth Low Energy are all likely initial candidates. QR codes roughly accomplish this but require a custom app and significant user activity to get the target in focus. NFC is easier and works well but requires the user to physically tap the device. By listening for a wide and expanding list of wireless protocols, the browser can become an agent looking for potential URLs on the user’s behalf.
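To make the listening-agent idea concrete, here is a small sketch in plain JavaScript. No real radio APIs are involved; each ‘sighting’ is just a (protocol, url) pair, and collectBroadcasts is a hypothetical helper showing how an agent might deduplicate a device heard over several protocols:

```javascript
// Hypothetical sketch: the browser as a background agent collecting
// URLs broadcast over several wireless protocols. A device heard over
// both BLE and Wifi Direct appears once, tagged with every protocol
// it was heard on.
function collectBroadcasts(sightings) {
  const found = new Map(); // url -> { url, protocols: Set }
  for (const { protocol, url } of sightings) {
    if (!found.has(url)) found.set(url, { url, protocols: new Set() });
    found.get(url).protocols.add(protocol);
  }
  return [...found.values()];
}

const nearby = collectBroadcasts([
  { protocol: "ble", url: "https://poster.example/tickets" },
  { protocol: "wifi-direct", url: "https://poster.example/tickets" },
  { protocol: "ble", url: "https://toaster.example/control" },
]);
console.log(nearby.length); // 2 distinct devices
```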

The results should be optionally passed through a web service. There are many reasons the immediate results of this discovery service would need to be augmented. For example, it might be useful to add additional results from nearby geolocated objects. In addition, as the number of devices found grows, the value of ranking the results increases significantly. The objects found in the discovery service above should be stored in a common format (e.g. JSON) so any web service could then read and alter that list in an interoperable way.
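As an illustration of the common-format idea, the sketch below stores discovery results as plain JSON-style objects and re-ranks them with a simulated service. The field names (url, title, rssi) and the scoring formula are assumptions for illustration, not a proposed standard; the point is that any service reading this format could filter, augment, or rank the list interoperably:

```javascript
// Hypothetical discovery results in a common JSON-style format.
// "rssi" (signal strength) stands in for physical proximity.
const discovered = [
  { url: "https://toaster.example/control", title: "Kitchen toaster", rssi: -70 },
  { url: "https://poster.example/tickets", title: "Movie poster", rssi: -40 },
];

// A ranking service might mix signal strength with user history.
// The weighting here is arbitrary; competing services could differ.
function rank(results, visitCounts) {
  return [...results].sort((a, b) => {
    const scoreA = (visitCounts[a.url] || 0) * 10 + a.rssi;
    const scoreB = (visitCounts[b.url] || 0) * 10 + b.rssi;
    return scoreB - scoreA; // higher score first
  });
}

const ranked = rank(discovered, { "https://toaster.example/control": 5 });
console.log(ranked[0].title); // "Kitchen toaster": history outweighs proximity
```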

Web UX Style 3: Multi Screen interaction

Lots of experimentation is going on connecting phones and tablets to interactive television. However, just as WebRTC is enabling so much more than just face to face video, the ability for one screen to interact with another has many deep possibilities beyond just streaming movies. This is a rich new type of interaction that is just starting to uncover its potential.

While the W3C can’t realistically create standards when we are in such an early exploratory phase, it can specify a very simple and basic rendezvous mechanism for web devices to find each other. This doesn’t preclude any experimentation but does significantly encourage the category to be explored further. Possible steps would include:

Extending the discovery service to web apps
While the discovery service described previously is a user-level service meant to find everything, it should be possible for a web application to invoke it as well. This would allow the list to be filtered so only targeted/cooperating devices would be matched.
Exploring a topology of devices and services to encourage common categories
This likely will be a moving target, but starting off with some basic media transport styles, such as printer or video display, would encourage devices to share their functionality across multiple applications and companies. This clearly needs much deeper thinking, but the point here is to broach the topic and discuss what topology would be useful.
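To make the filtering idea concrete, here is a minimal sketch. The category names (‘video-display’, ‘printer’, ‘appliance’) echo the media transport styles mentioned above but are purely illustrative, as is the discoverFiltered helper:

```javascript
// Hypothetical sketch: a web app asks the discovery service only for
// devices in categories it cooperates with, rather than everything nearby.
function discoverFiltered(allDevices, wantedCategories) {
  const wanted = new Set(wantedCategories);
  return allDevices.filter((d) => wanted.has(d.category));
}

const nearby = [
  { url: "https://tv.example/cast", category: "video-display" },
  { url: "https://printer.example/jobs", category: "printer" },
  { url: "https://toaster.example/control", category: "appliance" },
];

// A video app only wants screens it can send media to.
console.log(discoverFiltered(nearby, ["video-display"]).length); // 1
```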

Conclusion

The goal of this document is to rise above the current alphabet soup of technical standards and offer some conjecture, and possibly even motivation, around how these standards can work together. The web can be so much more than what native apps can do. It can offer interactivity like water, pouring out of any device with nothing but a click. This is the super power of the web, and it isn’t appropriately appreciated as the key differentiator from native apps.

In a world where companies roll out beautiful fait accomplis, we start to believe that all web products need to be fully formed and mature when we release them. We forget that the delta between Netscape and Gmail was 10 years! It takes time for meaningful systems to evolve. We don’t just need new focused standards but also new experimental standards that encourage exploration.

The web has been on its back foot for too long, aspiring to catch up to the legacy of the iPhone native app model. While there is clearly a significant amount of work just to make basic web apps more viable, there are also several things the mobile web can do to encourage new types of interaction, such as offering a model that allows the web to pervade the physical world, allowing anything to have a ‘web page’. In addition, the ability for web-based devices to find and interact with each other, at both the user and programmatic levels, needs to be encouraged. While still experimental, the W3C can at least open the gates and encourage new interaction styles in an open and collaborative manner.

9 thoughts on “Enabling new types of web user experiences”

  1. Excellent post – there’s just so many ideas to chew on in this, but I’d like to focus on one for now:

    There should be a multi-protocol standard for smart devices to broadcast a URL. Wifi, Wifi Direct, and Bluetooth Low Energy are all likely initial candidates. QR codes roughly accomplish this but require a custom app and significant user activity to get the target in focus. NFC is easier and works well but requires the user to physically tap the device. By listening for a wide and expanding list of wireless protocols, the browser can become an agent looking for potential URLs on the user’s behalf.

    This seems perfectly reasonable on the surface, but does bring to mind the potential for overloading users with broadcasts for nearby apps and services. From a historical perspective the various implementations of ‘Bluecasting’ could serve as prior art of what not to do here in terms of user experience. Essentially (a few years back) it was quite common for folks to be bombarded with messages from various companies in public places (also in private spaces as BT range improved and spammers got involved) via Bluetooth. The net result of this activity was users disabling their Bluetooth settings – often both in public and private spaces – removing any possibility of discovery.

    As proximity interactions are primarily concerned with discovery, user privacy, and intent, any system that overlooks any of these factors is destined to fail. I suspect any solution here will largely be an act of delicately balancing meaningful app discovery with user privacy and intent.

    Not an insignificant challenge, but a very important one for the future of the web.

  2. Great post Scott!

    I’ve been thinking quite a bit lately about discovery scenarios that require proximity, or even implicit touch/tap (a la NFC), and those that would work better in a pure broadcast scenario (a la Bluetooth/wifi acquisition). That’s the bit that I’m finding trickiest from a pure use-case point of view…especially once you factor in scale (i.e. a whole world of stuff broadcasting rather than just a couple of things).

    Forced proximity often feels like the better human fit. More accuracy, clearer intent, less “noise” and risk of picking the wrong “thing”. Also has useful privacy/security benefits e.g. I couldn’t just (accidentally or otherwise) connect to a thing that’s broadcasting from my neighbour’s apartment…or “inspect” the contents of their house by looking at their manifest of broadcasting objects :-p

    Broadcasting is of course a useful way of pushing data out, so brands and advertisers love the idea. There are also cases where proximity just isn’t possible (accessing the “app” to adjust a ceiling air conditioner?) or would simply be way more annoying than being able to do it from a distance.

    Ideally, I think whatever technology we adopt going forward will have to lend itself to a combination of “broadcast so anyone can discover” and “implicitly interrogate” (something you’ve already physically discovered…and don’t want to have to discover yet again using some discovery interface…a la NFC).

    Oh…and of course it has to be an open standard so we don’t end up with all sorts of data ghettos!

  3. Wireless discovery creates significant security concerns. If I am interacting with a smart movie poster, trying to buy a ticket for example, am I interacting with the poster or with an attacker’s device which is emulating the poster to me and passing the communications on to the poster? QR codes are not perfect, but they are very difficult to attack via MITM. The tap requirement of NFC would significantly mitigate this.

  4. Bryan
    I absolutely agree that eventually we will find 20-50 devices nearby. First, let’s be clear: these devices aren’t beeping your phone, they are just silently broadcasting their URL. The user must first make some indication that they want to make a choice (such as pulling down the notification bar). To me it’s a given that these devices have ZERO claim on my attention.

    Second, we are VERY comfortable with web search engines ranking their results. I don’t see ranking nearby devices as any different. The big nuance is that the raw results are an open data exchange, so several companies can compete and offer different services. I would expect that history, personal preference, and similar devices would all go into the mix.

    I truly feel that we’ve already solved this problem with the virtual world of websites today, I’m just applying those same principles to searching the physical space nearby.

    Stephanie
    Forced proximity certainly has a role, as it shortcuts the gather/rank/list/choose tasks altogether. There is nothing in this approach that rules out NFC. I do get quite a few folks who claim NFC (and sometimes QR codes) are enough. You aren’t saying that; I’m just letting you know why I don’t want to rely exclusively on those technologies.

    Liam
    I get ‘the security question’ every time I discuss this. I don’t disagree but I answer this three ways:
    1) There are tons of examples where security isn’t an issue, mostly around public access. Let’s not throw this standard out for cases that don’t require security.
    2) The web is a completely insecure base technology that we’ve layered security on top of. There is no reason we can’t apply everything we’ve learned on the web to physical devices. So you might have to ‘log in’ to your home lighting system the first time, for instance. We’ve already solved this problem for the web (not perfectly, I’d add), but if the web is “good enough”, why isn’t it good enough for physical devices?
    3) We do need a much better ID API so devices can challenge me and I can reply in a way that guarantees who I am. We need this for the web JUST as much as we need it for smart devices. My answer to #2 is technically correct but not a satisfying one, and no one really feels the web security model is great. So in some ways I agree with you that any robust ID API that comes along would greatly improve this issue.

    Scott

  5. A system where smart devices could broadcast a URL so nearby mobile devices could offer it would unlock an interactive ecosystem, effectively carrying the web server model from cyberspace into the real world. (…) [T]he W3C should encourage a wireless discovery service.

    Are you aware of standard ISO/IEC 24752 for the “universal remote console” (URC)? This standard relies on device discovery without preferring a specific network or protocol (LAN, Wifi, BlueTooth, …). Introductory information is available on the site of the openURC consortium and on myURC.org. The second link will also lead you to information about the Universal Control Hub or UCH, which allows communication with target devices that do not conform to ISO/IEC 24752. I’m not sure how close this is to what you have in mind, but it is worth having a look to avoid duplicating this kind of effort.
    (The current version of the standard – in five parts – dates from 2008, but a new version is almost ready.)

  6. It would be helpful if you listed examples of the interactions. For the multi-device section I am struggling to envision the real-world use.

  7. Good evening:
    I read this post carefully, and I have to say that both native apps and mobile web apps have their chance in this “changing world”. Apps are more powerful, but memory-, desktop- and disk-consuming. On the other hand, mobile web apps are more suitable for local businesses, because users on the go can easily find an offer, a product, a local business. So let’s see how it will evolve.

    KR
