BEA Systems Position Paper for

W3C Workshop on Web of Services for Enterprise Computing

David Orchard, BEA Systems


Where we are

BEA believes that the Web of Services workshop is roughly asking “what’s next” for services at the W3C. The original 2001 Web Services workshop (http://www.w3.org/2001/03/wsws-program) had a number of companies asking for a layered architecture with a variety of messaging, description and discovery specifications.

Web services

Much of that infrastructure is in place, both within and outside the W3C. Messaging specifications are final or fairly close to final: SOAP 1.2, XOP/MTOM, WS-Addressing, WS-ReliableMessaging, WS-Security (and other WS-Security* specifications), and WS-Transactions. Description formats are similarly at advanced stages: WS-Policy and WSDL 2.0. Discovery efforts in UDDI are finished, and WS-BPEL is well advanced. There is a base level of interoperability between the specifications defined in the WS-I Profiles, and more profiles are emerging for the later specifications.

There are other messaging, description and discovery efforts that are not in the standards process: WS-MetadataExchange, WS-Eventing, WS-Transfer, and WS-Management. There are some areas that have made little public progress: intermediary support, client-side routing, and message flow control.

A certain faction of developers doing “services” is promoting plain XML over HTTP as a transfer protocol, an approach often called REST. They do not support using the various WS-* specifications for message transfer. One large problem is the complexity of the WS-* stack, both perceived and real. A popular blog entry on the topic is “The S stands for Simple” (http://wanderingbarque.com/nonintersecting/2006/11/15/the-s-stands-for-simple/). Many of us went through the transitions described there and were part of the decision making. Given the complexity of just SOAP and WSDL, how many developers will really be able to move to the full stack?

To a certain extent, we are victims of what we asked for. We all said that we should do things from the ground up, in well-factored and separate specifications, yet also do them very quickly. The belief was that this would be better for users and developers. The result was many different specifications, and we must now ask whether those specifications really are separate and composable. Much of the answer to the complexity challenge has been the promise that runtimes, tools, or frameworks will solve the developer’s woes: developers won’t need to see the WS-Addressing, WS-Security, WS-ReliableMessaging, etc. headers, or the WS-Policy assertions. To date, this dream has largely not materialized, and developers must still examine messages by hand to do error checking when using different WS-* stacks. Most WS-* stacks work well when the same stack is on all sides of the communication, but the dream of interoperability is multi-vendor, multi-tool, and multi-platform.

If deployments really do become tools talking to tools, then we can look at which toolsets or stacks support which specifications. It appears as if all the tools intend to build on some common sets; for example, one core set appears to be WS-A + WS-RX + WS-Security + WS-Policy. There has been little to show or prove that these all work together outside of simple deployments. WS-I does not even have a profile promised to test that WS-Policy can describe these messaging aspects, so it will be years before there is a WS-I level of interoperability for both the protocols and their descriptions. Further, because the W3C Working Groups in the Web services space allow for very little change in scope, there is little chance to fully correct a “building block” specification that is used by follow-on specifications. For example, the use of anonymous addresses in WS-Addressing and WS-ReliableMessaging has been a source of much back and forth between the groups.

This raises the question of whether we should have had a smaller number of larger specifications. For example, the W3C or OASIS could have produced a WS-Messaging 1.0 specification that embodied this core. OASIS might argue that it already did that in ebXML, and there may be some validity to that point.

In examining any future work, the question of whether the factored approach has delivered on the promises of interoperability, simplicity and composability must be answered. The answer is still to be determined, and it certainly is not an unqualified “yes” at this point, five years after the first Web services workshop.

Web 2.0

At the same time as the development of “Web Services 1.0”, Web 2.0 technologies, such as “mashups” that perform Web integration, have been gaining in popularity. There are clearly two architectures in play: the WS-* architecture, which promotes many operations (typically on fewer resources), and the REST architecture, which promotes few operations (a generic interface) on more resources.

WSDL 2.0 attempted to describe both the SOAP and REST worlds, but uptake has been slow. We believe that part of the reason is that neither SOAP nor REST users, nor vendors, find WSDL 2.0 compelling. Perhaps this is because there is a true separation between the two at the description level. In particular, the WSDL design choice of grouping custom operations into a collection called an interface, with a separate binding step and then a deployment step, serves neither side particularly well. Perhaps a better solution would be a SOAP description language that assumed SOAP and a REST description language that assumed HTTP operations.

Given WSDL 2.0’s lack of uptake and the length of its Candidate Recommendation phase, perhaps that core architecture decision should be revisited. A revised Web description language Working Group (or two Working Groups) could refactor WSDL 2.0 into a SOAP description language, narrowing the binding step to protocol binding only, and provide an HTTP description language, perhaps incorporating some of the URI Templates and WADL work that has proceeded separately.

We do believe that the W3C should be the home of Web description languages, should any further efforts happen.

Use Cases

These observations are drawn from some use cases that are described below.

Thin client Banking Use Case

A large international bank offers a trading service. The trading service uses an “enhanced” quote service, because the bank adds to or enriches the quote with a variety of data. The data are wide ranging, including the client’s trading history, risk assessments of the security, and the current activities of market movers and makers. The exact enrichment depends on a large number of factors. Trader-specific factors affect the enrichment, such as the client environment (thin versus thick client) and the amount of enrichment purchased. There are also global factors, such as system load: lower-priority data will not be added when load is high.

The trading service is currently offered using SOAP services described in WSDL. Clients are written in .Net and in Java. The enrichment data is added as SOAP headers by each of the enrichment nodes in the message path. The application is designed as a request-response message exchange.
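
As an illustration of the design just described, the sketch below shows how an intermediary node might attach enrichment data as a SOAP header block using the standard SAAJ API. The namespace, element name, and risk value are hypothetical, invented for this sketch rather than taken from the bank’s actual service.

    import javax.xml.namespace.QName;
    import javax.xml.soap.MessageFactory;
    import javax.xml.soap.SOAPHeaderElement;
    import javax.xml.soap.SOAPMessage;

    public class EnrichmentNode {

        // Hypothetical enrichment step: an intermediary in the message path
        // attaches its data as a SOAP header block alongside the request body.
        public static void addRiskHeader(SOAPMessage message, String riskScore)
                throws Exception {
            QName riskHeader =
                    new QName("http://example.com/enrichment", "RiskAssessment", "enr");
            SOAPHeaderElement header = message.getSOAPHeader().addHeaderElement(riskHeader);
            header.addTextNode(riskScore);
            message.saveChanges();
        }

        public static void main(String[] args) throws Exception {
            // Build an empty message purely to demonstrate header enrichment.
            SOAPMessage request = MessageFactory.newInstance().createMessage();
            addRiskHeader(request, "moderate");
            request.writeTo(System.out);
        }
    }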

They are interested in offering the services as REST-based services to attain a wider reach. In particular, they believe that a variety of clients, such as Ruby, Python, and various open-source platforms, are not able to participate, closing off a potentially lucrative segment.

They have not deployed this service as a REST-based service for two main reasons. First, the SOAP+WSDL stacks do not lend themselves to easily publishing the service, and a description of it, as a REST service. Second, they are unsure of the “real” customer demand for this: there is interest, but it has been difficult for them to estimate how much demand there is.

Client-side REST validation Use Case

Almost all of the major “dot com” sites, such as Amazon, Yahoo, Google, AOL, and eBay, offer XML-over-HTTP access to their site, often implemented in a REST style. The typical deployment is a search service that takes in a modest number of parameters. For example, a music search might take in artist, album, song, release year, and rating. The description of the site is human-readable documentation that describes the requests and responses. The parameters are specified as URI parameters for an HTTP GET request. The responses are XML documents, often described in a schema language such as XML Schema or RelaxNG.
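
To make the interaction style concrete, the sketch below shows roughly what calling such a search service looks like from Java. The endpoint, parameter names, and values are hypothetical; they stand in for whatever a particular site’s human-readable documentation specifies.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class MusicSearchClient {
        public static void main(String[] args) throws Exception {
            // The developer assembles the query string by hand; nothing checks
            // that "artist" or "releaseYear" are valid parameter names, or that
            // the values are well formed, until the request is actually sent.
            String query = "artist=" + URLEncoder.encode("Miles Davis", "UTF-8")
                         + "&album=" + URLEncoder.encode("Kind of Blue", "UTF-8")
                         + "&releaseYear=1959";
            URL url = new URL("http://api.example.com/music/search?" + query);

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            // The response is an XML document described only in human-readable
            // documentation, or in a schema the client must locate separately.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }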

This model works well when there is a large “site” with a medium number, probably in the dozens or low hundreds, of parameterized combinations. By parameterized combinations, we mean the combinations of parameter names, excluding the values. However, even at these numbers, the alleged simplicity is often not there. Interacting with the service often means hours of testing and debugging the URIs with the various combinations of parameters. While it is easy to see the success or failure of a URI, the manual task of constructing the URIs can be quite lengthy.

Contrast this with the Web services stacks, which usually provide an implementation of WSDL. A key feature of WSDL is the ability to generate client-side stubs that can validate the input parameters. This client-side validation, sometimes called “strong typing” in this context, is often considered an extremely useful productivity tool. The HTTP binding of WSDL 1.1 is effectively not deployed, so it cannot be used for this.
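
For contrast, a generated stub of the kind described exposes the same search as a typed method, so that a misspelled parameter name or a wrongly typed value fails at compile time rather than after hours of URI debugging. The interface below is a hypothetical sketch of what such generated code tends to look like, not the output of any particular WSDL toolchain.

    // Hypothetical client-side stub for the same music search, of the shape a
    // WSDL-driven code generator would typically emit; all names are illustrative.
    public interface MusicSearchService {

        // Parameter names and types are fixed by the description, so the compiler,
        // rather than manual URI testing, catches mistakes such as a misspelled
        // parameter or a non-numeric release year.
        SearchResult search(String artist, String album, String song,
                            int releaseYear, Rating rating);
    }

    // Typed response and enumerated rating, generated from the schema that
    // describes the XML response document.
    class SearchResult { /* generated fields elided */ }

    enum Rating { ONE, TWO, THREE, FOUR, FIVE }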

One use case is to increase the productivity of deploying such REST services by introducing a machine-processable REST description language. WSDL 2.0 provides this functionality, but it appears to be too complicated for the “simplicity” that REST development craves. A technology like WADL seems to provide the right amount of description-language capability. There are also generated client-side APIs for some sites, but these have obvious limitations, such as the platforms supported and keeping the API and the description coupled and correctly versioned.

This productivity increase would also help lower the bar to entry for the non-large dot coms that wish to deploy many more services. For example, many enterprise systems have many more services, often with far fewer requests to each service.

Widgets, Portals and Discovery Use Case(s)

The current portal standards, notably Web Services for Remote Portlets (WSRP), provide mechanisms for producing and consuming presentation-oriented Web services. WSRP defines WSDL interfaces and SOAP messages to facilitate this. The specification has been final since April 2003; it has been widely implemented and has significant interoperability experience. The common usage appears to be medium- to large-scale deployments of portlets. Perhaps a factor limiting the scale of deployments has been the lack of open-source support.

The next generation of composable, remote user interfaces seems to be “widgets”, which are smaller units of service/component functionality than typical portlets. A widget is typically a small Web-based unit of functionality, such as a blog plug-in, a pop-up window, a small Ajax application, or a desktop application that uses an online service. These are meant to be easily integrated into web sites, blogs, desktops, etc. The design goal is a very low bar to entry for the consumer of a widget.

As always with the deployment of new services and components, directories emerge. Konfabulator, widgetbox.com, and others provide directories of widgets. These sites appear to be very popular, with large numbers of widgets on offer. A REST description language could describe each individual widget and could assist in discovering the widgets that are available from a server.

A very common scenario is the use of multiple widgets within a web page. Issues of communication, state management, security, etc. with respect to integration “on the screen” are prevalent. The ability to describe a widget’s interface ought to make a developer more productive. Further, using a declarative definition of a service rather than requiring JavaScript code follows the principle of least power (http://www.w3.org/2001/tag/doc/leastPower).

A common widget discovery use case is that a developer will search for a widget. Currently, they do this through a general search engine or a widget search engine. There are three missing discovery use cases.

One discovery use case is to ask a URI, i.e. a widget, for a description of itself. This is similar to the Web services de-facto standard of appending ?WSDL to a URI to request the WSDL, and similar to using a WS-MetadataExchange GetMetadata request. By analogy, appending ?WADL to a URI could return the WADL description of a service.
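
A minimal sketch of that by-analogy pattern follows, assuming a widget URI that answers a ?WADL query with its own description. The URI is invented, and the ?WADL convention itself is a proposal here, not an existing standard.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class DescriptionDiscovery {
        public static void main(String[] args) throws Exception {
            // Hypothetical widget URI; appending "?WADL" asks the resource to
            // describe itself, by analogy with the de-facto "?WSDL" convention.
            URL descriptionUrl = new URL("http://widgets.example.com/stockTicker?WADL");

            BufferedReader in = new BufferedReader(
                    new InputStreamReader(descriptionUrl.openStream(), "UTF-8"));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);   // the WADL document, if the site offers one
            }
            in.close();
        }
    }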

A second discovery use case is to ask a site what widgets or REST services it offers. A blog site could easily offer hundreds or thousands of widgets for use in blogs. Obviously these are discoverable by a search engine and probably available in human-readable form. In many cases, a potential consumer wishes to ask the site directly for its offerings and have those described in machine-readable form.

A third discovery use case is integrating REST service descriptions with search engines. Sitemaps.org describes a protocol for informing search engines about pages that are available for crawling. Standardizing sitemaps and integrating them with a REST description language would give developers higher-productivity discovery of components.

Recommendations

The overall recommendation of this position paper is that the W3C should do whatever it can to be a center for the development of Web-centric technologies, from descriptions to protocols to formats.

One particular recommendation is that there is a need for more appropriate machine-readable description capabilities for Web-based services. The promise of WSDL 2.0 has not materialized and is unlikely to do so. Part of this is more support and higher productivity for Ajax and non-Ajax clients interacting with many different components, services and widgets. This would foster increased productivity in Web-based services, and potentially provide for integration with the description-centric Web services community. An observation is that there does not appear to be technology available for easily integrating Web services with the Web, either by offering SOAP services to Web clients or, conversely, by SOAP/WSDL clients consuming REST services. It is not clear that the W3C can solve this problem, but it does appear that some of the “glue” technology is fairly easy to develop.

Another recommendation is that some form(s) of discovery technology for a Web description language will likely be useful.

The final recommendation is that the process of “fast-tracking” small WS-* specifications should be re-examined for any new work, because it has not conclusively achieved the promise of usable, interoperable, well-factored, and composable specifications.