W3C

Web of Services for Enterprise Computing Workshop Report

This version:
http://www.w3.org/2007/04/wsec_report
Authors
Eric Newcomer, IONA
Ken Laskey, The MITRE Corporation
Philippe Le Hégaret, W3C

Status of this document

This is a stable document and may be used as reference material or cited from another document.

1. Summary

The Web of Services for Enterprise Computing (WSEC) Workshop was held 27–28 February 2007 in Bedford, MA, hosted by The MITRE Corporation. The Workshop was chartered to explore the suitability of Web services and Web standards for meeting enterprise software requirements, and what, if anything, needs to be done to improve enterprise support. For areas identified as needing improvement, the Workshop was also to consider on which items the W3C might take action.


The workshop participants recommended, among other things, that the W3C ensure that the requirements of those using the services are met by consolidating efforts on specification maintenance, improving the interoperability of the Web stack (e.g. test suites), addressing some of the limitations (e.g. authentication), and encouraging better tooling (e.g. a REST description language).

The effort of W3C through its Web services activity is generally seen as positive, in particular in terms of ensuring the applicability of the World Wide Web architecture in the Web services specifications. However, some would like the W3C to put more resources into this activity, including work on its architecture.

The Technical Architecture Group (TAG) reemphasized the importance of using uniform resource identifiers (URIs), including when exposing Web services resources. This is not intended to advocate changing addressing mechanisms in existing enterprise systems where such mechanisms provide current functionality, but rather to reinforce the notion of creating maximum flexibility as appropriate to realize the original concepts behind Web services.

The problems discussed at the Workshop have been exacerbated by the polarization of separate communities looking at the strengths or limitations of competing approaches rather than which approach or combination of approaches can best fit the problem at hand. Interoperability has suffered even as specifications intended to promote interoperability have multiplied.

The workshop reached consensus on the need to make the specifications we currently have work together. To a large extent, this requires maintenance and incorporation of errata, but it also requires a knowledgeable group to provide interpretation in line with the original intent and guidance as issues beyond the original scope arise. To realize this, the workshop recommends establishing a Web Services Core (WS-Core) Working Group (WG). Such a Working Group would draw on appropriate expertise to facilitate clarity and understanding without themselves generating new specifications.

The workshop also felt that the user communities need to lead in identifying the scenarios and use cases for which they need solutions, to collect implementation experience and best practices, including architectural implications, and to help assess whether proposed solutions meet requirements. Such work of users would complement those writing new specs, helping focus on known problems with any of the service approaches. User groups could represent verticals or coordinate across verticals. While the idea of one or more such user groups was generally appealing, the workshop did not attempt to flesh out details of how such groups would be organized and run.

Finally, there was also consensus that the answer is not one anointed set of specifications codifying one approach; rather, reliable bridges are needed that leverage the whole arsenal of architectural concepts. This would enable seamless exploitation of any technique that provides an effective solution.

The workshop endorsed overriding principles such as the importance of using URIs to expose WS resources, such as WSDLs, and of using identifiers that provide transparent addressing without proprietary knowledge embedded in addressing constructs. The workshop, in particular, emphatically discouraged use of reference parameters in ways that go against accepted architectural principles that easily span implementation approaches.

2. Historical Background

The primary motivation behind the workshop was improving the standardization of enterprise software. Web services are the most recent specifications proposed as a solution for enterprise IT standardization requirements. While the Web of documents has succeeded beyond anyone’s wildest imagination, in large part due to the widespread adoption of the standards on which it is based (HTML, HTTP, URI), the Web of services has not achieved equal success, particularly in the enterprise, where it potentially would have its greatest benefits. The widespread use in government and industry of proprietary enterprise software has a huge direct and indirect economic impact because of the reliance of these sectors on operational computer systems.

The workshop validated the goals and benefits of enterprise software standardization, including the ability to abstract differences among programming languages, data types, operating systems, and software systems using a common interface and tying these together using a common interoperability protocol.

The current proposals for enterprise software standardization are XML (for data types) and Web services (for interoperability). When initially proposed during 2000–2001, XML offered a new level of abstraction difficult if not impossible to achieve using binary languages, data types, and protocols, and Web services had wide industry support. Together, these appeared to be the most promising candidates for enterprise software standardization. 

The possibility was mentioned during the workshop that current technologies are not abstract enough for the standardization goals expressed, i.e. a service description in WSDL is still not abstract enough for the business folks to understand or to effectively capture description aspects important to business use.

The main idea behind Web services is to provide a single scalable stack for developing and deploying services within an intranet as well as across an extranet, but this goal has not yet been reached. Web services, as addressed by the W3C Web services activity, have focused mainly on SOAP messages over the HTTP protocol. More needs to be done to help Web services reach their initial goals and to build better bridges between the Web and Web services.

Some progress was made in SOAP 1.2 with the SOAP Response MEP and in the WSDL 2.0 Working Group with its HTTP 1.1 binding, but these specification features are still not widely deployed in products. Other protocols, such as JMS or FTP, have been ignored. Even within the context of SOAP messages, interoperability has not really been achieved yet, and the lack of participation in the XML Schema Patterns for Databinding Working Group is disappointing, because databinding support is a common requirement.

Services are being deployed on the Web without the help of SOAP Web services, but they are limited with respect to meeting comprehensive enterprise IT requirements, especially in the area of security. Services are also being deployed on the Web using SOAP and WSDL, but it is clear that this will not be the only way (or perhaps not even the dominant way) to achieve program to program communication. In particular, the emphasis on implementing the RPC oriented style of Web services represents a mismatch with the practices of the more document-oriented Web, and adding back into HTTP the features and functions (e.g. persistent sessions) intentionally omitted during original design would likely result in similar problems.

2.1 Discussion points

The discussion at the workshop tended to revolve around two main streams of thought, which are not as well coordinated as they could or should be. One is that existing Web technologies can be adapted for enterprise use. In this stream of discussion it was proposed that additional standardization is not required, but this view did not garner widespread support.

While post-Web businesses such as Amazon, eBay, Yahoo, Google, and others have successfully adapted Web technologies for enterprise usage patterns, they appear to have done so using a lot of custom code and minimal off the shelf software or standards-based approaches to integration.

Pre-Web enterprise applications that provide existing benefit may show increased benefit if these, or parts of them, are exposed to the Web, but a similar reliance on custom coding to rewrite them is too expensive. Additional standardization is required to make Web access viable. Experiences were cited from several pre-Web businesses, including The Hartford, Citigroup, MITRE, and Xerox to indicate partial success has been achieved using Web services, leading us to the question of what comprises the additional standardization that is needed.

The debate between using Web technologies and Web services technologies is often characterized as the “REST vs SOAP” or the “Web vs WS-*” debate, and the two are typically placed in opposition as if to indicate that one or the other is likely to emerge as a single, unified solution. Proponents of each approach attended the workshop, and among the achievements of the workshop was a significant and helpful dialog between representatives of the factions, each contingent of which acknowledged (at least to some extent) the validity of the others’ viewpoint.

Underlying the debate is the question of whether Web based standardization activities need to conform to a single, unified architecture (i.e. the Web architecture) or whether it is possible (or even preferable) to entertain separate architectures for the “external” Web and the “internal” Web.

There was some discussion about general purpose vs specialized code in the context of the debate over fixed interfaces (Web) vs custom interfaces (Web services): with general purpose code you pass the data and the program knows what to do with it, whereas specialized code has to be written specifically to perform a certain function and therefore benefits from a custom interface.

The challenges in dealing with existing IT systems are not going away any time soon, and one answer to the phenomenon of the Web could be to allow natural selection to take place. Companies and agencies that do not successfully adapt will simply and naturally suffer a competitive disadvantage. However, the discussion at the workshop indicated that various organizations are deriving value from both approaches to the problem (Web and WS-*), and in at least one case the same organization (Yahoo) said that they are employing both approaches successfully in different areas of the business. The workshop then explored several ways in which to improve the ability of the Web and WS-* to work together successfully.

2.2 Implementation vs Specification

It was noted that vendors often seem to view Web services as extensions of existing environments and, as a consequence, adapt or implement the parts of the specifications that map most closely to existing concepts and systems involved in enterprise middleware. In particular, the RPC oriented style of Web services has been implemented widely, whereas the more loosely coupled document oriented style (closer to the Web) has not. The interoperability problem with the RPC style follows from its reliance upon code generation and data type serialization for transparent operation, and implementations vary with respect to serializing complex data types and code generation strategies. Standardized data bindings could help but vendors have not joined the data bindings WG.

Achieving transparency with RPC technologies may however interfere with the goals of abstraction in standardization because RPC is less abstract than exchanging documents. Also, several features (in particular related to SOAP 1.2, WSDL 2.0, and the use of WS-Addressing) that provide better compatibility with the Web are not widely implemented. One participant indicated that they consider Vista/WCF as a reference implementation for Web services. This brings an interesting dynamic with respect to the evolution of the Web services specifications in this case because the evolution of the proposed reference implementation is then tied to operating system releases.

The workshop identified as a general problem the way in which the WS-* specifications have been implemented, which may be among the major reasons Web services have so far failed to achieve the goals of enterprise software standardization.

2.3 The Interoperability Issue and Databindings

The data binding problem can be characterized as the way to map XML Schema data types to various language data types, such as Java or C# data types. It is exacerbated by the predominant use of the RPC style in SOAP and WSDL, and is caused in large part by data type incompatibilities among RPC implementations in different languages. This issue represents one of the biggest challenges in the compatibility of software systems, inasmuch as every general purpose (or even special purpose for that matter) language defines its own collection of data types depending on its overall design goals.

The Data Binding Working Group at W3C was started to help with this problem. It is chaired by a user organization, but the WG has failed to achieve critical mass because of a lack of vendor participation. During the workshop, IBM said they did not join because Microsoft had not joined. Other vendors said that without IBM and Microsoft there, it was difficult to justify the investment of time and effort in the WG. This is a clear example of the problem. Standards, especially enterprise software standards, are beneficial to the user, but vendors may sometimes have conflicting goals that inhibit market adoption.

In general, interoperability decreases as data type complexity increases. Why is there such good interoperability with HTTP? Is it because file transfer (i.e. document passing) is simpler than RPC? Is it even technically possible to automatically serialize XML into binary language data types? The simplest approach may be file transfer, but the predominant mindset still points to requesting the execution of remote programs and returning results directly to the requester, sometimes with synchronous properties (i.e. the reply indicates a successful database update). Many existing systems are built with this assumption and make the data binding stalemate more pressing.
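As an illustration of why these mappings are underdetermined, consider a small schema fragment (the type and element names here are hypothetical, invented for illustration):

```xml
<!-- Illustrative schema fragment; names are hypothetical. -->
<xs:complexType name="Order" xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:sequence>
    <!-- A repeated element: one toolkit may bind this to a growable list,
         another to a fixed-size array. -->
    <xs:element name="item" type="xs:string" maxOccurs="unbounded"/>
    <!-- xs:decimal may become a binary floating-point type (losing
         precision) or an arbitrary-precision decimal type. -->
    <xs:element name="total" type="xs:decimal"/>
    <!-- A nillable value may map to a null reference, a sentinel value,
         or an optional wrapper, depending on the target language. -->
    <xs:element name="placed" type="xs:dateTime" nillable="true"/>
  </xs:sequence>
</xs:complexType>
```

Each of these choices is defensible in isolation, but two toolkits making different choices will disagree about what a "valid" serialized Order looks like, which is precisely the interoperability gap the databinding work was meant to close.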

3. Value of Web and WS

The formal definition of the REST architecture for the Web was published around the same time as the first of the Web services specifications. Ultimately, these became viewed as competing alternatives, but at the time they seemed very complementary.

Web services specifications include many concepts from the Web architecture and were originally designed to work well with Web technologies. However, the Web services specifications also included concepts and designs to work well with existing enterprise middleware technologies such as RPCs and MOMs, and the implementations of Web services have tended to gravitate toward those concepts and designs rather than toward the more Web friendly parts of the specifications.  This is perhaps because implementations have in large part been done by the vendors of those existing technologies. It was noted that many implementations of Web services have created extensions or compatibility layers to the existing systems (e.g. in the Java world where Web services were implemented using API extensions of J2EE rather than system level APIs).

Web applications, on the other hand, can be integrated with enterprise applications using raw XML and hand coding of the XML processing. This results in the ultimate loose coupling and provides the most benefit for portability and interoperability, but it places more of the burden for handling the abstraction onto the developer. This tends also to be more like the way the Web works. However, as the presenters from Yahoo noted, developers often object to the additional work involved in raw XML processing, and ask for code generation tools like those available from Web services vendors.

Several proponents of a “pure” Web architecture suggested that nothing additional is needed in the standards to address enterprise requirements, but the discussion noted that additional description is often needed beyond the standard self-description of HTTP methods. One of the key requirements mentioned for the Web approach is a description language, and in this context WADL, the Web Application Description Language, was discussed. The author of WADL was present and gave an overview of the specification. WADL basically decorates URLs with information about the operation being performed. WADL is not ready yet for standardization, and although there was much interest at the workshop, it is not clear that WADL is a sufficient response for the Web based approach. Additional experimentation over the coming year will hopefully provide more insight into the needs and potential solution.
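To make the “decorating URLs” idea concrete, a minimal WADL description might look like the following sketch. The resource path, parameter, and base URL are invented for illustration, and the namespace shown is from a later WADL draft than the one current at the workshop:

```xml
<!-- Illustrative WADL fragment; resource names and namespace are
     assumptions, not drawn from the workshop itself. -->
<application xmlns="http://wadl.dev.java.net/2009/02"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <resources base="http://example.org/api/">
    <resource path="orders/{id}">
      <!-- Template parameter bound into the URI path. -->
      <param name="id" style="template" type="xsd:string"/>
      <!-- The HTTP method itself names the operation; WADL adds
           the representation metadata a client needs. -->
      <method name="GET">
        <response>
          <representation mediaType="application/xml"/>
        </response>
      </method>
    </resource>
  </resources>
</application>
```

The point of the sketch is that the description stays anchored on URIs and standard HTTP methods, rather than introducing operation names of its own.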

3.1 Historical challenges and approaches

At the Web Services workshop in 2001, the approach of having a stack of solutions was appealing and we decided to spin up lots of groups to build these specifications. We were to build a foundation of protocols that work within the context of the Web, with the goal of making lots of things talk to lots of other things. In addition, we wanted to create a system to support dynamic composability to meet problems as these arose and to build the corresponding tooling to make all this happen. After six years, we are half way through the spec stack, and interoperability has remained elusive.

Where have problems arisen along the way? Much has been attributed to the lack of architectural consistency, something which was envisioned to come out of the Web Services Architecture work, but unfortunately that also did not work out as planned. Parallel interoperability efforts did provide significant value, most notably SOAP Builders, but that lost momentum in favor of WS-I. While the Basic Profile from WS-I was a significant step forward, WS-I has not kept up with a variety of specifications as these have reached Recommendation/Standard status, and interoperability guidance has been frozen to an earlier time. In this vein, several criticisms of the WS-I governance model were raised during the workshop.

It is useful to look at the IBM vision as a concise statement of what the community had hoped to produce:

“… a single scalable stack, offering the best of the Web in simple scenarios, and scaling gracefully to SOAP based Web services with the addition of optional extensions when more robust quality of service features are required.”

However, the road to paradise has also been littered with the Web/REST vs. Web Services battles, and the “single, scalable stack” has more resembled two walls. That was part of the motivation for this workshop. Web Services and services on the Web in practice turn out to be two very different things.

It is interesting to note that the original purpose of SOAP was to exchange formatted XML messages over the Internet. The reason the W3C Working Group progressing SOAP is called “XML Protocol” is that SOAP was one of about 15 proposals in 2000 for how to exchange XML messages over the Internet. At that time, ebXML was also being proposed at OASIS and UN/CEFACT as an XML based replacement for EDI. Conflict arose between the SOAP and ebXML communities, especially after UDDI was announced (UDDI was seen as a competitor to the ebXML registry/repository). This conflict was somewhat settled — perhaps defused is a better word — by the compromise in mid 2000 in which SOAP with Attachments was submitted to W3C as a Note. This document was never progressed, largely because other alternatives were proposed, initially DIME and later MTOM, which was eventually adopted by the XML Protocol Working Group as part of SOAP 1.2.

The original goal of SOAP has much in common with REST, and in fact certain interpretations of the specifications cite the major difference being that SOAP allows the definition of a method or operation name within the message and REST does not. This is largely due to the fact that REST is specific to HTTP (i.e. the HTTP method, together with the request URI, can serve the same purpose) whereas Web services are multi-protocol and therefore need all information to be contained within the envelope. Some of the fundamental differences result from these different assumptions: REST assumes HTTP while WS-* assumes protocol neutrality. Others result from the fact that HTTP intentionally designs out features of existing IT systems (e.g. session based security, transaction coordination, reliability, etc.) and that the WS-* specifications basically amount to an attempt to put them back in.
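The difference can be sketched with a hypothetical “get order” interaction (the service and element names are invented for illustration). On the Web, the HTTP method and URI carry the intent; in SOAP, the operation name travels inside the protocol-neutral envelope:

```xml
<!-- RESTful form: the intent lives in the HTTP request line, not in XML:
       GET /orders/42 HTTP/1.1
       Host: example.org
-->
<!-- SOAP form: the operation name is an element inside the envelope,
     so the message is self-contained regardless of transport. -->
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <m:GetOrder xmlns:m="http://example.org/orders">
      <m:OrderId>42</m:OrderId>
    </m:GetOrder>
  </soap:Body>
</soap:Envelope>
```

Because the SOAP message carries its own operation name, it can flow over JMS or FTP as easily as HTTP, which is exactly the protocol neutrality the RESTful form trades away in exchange for the Web’s uniform interface.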

The Web services stack consists of an ecosystem of composable specifications that enable ubiquitous messaging on a lowest common denominator transport (i.e. HTTP). These messages are not self describing, thus needing a variety of metadata/description to make them truly usable in other than a hardwired environment. Particular pain points include data binding, message contents, versioning, and support for asynchronous transports. A strong point of the SOAP approach is its tooling for the auto generation of code, but some said that the code generation dream becomes a nightmare when we try opening a generated API for use by other applications.

The Web approach based on the REST architectural style provides for messages with precise semantics, and the use of URIs as common identifiers provides much of the strength and appeal of the Web. The well defined interface makes it easy to use, but it should be remembered that the design features of HTTP that allow it to scale also limit some of the behavior expected in enterprise environments, for example, the shortcomings of HTTP authentication. Also, several users noted that while a URI based system using HTTP is effective for retrieving data, it is not as good at updating it. That said, there are times when the simplicity may be adequate for larger and smaller enterprises.

Reasonable people (thankfully, including the participants in this workshop) have noted the various strengths and shortcomings of the two approaches and note a need to find a workable combination that provides more value and stimulates less hostile debate. Some of the disputes have been rooted in commercial disagreements, while part of the battle has been an effort to please one development audience while ignoring another, rather than thinking of the technologies in their own right and considering such things as the useful trade off in architecture between many operations (e.g. specific updates) vs. generic operations (e.g. a single GET).

It should be recognized that REST and WS-* are not going away, and there is no need to have just one. There are many legacy systems, notably special purpose systems such as VISA and SABRE that process large volumes of transactions, that perform necessary functions and would be extremely costly to significantly modify or replace. The Web is not going to be replaced either. Thus, the challenge is to evolve these together, to address the aspects missing in each, and to help people find the right tool for the job.

It may be fair to say that existing enterprise systems have evolved from highly centralized mainframe-style systems (i.e. systems architected for a centralized computing capability, or a single computer deployment) and that the distributed computing software systems used in the enterprise today have attempted to preserve as many attributes of these centralized designs as possible — reliability, security, availability, transactions, performance etc. are all easier to achieve in a centralized environment — while HTTP was designed to support a document distribution system over a WAN. These paradigms are therefore naturally disconnected. The question is what is the best way to join them together? Internet based businesses have shown that it is possible to create high performance, reliable, secure systems using HTTP, but to do so requires a lot of custom coding.

3.2 Challenges

The major challenge is interoperability. (One could also say the major goal of WS-* is interoperability and if you don’t need interoperability you don’t really need WS-*.) As the WS-* stack grows and as efforts continue to increase the speed at which new specifications complete their associated standardization processes, the complexity of making everything work together across implementations becomes more daunting.

Part of the community’s response has been to fill in the holes faster, as done with WS-Addressing. As an example, it was mentioned that discussions surrounding the WS-Addressing Member Submission were constrained and the specification went to Last Call quickly. However, was this at the expense of architectural coherence and will this lead to more interoperability problems in the future? One presenter noted that architectural coherence may be at odds with the idea of fast standardization. Another noted that the greatest challenge is not building more layers, but getting the ones we have implemented in a much more seamless manner than what we have today.

There were numerous examples of specific technical gaps that contribute to interoperability challenges. WS-* specifications generally work by adding headers to the SOAP envelope, the design of which assumes that all relevant metadata and data are enclosed within it. However, the rules for composing and processing multiple headers in combination are not defined. Another difficulty concerns embedding XML documents in the body of a SOAP message, because an XML document can have only one root, which in the case of SOAP is the SOAP envelope. Together with a lack of standard data bindings this creates interoperability challenges, especially for tightly coupled systems.
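The header composition problem can be sketched with a hypothetical message combining WS-Addressing and WS-Security headers (the endpoint URLs and message identifier are invented for illustration). Each header is well specified on its own, but their interaction is not:

```xml
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
  xmlns:wsa="http://www.w3.org/2005/08/addressing"
  xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <soap:Header>
    <!-- Addressing headers: routing and correlation metadata. -->
    <wsa:MessageID>urn:uuid:00000000-0000-0000-0000-000000000042</wsa:MessageID>
    <wsa:To>http://example.org/orderService</wsa:To>
    <wsa:Action>http://example.org/orders/GetOrder</wsa:Action>
    <!-- Security header: must a signature here cover the wsa:* headers
         above? May an intermediary that rewrites wsa:To break the
         signature? The individual specifications leave such composition
         questions to profiles and implementations. -->
    <wsse:Security soap:mustUnderstand="true">
      <!-- token and signature material elided -->
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <!-- application payload -->
  </soap:Body>
</soap:Envelope>
```

Two implementations can each conform to WS-Addressing and WS-Security individually and still fail to interoperate on a message like this, which is why composition rules were repeatedly flagged as a gap.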

As noted earlier, the data binding issue was raised numerous times as an important piece of glue that is being pursued by a small, dedicated group but is not getting the attention and support it deserves. Other discussions touched on the need for consistent state and session management and the need for not only syntactic but also semantic interoperability. Also, as we discuss the historical gap between the REST and SOAP approaches, it was noted we are missing some of the technology that would allow SOAP clients to consume REST and vice versa.

The REST/SOAP gap moves the discussion of challenges from the mostly technical to the cultural and political. The SOAP 1.2 Response MEP and Web Method feature were meant to bridge the gap, as are the WSDL 2.0 HTTP binding and wsdlx:safe. However, these features of the specifications are not seeing widespread implementation and use. It was noted that some of the difficulty is a gap between the perspective of the vendor who decides what tools to make available and the users whose problems may benefit from a wider variety of capabilities. The vendors often say the users haven’t asked for capabilities, but many users get their primary information from the vendor sales staff and consultants who deal with what they are selling. There are times we overcome the innovator’s dilemma and get consumers to go beyond traditional approaches, and there are times we hide behind it. Thus, there is a challenge in educating users about the possibilities and giving fuller consideration to the problems they face. Then, the challenge is to fully implement the specifications so these possibilities can go into solutions.

To address the various interoperability challenges, the discussion converged on the challenges of specification coordination and best practices. There is a need to deal with spec errata and to answer questions about how features were intended to be used. The latter was brought up numerous times with respect to reference parameters in WS-Addressing, where these can be easily used incompatibly with the Web. It is also necessary to have a forum to consider questions that have surfaced since the initial spec development. Then, there is the need for continued interoperability testing and interoperability events to demonstrate solutions. As part of the process, vendors could put up public endpoints and maintain these for public testing. The results of this continuing work feed into best practices upon which consumers can rely to produce consistent solutions from which greater interoperability will follow. The challenge is to create a center of gravity for both new and existing efforts and to generate the political will to see this through.

The consideration of coordination and best practices led to a discussion on certification. This, of course, requires test suites to be built and maintained and for some entity to be a certification authority. The SOAP Builders model was noted, where rather than a certification authority there was a social commitment from the vendors. However, obtaining that commitment from the major parties remains the major challenge. Common remarks followed the theme of “if the vendors won’t show up,” “if the right parties don’t come to the table,” “you can’t force vendors to follow the standards.” In summary, standards organizations do not solve the problems themselves but rather are a venue where people need to agree upon solutions and follow through.

4. The way forward

As noted, the greatest challenge is better interoperability among the specifications currently in play, and so the clear message was to make work what we have. As one of the participants put it, “Stop writing specs and start writing code!”

4.1 Undoing REST vs. SOAP

So the question becomes one of how to put things together. From the REST vs. SOAP legacy, the consensus was that there is strength in both approaches and there is a need to identify how best to leverage those strengths. For example, an implementation could use a SOAP server alongside a Web server, and a question will be whether these work best in parallel (i.e. for a request, choose to route to one or the other) or in series (i.e. route to the Web server and then to a SOAP server as necessary).

Note, we are talking about a combination of the approaches and are not implying a convergence or some sort of REST/WS-* integration. There are fundamental differences that can provide value rather than conflict and confusion; what is needed are well-defined bridges, and the past work of the Web Architecture group should provide guidance. Also, such bridges should, when possible, go beyond strictly considering what is internal vs. external to an enterprise, because agility in changing situations requires recognizing that such boundaries can quickly change.

Another issue that requires additional demonstration is generating better understanding of the number of operations needed or, possibly, the number of types of operations. There was general consensus on the value of a generic read, i.e. GET; that PUT will be somewhat less generic than GET; and that UPDATE may need to be much more specialized. The value of the generic operation is when it can be done without knowing specifics of a particular use. Scenarios to help define this in an interoperable manner would be more productive than random debates.
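The generic-vs-specialized trade-off can be sketched as follows (the resource, its representation, and the AddItemToOrder operation are all invented for illustration). A generic PUT replaces the representation at a known URI without the server exposing any operation-specific interface, while a specialized operation names the exact change:

```xml
<!-- Generic update: PUT replaces the whole representation at a URI:
       PUT /orders/42 HTTP/1.1
       Content-Type: application/xml                                  -->
<order xmlns="http://example.org/orders">
  <item>widget</item>
  <total>9.95</total>
</order>
<!-- Specialized update: a custom operation names the precise change,
     which can be more efficient but requires a purpose-built interface:
       <m:AddItemToOrder xmlns:m="http://example.org/orders">
         <m:OrderId>42</m:OrderId>
         <m:Item>widget</m:Item>
       </m:AddItemToOrder>                                            -->
```

The generic form works with any client that understands the media type; the specialized form requires clients to know the operation in advance, which is the design trade-off the scenarios would need to explore.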

4.2 Putting the Web in Web Services (and vice versa)

The discussion often came back to an unproductive pattern of Web Services that have been taken off the Web and what should be done to reverse this state of affairs. One aspect that is too often missing is exposing WS resources, e.g. WSDLs, using URIs. This allows all concerned to reference the same resources from either approach without needing specialized tools. For example, URI-addressed resources can be accessed from a browser and can flexibly respond to new media types from the same Web server.

The use of reference parameters in WS-Addressing was seen as an unfortunate opportunity to make the separation problem worse. Reference parameters were advocated as a way to convey information analogous to cookies. They were also extended to allow the use of addressing schemes for protocols other than HTTP. However, EPRs using reference parameters (RefPs) can not only be opaque but can carry very private information. The concept of using a single URI to identify a single resource is fundamental, and RefP information should generally not be required to establish the identity of a resource. There may be exceptions for temporary use, but the major problem occurs when the EPRs are not transitory. There was discussion of defining mappings between EPRs and URIs, but there was no consensus that this would lead to a robust solution. Again, part of the problem follows from the different assumptions behind REST and WS-*: REST is HTTP-only while WS-* is intended for multi-protocol use.
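The identity problem can be shown directly. The sketch below builds two EPRs with the WS-Addressing 1.0 element names and namespace; the service URI, the `urn:ex` namespace, and the `SessionId` parameter are hypothetical. Both EPRs carry the same Address, yet their reference parameters differ, so a plain URI comparison cannot tell whether they denote the same resource.

```python
import xml.etree.ElementTree as ET

WSA = "http://www.w3.org/2005/08/addressing"  # WS-Addressing 1.0 namespace

def make_epr(address, ref_params):
    """Build a WS-Addressing EndpointReference. ref_params maps
    (namespace, localname) -> text; these travel outside the URI."""
    epr = ET.Element(f"{{{WSA}}}EndpointReference")
    ET.SubElement(epr, f"{{{WSA}}}Address").text = address
    if ref_params:
        params = ET.SubElement(epr, f"{{{WSA}}}ReferenceParameters")
        for (ns, name), text in ref_params.items():
            ET.SubElement(params, f"{{{ns}}}{name}").text = text
    return epr

def address_of(epr):
    return epr.find(f"{{{WSA}}}Address").text

# Same Address, different reference parameters: URI comparison alone
# cannot distinguish these two endpoint references.
a = make_epr("http://example.org/svc", {("urn:ex", "SessionId"): "1"})
b = make_epr("http://example.org/svc", {("urn:ex", "SessionId"): "2"})
```

Encoding the distinguishing information in the URI itself (e.g. `/svc/sessions/1`) would restore the one-URI-per-resource property the text argues for.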

The action on which there was consensus surrounding EPRs was that vendors should take care in using EPRs, and that the intended "best practice", namely that consistent use of URI naming makes it easier to move between REST and SOAP approaches, should be enthusiastically communicated. A fuller discussion of best practices appears below.

While it is generally agreed that Web Services require a significant amount of description beyond what is contained in WSDL, it was also noted that although RESTful applications are termed "self-describing," they too would be well served by additional annotations. The discussion here noted the interest in WADL and the similar suggestion for a "REST DL." In many cases, these leverage externally generated metadata, ranging from English prose to structured or unstructured data to a more formal representation in RDF, but all referenced using URIs. URI-identified resources, including service descriptions, also provide a basis for defining orchestration/choreography activities. There is a great deal of interest in following work such as WADL to better understand what is to be accomplished and how, rather than pursuing premature standardization.

4.3 The need for coordination

Improving interoperability both within and between Web Services and REST-based services requires maintenance of and coordination among the pieces already built. Putting that off until there are more pieces only delays the day of reckoning until the situation is worse. The short-term goal must be to work on the interoperability problem, with the goals of a coordination effort being the following:

Note that the bullet on test suites describes the activity as maintenance and/or coordination. Whether this takes the form of a centrally maintained test library or a catalog of distributed but publicly available endpoints will depend on the details worked out by the participants.

A big problem, and not just with WS-*, is that vendors tend to adapt new technologies to their current environments, rather than thinking about the best way in general to implement them. This came up at the workshop when one user said that his company would not widely adopt Web services until its capabilities matched those of existing middleware systems already in use. The question to be addressed by a coordination effort should be how WS-* and/or HTTP can meet the application requirements, not how WS-* and/or HTTP provide feature parity with existing products.

The coordination should include developing examples, built following REST, SOAP, and eventually coordinated principles, to demonstrate how the building blocks form the envisioned stack or any other pattern that leads to workable solutions. The examples should also demonstrate architectural coherence as a guide to dealing with future challenges that go beyond what we see at any given point in time as the standard solutions. This will eventually include services built from different versions of specifications that persist because of their functional value; these form the new legacy inventory.

This coordination must be an ongoing cooperative effort among those who were responsible for the original specification development (including specifications developed outside the W3C), those who are on the forefront of implementation, and the user community. While membership in such a group is likely to change over time, it would be necessary for the group to have the participation of major implementers to ensure that conclusions and findings appear in tools and products; the efforts of such a group must be more than a paper exercise.

A final question was whether such coordination was a role for W3C and whether the W3C Process supports such an activity. The initial conclusions were that

The overall goal of the identified need for coordination is to help vendors understand what tools are necessary and for the consumers to understand how to use the right tools in the best possible ways for any given job.

5. Other needs

Discussion during the workshop covered a number of miscellaneous needs that peripherally relate to interoperability but deal more with other gaps in the overall services vision. These are described briefly in the following.

5.1 Connectivity of legacy systems to the Web

The challenge is in making legacy assets accessible on the Web. There can be numerous scenarios for this, depending on which capabilities of the legacy system are exposed and what associated activities must be accomplished by the Web layer. Much was made of the need for bindings to technology-dependent languages and even specific platforms in order to bind to existing systems. This would allow systems to connect better with one another, sometimes using the Web, sometimes using protocols such as UDP or TCP. There is also a need for bindings into legacy languages and other systems, such as proprietary message-oriented middleware products, and for support for reliable, secure transactions. One potential solution is to have users state requirements and evaluate proposed standards specifications for compliance with those requirements, but users tend to view standards as a problem for vendors, despite the fact that users stand to benefit more from effective standardization than vendors do.

5.2 Service description and support for discovery and versioning

For Web Services (and for SOA in general), necessary description goes far beyond WSDL describing the message exchange through the service interface. Additional required information includes policies, effects of service interaction, and identifying information on the service provider. While machine-processable descriptions are envisioned for the future, natural language description will continue to convey important information. What is needed is a stable way to point to externally composed description where the mechanism provides a graceful means to support textual descriptions that may later be replaced by more formal representations. Such a mechanism may involve semantic annotations, and these will certainly rely on the previous discussion on the use of URIs throughout the information space.

The need for description is also relevant for REST-based services because, for example, a POST does not describe the effects of the well-defined action it performs. WADL is one attempt at a "REST DL." It describes resources rather than methods, as is done in WSDL; unlike Web Forms, it can specify Web output and supports URIs. Marc Hadley, who has been developing WADL, is looking for more development experience before considering any effort to standardize WADL. The spec is currently stable and he is interested in seeing more use cases.
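The resource-first orientation can be sketched by generating a WADL-style document programmatically. The namespace URI below follows the 2006 draft and should be treated as an assumption, as should the example base URI and paths; the point is only the shape: resources (paths) come first, and methods hang off each resource, the inverse of WSDL's operation-first view.

```python
import xml.etree.ElementTree as ET

# Draft-era WADL namespace; treat the exact value as an assumption.
WADL = "http://research.sun.com/wadl/2006/10"

def describe(base, resources):
    """Build a minimal WADL-style description: a resources element
    with one resource per path, each listing its HTTP methods."""
    app = ET.Element(f"{{{WADL}}}application")
    rs = ET.SubElement(app, f"{{{WADL}}}resources", base=base)
    for path, methods in resources.items():
        r = ET.SubElement(rs, f"{{{WADL}}}resource", path=path)
        for m in methods:
            ET.SubElement(r, f"{{{WADL}}}method", name=m)
    return app

# Hypothetical API surface, described resource-by-resource.
doc = describe("http://example.org/api", {
    "orders": ["GET", "POST"],
    "orders/{id}": ["GET", "PUT"],
})
```

A WSDL for the same service would instead enumerate operations and bind them to messages; the two views carry overlapping but differently organized information, which is why the report treats them as complementary rather than competing.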

WSDL and WADL are not meant to be competitors but rather possibly complementary ways to convey a wide range of descriptive information. Among other things needed to be described is versioning. With an external definition of a versioning strategy, the description can convey an unambiguous maturity state that maintains meaning as long as the external resource defining the maturity is accessible.

The variety of information that should be available through description makes resources more discernible and more precisely discoverable by consumers. The description would go beyond that currently associated with publish-find-bind. This lowers the bar of entry for all service providers because they can differentiate their offerings in ways that are understandable to their target customers.

5.3 Workflow

Support for service composition is fundamental to the Web Service vision. Currently, two specifications, BPEL and WS-CDL, handle the differing approaches of orchestration and choreography, respectively. The need is to define conversations, not just point-to-point interactions. While the topic was not discussed in detail, questions raised included whether a higher-level service model is required, or whether the combined capabilities of some merged version of the two specs would suffice. At present, it is difficult to compose higher-level services; while more complete description may provide the information necessary to identify the pieces to be composed, more is needed to describe the composition itself.

5.4 Intermediaries

Intermediaries are a topic on which there was agreement that work needs to be done to establish predictable behavior, but it is a hard problem and one that has been largely ignored. There was activity in the past on Web caching, but although there was interest, the vendors could not agree on the functionality. Work on intermediaries is an obvious next step.

5.5 HTTP authentication

The basic nature of HTTP authentication has been a constant thorn in the side of the REST faithful. There is no cross-host authentication, there are limits on the number of entities involved, and security is poor; hence people use cookies or custom authentication schemes. It was noted that this has historically fallen within the IETF's domain, and there was no consensus on what would be an appropriate W3C activity in this space.
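The "poor security" point is easy to demonstrate for HTTP Basic authentication: the credentials are merely base64-encoded, not encrypted, so anyone who sees the header recovers them. The user name and password below are of course invented for illustration.

```python
import base64

def basic_auth_header(user, password):
    """Build an HTTP Basic Authorization header (RFC 2617).
    Note: base64 is an encoding, not encryption."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

def recover_credentials(header):
    """The encoding is trivially reversible, which is one reason the
    scheme is considered weak without a protected transport."""
    token = header["Authorization"].split(" ", 1)[1]
    return base64.b64decode(token).decode("utf-8")
```

This weakness, together with the absence of a standard logout or cross-host story, is largely why sites fell back on cookies and custom schemes, the workaround the text describes.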

5.6 Multiple devices and intermittent connectivity

Another issue was that of support for a large variety of devices, especially mobile devices in areas with inconsistent connectivity. Devices will range from laptops to PDAs; connectivity interruptions will range from local dead spots to disconnected operations that can last hours or days. Such disadvantaged users will not only be service consumers but may also be service providers, such as sensors, that will have intermittent availability.

While more basic interoperability problems are currently more pressing, the challenges of the disadvantaged user will persist to make sure life does not get too boring as other problems are solved.

5.7 Data vs Object Oriented services

It was mentioned during the workshop that, in order to solve interoperability problems, the industry should focus on data-oriented services rather than program- or object-oriented services. The W3C Semantic Web Activity has been focusing on Web content that can be understood, interpreted, and used by software agents.

5.8 Architectural best practices

Some user organizations advocated for the formation of vertical, and perhaps horizontal, user oriented initiatives to help promote industry best practices for the adoption of Web and WS-* standards in enterprise IT deployments, such as an SOA.

5.9 SOAP Header Ordering

SOAP header ordering relates to the earlier point about there being no Web services architecture: the order of processing is undefined for the headers of the composable specifications.
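Why undefined ordering matters can be shown with two toy header handlers; the handler behaviors (encryption and compression) are illustrative stand-ins, not real SOAP modules. Applying the same composable handlers in different orders yields different, incompatible results, and nothing in the specifications says which order is correct.

```python
def process(handlers, order):
    """Apply header-driven processing steps to a message body in the
    given order. The handlers are toy stand-ins for composable
    SOAP specifications (e.g. security, compression)."""
    body = "payload"
    for name in order:
        body = handlers[name](body)
    return body

handlers = {
    "Security": lambda b: f"enc({b})",   # pretend encryption
    "Compress": lambda b: f"zip({b})",   # pretend compression
}

# Same handlers, different orders, different wire formats.
a = process(handlers, ["Security", "Compress"])
b = process(handlers, ["Compress", "Security"])
```

A receiver that assumes one order while the sender used the other cannot reverse the processing, which is exactly the interoperability gap an architecture would need to close.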

5.10 State management

Shared session state management is still an unresolved issue, despite the potential role of WS-Context as a persistent session state sharing mechanism.
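The idea behind a context-based approach can be sketched as follows; the registry API here is an invented illustration, not the WS-Context interface. Services share a small context identifier rather than the state itself, and each service resolves the identifier against a common store when it needs the state.

```python
import uuid

class ContextRegistry:
    """Toy sketch of shared session state in the spirit of WS-Context:
    participants exchange a context id, not the state itself.
    The API is illustrative only."""
    def __init__(self):
        self._store = {}

    def begin(self):
        """Start a new shared context and return its identifier."""
        ctx_id = str(uuid.uuid4())
        self._store[ctx_id] = {}
        return ctx_id

    def put(self, ctx_id, key, value):
        self._store[ctx_id][key] = value

    def get(self, ctx_id, key):
        return self._store[ctx_id].get(key)

# Two "services" sharing state through the same context id.
reg = ContextRegistry()
ctx = reg.begin()
reg.put(ctx, "customer", "C42")   # service A writes
value = reg.get(ctx, "customer")  # service B reads
```

The unresolved questions the report alludes to sit outside this sketch: who hosts the store, how long contexts live, and how the identifier is propagated reliably across vendors' stacks.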

5.11 Hierarchical organizations

An interesting discussion was brought up concerning the typical hierarchy of a business or government agency versus the informality of the decentralized communities common on the Web. Chevron raised this at the May 2006 AC meeting in the context of how it controls its network users (employees) and their environment, noting that current Web-based technologies are viewed as insufficient for the same purpose. Will the IM or Web generation break this down and change it when they start to enter the corporate workforce? There is also an inherent tension between taking the time to achieve social consensus and the need for quick results.

During the workshop, participants delineated some of the very different characteristics involved in supporting and maintaining a set of technologies designed for use by a worldwide community of peers versus those involved in maintaining a set of technologies for use in a hierarchical, strictly controlled corporate environment.

Inside and outside the firewall are often still two different worlds.

A. References

W3C Workshop on Web of Services for Enterprise Computing: Program and Position Papers
http://www.w3.org/2007/01/wos-ec-program
Architecture of the World Wide Web, Volume One
http://www.w3.org/TR/webarch/
URIs, Addressability, and the use of HTTP GET and POST
http://www.w3.org/2001/tag/doc/whenToUseGet.html
Representational State Transfer (REST)
http://roy.gbiv.com/pubs/dissertation/rest_arch_style.htm
Web Application Description Language
https://wadl.dev.java.net/wadl20061109.pdf
Hypertext Transfer Protocol - HTTP/1.1
http://www.ietf.org/rfc/rfc2616
Web Services Addressing 1.0 - Core
http://www.w3.org/TR/ws-addr-core
Web Services Context Specification (WS-Context)
http://docs.oasis-open.org/ws-caf/ws-context/v1.0/OS/wsctx.html
Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language
http://www.w3.org/TR/wsdl20
Tag Issue endPointRefs-47: WS-Addressing SOAP binding & app protocols
http://www.w3.org/2001/tag/issues.html?type=1#endPointRefs-47
Web Services Description Language (WSDL) Version 2.0 Part 2: Adjuncts
http://www.w3.org/TR/wsdl20-adjuncts
XML Schema Patterns for Databinding
http://www.w3.org/TR/xmlschema-patterns
Web Services Transfer (WS-Transfer)
http://www.w3.org/Submission/WS-Transfer
Web Services Architecture
http://www.w3.org/TR/ws-arch/