A position paper for the W3C Workshop on Web of
Services for Enterprise Computing
27-28 February 2007, MITRE, Bedford, MA, USA
The scope of this workshop is very wide-ranging, so this paper concentrates on a single but important issue: given that the Web has proven enormously successful for human-to-computer interactions, why isn't the Web seen as being "good enough" for computer-to-computer interactions:
Note that many of these scenarios span multiple organisations, which themselves can be quite fluid or virtual; capabilities may be bought in, outsourced or exposed as a product at very short notice to meet a business opportunity. In certain cases BT is required to expose services used internally in an equivalent form to competitors as a result of regulation. For BT, the term "enterprise computing" does not imply "within a single organisation".
The answers to each of these challenges revolve around standards. Good standards are accompanied by strong commitment from vendors and the community at large resulting in interoperability between products and services. Interoperability creates and widens marketplaces, leads to cheaper, faster, better integration, and enables us to switch implementations with little or no impact on our customers.
There are currently two standard approaches for exposing services in widespread use: Web Services and the Web.
Web services is a somewhat ambiguous term. Under the W3C Web Services Activity it has come to denote exchanging messages using SOAP and XML. Messaging is a well-understood and heavily used pattern for building large, high-volume distributed systems. An architecture built upon messages can be flexible and resilient to an individual component becoming unavailable. This is especially true when combined with reliable message queues.
Web services advocate the notion of "transport independence", that is, messages may be processed independently of how they arrive. This means Web service messaging has very little to do with the Web as a uniform information space. SOAP messages are typically passed between endpoints which front more than one resource. Many SOAP messages are exchanged using HTTP POST (unsafe, non-idempotent and uncacheable in RFC 2616 terminology), or are sent to generic endpoints such as mailto:firstname.lastname@example.org. Indeed, SOAP messages may be exchanged over transports which are not readily expressed as a URI, for example over SNA or on a USB key. The messages themselves are often ephemeral and cannot be referred to or dereferenced using a URI; BT advocates at least making the wsa:MessageID header mandatory so that a given message instance can be uniquely identified.
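To make the contrast concrete, the following is a minimal sketch (the endpoint URL and message body are illustrative, not a real service) of a SOAP 1.1 envelope carrying a unique WS-Addressing MessageID header, as it might be POSTed to a single generic endpoint:

```python
# Minimal sketch: a SOAP 1.1 envelope carrying a WS-Addressing
# MessageID header. The body and endpoint below are illustrative.
import uuid
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
WSA_NS = "http://schemas.xmlsoap.org/ws/2004/08/addressing"

def build_envelope(body_xml: str) -> bytes:
    ET.register_namespace("soap", SOAP_NS)
    ET.register_namespace("wsa", WSA_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    # A unique wsa:MessageID lets this message instance be identified,
    # even though the message itself is not addressable by a URI.
    msg_id = ET.SubElement(header, f"{{{WSA_NS}}}MessageID")
    msg_id.text = f"urn:uuid:{uuid.uuid4()}"
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    body.append(ET.fromstring(body_xml))
    return ET.tostring(env)

envelope = build_envelope("<GetQuote xmlns='urn:example'/>")
# The whole envelope would be POSTed to one generic endpoint, e.g.
# urllib.request.Request("https://example.org/soap", data=envelope,
#                        headers={"SOAPAction": '""'}, method="POST")
```

Note that whatever the operation, the HTTP layer sees only a POST to one endpoint URI: nothing in the uniform information space identifies either the message or the resource it concerns.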
Transport independence is enabling, until you realise it has doomed SOAP to reinvent the protocol features missing from whichever underlying transports are being abstracted, in particular authentication, message integrity, addressing, state management and reliability. As a result SOAP is in essence an XML format for building protocols, but designing a robust protocol which works well and composes with other protocols is extremely difficult, especially when that design is undertaken by a committee without the experience of "running code". Most communications protocols are built in a "stack" fashion, enabling a separation of concerns: the header of one layer may be thrown away before the next is processed. SOAP headers, by contrast, appear in an unordered "bag" and have implicit interdependencies evident only from a careful reading of a myriad of specifications published by a wide variety of often competing organisations. Attempts to add clarity to this complex ecosystem, such as the W3C Web Services Architecture Working Group and the WS-I Requirements Gathering Working Group, have thus far been fruitless.
It is also possible for more than one instance of the same header to appear in a single SOAP message: there are, for example, three versions of WS-Addressing in common use, each providing a different namespace for the wsa:Action header, and all of which can coexist in a single message with different values. Such ambiguities damage interoperability and introduce serious security concerns.
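The ambiguity is easy to demonstrate. The sketch below (the action values are invented for illustration) places wsa:Action headers from two real WS-Addressing namespaces, the 2004/08 Member Submission and the W3C Recommendation, in one header block; which "action" a consumer dispatches on depends entirely on which namespace its toolkit understands:

```python
# Illustrative only: one SOAP header block carrying wsa:Action under
# two different WS-Addressing namespaces with conflicting values.
import xml.etree.ElementTree as ET

WSA_2004_08 = "http://schemas.xmlsoap.org/ws/2004/08/addressing"  # submission
WSA_2005_08 = "http://www.w3.org/2005/08/addressing"              # W3C Rec

HEADER = f"""
<Header xmlns:a="{WSA_2004_08}" xmlns:b="{WSA_2005_08}">
  <a:Action>urn:example:getQuote</a:Action>
  <b:Action>urn:example:deleteAccount</b:Action>
</Header>
"""

root = ET.fromstring(HEADER)
# A consumer coded against the 2004/08 submission sees one action...
old = root.find(f"{{{WSA_2004_08}}}Action").text
# ...while one coded against the Recommendation sees another.
new = root.find(f"{{{WSA_2005_08}}}Action").text
assert old != new  # one message, two contradictory dispatch decisions
```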
So the SOAP "stack" is a mess, and currently only the simplest of services are able to interoperate. However, we believe this situation is likely to improve in the long term, partly due to the adoption of the profiles published by the WS-I, but mainly due to the emergence of a reference implementation in the form of Microsoft's WCF. It should be noted that most deployed implementations only provide first-class support for the W3C Member Submissions for SOAP 1.1, WSDL 1.1, WS-Addressing and WS-Policy. Migrating from these ad-hoc specifications to Recommendations maintained by the W3C and covered by the W3C Patent Policy is going to be very difficult in practice.
The following are our main issues with Web services:
Many services ignore SOAP and instead exchange Plain Old XML (POX) messages passed to a "Web API" using HTTP POST. Whilst these services eschew SOAP and operate in simpler environments, they are architecturally similar to Web services.
We therefore use the term "Services on the Web" for interactions which conform with the W3C Web Architecture and operate on resources, typically using HTTP or HTTPS. Building resource-centric systems isn't a natural activity for many architects, especially when faced with nothing but "Enterprise" tools from vendors, the vast majority of whom are pushing Web services.
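The resource-centric style can be sketched as follows (the URIs and order data are hypothetical): every order is itself a resource with its own URI, and a safe, idempotent GET retrieves its representation, instead of a getOrder operation being tunnelled through POST to one generic endpoint:

```python
# Sketch of a resource-centric dispatcher (hypothetical URIs and data).
from urllib.parse import urlsplit

ORDERS = {"/orders/42": {"status": "shipped"}}  # toy data store

def handle(method: str, uri: str):
    path = urlsplit(uri).path
    if method == "GET":                  # safe and idempotent: repeating
        if path in ORDERS:               # it changes nothing, and
            return 200, ORDERS[path]     # intermediaries may cache the
        return 404, None                 # representation by its URI
    return 405, None                     # this sketch handles only GET

status, body = handle("GET", "http://example.org/orders/42")
```

Because each order has a URI, it can be bookmarked, linked to, cached and secured with the Web's ordinary machinery, which is precisely what a generic SOAP endpoint forgoes.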
A key obstacle to adopting Services on the Web within business computing is the lack of a description language, and of tools, to abstract the data being exchanged. Although WSDL 2.0 offers an HTTP binding, it has very limited scope and is message-centric. The resource-centric and more complete WADL would provide a better basis for a Web description language. However, on the Web it is unclear exactly what would distinguish a description language from a set of HTML or XML forms.
One area of interest to us, and possibly a matter of concern for the W3C, is the fragmentation of formats. XML held the possibility of being a one-size-fits-all format uniting document- and code-based processing of messages, but we are now seeing services which accept and return data in a wide variety of formats targeted at native processing, such as JSON, YAML and PHP serialisation. The properties and mappings of each of these formats are subtly different, and the security implications of using these formats are not yet well understood.
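One small example of such a mapping difference, using only JSON: JSON object keys must be strings, so a structure keyed by integers silently changes shape on a round trip (YAML and PHP serialisation make different choices again, so the "same" data varies by format):

```python
# JSON object keys must be strings, so integer keys are silently
# converted on serialisation and come back as strings.
import json

record = {1: "first", 2: "second"}          # integer keys in the service
round_tripped = json.loads(json.dumps(record))
assert round_tripped == {"1": "first", "2": "second"}
assert round_tripped != record              # the keys are now strings
```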
Use-cases which appear challenging to implement as Services on the Web include:
The W3C holds the reins for the core specifications used both by the Web and by Web services. This has generated something of a conflict of interest between what are two different architectures and information spaces. Instances where the W3C has attempted to impose Web architecture principles upon Web services include:
In each case the resolution has been neither well implemented by vendors nor embraced by other SOAP specifications. In some respects such resolutions may be seen as paying a "W3C tax", the cost of having the W3C brand placed upon Web service specifications. Such challenges will only deepen as more Web-unfriendly Web service specifications come to the attention of the TAG, for example WS-Polling, WS-Transfer and WS-MetadataExchange.
We believe that rather than impose Web architecture on Web services, the W3C should strongly highlight the distinction between "Web services" and "the Web", which, whilst not very compatible, may continue to be complementary.
Many of the Web service specifications within the W3C will soon have been published as Recommendations and be in maintenance mode. We suggest the W3C collapse the individual Working Groups into a single Web Services Core Working Group.
We propose that the W3C forms a Best Practices and Deployment Working Group to collect use-cases for "Services on The Web" and exemplify how to provide services for agents which conform to the Architecture of the World Wide Web.