Why Can't We All Just Get Along?

A position paper for the W3C WSEC Workshop

Glen Daniels, Progress Software
January 10th, 2007


In this paper, we briefly describe some of the integration patterns that our customers are using today, and how the Web and Web services relate to them. We then address a few of the questions set forth by the W3C for this workshop, and put forward our take on the current "Web vs. Web services" debate.

We look forward to lively (and timely) discussion with the W3C community on these issues, and we hope that some productive direction will arise as a result of this workshop. Thanks for the opportunity to contribute.

Use Cases

Progress has a wide portfolio of products - a long-lived 4GL database (OpenEdge), integration platforms like the Sonic Enterprise Service Bus (ESB), messaging systems, complex event processing and data integration solutions, and Web service management systems, among others. Our business is focused on letting our customers build effective applications, and on enabling them to easily integrate those applications across the widest possible set of environments and technologies. Typical uses we see for our products include portal integration, pulling status data from shop-floor machines, and message-based integration of heterogeneous back-end systems across the enterprise.

Sending XML-compatible data around the enterprise is the one truly common point amongst all these cases, and much of the foundational technology that enables it is firmly grounded in W3C Recommendations. We'd first like to stress how important XML, XML Schema, and the related network of specifications for processing and manipulating XML have been for the industry as a whole, and how well these technologies support enterprise use cases.

On to the Web and Web services. Some of these cases, in particular portal integration, are clearly Web-related, and others (pulling status from shop-floor machines, for example) can certainly be nicely mapped to sets of URIs accessed via HTTP. However, in many of the cases, pre-existing systems are not natively built to be "web friendly": they rely on non-HTTP protocols, multicast, object-RPC-style interactions, deep asynchrony, and so on. For the past few years we've been using SOAP/WSDL-based Web services as a gateway technology to enable interoperability between these kinds of systems and those from many other vendors. Despite some problems with various toolkits out there, we've found SOAP to be a very useful common ground for messaging applications that work over HTTP and other protocols alike.

One pattern to note in particular is that of bindings, present in both WSDL and SOAP. One of our key selling points is that we enable customers to adjust to changing business and technological requirements as easily as possible. The notion of describing a service interaction abstractly, so that code built at that level doesn't need to change when running over a different underlying protocol, is a great help here. The mustUnderstand pattern (soap:mustUnderstand in SOAP, wsdl:required in WSDL), which enables particular design-time or runtime extensions to be marked as mandatory, also greatly aids in handling changing requirements. We consider the ability to react fluidly to changing policies or technologies to be a key requirement for enterprise computing.
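As a concrete illustration of the mustUnderstand pattern, here is a minimal sketch (not taken from any particular toolkit) that constructs a SOAP 1.1 envelope whose header block is flagged with mustUnderstand="1", so a receiver that cannot process the extension must fault rather than silently ignore it. The "Sequence" header and its namespace are hypothetical, standing in for any reliability-style extension:

```python
# Sketch: building a SOAP 1.1 envelope with a mandatory header block.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical extension namespace, for illustration only.
EXT_NS = "http://example.org/reliability"

def build_envelope(sequence_id: str) -> str:
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
    # The mandatory extension header: mustUnderstand="1" obliges the
    # receiver to either honor this block or reject the whole message.
    seq = ET.SubElement(header, f"{{{EXT_NS}}}Sequence")
    seq.set(f"{{{SOAP_NS}}}mustUnderstand", "1")
    seq.text = sequence_id
    ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    return ET.tostring(envelope, encoding="unicode")

print(build_envelope("urn:uuid:1234"))
```

The same envelope can travel over HTTP, JMS, or any other binding; the mandatory-extension contract rides with the message, not the transport.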

We have customers who view the world in terms of messages, events and destinations. We have customers who view the world in terms of services and invocations. We have customers who view the world in terms of resources. We'd like to make sure that the W3C (and therefore the broader community we at Progress interoperate with) considers all these views in appropriate ways. This doesn't necessarily mean forcing square pegs into round holes, but rather thinking more deeply about usage patterns and architecture on top of the W3C set of specs. More on this in our recommendations below.

Considering Questions

Several of the questions posed by the team for this workshop involve the difference between internal and external systems. We believe that this distinction, although interesting to discuss, is often an inappropriate metric. We've seen multi-billion-dollar companies in which the organizational hierarchy is so opaque that integrating systems across business units, or even teams in the same building, may as well be "external". On the flip side, organizations that merge with or acquire other companies suddenly need to integrate potentially heterogeneous systems very rapidly - taking what was "external" and folding it deeply into their "internal" models. So as far as we're concerned, we can answer the question of whether there should be a single architecture with a resounding "yes, please". This does not mean there should be any requirement for a single implementation or wire-level technology. Rather, we like the idea of a common model that uses metadata, data modeling, and policy information to enable developers within the enterprise to take a common view of the services deployed around their organization, while supporting evolution and change at the wire level.

A question from the CFP involves tooling - a great vantage point for considering both use cases and requirements. For an "enterprisey" user, the typical way to access a remote service is to find some metadata (a WSDL file, for instance), load it into a tool, and then use one of several ways to access the clearly delineated functionality of that service - perhaps by coding to a "stub" interface, or perhaps by simply graphically linking the service into a flow graph. The infrastructure underneath the tooling handles matters like data binding, and also activates appropriate plug-ins to handle important quality-of-service extensions like reliability. Can we provide the same sort of experience for services that are more "Web-native", involving networks of related URLs which you access with straight HTTP? Some work has already been done in this area, notably WADL from Sun, but a lot more needs to happen. We'd be interested to see some activity here from the W3C, as this also falls within our answer to "what's missing from the Web?".
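To make the "stub" experience concrete, here is a hedged sketch of the kind of proxy class a WSDL-driven code generator might emit. The service and operation names (OrderService, GetOrderStatus) are invented for illustration, and the transport is injected so the same generated code can run over HTTP, JMS, or anything else the binding layer provides:

```python
# Sketch: what WSDL-generated client stub code typically looks like.
from typing import Callable

class OrderServiceStub:
    """Client-side proxy for a hypothetical OrderService port type."""

    def __init__(self, send: Callable[[str, str], str], endpoint: str):
        # `send` is whatever binding-level transport the runtime plugs in;
        # the stub itself is transport-agnostic.
        self._send = send
        self._endpoint = endpoint

    def get_order_status(self, order_id: str) -> str:
        # Generated code handles the data binding: typed arguments in,
        # wire-format message out, and a typed result back.
        request = f"<GetOrderStatus><orderId>{order_id}</orderId></GetOrderStatus>"
        return self._send(self._endpoint, request)

# A fake in-memory transport stands in for a real HTTP binding here.
def fake_transport(endpoint: str, payload: str) -> str:
    return "<GetOrderStatusResponse>shipped</GetOrderStatusResponse>"

stub = OrderServiceStub(fake_transport, "http://example.org/orders")
print(stub.get_order_status("42"))
```

The question the paragraph above poses, then, is whether a description of plain-HTTP resources could drive the same kind of generation.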

Conclusion and Recommendations

The Web has been hugely successful; of that there is no doubt. And in many cases it certainly can be fruitfully used for machine-to-machine interaction. However, the architectural style of such interactions is often quite different from the way a typical enterprise software developer familiar with OOP, databases, and messaging would build things. Also, in many cases the "straight-ahead" Web as it is today just isn't sufficient - for instance, when mapping REST-style semantics onto back-end systems creates design mismatches, or when enterprise-style "-ities" are needed beyond what standard HTTP/SSL can provide. In our opinion, the SOAP/WSDL framework and the WS-* specs which sit on top of it provide a suitable foundation for those latter cases, and can also help bring them more in line with the Web "proper" by bringing them into a world of URIs, content types, and common access bindings. While some scenarios will certainly point solidly to "the Web" (user interaction via forms) or "Web services" (orchestrated reliable event flows), there is also a fruitful middle ground which should be explored and expanded upon.

We believe that Web services can, and should, be better aligned with the Web in the minds and code of the community. Indeed, most of the technological pieces to enable this alignment are already in place - the problem is a lack of "best practices" knowledge and architectural guidance. The W3C is the logical (perhaps the only) place for this knowledge to coalesce.

Learning by example often works much better than trying to inductively determine best practices by working upwards from a set of technical specifications. As such, we are eager for the W3C to provide the community with guidance and detailed scenarios which demonstrate the various "web friendly" ways to service-enable the enterprise. It is our fond hope that this workshop will provide a solid foundation for such work, but we'd like to see more along these lines either from the TAG or, better yet, from a task force or working group. Such guidance should at least include metrics for determining which kinds of solutions (straight HTTP, WSDL/SOAP services, or a hybrid) are appropriate for which kinds of scenarios, and how to translate the benefits of the Web to Web services, and vice versa. Sam Ruby, Noah Mendelsohn, and many others have already contributed some good thinking in this area.

On another note, we are already seeing "enterprisey" toolkits from organizations like Apache, Sun, and Microsoft supporting both a SOAP (transport-independent) binding and a REST-compatible one over HTTP. Let's help these efforts along by better describing what it means to bind Web resources to languages like Java and C#, in addition to languages like Ruby and Python. This exploration dovetails nicely with the one above, and although it may not result in a Recommendation-track document, it seems like something that should happen within the W3C community.
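One hedged take on what "binding Web resources" to a typed language could look like is a small class that maps a URI template onto method parameters, much as WADL describes resources and their inputs. All names and URIs below are illustrative, not drawn from any existing toolkit:

```python
# Sketch: a typed binding from a URI template to language-level calls.
class ResourceBinding:
    """Maps a URI template onto typed parameters, so application code
    works with named arguments rather than hand-built URL strings."""

    def __init__(self, base_uri: str, template: str):
        self.base_uri = base_uri.rstrip("/")
        self.template = template  # e.g. "/orders/{id}"

    def uri_for(self, **params: str) -> str:
        # Fill the template; a fuller binding would also carry the
        # allowed HTTP verbs and the schema of each representation.
        return self.base_uri + self.template.format(**params)

orders = ResourceBinding("http://example.org", "/orders/{id}")
print(orders.uri_for(id="42"))
```

A code generator fed a WADL-style description could emit exactly this kind of class, giving plain-HTTP services the same tool-assisted experience that WSDL gives SOAP services.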

Finally, we'd like to mention stateful interactions. Although the Web is ostensibly "stateless", it is quite clear that the majority of serious Web applications rely heavily on cookies and server-managed state to get their work done. In the enterprise, too, conversations that last more than a single message exchange, or span more than one communicating party, are the norm. We'd like to see the W3C spend more time exploring the question of state, its relationship to identity management, and how it interacts with the world of URIs. Thanks to the TAG for their great start in this area.