Experience with the Web and the Internet at large has taught us that an important attribute for any application or system is scalability; indeed, it has become a cliché to say that something won't work because it "doesn't scale". Tightly bound to scalability is performance, as expressed by end-user perceived latency and other metrics. This paper outlines one approach to scaling Web Services and proposes further work that leverages XML Protocol's features to improve their scalability and performance.
Scalability in network-available services, which might be defined as the ability of an application to handle growth efficiently, is typically achieved by making them available on multiple devices. While a single server can be enlarged to a certain degree, this approach rapidly reaches a point where the cost of scaling outweighs the benefits. For example, one may add more processors, but the cost of such specialized hardware quickly exceeds that of two commodity servers. Additionally, using more than one server brings benefits in performance, reliability and flexibility, and opens the door to efficiencies which cannot be realised in a single-server deployment.
To function on a number of servers, an application needs to do two things: direct requests to an appropriate server, and enable that server to process them and provide an appropriate response.
In this model, Web Service request messages are sent to a Service's URI, but some mechanism (either in the message or external to it) routes them to another, intermediary, device. That device may or may not act as an intermediary in other aspects (for example, it may or may not be an XML Protocol intermediary, or an HTTP intermediary). For purposes of this paper, we will call such devices service intermediaries.
There are many methods of directing requests to a service intermediary, depending on the nature of the deployment and the service's requirements. XML Protocol, or an XMLP Module, may provide a mechanism for routing messages to the device. Alternatively, if a number of service intermediaries are located near each other (in network terms), a "Layer-3+" load-balancing switch may be used to distribute the load between them, without explicit in-message routing. In a more distributed deployment, a "Global" load balancing product or service may be used to direct clients to the appropriate service intermediary based on a number of criteria, achieved through any of a number of possible techniques acting at various layers.
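As an illustration of the distribution decision such a product might make, the sketch below hashes a client identifier to pick one intermediary from a pool. The pool contents and function names are hypothetical; a real deployment would act at the network or DNS layer rather than in application code.

```python
import hashlib

# Hypothetical pool of service intermediaries serving one Service.
INTERMEDIARIES = ["intermediary-a", "intermediary-b", "intermediary-c"]

def route(client_id, pool=INTERMEDIARIES):
    """Hash the client identifier so that the same client is
    consistently directed to the same intermediary."""
    digest = hashlib.sha256(client_id.encode("utf-8")).hexdigest()
    return pool[int(digest, 16) % len(pool)]
```

Hashing on the client identifier keeps a given client "sticky" to one device, which matters when the intermediary holds cached or queued state for that client.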
Once a request arrives at a service intermediary, the device needs to be capable of satisfying it. There are two basic approaches to this problem: execution of service-specific code to provide the service, and the use of mechanisms which leverage common service behaviours in order to introduce efficiencies. These roughly align with the functional/optimising axis used in the middlebox taxonomy draft [MIDTAX].
Functional service intermediaries typically execute service-specific code. Because these devices interpret or create messages in the process of providing services, they cannot be removed from the message chain. They also require distribution infrastructure and a means of describing the code's requirements and interfaces. Examples of functional mechanisms include resource-embedded languages which are interpreted by the intermediary and service environments which process the message in some way when providing the service [OPES, ICAP, PCR].
Optimising service intermediaries provide efficiencies by exploiting common service behaviours. If these devices are removed, a service is still available (assuming that requests are routed appropriately), but will not benefit from the performance increase they introduce. Generally, such techniques optimise by managing resource use, such as processing, network connections and bandwidth, through the inclusion of advisory hinting.
In this paper, we concentrate on the techniques which could enable optimising service intermediaries, attempt to identify requirements for them, and explore their use cases.
A standardized set of optimisation mechanisms to allow scalability of Web services needs to fulfill a number of requirements:
There is a rich history of optimisation techniques in protocol design and computer science in general which we can draw from. Here, we attempt to separate them into general mechanisms which may be combined to allow services to more powerfully and exactly control how service intermediaries handle their messages. This list draws primarily from techniques used in the HTTP, which in turn benefitted from experience in distributed filesystems [DFSScale].
Caching is a technique that has been used for some time to scale distributed systems, whether it be in filesystem design or the World Wide Web. By allowing clients to keep and reuse copies of entities, efficiencies are realised by either the avoidance of data transfer, or the avoidance of a round-trip to the server altogether. Caching techniques rely on locality in usage patterns; that is, the likelihood that portions of messages can be reused.
To be able to reuse an entity, a caching service intermediary must understand the conditions under which it is appropriate to do so. Cache indexing defines the profile of request semantics in which a particular response may be reused. The most obvious way to index a cache is based upon Services' URIs, as HTTP does. This provides a namespace for cache lookups to be performed in.
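A minimal sketch of such a URI-indexed cache, with assumed class and method names, might look like this:

```python
class ServiceCache:
    """Minimal cache indexed solely by Service URI (a sketch)."""

    def __init__(self):
        self._entries = {}

    def store(self, service_uri, response):
        self._entries[service_uri] = response

    def lookup(self, service_uri):
        # A hit lets the intermediary reuse a stored response,
        # avoiding a round-trip to the origin service.
        return self._entries.get(service_uri)
```

The Service URI acts purely as a namespace for lookups; the sections below extend this key when the URI alone is too coarse or too fine.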
For more complex applications, it may be necessary to modify the cache index depending on other attributes. For example, HTTP allows the 'Vary' response header to specify which additional request headers should be used to index the cache, allowing objects with separate language attributes to be stored under the same URI. This content negotiation feature is crude in the HTTP, but could be much more expressive using XML.
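A Vary-style index modification can be sketched as a key-construction function; the attribute names here (e.g. an "accept-language" request attribute) are illustrative only:

```python
def cache_key(service_uri, request_attrs, vary_on=()):
    """Build a cache key from the Service URI plus whichever request
    attributes the response declared (Vary-style) as significant."""
    return (service_uri,) + tuple(
        (name, request_attrs.get(name)) for name in sorted(vary_on)
    )
```

Two requests that differ only in a varied-upon attribute produce distinct keys, so, for instance, English and French responses can coexist under one Service URI.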
Conversely, there may be situations where a Service URI-based cache index may be too restrictive; it may be useful to expand the scope of the cache index to include multiple resources, to allow entities to be reused across services. To accommodate these situations, it should be possible to declare a 'virtual' cache index which different resources can interact with.
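One way to picture a 'virtual' cache index is as an alias table mapping several Service URIs onto one shared index entry; the names below are assumptions, not a proposed interface:

```python
class VirtualIndexCache:
    """Cache in which several Service URIs may share one 'virtual'
    index, letting entities be reused across services (a sketch)."""

    def __init__(self):
        self._alias = {}    # Service URI -> virtual index name
        self._entries = {}  # index name -> entity

    def declare_virtual_index(self, name, service_uris):
        for uri in service_uris:
            self._alias[uri] = name

    def store(self, service_uri, entity):
        self._entries[self._alias.get(service_uri, service_uri)] = entity

    def lookup(self, service_uri):
        # URIs without a declared alias fall back to URI-based indexing.
        return self._entries.get(self._alias.get(service_uri, service_uri))
```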
Furthermore, a Service must have some control over the entities stored in a caching service intermediary. Cache coherence mechanisms provide this, typically through the use of validation (actively checking to see whether an entity should be reused) and invalidation (marking the content as 'stale' based on some trigger event). Additionally, partial content techniques allow services to express the delta of a changed entity, giving greater efficiencies for large objects with relatively small changing parts [Delta].
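These two coherence mechanisms can be sketched together: each entry carries a freshness lifetime (after which the intermediary must revalidate) and can also be explicitly invalidated on a trigger event. The class and method names are hypothetical.

```python
import time

class CoherentCache:
    """Cache whose entries expire after a freshness lifetime and can
    also be invalidated explicitly by the Service (a sketch)."""

    def __init__(self):
        self._entries = {}  # uri -> [entity, expires_at, stale]

    def store(self, uri, entity, max_age_s):
        self._entries[uri] = [entity, time.time() + max_age_s, False]

    def invalidate(self, uri):
        # Trigger event: mark the entry stale without removing it.
        if uri in self._entries:
            self._entries[uri][2] = True

    def lookup(self, uri):
        entry = self._entries.get(uri)
        if entry is None:
            return None
        entity, expires_at, stale = entry
        if stale or time.time() >= expires_at:
            return None  # caller must (re)validate with the origin
        return entity
```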
Some Services consist of the submission of a message as the request, and a brief acknowledgement as a response, in a manner similar to SMTP's store-and-forward pattern. Standardization of an acknowledgement message would allow intermediaries to take responsibility for handling requests whilst immediately acknowledging them. In combination with caching and other techniques, store-and-forward allows intermediaries to improve service reliability substantially, by making it possible to have multiple, redundant points of contact for message submission, with the possibility for performance improvement through client/intermediary locality.
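The division of labour in this pattern can be sketched as follows: the intermediary acknowledges immediately, then forwards queued messages to the origin service later. The "ack" token and the names used are illustrative, not a proposed acknowledgement format.

```python
class StoreAndForwardIntermediary:
    """Accepts a request message, acknowledges it immediately, and
    queues it for later delivery to the origin service (a sketch)."""

    def __init__(self):
        self.queue = []

    def submit(self, message):
        self.queue.append(message)
        # The acknowledgement signals that the intermediary has taken
        # responsibility for delivering the message.
        return "ack"

    def forward_all(self, deliver):
        """Deliver queued messages in order; returns the count sent."""
        delivered = 0
        while self.queue:
            deliver(self.queue.pop(0))
            delivered += 1
        return delivered
```

Because the client's exchange ends at the acknowledgement, any of several redundant intermediaries can accept the submission, and a nearby one can do so with low latency.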
In some situations, intermediaries need to send or receive a number of separate messages to or from a particular device. Although some transport bindings may make it possible to reuse a network connection for these messages, further processing efficiencies might be realised by their combination into a single message. For example, it might be desirable to send all store-and-forward messages for a Service at once, wrapping all of them in a master message which uses an encryption module to protect them. If used across an HTTP binding, this approach avoids the overhead of separately encrypting the messages and then submitting each one and waiting for a response to indicate success.
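The wrapping itself is straightforward; the sketch below bundles message payloads into a single 'master' XML element. The element names are illustrative, and a real deployment would apply an encryption module to the master envelope rather than to each part.

```python
import xml.etree.ElementTree as ET

def bundle(messages):
    """Wrap several message payloads in one 'master' envelope so they
    can be protected and transferred in a single exchange (a sketch)."""
    master = ET.Element("master")
    for payload in messages:
        part = ET.SubElement(master, "message")
        part.text = payload
    return ET.tostring(master, encoding="unicode")

def unbundle(xml_text):
    """Recover the individual payloads from a master envelope."""
    return [part.text for part in ET.fromstring(xml_text).findall("message")]
```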
Similarly, there may be situations where it is advantageous to 'piggyback' responses to give additional information to the intermediary. Previously, piggyback validation techniques have been examined in the HTTP [Piggyback], and such techniques could also be used with service intermediaries to pre-fill the cache, bundle invalidations, and perform other tasks.
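For instance, a response might carry a piggybacked list of invalidations for the intermediary's cache, avoiding a separate exchange. The field names in this sketch are assumptions:

```python
def piggyback_response(body, invalidations=()):
    """Attach a list of cache invalidations to an ordinary response
    (a sketch; field names are illustrative)."""
    return {"body": body, "invalidate": list(invalidations)}

def apply_response(response, cache):
    """Process piggybacked invalidations, then hand back the body."""
    for uri in response["invalidate"]:
        cache.pop(uri, None)
    return response["body"]
```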
To use these techniques, the manner in which they are to be applied needs to be described to the service intermediaries. Generally speaking, this has two aspects; when to apply them, and to what they should be applied.
The caching and store-and-forward techniques both require triggers; the cache needs to know when to validate or invalidate an entity, and store-and-forward needs to declare under what conditions the message should be forwarded. To accommodate this, a variety of trigger mechanisms could be defined:
XML offers an ideal way to control the scope of application to portions of a message, because there are a number of ways to associate hints with a particular XML element or hierarchical group of elements (up to the scope of the entire message).
The most obvious means is through use of attributes in a separate XML Namespace. For example, if an element 'foo' and its children are cacheable, it could be expressed as
<foo cache:invalidate="yes" cache:delta="5m"> ... </foo>
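An intermediary could extract such hints with any namespace-aware XML processor. The sketch below uses Python's ElementTree; the namespace URI is hypothetical, and note that the 'cache' prefix must be bound to it for the document to be well-formed.

```python
import xml.etree.ElementTree as ET

# Hypothetical namespace for the cache-hint attributes.
CACHE_NS = "http://example.org/cache-hints"

doc = f"""<foo xmlns:cache="{CACHE_NS}"
    cache:invalidate="yes" cache:delta="5m"> ... </foo>"""

root = ET.fromstring(doc)
# ElementTree reports namespaced attributes in {uri}local form.
hints = {name.split("}")[1]: value
         for name, value in root.attrib.items()
         if name.startswith("{" + CACHE_NS + "}")}
```

Because the hints live in their own namespace, they can be attached to any element without colliding with the Service's own vocabulary.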
Alternatively, if the document has an XML Schema associated, it would be possible to encapsulate optimisation hints in the schema itself.
Finally, a separate XML description using XPath, XPointer or similar technology could be used to describe optimisation hints. This could be located out-of-band, in the intermediaries' configuration, or in an XML Protocol Header.
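Such an out-of-band description could be as simple as a mapping from path expressions to hints, which the intermediary evaluates against each message. This sketch uses ElementTree's limited XPath subset, and the path and hint names are illustrative:

```python
import xml.etree.ElementTree as ET

# Out-of-band hint description: path expression -> optimisation hints.
HINTS = {".//item": {"cacheable": "yes"}}

def apply_hints(xml_text, hints=HINTS):
    """Return the hints matched against elements of a message."""
    root = ET.fromstring(xml_text)
    matched = {}
    for path, hint in hints.items():
        for elem in root.findall(path):
            matched[elem.tag] = hint
    return matched
```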
In both of these dimensions, particular care should be taken to assure that optimisation techniques can be applied in the most flexible and intuitive manner possible.
Although tentative, these use cases help illustrate the potential scope of optimized service intermediaries' power, and their effect on Web Services.
This paper has outlined areas of research regarding optimising mechanisms in service intermediaries; they are intended as a discussion point only. Hopefully, they will generate interest in standardization of such techniques, development of a framework for their use, and integration into Web Service toolkits and products.
MIDTAX - B. Carpenter. "Middle boxes: taxonomy and issues". January 2001.
OPES - IETF "Open Pluggable Extensible Services" Birds-of-a-Feather.
ICAP - J. Elson et al. "Internet Content Adaptation Protocol" (see also the ICAP web site).
DFSScale - M. Satyanarayanan. "The Influence of Scale on Distributed File System Design". IEEE Transactions on Software Engineering, January 1992.
Piggyback - B. Krishnamurthy and C. E. Wills. "Piggyback Server Invalidation for Proxy Cache Coherency". Proceedings of the Seventh International World Wide Web Conference, Brisbane, Australia, April 1998.
Delta - J. Mogul et al. "Delta encoding in HTTP". October 2000.
PCR - M. Beck et al. "Enabling Full Service Surrogates Using the Portable Channel Representation". March 2001.
Version: 1.01 - March 12, 2001