Victus Spiritus

The Client Server Class War

20 Jun 2011

Internet traffic is undergoing an irreversible transition from predominantly PC browsing to smaller mobile devices and large displays for streaming video. Client-side software has become a hot development area on mobile and novel display surfaces, rather than targeting only the local device's web browser. Most active SaaS businesses build clients for all the primary platforms - web, iOS, Android, etc. - enabling customers to access their services efficiently from whichever platform they choose.

Deeply ingrained within RESTful architecture is the client-server relationship. Clients can typically only issue requests to retrieve or modify resources, while servers receive those requests and respond to them. The World Wide Web is the largest RESTful architecture [1].

What is Representational State Transfer?

REST exemplifies how the Web’s architecture emerged by characterizing and constraining the macro-interactions of the four components of the Web, namely origin servers, gateways, proxies and clients, without imposing limitations on the individual participants. As such, REST essentially governs the proper behavior of participants.

REST-style architectures consist of clients and servers. Clients initiate requests to servers; servers process requests and return appropriate responses. Requests and responses are built around the transfer of representations of resources. A resource can be essentially any coherent and meaningful concept that may be addressed. A representation of a resource is typically a document that captures the current or intended state of a resource.

At any particular time, a client can either be in transition between application states or “at rest.” A client in a rest state is able to interact with its user, but creates no load and consumes no per-client storage on the servers or on the network.

The client begins sending requests when it is ready to make the transition to a new state. While one or more requests are outstanding, the client is considered to be in transition. The representation of each application state contains links that may be used next time the client chooses to initiate a new state transition.
(source)
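To make the quoted description concrete, here is a minimal sketch of a client driving its own state transitions by following links embedded in resource representations. The URL, field names (`status`, `links`, `tracking`), and JSON shape are all hypothetical stand-ins for whatever a real service would return:

```python
import json
import urllib.request

# Hypothetical API root; any RESTful service exposing JSON representations
# with embedded links would follow the same pattern.
API_ROOT = "http://example.com/api/orders/42"

def get_representation(url):
    """Fetch a representation of the resource at `url` and decode it."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# 1. The client is "at rest" until it initiates a request.
order = get_representation(API_ROOT)

# 2. The representation captures the resource's current state...
print(order["status"])            # e.g. "shipped"

# 3. ...and contains links the client may follow to transition
#    to its next application state.
next_url = order["links"]["tracking"]
tracking = get_representation(next_url)
print(tracking["location"])
```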

REST is best described as a common design specified by a set of shared rules to which individual network nodes adhere. As a reminder and a bit of homework for myself, I've included these requirements below (skip to Headless Networks if you're familiar with REST):

RESTful Rules

Client–server
Clients are separated from servers by a uniform interface. This separation of concerns means that, for example, clients are not concerned with data storage, which remains internal to each server, so that the portability of client code is improved. Servers are not concerned with the user interface or user state, so that servers can be simpler and more scalable. Servers and clients may also be replaced and developed independently, as long as the interface is not altered.

Stateless (Clients)
The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and any session state is held in the client. The server can be stateful; this constraint merely requires that server-side state be addressable by URL as a resource. This not only makes servers more visible for monitoring, but also makes them more reliable in the face of partial network failures as well as further enhancing their scalability.
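A minimal sketch of what this looks like from the client's side, with a hypothetical endpoint and token: every request is self-describing, so the server keeps no per-client session between calls.

```python
import urllib.request

# Hypothetical endpoint and token, for illustration only.
BASE = "http://example.com/api"
TOKEN = "user-credentials-or-signed-token"

def stateless_get(path):
    """Every request carries everything the server needs: the full resource
    path and the client's credentials. No server-side session is assumed."""
    request = urllib.request.Request(
        f"{BASE}{path}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(request) as response:
        return response.read()

# The server can handle these in any order, on any machine behind a load
# balancer, because no per-client context is stored between requests.
profile = stateless_get("/users/42")
orders = stateless_get("/users/42/orders?page=2")
```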

Cacheable
As on the World Wide Web, clients are able to cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable, or not, to prevent clients reusing stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client–server interactions, further improving scalability and performance.
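On the server side, cacheability is declared explicitly in the response headers. A toy sketch using Python's standard library (the resource, port, and max-age are arbitrary choices):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CacheableHandler(BaseHTTPRequestHandler):
    """Toy resource server that explicitly labels its responses cacheable."""

    def do_GET(self):
        body = b'{"motd": "hello"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Declare how long clients (and shared caches along the way) may
        # reuse this representation before re-requesting it.
        self.send_header("Cache-Control", "public, max-age=3600")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A client or intermediary cache that honors Cache-Control can skip
    # repeat requests to localhost:8000 for the next hour.
    HTTPServer(("localhost", 8000), CacheableHandler).serve_forever()
```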

Layered system
A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. Intermediary servers may improve system scalability by enabling load balancing and by providing shared caches. They may also enforce security policies [2].
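A toy intermediary makes the point: a client could be pointed at this proxy instead of the origin and behave identically. The origin address is hypothetical (the cacheable server sketched above could play that role), and a real intermediary would add the caching, balancing, or policy checks mentioned above:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

# Hypothetical origin; the client talking to this intermediary has no way
# to tell (and no need to know) that its requests are being forwarded.
ORIGIN = "http://localhost:8000"

class ForwardingProxy(BaseHTTPRequestHandler):
    """Toy intermediary: a place to hang shared caching, load balancing
    across several origins, or security policy checks."""

    def do_GET(self):
        with urllib.request.urlopen(ORIGIN + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "text/plain"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ForwardingProxy).serve_forever()
```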

Uniform interface
The uniform interface between clients and servers simplifies and decouples the architecture, which enables each part to evolve independently. Its four guiding principles are identification of resources, manipulation of resources through their representations, self-descriptive messages, and hypermedia as the engine of application state.

Code on demand (optional)
Servers are able to temporarily extend or customize the functionality of a client by transferring logic to it that it can execute. Examples of this may include compiled components such as Java applets and client-side scripts such as JavaScript.
(source)
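In practice code on demand almost always means a browser downloading JavaScript; purely as a toy illustration, and in Python for consistency with the other sketches here, the idea looks like this. The endpoint and the helper it defines (`format_price`) are invented for the example:

```python
import urllib.request

# Hypothetical endpoint that returns a small Python snippet the client
# runs to gain behavior it did not ship with.
SCRIPT_URL = "http://example.com/api/client-extensions/formatter.py"

def fetch_and_run(url):
    """Download logic from the server and execute it locally."""
    with urllib.request.urlopen(url) as response:
        source = response.read().decode("utf-8")
    namespace = {}
    # Executing downloaded code extends the client at runtime -- which is
    # why REST treats this constraint as optional: it trades simplicity
    # and visibility for flexibility, and it demands trust in the server.
    exec(source, namespace)
    return namespace

extensions = fetch_and_run(SCRIPT_URL)
print(extensions["format_price"](1999))  # a server-supplied helper
```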

Now that we have a firm footing on what client and server mean in the context of RESTful services, we're ready to explore alternatives. After a quick tour of peer-to-peer protocols and services, we'll (hopefully) better understand the conflict between client-server and peer-to-peer architectures.

"Headless" Networks

Peer-to-peer distributed software has experienced periods of rising and falling widespread appeal. Napster was the internet's answer to getting files quickly, often without permission from the original owner of the data. BitTorrent has been the go-to utility for distributed file sharing for most of my net-connected years. As I covered here a few months back, Telehash.org has risen as another route for peer-to-peer communication.

While enormous volumes of data have been pushed and pulled through these pipes, peer-to-peer networks have never enjoyed the wide-scale commercial success and attraction that web servers have had. One rationale is that businesses don't have obvious bottlenecks to control in peer-to-peer networks the way they do with web servers. An elementary control surface for organizations is to author the peer-to-peer software themselves. An open source strategy would have a greater chance of adoption, encourage healthy competition, and provide additional security to developers building on top of the protocol.

Notes

  1. With respect to the World Wide Web, it's arguable that the appearance of cache manifests and WebSockets alongside HTTP is eroding the distinction between client and server.

    Manifests provide clients with local, offline storage. I appreciate the ability to access content while I'm offline, but I'm limited to tools like Instapaper, Dropbox, and CouchDB until manifests see widespread adoption and client-side databases grow in size (I believe modern browsers already allow larger quotas with the user's permission).

    While client and server are restricted to the formal communication of state transfer, nodes exchanging data over a socket can pass back any desired information without concern for node typing, diminishing the role of dedicated servers as network mediators. Clients and servers may both be stateful, breaking the stateless-client requirement.

  2. Although layering enables many desirable network features (scaling, logging), it also allows for man-in-the-middle attacks. HTTPS mitigates this type of security vulnerability (unless the attacker can decrypt the packets in real time).