Interlinking heterogeneous distributed applications.
Jean-Marie Chauvet
Object World UK, London 1996

The emergence of programming environments targeted at the World Wide Web is accelerating the transition from the traditional client-server architecture to the distributed application model. There are major differences, however, in the ways these models distribute data and processing.

The new crop of business applications will need to accommodate this variety while preserving, as much as possible, the return on investment in conventional client-server systems - the legacy of tomorrow. With the recent success of the Internet, and the growth of its use within business organisations - a specific usage for which the term "Intranet" was coined - MIS managers and development groups are faced with three major models for business applications: (i) the traditional Gartner Group client-server architecture, (ii) the higher-level distributed services of the Object Management Architecture, and (iii) the Web-mediated distribution of processing and access to data.

These models can be seen as cooperative or competitive, and most of the players in the software industry have been busy over the past couple of years establishing their strategy and positioning in the ever-changing marketplace. However, this marketplace can only grow if these varied architectures are adopted by corporations and prove sound, eventually providing organisations with the economic benefits they expect from migrating their information systems to the new architecture. Unfortunately this means that initially the confusion surrounding middleware will increase as many vendors vie for the corporate buyer's mind share. This preliminary state of chaos is likely to resolve in the coming years, with the emergence of a limited number of "standards" and a growing and highly profitable niche of tools to build the new generation of distributed applications.

Traditional Client-Server: Dead On Arrival?

The fantastic growth of the Internet has prompted an unexpected counter-reaction to the once widely accepted and generalised client-server model. There are several facets to this trumpeted onset of client-server apoptosis, warranting serious investigation.

An often-stated claim is that the "cost of ownership" of PC-based clients has become too expensive to scale up to meet the needs of the more distributed applications. While it is true that the original client-server architecture emphasised separation of concerns in the development and deployment of business applications, by establishing a distinction between presentation, application and data layers, in practice a single implementation has come to stand for the whole model. Client-server today is mostly a synonym for a relational database server, also storing the application logic in the form of 4GL stored procedures and triggers, driven by a "Wintel" client - a PC with a Microsoft Windows (95 or earlier) user interface. Studies, however, have repeatedly shown that the economic benefits of such client-server systems are simply not there yet. Recent Gartner Group reports have shown that, in some cases, developing and maintaining a client-server application was actually more expensive than building a similar application on a centralised mainframe. Other studies by the Standish Group report disturbing figures for cancelled client-server projects and over-budget, over-schedule applications.

This recent controversy gave birth to strategic re-positioning by several important players in the industry. The solution to these rising costs, heavily promoted initially by Netscape and Sunsoft, is to present the Internet browser as the universal client, the unique window on business applications inside the organisation and on-line services outside it. Pushing the idea to further extremes, visionaries like Oracle's Larry Ellison predict the demise of the PC and the emergence of the so-called "Network Computer" - dedicated hardware for the universal client.

Amid a frenzy to outbid competitors, this proposed solution started a flurry of announcements and numerous unveilings of technologies and complementary approaches. As the idea of a passive Web-based universal client was clearly insufficient to support the complexity of business application logic, the major addition to the model came in the form of the Java programming language. Sunsoft cleverly recast an internal general-purpose object-oriented language into the Internet programming language of choice. In doing so, not only did it prove that it could outsmart Microsoft on marketing, but it brought in the notion of the "applet", which became a key architectural point of the new model. The Netscape "plug-in", combined with the JavaScript - née LiveScript - scripting language and the Common Gateway Interface, is another technological piece added as an afterthought to allow for more client-server-like processing. Microsoft has also announced technology and tools promoting this new model: besides its adoption of Java, it touts Visual Basic for Internet development and a future generation of browsers directly integrated into the operating system.

Competitive or cooperative architectures?

In order to understand the respective benefits and applicability of the three models of distributed business applications mentioned above, it is useful to consider the distinction between the transport layer and the services layer. In this view the application consists of a structured collection of services, linked together by a transport mechanism. The models differ in how services are described in the services layer, in which underlying transport layer is used, and in what that layer actually transports.

In traditional client-server models, each role is assigned at design time. The client handles the presentation logic and possibly a part of the application logic, while the server handles the application logic and data layers. The services are thus quite limited: user interface services, and database services - the latter including application logic. The transport layer is usually a proprietary construction of the database vendor running on top of an RPC mechanism. It conveys commands in one direction and data, in the form of records, in the opposite direction.
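
The command-one-way, records-the-other exchange can be sketched as follows. This is a minimal simulation, not any vendor's actual transport API: DatabaseServer and its canned record are invented stand-ins for illustration.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the traditional client-server exchange: the
// client ships a command string to the database server and receives
// records back. DatabaseServer is a stand-in, not a real vendor API.
public class ClassicClientServer {

    // Simulated server side: application logic lives next to the data.
    static class DatabaseServer {
        List<Map<String, String>> execute(String command) {
            // A real server would parse SQL and run stored procedures;
            // here we just return a canned record for illustration.
            if (command.startsWith("SELECT")) {
                return List.of(Map.of("name", "Ada", "role", "engineer"));
            }
            return List.of();
        }
    }

    public static void main(String[] args) {
        DatabaseServer server = new DatabaseServer();
        // Commands flow client -> server; records flow back.
        List<Map<String, String>> records =
            server.execute("SELECT name, role FROM employees");
        System.out.println(records.size() + " record(s) returned");
    }
}
```

Note that the client and the "server" share nothing but the command string and the record shape, which is exactly what makes this coupling proprietary in practice.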

In the OMA scheme, roles are more abstract and can be discovered at run time, providing additional flexibility in the configuration and reconfiguration of applications. Services are described in a standard way using the Interface Definition Language, and the transport layer is called an Object Request Broker; both are defined by the OMG's CORBA specification. ORBs are, generally speaking, implemented on top of existing RPC mechanisms. The transport layer basically carries method calls with their arguments, and the results of these calls in the opposite direction. The OMG has also specified various basic services in its CORBAServices and CORBAFacilities documents. The OLE/COM model from Microsoft is a competing implementation solution.
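
The broker's role - carrying method calls and their results between a client-side stub and a remote implementation - can be imitated in miniature with a dynamic proxy. This is a sketch under stated assumptions, not CORBA itself: AccountService is an invented interface, and the in-process "broker" stands in for the marshalling an ORB performs over the wire.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical sketch of what an Object Request Broker does: the
// client holds a stub that looks like the service interface, and the
// broker forwards each method call (name plus arguments) to the
// implementation, returning the result.
public class OrbSketch {

    // In CORBA this contract would be written in IDL; a Java
    // interface plays the same role here.
    interface AccountService {
        int balance(String accountId);
    }

    // The "server-side" implementation the broker dispatches to.
    static AccountService implementation = accountId -> 42;

    // A toy broker: routes the call through a generic invoke() just
    // as an ORB routes requests over its transport.
    static AccountService stub() {
        InvocationHandler broker = (proxy, method, args) ->
            method.invoke(implementation, args);
        return (AccountService) Proxy.newProxyInstance(
            AccountService.class.getClassLoader(),
            new Class<?>[] { AccountService.class },
            broker);
    }

    public static void main(String[] args) {
        // The client only sees the interface, never the implementation.
        System.out.println(stub().balance("ACC-1"));
    }
}
```

The design point the sketch captures is that the client is bound only to the interface; the implementation could be replaced, or relocated to another machine, without the client changing.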

The Web model - found in both Internet- and Intranet-based applications - is even stricter in terms of roles and services representation. The client is a browser, which implies that all services have to be cast into the Hypertext Markup Language (HTML), and the transport layer implements the Hypertext Transfer Protocol (HTTP), a simplistic data exchange geared towards stateless servers: addresses, also known as Uniform Resource Locators (URLs), flow one way while data flows the other. The transport layer also implements mail and file transfer protocols, which are similar in nature.
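
The simplicity of this transport is visible in the protocol itself: a URL is turned into a short plain-text request, and the server answers with data. The sketch below builds the request a browser would send; the address used is illustrative only.

```java
import java.net.URL;

// Hypothetical sketch of the Web transport: a URL flows from browser
// to server as a plain-text HTTP request, and data (HTML) flows back.
public class HttpSketch {

    // Turn a URL into the request line and Host header a browser
    // would send; HTTP is little more than this kind of simple text.
    static String requestFor(String address) throws Exception {
        URL url = new URL(address);
        String path = url.getPath().isEmpty() ? "/" : url.getPath();
        return "GET " + path + " HTTP/1.0\r\n"
             + "Host: " + url.getHost() + "\r\n\r\n";
    }

    public static void main(String[] args) throws Exception {
        System.out.print(requestFor("http://www.example.com/index.html"));
    }
}
```

Because each such request is self-contained, the server can forget the client the moment it has answered - the statelessness that makes Web servers easy to implement and hard to use for conversational application logic.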

These robust but overly simplified services and transport layers are what originally made the success of the Web possible, placing very few constraints on the implementation of browsers and Web servers. They are also the major limitation to its general applicability in the development of distributed business applications. The combination of Java and JavaScript aims at solving this problem. These languages are descriptions of services embedded into the HTML page, and the transport layer has been extended to transfer not only data but code as well, in the form of Java bytecode. The fact that this extension goes against the argument of simplicity in the browser, as it requires the browser to embed an interpreter or a just-in-time compiler for the language, is largely outweighed by the benefits of being able to distribute processing. In this revised model the transport layer actually conveys objects, rather than data, from server to clients.
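
The mechanism behind "code as payload" can be sketched with Java's own reflective loading. In a browser, a network class loader would fetch the applet's bytecode over HTTP; here the class happens to be local, and Greeter is an invented example, but looking a class up by name and instantiating it at run time is the same mechanism.

```java
// Hypothetical sketch of the extended Web model: the transport
// delivers code (Java bytecode) rather than only data, and the
// client instantiates it at run time.
public class AppletSketch {

    // Stand-in for a class whose bytecode arrived over the network.
    public static class Greeter {
        public String greet() { return "hello from mobile code"; }
    }

    public static void main(String[] args) throws Exception {
        // A browser would pull these bytes over HTTP via a
        // ClassLoader; the client needs no compile-time knowledge of
        // the class - only its name, resolved when the page loads.
        Class<?> cls = Class.forName("AppletSketch$Greeter");
        Object instance = cls.getDeclaredConstructor().newInstance();
        System.out.println(cls.getMethod("greet").invoke(instance));
    }
}
```

This is what distinguishes the applet model from CGI-style processing: the behaviour itself migrates to the client, rather than every interaction travelling back to the server.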

Integration of heterogeneous systems

A comprehensive solution to the problem of interlinking such disparate models relies on a strong discipline and a proper understanding of systems integration. Not only do users now have very high expectations for interoperability, but custom integration, as usually practiced today, results in point-to-point solutions which are brittle and difficult to extend. Systems integration is founded on the need for system adaptability throughout the system life cycle.

Copyright (c) 1998-2002 by Dassault Développement