Introduction

Developments over the last two years will significantly alter the way we use computers. I don't like to use the term "paradigm shift", since it has been overworked for the last decade, but it is appropriate for the changes now under way. The tide is already turning, with many companies, large and small, developing pilot projects with the new technologies to which I allude. The movement will gain momentum as the tools prove themselves capable of delivering "production-quality" applications.

There are two components in this new world order: Java and the universal client, also known as the browser. The Java environment makes it almost impossible to introduce programming errors of the kind which plagued other languages. There is no explicit requirement to free allocated memory (a source of "memory leaks" in the C language), as memory is automatically reclaimed when it is no longer referenced. Similarly, it is impossible to reference a local variable which has not been initialized (although this error was detectable by the lint utility in UNIX).
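
A minimal sketch of these two safety features, using illustrative names of my own choosing:

    public class SafetyDemo {
        public static void main(String[] args) {
            // No explicit free: once buf is no longer referenced, the
            // garbage collector reclaims it. The equivalent C code
            // would leak memory without a call to free().
            for (int i = 0; i < 1000; i++) {
                StringBuffer buf = new StringBuffer("request-" + i);
            }

            int total;                     // declared, not yet assigned
            // System.out.println(total);  // rejected by the compiler:
            //                             // "variable total might not
            //                             // have been initialized"
            total = 42;                    // once assigned, use is legal
            System.out.println(total);
        }
    }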

The universal client eliminates the need for client code tied to a single application. The days of the client/server architecture as currently implemented in products such as PowerBuilder are coming to a close. It is simply too unwieldy to keep distributed clients synchronized with the server code, even using automatic software distribution tools. The use of a browser, whether Microsoft's Internet Explorer, Netscape's Navigator, or even NCSA Mosaic, eliminates the need for custom client code for each and every application.

Any browser which supports Java applets is a viable platform for a myriad of applications acting as clients in the new client/server architecture. This new architecture will utilize international standards to provide a robust n-tier model, with facilities such as network transparency and dynamic load balancing. Load balancing can also provide redundancy and fault tolerance, giving clients consistent access to applications. Application systems can incorporate legacy applications as well as multiple network architectures.

CORBA

The new "middle-ware" tying diverse elements together will be CORBA (Common Object Request Broker Architecture,) as implemented by protocols such as IIOP (Internet Inter-ORB Protocol.) CORBA is growing to incorporate elements essential to the new client/server paradigm. Transaction services, essential for mission-critical and e-commerce applications, were adopted as part of CORBAservices RFP 2 in 1994. The following diagram shows the architecture of the new model.

The ability to integrate applications, or application components, located on various machines in a corporate network is what makes the CORBA technology so compelling. Language mappings for CORBA include COBOL, Ada95, Smalltalk, C/C++ and Java. These mappings support application development from the mainframe, through UNIX and VAX/VMS platforms, down to PC server-class machines (servers are intended to operate continuously, unlike desktops).

CORBA and CORBAservices provide a robust environment for distributed computing. While the CORBA core provides the facilities for inter-process communication (whether local or remote), CORBAservices adds DCE-like features: naming, transaction, time and security services. Additional CORBAservices provide more powerful features allowing dynamic discovery of relationships and properties. These will see use in ever more powerful browser and application-builder systems.
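
As a sketch of how a Java client might use the Naming Service, one of the CORBAservices just mentioned (the "Account" binding is hypothetical, standing in for whatever interface an IDL compiler would generate helpers for):

    import org.omg.CORBA.ORB;
    import org.omg.CosNaming.NameComponent;
    import org.omg.CosNaming.NamingContext;
    import org.omg.CosNaming.NamingContextHelper;

    public class NamingClient {
        public static void main(String[] args) throws Exception {
            // Initialize the ORB; command-line arguments can carry
            // the initial reference to the Naming Service.
            ORB orb = ORB.init(args, null);

            // Resolve the standard bootstrap name for the service.
            org.omg.CORBA.Object ref =
                orb.resolve_initial_references("NameService");
            NamingContext naming = NamingContextHelper.narrow(ref);

            // Look up an object bound under the (hypothetical)
            // name "Account".
            NameComponent[] path = { new NameComponent("Account", "") };
            org.omg.CORBA.Object obj = naming.resolve(path);

            // In a real system, obj would be narrowed with the
            // IDL-generated helper, e.g.:
            //   Account account = AccountHelper.narrow(obj);
            System.out.println("Resolved: " + obj);
        }
    }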

Servers

As can be seen, there is an appropriate focus on ultra-high-reliability back-end servers in this architecture, which is as it should be. While great strides have been made in improving the reliability of PC hardware and software (such as Windows NT), these platforms do not generally provide the level of reliability required to support mission-critical applications. Even with the introduction of high-performance hardware such as Intel's new Xeon, the PC world lags behind the traditional datacenter in the discipline, standards and procedures necessary to support the enterprise.

Servers which support mission-critical applications are required to have high availability, on the order of 99%+ uptime. Vendors of these systems typically provide service contracts which specify time limits for responding to system failures (not just complete outages but hardware faults, disk crashes, etc.). Typing the command "uptime" on an HP UNIX server, for example, typically shows an uptime of more than six months. Anecdotal evidence suggests that NT servers need to be rebooted once a week; personal experience demonstrates that the DNS service on NT Server 4.0 is only reliable for about a week.

Clients

The client in the new world order need only support a web browser in order to participate in the new paradigm. While these clients are typically hosted on PC desktops today, there is marketplace pressure for higher reliability and lower total cost of ownership (TCO). The desktop Windows environment is frustratingly unstable at present: almost anyone who uses a desktop computer intensively experiences all manner of general protection faults, lock-ups, and the like. While some of these are due to misbehaved application software, many are due to the underlying system software.

A recent architectural solution to the TCO and management issues is the "thin client." As usual, there are competing technologies in this area. The Microsoft solution, known as the NetPC, uses current desktop hardware and Microsoft's proprietary operating environment. A more ambitious solution is the network computer, or NC. Using inexpensive hardware and requiring no local configuration, this architecture will appeal to organizations which use a relatively small suite of applications. Although NCs are early in the deployment cycle, and not yet as stable as they need to be, I expect this to be a growing market.

Java

The object orientation and strong typing features of the Java language contribute to the development of "clean" code, free of typical bugs. No longer will programs require extensive debugging to uncover errant pointers or uninitialized variables; the Java compiler simply refuses to compile code which contains these and other errors. Java provides robust exception handling, insisting that code either handle or declare the exceptions it could throw. Run-time exceptions are rare, and when they do occur they generate a detailed stack trace which assists in problem determination efforts.
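
A brief sketch of this compiler-enforced exception handling (the file name is an arbitrary example):

    import java.io.FileReader;
    import java.io.IOException;

    public class ExceptionDemo {
        // The compiler insists that an IOException thrown here be
        // either caught or declared in the throws clause.
        static int firstChar(String name) throws IOException {
            FileReader in = new FileReader(name);
            try {
                return in.read();
            } finally {
                in.close();   // runs whether or not read() throws
            }
        }

        public static void main(String[] args) {
            try {
                System.out.println(firstChar("config.txt"));
            } catch (IOException e) {
                // Omitting this catch (and the throws clause above)
                // would be a compile-time error, not a run-time
                // surprise.
                e.printStackTrace();   // the detailed stack trace
            }
        }
    }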

Java can be used in three different environments: applets, servlets and full applications. The applet class is familiar to anyone who has used a web browser recently. Full applications are still fairly rare but are becoming more common for small, special-purpose enterprise applications. The newest Java environment is the servlet, an application which runs on a web server. Supported by all the major server products, servlets can replace CGI scripts or serve as elements of an n-tier architecture.
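
For the applet case, a minimal example is enough to show the model; the class name and message here are mine:

    import java.applet.Applet;
    import java.awt.Graphics;

    // Compiled to HelloApplet.class and referenced from a page with:
    //   <applet code="HelloApplet.class" width="200" height="50"></applet>
    public class HelloApplet extends Applet {
        public void paint(Graphics g) {
            // The browser calls paint() whenever the applet's
            // display area needs to be redrawn.
            g.drawString("Hello from the universal client", 20, 25);
        }
    }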

While powerful, Java applets are not necessarily the best way to interact with end users. A large applet can require considerable network bandwidth, and time, to load, which is unacceptable to sophisticated users accustomed to rapid responses to their requests. A more compact interface model is provided by HTML forms. Used in conjunction with SSL, forms can both collect information from, and present information to, the user. While forms processing is currently handled by CGI scripts, a more robust and better-performing alternative is the Java servlet.

[Diagram: an n-tier architecture illustrating how Java can be used in the CORBA environment]
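
A sketch of a form-handling servlet along these lines (the servlet name, URL and form field are hypothetical):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Handles a form such as:
    //   <form method="POST" action="/servlet/OrderServlet">
    //     <input type="text" name="customer">
    //   </form>
    public class OrderServlet extends HttpServlet {
        public void doPost(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            String customer = req.getParameter("customer");

            res.setContentType("text/html");
            PrintWriter out = res.getWriter();
            out.println("<html><body>");
            out.println("Order received for " + customer);
            out.println("</body></html>");
        }
    }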

Of course, the Java initiative encompasses far more than just the language. The JDBC extension provides database connectivity similar to that provided by ODBC. Remote Method Invocation (RMI) provides RPC-like capabilities. The Abstract Window Toolkit (AWT) provides a platform-dependent "look-and-feel" behind a single programming interface. The network API provides access to low-level communications protocols. JavaBeans provides the ability to create reusable components, or beans, which can be combined to create applications. Additional features are being announced monthly.
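
As a sketch of the JDBC style of database access (the driver class, connection URL and table are placeholders; here I assume the JDBC-ODBC bridge shipped with the JDK):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcDemo {
        public static void main(String[] args) throws Exception {
            // Load a JDBC driver (the class name is vendor-specific).
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

            // Connect, query, and print results.
            Connection con =
                DriverManager.getConnection("jdbc:odbc:payroll", "user", "pw");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT name, salary FROM emp");
            while (rs.next()) {
                System.out.println(rs.getString("name") + ": "
                                   + rs.getDouble("salary"));
            }
            rs.close();
            stmt.close();
            con.close();
        }
    }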

Summary

The original computing paradigm consisted of a single, monolithic computer system which operated in batch mode. Terminals were added to provide interactive computing capability. The PC revolution provided small, independent pockets of computing power, unbound from the mainframe. The need to share information led to the deployment of LANs, and even to reconnection with the mainframe. Client/server computing utilized the local processing power of the desktop while sharing certain centralized resources.

We are now on the verge of what Sun Microsystems calls "network computing." This architecture promises seamless integration of heterogeneous hardware and software systems. Building on distributed components and technologies such as CORBA, developers will deploy industrial-strength applications offering security, integrity, location independence and dynamic resourcing. This powerful new paradigm boasts features originally found on the mainframe computers of yore. The circle is now complete.

Copyright 1999 by Phil Selby

Microsoft Windows and Windows NT, Sybase PowerBuilder, Intel Xeon, Digital Equipment Corporation VAX and VMS, OSF UNIX and Sun Microsystems Java and JavaBeans are trademarks of their respective companies.