Networking Trends
A Personal View

Phil Selby

Technical Architect

July 8, 1997

Overview

The trend in Personal Computers (PCs) has come full circle. PCs began as autonomous tools for local data processing, bypassing the corporate mainframe located in a "glass house", but it soon became apparent that access to enterprise data would be required in order to be productive. The first links back to the corporate mainframe were through terminal emulation interface cards, which permitted the PC to appear as a 32XX or 52XX terminal to a local controller. Software was developed which could automate interactions with the mainframe and extract data from the virtual screen.

As data storage requirements grew, and access to data among multiple users had to be provided, Network Operating Systems (NOSs) such as Novell Netware flourished. Although initially expensive, the advantages of such an approach were justifiable to larger organizations. Economies of scale permitted installation of large, expensive (at the time) disks in a central server rather than in many individual machines. NOSs also provided support for other shared resources, such as printers.

While the centralization of file and print services was advantageous, it also created a single point of failure. Vendors responded to the need for high availability by creating more expensive servers with redundant, and sometimes hot-swappable, hardware components. Data integrity and availability issues were addressed by the provision of Redundant Arrays of Inexpensive Disks (RAID). Multiple servers could even be configured in situations where data processing was mission critical.

As servers became more sophisticated, it was necessary to have skilled personnel to configure and maintain them. Backups were too important to be left to administrative personnel, who could not be relied upon to carry them out consistently. User and group management, along with management of resources and capacity planning, became the responsibility of staff trained in the particulars of the NOS. Eventually, physical security concerns led to the move of servers to sites with access controls and professional personnel; the "glass house" for PCs.

In order to maximize the value of information resources to the company, many organizations are connecting their offices together to create a Wide-Area Network. While options for implementing this have been few in the past, today there exist a wide range of available technologies and services. As more companies have need for similar services, providers have developed mechanisms for the sharing of communication infrastructures over both short and long distances. This trend is driving the latest service offerings of high-speed communications services.

Local-Area Networking

The "glue" which tied the PCs, servers and mainframes together was the Local-Area Network (LAN). While protocols already existed for communication between mainframes and various devices (e.g. Synchronous Data Link Control (SDLC)), those protocols were typically carried over dedicated, point-to-point links. In order to share resources within a group, a new paradigm was needed. Pioneering work to address these issues had already been carried out at the Xerox Palo Alto Research Center (PARC) and was to form the basis for the first PC LAN.

Current LAN technologies include token-ring, token-bus and ethernet. IBM Token-rings, running at either 4 or 16 Mbps, are suitable for networks where guaranteed access and a fixed maximum latency are required. Token-bus networks range from the simple and inexpensive ARCNet to the high-speed and fault-tolerant Fiber Distributed Data Interface (FDDI). Ethernet is arguably the most commonly installed networking technology, using Carrier Sense Multiple Access/Collision Detection (CSMA/CD) to arbitrate access to a common bus.
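The CSMA/CD arbitration can be sketched in a few lines of Python. The constants (16 attempts, a backoff window capped at 1024 slots) follow the classic ethernet rules; the callables standing in for carrier sense and collision detection are purely illustrative.

```python
import random

MAX_ATTEMPTS = 16    # classic ethernet gives up after 16 tries
BACKOFF_CAP = 10     # window stops doubling at 2**10 = 1024 slots

def csma_cd_send(channel_busy, collision_occurred):
    """Attempt to transmit one frame under CSMA/CD rules.

    channel_busy and collision_occurred are callables standing in
    for the carrier-sense and collision-detect circuitry.
    Returns the attempt number on success.
    """
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while channel_busy():          # carrier sense: defer while busy
            pass
        if not collision_occurred():   # transmit; watch for a collision
            return attempt
        # Collision: back off a random number of slot times, doubling
        # the window each attempt (truncated binary exponential backoff).
        window = 2 ** min(attempt, BACKOFF_CAP)
        random.randrange(window)       # slots to wait (sleep omitted in sketch)
    raise RuntimeError("excessive collisions; frame dropped")

# Illustration: an idle channel with no collisions succeeds immediately.
print(csma_cd_send(lambda: False, lambda: False))  # 1
```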

While currently installed LAN topologies have sufficed for the file and print services they were designed for, new technologies demand higher bandwidth than they can provide. In particular, Geographical Information Systems (GIS), Computer-Aided Design (CAD) systems, and multimedia applications all require significant bandwidth due to the size of the data files used. In the case of multimedia applications, data has to be provided isochronously in order to avoid dropouts in the audio and/or video signal delivered to the end user.

There are a number of different approaches which can be taken in order to deliver higher bandwidth to the desktop. Proposals for higher speed token-ring, FDDI and ethernet are already being discussed, and in some cases deployed. Ethernet at 100 Mbps is the most common example of this trend. Although the signaling rate has been increased by a factor of ten, it uses the same CSMA/CD mechanism as 10 Mbps ethernet.

Another mechanism used to increase speed to the desktop is the use of switching hubs. Using a high-speed switching backplane, communications between ports on a hub are routed directly between source and destination. Unlike broadcast ethernet, end user stations connected to a switching hub see only traffic which is directed to them. This eliminates the contention for a single common resource which has traditionally been the hallmark of ethernet.
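The difference between a broadcast hub and a switching hub comes down to a forwarding table learned from source addresses. The sketch below, with invented MAC strings and port numbers, shows the idea: a frame is sent only to the port where its destination was last seen, and flooded only when the destination is unknown.

```python
class SwitchingHub:
    """Minimal sketch of a MAC-learning switch: frames go only to the
    port where the destination was last seen, not to every port."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}            # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded to."""
        self.mac_table[src_mac] = in_port          # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # unicast to one port
        # Unknown destination: flood, as a shared-bus ethernet would.
        return [p for p in range(self.num_ports) if p != in_port]

hub = SwitchingHub(8)
hub.receive(0, "aa:aa", "bb:bb")       # unknown dst: flooded to ports 1-7
print(hub.receive(3, "bb:bb", "aa:aa"))  # [0] -- sent only to the learned port
```

Stations on ports 1 through 7 never see the reply frame, which is precisely how the switch eliminates contention for the shared medium.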

The latest innovation in the provisioning of high speed networking to the desktop is Asynchronous Transfer Mode (ATM). Although originally designed to support communications at Optical Carrier 12 (OC-12) speeds (622 Mbps) and above, recent implementations support speeds of 25 Mbps (Desktop ATM25) and 155 Mbps (OC-3). While providing higher bandwidth, these technologies are complex and not all of the associated standards have been finalized. As a result, interoperability of equipment from different vendors is not guaranteed.

Many companies are investigating the potential of ATM networking, but there are many different ways in which the technology can be incorporated into the network architecture. While workstations with high-bandwidth requirements might benefit from ATM to the desktop, office automation PCs might only need to use ATM to communicate between hubs, and from the hubs to the servers. Other than higher speed, ATM does not currently offer enough additional functionality to justify wholesale replacement of current infrastructures.

The quest for higher-speed networking is involved and expensive. In many cases, it is not possible to justify the additional costs which would be incurred, especially if there are no quantifiable benefits. It is important to realize that network bottlenecks do not occur only in the LAN hardware but in a number of other areas. Unless an analysis of the current network architecture can show that the network infrastructure is the limiting factor with respect to throughput, resources would be better committed to other areas.

In order to clarify the assertions made in the previous paragraph, consider the case where a decision is made to replace a traditional UTP ethernet hub with a new switching model. The bandwidth available between node and hub now approaches the theoretical limit of ethernet, since each link carries only two stations. However, if 50 workstations require access to a single server, the link between the server and the hub can become saturated. At that level of saturation, the server might not be able to respond to requests in a timely manner, resulting in time-outs and possibly a level of performance worse than before the hub was replaced.
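The arithmetic behind that scenario is straightforward. The per-station load below is a hypothetical figure chosen for illustration; only the workstation count and the 10 Mbps server uplink come from the example.

```python
# Hypothetical oversubscription arithmetic for the scenario above.
workstations = 50
demand_per_station_mbps = 2.0    # assumed average load each PC offers
server_uplink_mbps = 10.0        # single 10 Mbps link from hub to server

aggregate_demand = workstations * demand_per_station_mbps
oversubscription = aggregate_demand / server_uplink_mbps
print(aggregate_demand, oversubscription)
# 100.0 Mbps offered to a 10 Mbps link: 10x oversubscribed
```

Even modest per-station traffic, multiplied across 50 now-uncontended access links, swamps the one link the switch cannot widen.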

This example is only intended to highlight potential problems with the deployment of new networking hardware before obtaining a complete picture of the current architecture. Changes to the network infrastructure should be made only with a full understanding of the current bottlenecks and traffic patterns. Server or disk upgrades could result in much improved application performance with absolutely no changes to the network hardware.

If analysis indicates that the current network infrastructure is inadequate for current and potential needs, a migration plan should be developed for high-speed networking deployment. By implementing the plan over a period of time it is possible to maximize the current network architecture investment. Phased implementation also spreads the costs associated with the migration over an extended period of time. Finally, experience will be gained with the efforts involved in the migration process.

Over the next few years, I expect to see incremental moves toward high-speed LAN implementations. The infrastructure already in place is expensive and, in most cases, quite adequate for current and near-future requirements. Even with the deployment of intranet applications, judicious use of technologies such as Java should minimize bandwidth requirements. Evolutionary improvements in file server performance should shortly result in the ability to utilize higher-speed network interfaces.

Once high-speed network interfaces are deployed on local servers, the remainder of the communications infrastructure can be enhanced to take advantage of this new level of performance. Moving from 10 Mbps to 100 Mbps ethernet is an incremental step which can be achieved at relatively low cost. The largest investment will be in the network interface cards for the PCs. The cost per port of a non-switched 100 Mbps hub can be as little as $300.

The next phase of implementation would involve the replacement of the non-switched hubs with switched hubs. Although the cost per port is roughly ten times that of a non-switched hub, the network interface cards in the PCs would not have to be replaced in order to fully utilize the new capabilities. Companies would be well advised to start replacing current NICs with 10/100 Mbps models now; doing so would spread the cost of the upgrade over a longer period of time and would pave the way for future enhancements. Except for workstations or PCs with an identified need for isochronous communications, 100 Mbps switched communications should provide an adequate level of performance for at least the next 5-10 years.

Once switched 100 Mbps to the desktop has been implemented, further enhancements should focus on the network backbone. While some organizations already use either FDDI or 100 Mbps ethernet over fiber as their backbone, others have not yet implemented a backbone architecture. In either case, moving to ATM at either OC-3 or OC-12 speeds should significantly improve backbone performance and capacity. Where fiber has already been installed, moving to ATM need only involve replacement of the fiber termination equipment. Although multi-mode fiber can be used over distances of less than 2 kilometers, new installations should seriously consider deploying single-mode fiber, especially as the cost differential is minor.

Hub technology has improved to the point where incremental expansion is practical. Stackable hubs, or hubs with a high-speed backplane, can be extended with an ATM interface module. While the cost is currently quite high, these interfaces should become less expensive as they are more widely deployed. These hubs could be deployed in wiring closets and connect directly to an ATM backbone, providing a high level of performance and available bandwidth to the floor or campus level. Alternatively, a high-performance router could serve as the interface between the switched hub and the ATM backbone.

As mentioned earlier, and with a few notable exceptions, ATM to the desktop is likely to be unnecessary. Switched 100 Mbps technology should be able to provide adequate performance to the desktop, at reasonable cost. Despite recommendations which would permit transmission of ATM traffic on currently installed category 5 (and even category 3) cable plants, I believe that distance limitations will prevent this from being a practical alternative. NIC cost and interoperability issues will also likely work against implementation of ATM to the desktop.

Wide-Area Networking

Wide-area networks were, until recently, fairly expensive to implement. The most common architecture used point-to-point leased lines to connect multiple locations. The topology was typically implemented as a star, as the cost of leased lines is high, and increases according to speed and distance. Although technology exists to multiplex data and voice over leased lines, it was not widely deployed except by larger organizations. These companies could partially justify the cost of leased lines due to the reduction of long-distance voice communications between branches.

Leased lines are available at a wide range of speeds, from DS-0 (56/64 Kbps) up to OC-x. Full or fractional T1 (1.544 Mbps) is being deployed extensively these days. A T1 link can carry 24 DS-0 channels, with fractional T1 using only a subset of the total available channels. The interface to DS-0 or T1 links is typically provided through use of a Channel Service Unit/Data Service Unit (CSU/DSU). Access to fractional T1 is generally provided through a LARSCOM interface unit, which breaks out the active DS-0 channels and provides a V.35 synchronous serial interface for connection to the Customer Premises Equipment (CPE).
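The T1 rate itself falls out of simple channel arithmetic: 24 DS-0 channels plus one framing bit per 193-bit frame, sent 8,000 times a second. A quick sketch:

```python
# T1 framing arithmetic: 24 DS-0 channels plus framing overhead.
ds0_rate_bps = 64_000      # one DS-0 channel
channels = 24
framing_bps = 8_000        # one framing bit per frame, 8000 frames/s

t1_rate_bps = channels * ds0_rate_bps + framing_bps
print(t1_rate_bps)         # 1544000 -- the familiar 1.544 Mbps

# Fractional T1: e.g. a customer subscribing to 6 of the 24 channels.
print(6 * ds0_rate_bps)    # 384000
```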

Telephone companies use Time Division Multiplexing to carry multiple telephone conversations over a single link. This is necessary so as not to require a dedicated circuit from source to destination for every single simultaneous telephone conversation. In the same way that telephone conversations are multiplexed to maximize utilization of limited resources, so the telephone companies are deploying technologies which permit sharing of data-oriented communications resources.
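Time Division Multiplexing amounts to giving each conversation a fixed slot in every frame on the shared link. A minimal sketch, with invented two-sample "conversations":

```python
def tdm_multiplex(streams, frames):
    """Round-robin time-division multiplexing: one sample from each
    stream per frame, interleaved onto the shared link in fixed slots."""
    link = []
    for frame in range(frames):
        for stream in streams:
            link.append(stream[frame])   # each stream owns one slot per frame
    return link

# Three "conversations", two frames each.
a, b, c = ["a0", "a1"], ["b0", "b1"], ["c0", "c1"]
print(tdm_multiplex([a, b, c], 2))
# ['a0', 'b0', 'c0', 'a1', 'b1', 'c1']
```

Because each stream's slot position is fixed, the far end can demultiplex by position alone, with no per-sample addressing.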

One of the WAN technologies widely available these days is frame-relay. Access to a provider's frame-relay network is provided at speeds ranging from DS-0 to T3 (45 Mbps). Depending on the particular provider, backbone capacity can be impressive; 622 Mbps in the case of MCI. Rather than paying leased line charges from source to destination, a customer only pays for the leased line to a local point-of-presence (POP). The customer is also charged for network access and utilization. Customers will typically indicate how much network bandwidth they require through the Committed Information Rate (CIR).
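The CIR can be thought of as a credit the provider measures traffic against: frames within the committed burst pass as committed, while frames beyond it are marked discard-eligible (the DE bit in frame relay). The sketch below uses hypothetical figures and simplifies to a single one-second measurement interval with no excess burst.

```python
def police_cir(frame_sizes_bits, cir_bps, interval_s=1.0):
    """Mark frames against a Committed Information Rate.

    Frames within the committed burst (Bc = CIR * interval) are
    'committed'; frames beyond it are 'discard-eligible'.
    Simplified sketch: one measurement interval, no excess burst (Be).
    """
    committed_burst = cir_bps * interval_s
    used = 0
    marks = []
    for size in frame_sizes_bits:
        used += size
        marks.append("committed" if used <= committed_burst
                     else "discard-eligible")
    return marks

# Hypothetical: a 64 Kbps CIR and three 30,000-bit frames in one second.
print(police_cir([30_000, 30_000, 30_000], 64_000))
# ['committed', 'committed', 'discard-eligible']
```

Discard-eligible frames are still carried when the network has capacity; they are simply the first to be dropped under congestion, which is what makes bursting above the CIR useful but unguaranteed.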

Frame-relay networks function by providing virtual circuits (VCs) between subscriber locations. As such, it appears to the customer as a collection of point-to-point links. All of the VCs are multiplexed over a single communications link to the network. The VCs are typically configured by the service provider as Permanent Virtual Circuits (PVCs). Although some providers are considering implementing Switched Virtual Circuits (SVCs), security, signaling and control issues have yet to be fully addressed.

A more recent WAN service is variously named the Metropolitan Area Network (MAN) or the Switched Multimegabit Data Service (SMDS). Providing access at T3 rates, it is designed to provide interconnectivity over a limited geographical area. Although the Distributed Queue Dual Bus (DQDB) architecture can support multiple devices at the customer premises, it is usually deployed as a trivial subset with only two nodes on the bus. Due to the high speed of the leased-line connection to the network, the distance to the POP becomes critical to providing the service at a reasonable cost.

One of the primary goals of ATM was to provide a single facility which would support a wide variety of services. Isochronous communications support is an essential element of ATM. Not surprisingly, telephone companies were deeply involved in the specification of ATM, seeing it as a vehicle for the provision of integrated voice/data communications over a single link. Since one of the ATM Adaptation Layers (AALs) is geared expressly to the transport of SMDS (AAL 3/4), it is not surprising that ATM is positioned to provide WAN services.

Telephone companies will typically deploy ATM internally, providing a public User-Network Interface (UNI) for customer access to the network. While this will provide a powerful mechanism for companies with their own internal ATM networks, it will not be generally accessible to most subscribers. Since standards have yet to be finalized, I am not aware of any service provider currently providing UNI access. ATM has the potential to facilitate the next generation of WAN services, even though it might be impractical to connect to the public networks directly.

The best approach to the implementation of WANs is dependent upon the needs of the organization, and the services available. Frame-relay is widely deployed and attractively priced. For those organizations able to take advantage of the higher bandwidth, SMDS is being aggressively promoted at this time by certain providers. Since SMDS will be able to utilize the facilities provided by ATM in the future, it is an intelligent choice for investment protection. Given the cost and limited bandwidth, and the growing availability of ISDN, leased lines should not be considered for any new installations. Finally, it is not expected that public ATM access will become widely available in the near-term.

Conclusions

Given the volume of articles being published on networking technologies and strategies, it is sometimes difficult to discern the truth from the hype. What I have attempted to do is apply knowledge and experience to the question of which technologies are worthwhile and which will fade into oblivion, or otherwise remain inapplicable. Although I don't claim to have a crystal ball, most of the developments in the communications field which have been successful have shared a common genesis: a real solution to a real, identifiable problem.

ATM has been touted as the ultimate solution to computer networking. I believe that, while there are some highly appropriate applications for the technology, it will not replace solutions which work well today and are readily extensible to accommodate future requirements. Once a company has made a significant financial investment in their networking infrastructure, wholesale replacement of an operational network is unnecessary and unwise. ATM will be deployed in the LAN environment but not to the desktop, except where justifiable.

WAN design is less definable than LAN design, involving a variety of trade-offs, usually in the area of cost versus speed. Service availability still has a major impact on the choices made in WAN topology. Services such as ISDN are still not quite as ubiquitous as they should be, and telephone companies tout differing solutions, usually dependent upon their own infrastructure choices. Frame-relay and SMDS are both good choices for a WAN infrastructure, with decreasing prices and increasing local availability. They should also enjoy continued support, even with the eventual arrival of ATM.

Copyright © 1999 by Phil Selby