February 18, 1998

I was prompted to write this letter to the editor of Network World after reading a series of published articles about "Fast Token Ring," most notably, "Token Ring: Already built for gigabit speeds." By the way -- although several of my previous letters had been published, the editors of Network World did not publish this one -- wonder why...
  
 

    Readers new to Network World and many computer industry novices may actually start to believe Kevin Tolly’s misguided garbage about token ring! I think it is fair to say most of us acknowledge token ring as a has-been technology that most organizations would rather avoid, yet Tolly continues to find refuge in the pages of Network World, an otherwise honest and flawless trade publication that I have been reading for many years. This guy is peddling an agenda rather than practicing objective journalism, and you, my friends, are providing him with a soapbox. I have written to you before to refute Tolly’s claims in numerous articles about the fictional fast token ring. I am writing again to debunk the claims in his recent Network World article, "Token Ring: Already built for gigabit speeds."
    In the opening paragraphs of this piece Tolly reflects on the intentions of token ring’s creators - that they "[believed] Ethernet’s architecture to be flawed" and "deliberately designed their LAN to avoid such problems." In so doing, those creators, to use Tolly’s own words, "over-engineered" the token ring architecture. In this article Tolly attempts to turn this into an advantage for gigabit token ring (another non-existent technology, by the way). Token ring is so over-engineered and overly complicated that it requires several ancillary functions and backup mechanisms just to keep it running properly (i.e., Active Monitor, Standby Monitors, Neighbor Notification, Claim Token, and Beaconing -- which never seems to fix anything without human intervention - but I digress…). Because so many things can and do go wrong within the normal operation of the token ring access method, numerous mechanisms are implemented to ensure that the ring can recover when problems do occur. Furthermore, token ring requires every frame to pass through every length of cable and every potential point of failure in the entire token ring LAN (see illustration). Surely this cannot be considered a "good idea" -- not to mention the latency such a system imposes on the entire network.
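    To put a rough number on that ring-wide latency, here is a back-of-the-envelope sketch. The station count, per-station cable length, and propagation figure are my own assumptions for illustration, not measurements:

    # Every frame traverses every station and every cable segment on the
    # ring, so per-station delays accumulate. All figures are assumptions.
    STATIONS = 100              # hypothetical ring size
    BIT_DELAY_PER_STATION = 1   # 802.5 requires at least a 1-bit repeat delay
    CABLE_PER_STATION_M = 50    # assumed lobe/trunk cable per station, meters
    PROP_NS_PER_M = 5           # rough signal propagation in copper

    for speed_mbps in (4, 16):
        bit_time_ns = 1000 / speed_mbps
        station_ns = STATIONS * BIT_DELAY_PER_STATION * bit_time_ns
        cable_ns = STATIONS * CABLE_PER_STATION_M * PROP_NS_PER_M
        print(f"{speed_mbps:2} Mbps ring, {STATIONS} stations: "
              f"~{(station_ns + cable_ns) / 1000:.1f} us added to every frame")

    Even before anything goes wrong, every station on the ring taxes every frame.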
    Tolly goes on to state that "Key elements of token ring’s architecture would not become meaningful, or exploitable at the original 4M bit/sec LAN speed. Without knowing it, the first architects were designing gigabit token ring." I suppose Tolly thinks these architects were either psychics or divinely inspired! I only resort to the absurd because Tolly went there first. His entire article is absurd. And who exactly were those "first architects" he speaks of with such admiration? How about a little chronological accuracy to illuminate just how visionary those first architects were? Please correct me if I am mistaken, but as I recall, Proteon developed the first commercial token ring products in 1981. This is easier to present in a table (see below - all are token passing rings). Of all these efforts only the last two are still significant enough to warrant mentioning, and they are quickly being eclipsed by newer, faster, and frequently cheaper products.
 
 
Year  Origin      Technology   Speed
1981  Proteon     ProNET/10    10 Mbps token passing ring
1983  Proteon     ProNET/80    80 Mbps token passing ring
1985  IBM         IEEE 802.5   4 Mbps token passing ring
1987  ANSI X3T9   FDDI         100 Mbps token passing ring
1988  IBM         IEEE 802.5   16 Mbps token passing ring
 

    In his next paragraph Tolly addresses the "unique set of challenges" presented by campus backbones. He ponders "how can gigabit pipes be used efficiently when, even using Ethernet’s largest possible frame size, some 80,000 frame/sec are needed to fill the pipe?" This is a meaningless red herring -- what we in the trenches usually refer to as adult male bovine fecal material. Could Ethernet benefit by implementing a somewhat larger frame size? Even Ethernet’s inventor, Bob Metcalfe, says yes - but what has this got to do with using gigabit pipes efficiently? Just because a network can support large frames does NOT mean the protocols or applications will use them! And more importantly, just because a network can support large frames does NOT mean it is a good idea!
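    For the record, that scary-sounding 80,000 frame/sec is nothing more than line rate divided by bits per frame. A quick sketch (the ~18K byte figure is the commonly quoted 16M bit/sec token ring maximum; treat both sizes as approximations):

    # Frames per second needed to saturate a gigabit pipe at a given
    # maximum frame size -- simple arithmetic, not an argument.
    LINE_RATE = 1_000_000_000      # 1 Gbit/sec
    for name, size in (("Ethernet, 1518 B", 1518),
                       ("Token ring, ~18K B", 18_000)):
        frames = LINE_RATE / (size * 8)
        print(f"{name}: ~{frames:,.0f} frames/sec to fill the pipe")

    The frame count by itself says nothing about whether the pipe is used efficiently -- that is determined by the applications, as I argue below.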
    In mid-paragraph the author then jumps track to the issue of providing Quality of Service in a multi-switch backbone. While this is an interesting topic with several solutions underway, it has nothing to do with the topic at hand - the effect of frame size on network efficiency. Then he throws in another red herring by raising the question, "how can a mesh network be built with Layer 2 switches?" The short answer is that robust, reliable, fault tolerant mesh networks are not built with Layer 2 switches - we use routers, also known in some cases as Layer 3 switches. I realize that such a curt answer will meet with more than a few rebuttals, but again this is not the topic of this article.
    Later in the piece Tolly continues this frame fallacy when he states that "Larger frame sizes are a key factor in achieving full utilization of a high speed LAN." Tolly cites Ethernet’s current maximum frame size (1518 bytes) as the source of an efficiency problem, but neglects to mention that larger frame sizes only benefit large file transfers and can even cause inefficiencies of their own! He makes the broad, and in my opinion misleading if not erroneous, claim that larger frames deliver better effective throughput, and then goes on to make a completely meaningless reference to the number of token ring frames required to fill a gigabit pipe. It’s as if Tolly thinks network architects (read: makers of LAN products) have any say whatsoever about what frame sizes a given application will require! It is the applications which drive network efficiency (or inefficiency, as is more often the case)! Telnet will still transmit one character of data at a time even though the LAN may be capable of supporting an 18K byte frame. The content of Web pages is usually kept small so that it can be retrieved rapidly over a variety of network connections. Databases usually generate relatively small queries and transfer individual records between clients and servers. Some databases may perform larger file transfers between servers or between clients and servers, but this should be kept to a minimum. And then there is video conferencing and voice over IP.
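    The point is easy to demonstrate: framing efficiency is set by the application’s payload, not by the largest frame the LAN could carry. A sketch with assumed (but typical) payload sizes -- the numbers are mine, for illustration only:

    # Ethernet wire overhead: preamble+SFD (8) + header (14) + FCS (4)
    # + inter-frame gap (12) = 38 bytes; payloads under 46 bytes are padded.
    OVERHEAD = 38
    workloads = {
        "telnet keystroke": 1,       # assumed payload sizes, in bytes
        "database query":   200,
        "bulk file chunk":  1460,
    }
    for name, payload in workloads.items():
        wire = max(payload, 46) + OVERHEAD
        print(f"{name:16}: {payload:5} B payload -> "
              f"{payload / wire:6.1%} efficient on the wire")

    A telnet keystroke is barely 1% efficient no matter how big the LAN’s maximum frame is; an 18K byte maximum would not change that figure one bit.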
    Consider the following: ATM and voice over frame relay. The 53-byte cells used by ATM are the smallest transmission units used in any commercial network technology, yet ATM is capable of supporting high-speed data transfers as well as live, interactive (isochronous) video and voice sessions via its various service classes and ATM Adaptation Layers (AAL 1 through 5). This seems to contradict Tolly's argument for big frames, but moreover, simply providing Quality of Service is not enough. Latency must be controlled, consistent, and low in order to efficiently support video and voice. ATM accomplishes this by keeping the maximum transmission unit small and fixed -- not big and variable!
    Vendors that sell voice over frame relay products accomplish this in much the same way, by restricting the maximum transmission unit (MTU) to a very small size (64 to 128 bytes). Of course, this does mean sacrificing the efficiency of data transmissions to accommodate the more delay-sensitive video and voice sessions. The fact that many organizations are considering implementing voice and video over their data networks means that larger frames would be a detriment, NOT a benefit! Ergo, the entire premise of Tolly’s argument for larger frames is bogus. (Please correct me if you think I’m wrong - there’s no point in being wrong - and I hate being wrong…)
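    The reason for those tiny MTUs is serialization delay: a voice frame queued behind one big data frame must wait for the entire frame to clock out onto the link. A sketch with illustrative link speeds and frame sizes:

    # Milliseconds to serialize one frame onto a link of a given speed.
    def serialization_ms(frame_bytes, link_bps):
        return frame_bytes * 8 / link_bps * 1000

    for link_bps in (64_000, 1_544_000):        # 64K circuit, T1
        for frame_bytes in (128, 1518, 18_000):
            print(f"{frame_bytes:6} B frame on {link_bps // 1000:5} kbps link: "
                  f"{serialization_ms(frame_bytes, link_bps):7.1f} ms")

    On a 64K circuit a single 1518-byte frame blocks the link for roughly 190 ms -- already fatal for interactive voice -- and an 18K byte frame would block it for over two seconds.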
    Furthermore, if anyone actually implemented the ~18K byte frames supported by 16Mbps token ring it would surely have a negative impact on overall network performance. Larger frames mean greater inter-token latency (i.e., longer delays between free tokens). Therefore, not only would larger frames cause enough variable latency on the network to disrupt video and voice sessions, they would also impact the response time of some ordinary data transmissions! By the way -- Tolly’s precious SNA systems rarely (if ever) require frames larger than those provided by Ethernet. And finally, most WAN services that I’ve seen use smaller frames. The Internet has long used a default IP datagram size of 576 bytes (I believe that’s the magic number), and the process of segmenting large LAN frames into smaller chunks to be passed over the WAN adds latency at the router (or whatever edge device you choose to use)! Bigger is NOT always better!
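    How bad is the inter-token latency? The same serialization arithmetic applies on the ring itself: no other station sees a free token until the current frame finishes. A quick sketch, ignoring propagation and token-holding-timer details:

    # Time one frame occupies a 16M bit/sec ring before the token frees up.
    RING_BPS = 16_000_000
    for frame_bytes in (1518, 4472, 17_800):    # small, mid, near the max
        ms = frame_bytes * 8 / RING_BPS * 1000
        print(f"{frame_bytes:6} B frame holds the ring for ~{ms:5.2f} ms")

    A near-maximum frame monopolizes the ring for almost 9 ms; a few of those back to back and your voice session is toast.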
    Tolly thinks traffic prioritization schemes such as token ring’s priority mechanism are a panacea - never mind that virtually no protocols or applications support it. Token ring’s priority bits are rarely, if ever, used; they are a moot point, because there are no applications or protocols that can make such a priority request. If I’m wrong, name 2 protocols or applications that do. In addition, it seems to me that by using token ring’s priority mechanism you negate the deterministic characteristics of the network. Furthermore, token ring’s deterministic nature cannot guarantee a constant bit rate as many people may assume; it only guarantees that the performance of all stations suffers and degrades equally.
    Again Tolly illuminates his misunderstanding of the subject as he attempts to defame the Spanning Tree Protocol in favor of Source Route Bridging. Spanning Tree was developed to provide fault tolerance between LAN segments - and it does this. Source Route Bridging does support multiple concurrent paths as the article states, but each session’s path of rings and bridges is fixed once the session is established and cannot change. Source Route Bridging does not provide the fault tolerance Tolly claims it does.
    If a Source Route Bridge or one of its rings goes down, the sessions using that path also go down - even though there are other ring/bridge paths available -- SRB does not dynamically reroute sessions around the failure area. Furthermore, IEEE 802.1D Transparent Bridging (which is what this paragraph is really about) places the path processing burden on the bridges rather than the end stations, and it was designed to support token ring and FDDI as well as Ethernet. An 802.1D bridge dynamically learns the MAC address of each station on each segment connected to it and builds and maintains its forwarding table automatically.
    A Source Route Bridge only knows its own Bridge ID and the Ring IDs of the rings attached immediately to that bridge, and all of this has to be programmed by a person! This means administrative overhead. If the network changes, a person must reprogram all of the affected Source Route Bridges! Although Source Route Bridging places the processing burden on the end stations, the end stations know nothing about the network beyond those rings and bridges they are currently using. Also, the Source Route Bridges know nothing about the end stations, other bridges, or other rings not directly attached to them. As such, Source Route Bridges cannot provide fault tolerance by rerouting sessions around failed areas of the network.
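    The contrast is easy to see in a few lines. Here is a minimal sketch of an 802.1D-style learning bridge -- hypothetical code, not any vendor’s product -- showing why nobody has to program anything:

    # A transparent bridge learns source MACs as frames arrive and builds
    # its own forwarding table; unknown destinations are flooded.
    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports
            self.table = {}                  # MAC -> port, learned on the fly

        def receive(self, src, dst, in_port):
            self.table[src] = in_port        # learn: zero administration
            out = self.table.get(dst)
            if out is None:                  # unknown destination: flood
                return [p for p in self.ports if p != in_port]
            if out == in_port:               # same segment: filter
                return []
            return [out]                     # known destination: forward

    bridge = LearningBridge(ports=[1, 2, 3])
    print(bridge.receive("A", "B", in_port=1))   # B unknown -> flood [2, 3]
    print(bridge.receive("B", "A", in_port=2))   # A learned -> forward [1]

    A Source Route Bridge, by contrast, forwards nothing until a person has keyed in its Bridge ID and Ring IDs.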
    Tolly’s distaste for layer 3 internetworking (i.e., routers) is always readily apparent, and in my opinion further illuminates his ignorance of how internetworking works on THIS planet. He refuses to acknowledge that as long as we use TCP/IP (and IPX, and DECnet, et al…) routers will be required. The only way to move an IP packet from one network or subnet to another is through a router (or layer 3 switch, which is usually the same thing). And finally, Layer 2 switches are great for managing LAN bandwidth and are an excellent complement to routers, but routers can provide certain capabilities that switches cannot. But alas, this has nothing to do with the topic of the article…
    The architectural superiority of token ring is an academic debate best conducted over many beers rather than in the pages of this publication. Clearly, there is inadequate interest in the market to warrant the continued pursuit of this technology. Besides, we’ve had access to a form of fast token ring for over a decade now with FDDI. It has always been too expensive and still lacks popularity after all these years even though it CAN provide fault tolerance!
    So I ask you, why does Network World continue to print the biased and erroneous information propagated by just this one person? Is there no one else in the entire industry who would take the same positions as Tolly? Clearly he has an agenda - and it is not objective reporting -- not when his company is being funded by the High Speed Token Ring Alliance!

Your pal,
    Buddy Shipley ;-}
    Buddy@VailMtn.net