
SUNET TRef - report from vendor visits 2000



During November 2000 the SUNET technical reference group visited computer network software and hardware vendors in the USA. The motivation for the visits was to get firsthand information about where these vendors thought networking technology was headed, and to hear what new software and hardware they were planning to release over the next two years. The vendors we planned to visit were:

This report summarizes the information we received that is not covered by NDAs.

Summary

Only one really interesting new idea for how to build campus networks was presented; see the "Future" section below.

10 gigabit Ethernet is supported by everyone, and is seen as the big technology for building future LAN/MAN and even WAN networks. There are two main versions of 10 GE, one that leverages SDH technology to make a manageable 10 GE WAN solution, and one meant for direct connections over a single fiber/wavelength in a LAN environment.

The industry sees the wireless market exploding; the predictions run from 4 to 100 times as many wireless devices as wired ones. Wireless speeds will lag orders of magnitude behind wired, though, so wireless will be a complement to, not a replacement for, wired networks.

Regarding network management/monitoring, progress seems slow. A few improvements are to be expected, but at the cost of much more complex network management systems. The best bet still seems to be Spectrum.

Infrastructure

Cat5 twisted pair cabling is enough to support gigabit speeds at up to 100 meters. The margins are small, though, so you should test any cable that you plan to run gigabit Ethernet on. Cat5E is certified with such testing already done, so use Cat5E for new installations (it costs the same as Cat5). There seems to be no need to put in Cat6 or higher cabling.
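
As a back-of-the-envelope check on why ordinary four-pair cable can carry a gigabit at all, the sketch below (Python, with figures from the published 1000BASE-T design) shows how the load is spread over all four pairs:

    pairs = 4                  # 1000BASE-T transmits on all four pairs
    symbol_rate = 125e6        # symbols per second per pair
    bits_per_symbol = 2        # information bits per PAM-5 symbol
    throughput = pairs * symbol_rate * bits_per_symbol
    print("%.0f Mbps" % (throughput / 1e6))   # -> 1000 Mbps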

Single-mode fiber is the only viable alternative for anything but very short-range gigabit Ethernet, even more so for the upcoming 10 gigabit Ethernet standard. Deploy only single-mode fiber on long-range connections, and even over short distances single-mode should be predominant.

If you are planning fiber to the desktop you should install single-mode to be future-proof speed-wise, but there is a case for multi-mode if all you want is to increase the distance between desktop and wiring closet, not the speed. The equipment cost is still 3-4 times that of copper, though.

The cost for single-mode lasers will drop significantly starting late next year (2001).

The current generation of single-mode fiber will handle up to 40 Gbps per wavelength. To go faster than that per wavelength, new fibers might have to be deployed. This might affect the predicted lifetime of nationwide fiber deployments.

Power-over-Ethernet (via, for example, bias-T connectors or special switches with built-in power) for powering devices such as IP telephones and wireless base stations is gaining momentum. It gives 5 W today, aiming for 15 W. See IEEE 802.3af (DTE power via MDI).
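
The 15 W target roughly follows from the draft's electrical limits. The voltage and current figures below are our assumptions about those limits, used only to show the arithmetic:

    voltage_min = 44.0    # volts, assumed minimum source voltage
    current_max = 0.35    # amperes, assumed per-port current limit
    print("%.1f W" % (voltage_min * current_max))   # -> 15.4 W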

There will be GBICs for specific colors in a DWDM network, and inexpensive very short-range lasers for connecting to external multiplexors, with a range of around 300 meters at one-tenth the price of traditional short- or medium-range lasers.

The choice of connectors for future fiber cards does not yet seem to be settled; we did not get a definite answer on that. SC, LC and MT-RJ connectors were all mentioned.

10 gigabit Ethernet, IEEE 802.3ae

There is overwhelming support for 10 gigabit Ethernet. 10 GE will be widely deployed in LAN/MAN and even WAN networks. 10 GE will be fiber only, full duplex only.

There are two main versions of 10 GE: a LAN PHY meant for simple fiber connections with no repeaters, and hence with no means to monitor the underlying infrastructure, and a WAN PHY that can monitor each hop of a 10 GE link over a diverse underlying infrastructure, for example wavelengths, repeaters or traditional SDH system hops (OC-192c / VC-4-64c).
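
The WAN PHY pays for its monitorability with SDH framing overhead. A small sketch of the rate arithmetic (the line and payload rates below are our assumptions based on the draft, not vendor-confirmed figures):

    lan_phy_rate = 10.0e9          # bps, full 10 Gbps MAC rate
    wan_phy_line_rate = 9.95328e9  # bps, matches an OC-192/STM-64 signal
    wan_phy_payload = 9.58464e9    # bps, left after SDH framing
    overhead = 1 - wan_phy_payload / wan_phy_line_rate
    print("SDH framing overhead: %.1f%%" % (100 * overhead))  # ~3.7%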

The cost for 10 GE will be 3-4 times the cost of 1 GE, and 1/5 to 1/10 the cost of the corresponding SDH equipment (due to the need for very accurate clocking equipment for SDH).

The standard for 10 gigabit Ethernet is meant to be finalized in March 2002, and so far it is given an 80% likelihood of meeting that date. Even so, everyone we talked with believed that after March 2001 only mostly editorial changes will remain, so pre-standard equipment will be available in the summer of 2001.

Distances:

  Wavelength   Fiber             Notes                         Distance
  850 nm       50µ multimode                                   65 meters
  1310 nm      62.5µ multimode   WWDM (4 channels), expensive  300 meters
  1310 nm      9µ singlemode                                   10 km
  1550 nm      9µ singlemode                                   40 km

Link aggregation ("channel multiplexing"), IEEE 802.3ad, is part of the standard, but jumbo frames are not (though there are vendor-specific extensions). 10 GE channels will probably be used more for resiliency than for raw bandwidth.
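
A minimal sketch of why aggregated channels buy resiliency rather than single-flow bandwidth: a typical 802.3ad-style implementation hashes each conversation onto one member link, so no single flow can go faster than one link. The hash choice below is our own illustration, not any vendor's algorithm:

    def pick_link(src_mac, dst_mac, n_links):
        # Pin a conversation to one member link of the aggregate.
        return hash((src_mac, dst_mac)) % n_links

    links = ["10ge-1", "10ge-2"]
    flow = ("00:d0:b7:aa:01:02", "00:d0:b7:bb:03:04")
    print("flow pinned to", links[pick_link(flow[0], flow[1], len(links))])
    # If that link fails, the flows are redistributed over the survivors,
    # which is the resiliency benefit mentioned above.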

Resilient packet rings, IEEE 802.17

Current examples of such technology are DPT and DTM.

No vendor except Cisco said they had any plans to make DPT equipment, and the other vendors did not see DPT as a good technical solution for LAN environments (though it might be usable for a MAN).

Cisco will have DPT cards for the future GSRs, and plans to make DPT cards for the Catalyst 6500 series eventually (but not in the immediate future).

Cisco will soon release a router with 24 Fast Ethernet ports (copper or fiber) and 2 ports for connecting to a 2.4 Gbps SRP (DPT) ring, priced like a 7200 router (for the copper model). There will also be another model with 4 GE ports and 12 Fast Ethernet ports in addition to the SRP interfaces.

Wireless

There was a general consensus that wireless will be big. The predictions ran from 4 to 100 times as many wireless devices as wired ones, but wireless networks will lag orders of magnitude behind in speed, and will thus be a complement to, not a replacement for, wired networks.

Equipment for IEEE 802.11b will become cheap. Expect $120 for a PC-card at the beginning of next year, and $50 at the end of the year. There is a problem with Bluetooth interference with 802.11b networks, which will grow as more Bluetooth devices are deployed.

Check for the "WiFi" (wireless fidelity) mark on wireless equipment. WiFi-marked devices have been proven to interoperate with all other WiFi-marked devices.

There is work on 22 Mbps, 54 Mbps and ~100 Mbps wireless. The 22 Mbps equipment will be available for around $200 per PC-card during 2001, the 54 Mbps equipment during the second half of 2001, and the 100 Mbps equipment during 2002, but the higher-speed equipment will initially be prohibitively expensive.

One interesting research effort is ultra-wide-band (UWB) wireless, which uses very low power (less than the background noise) per wavelength. In this way you could have license-free 1 gigabit per second wireless with a 10 meter range. This could be used, for example, to put one such base station in every cubicle at work and make all the equipment there wireless.

Currently the Lucent wireless equipment seems to be the best; it is used, for example, in the Apple AirPort base stations (which are cheap ;-)). These devices also have management/monitoring support from major vendors.

There are concerns regarding security on 802.11b networks. Not all vendors implement WEP (wired equivalent privacy), and the implementations do not interoperate well. See also the work on IEEE 802.1x, which will probably not be standardized until 2002.

IPv6

There is no drive to move existing networks to IPv6; however, the third-generation mobile phone networks will be based on IPv6, and third-world countries that have no existing networks might (mostly) deploy IPv6 directly.

For the foreseeable future there will probably be IPv6 core networks with IPv4 used in the networks connecting the end users.

Cisco will include IPv6 support in the standard IOS release starting this summer. IPv6 will require much more memory in routers.

Multicast

All the visited vendors have equipment that handles layer 2 multicast at wire speed.

Beware that IGMPv3 will require software changes in intelligent switches, and of course in the IP stacks on the hosts.
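
The reason for the software changes is that IGMPv3 adds source filtering: a receiver can ask for a group from only some sources (INCLUDE) or from all but some sources (EXCLUDE), so forwarding state becomes (source, group) pairs rather than plain groups. A minimal sketch, with made-up addresses:

    # Per-group filter state as reported by the hosts (illustrative).
    state = {"232.1.1.1": ("INCLUDE", {"192.0.2.10"}),
             "224.5.5.5": ("EXCLUDE", {"198.51.100.7"})}

    def accept(source, group):
        mode, sources = state.get(group, ("INCLUDE", set()))
        return source in sources if mode == "INCLUDE" else source not in sources

    print(accept("192.0.2.10", "232.1.1.1"))    # True: wanted source
    print(accept("198.51.100.7", "224.5.5.5"))  # False: excluded source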

The network management vendors had no plans to make tools for layer 3 multicast fault detection and monitoring within the next two years.

MSDP/MBGP is widely supported, and seen as the only viable cross-domain multicast interconnect today.

Noteworthy: as multicast becomes more widely used, and the backbone networks increase in capacity faster than the edge devices, one solution is to always send all multicast everywhere in the backbone and just have the edge devices filter out everything but the wanted groups. The overhead of keeping state for lots of constantly changing multicast groups is higher than the cost of just installing fatter pipes.
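
A sketch of this flood-and-filter idea: the backbone delivers every group everywhere, and the edge device drops whatever its local hosts have not joined (group addresses are made up):

    joined = {"224.2.127.254", "239.255.0.1"}   # learned from IGMP reports

    def edge_deliver(group):
        # The only group state lives at the edge; the backbone keeps none.
        return group in joined

    for g in ("224.2.127.254", "224.0.1.99"):
        print(g, "->", "deliver" if edge_deliver(g) else "drop")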

Management

The network management/monitoring vendors seem to be moving very slowly towards better tools for intelligent root cause analysis, i.e. telling the user only that, for example, a certain trunk has failed, instead of reporting every single network/device/host that became unreachable because of it.
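
To make the goal concrete, here is a toy version of such root cause suppression: given which element each node is reached through, report only failures whose upstream is still up (topology and names are made up):

    upstream = {"host-a": "switch-1", "host-b": "switch-1",
                "switch-1": "trunk-1", "trunk-1": None}
    down = {"trunk-1", "switch-1", "host-a", "host-b"}

    # An element is a root cause if it is down but its upstream is not.
    roots = {d for d in down if upstream.get(d) not in down}
    print(roots)   # {'trunk-1'}: one alarm instead of four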

The focus is moving towards monitoring the services running on the hosts, alongside the traditional layer 2 network monitoring. We felt that much is missing between these two when it comes to layer 3 monitoring, especially routing and multicast.

What the vendors really wanted to sell was systems for doing software upgrades of hosts, keeping inventories and so on. This is interesting in itself, but not what we wanted to hear now. There was also talk about single-sign-on systems and user administration.

Aprisma will support automatic overlays of VLAN, OSPF and multicast topologies on top of the physical network map in the February 2001 release. They also had the good sense to separate the network discovery process from the actual live management: you can use network discovery to cut and paste into your live manager, or, for example, to automatically check whether the network has changed.

Computer Associates had some interesting work on using neural network agents (neugents) to correlate the vast amounts of data an NMS collects and to predict future failures (as the network changes rapidly it is very hard to write static rules for error prediction; the NMS has to "learn" what causes errors). For example, they had a router neugent that could tell that something was wrong with the routing in a device, although not what.
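
As a deliberately tiny stand-in for the "learning" idea (this is not CA's neugent technology, just an illustration of learned baselines versus static rules), one can flag a metric that drifts far from what the system has seen so far:

    from statistics import mean, stdev

    history = [120, 118, 125, 121, 119, 123]   # e.g. routing updates/minute

    def looks_wrong(sample, k=3.0):
        # "Learn" the normal range from history instead of a static rule.
        return abs(sample - mean(history)) > k * stdev(history)

    print(looks_wrong(122))  # False: within the learned baseline
    print(looks_wrong(260))  # True: something is wrong, though not what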

There was also agreement that as the complexity of the networks grows, more and more of the monitoring will have to be done in the wiring closets, either on special add-on cards in the networking equipment, or by putting special "manager" computers in each wiring closet that report data to the central manager only when needed.

Management of IP telephony is still in its infancy. The IP telephony vendors do not provide enough information to allow intelligent monitoring.

In the US it is common to require that the service provider place agents in its network that can be used to verify SLAs.

LANs

There is talk that VLANs are a bad thing, as they separate your physical and logical networks: the "having to draw the map twice" problem. The recommendation is to limit your VLAN use and try to make sure that your logical and physical networks match.

There is a move towards partitioning networks purely geographically, for example all users in one wiring closet on one network; one network per wiring closet.

Most vendors see more and more intelligence moving towards the user in order to keep up with networking speeds, the most extreme position being one routed port per user. See the solution described in the "Future" section below for the other extreme.

Regarding monitoring of VLAN trunks (e.g. to see which VLAN on a trunk is sending the most traffic), Extreme Networks were the only ones who will have support for this in the near future. They will support individual counters for up to 96 VLANs on a trunk.

Extreme were also working on replacing spanning tree, which takes ages to switch over to an alternative path, with a statically configured alternative path; this would give sub-second failovers. It makes a lot of sense, as it is easy to build your network so that you know what the best alternative path is. There is also ongoing work on speeding up spanning tree convergence; see IEEE 802.1w.
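
The idea is simple enough to sketch: instead of waiting for spanning tree to reconverge, fail over instantly to a preconfigured backup port (port names are illustrative):

    primary, backup = "uplink-1", "uplink-2"          # statically configured
    link_up = {"uplink-1": False, "uplink-2": True}   # primary just failed

    active = primary if link_up[primary] else backup
    print("forwarding via", active)   # no spanning tree wait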

There is a concern that most vendors do not have large enough buffers on their equipment to handle the future SUNET case, where we have cross-Atlantic gigabit connections that will not see any bottleneck before the last switch hop (closest to the end user/consumer). This means that the last switch (the first from our direction) should be able to buffer at least 200 ms, preferably 350 ms, of data per port. At gigabit speed this corresponds to 25-50 MB of buffering. Today most vendors seem to have 3-4 MB per 24 ports on shared-buffer devices, and crossbar architectures like the Cisco 6500 have only a 64 kB buffer for 100 Mbps ports and a 512 kB buffer for gigabit ports.
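
The buffer numbers are just the bandwidth-delay product: a single TCP stream needs roughly rate times round-trip time of buffering at the bottleneck to keep the pipe full.

    def buffer_megabytes(rate_bps, rtt_seconds):
        # Bandwidth-delay product, converted from bits to megabytes.
        return rate_bps * rtt_seconds / 8 / 1e6

    for rtt in (0.200, 0.350):   # the cross-Atlantic RTTs from the text
        print("1 Gbps, %.0f ms -> %.1f MB" % (rtt * 1000,
                                              buffer_megabytes(1e9, rtt)))
    # -> 25.0 MB and 43.8 MB, matching the 25-50 MB range above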

None of the vendors had any plans to make the buffering upgradeable for customers who want larger buffers.

WANs

10 gigabit Ethernet using the WAN PHY is seen as a major contender for building cheaper WANs, and especially MANs.

Wavelength multiplexing will be used heavily, and the cost for low-density multiplexing (say 16 channels) will drop. There will be GBICs for different wavelengths, removing the need for wavelength conversion in the external multiplexors.

The bandwidth over a single fiber doubles every nine months. You can now run 6.4 Tbps on a single fiber for 4000 km with only optical repeaters (we didn't say it was cheap... :-) ).
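
Projecting the nine-month doubling forward from the 6.4 Tbps figure gives a feel for the claim (pure extrapolation on our part, not a vendor roadmap):

    capacity = 6.4   # Tbps per fiber, late 2000
    for year in (2001, 2002, 2003):
        capacity *= 2 ** (12.0 / 9.0)   # twelve months = 12/9 doubling periods
        print(year, "~%.0f Tbps" % capacity)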

With wavelength multiplexing, networking speed now increases faster than Moore's law, meaning that you can always build networks faster than the end equipment can handle. It is now more effective to build really fat networks that can handle all traffic than to do any sort of QoS in the core. The future core networks will be purely optical, as electronics cannot keep up with the bandwidth explosion on the Internet.

There is work on making sub 100 msec route convergence a reality. This is needed for heavy IP telephony deployment.

Misc.

Some points, comments and links that we got, and that haven't been mentioned elsewhere in this report:

Quotes

"Keep it simple" - see the gigabit network design presentation from Cisco.
"Bandwidth is cheaper than complex logic" - regarding QoS in the core.
"It is simpler to go faster" - another complex logic comment.
"Bandwidth grows faster than Moore's law" - expect fast future networks.
"The future core network will be all optical with low-speed terrabit routers at the edges" - low speed terabit... ;-)

Future

The only really interesting new idea for how to build a campus network cannot be described in detail, as it was covered by an NDA. What follows is a high-level description.

Premises:

The proposed solution is then to build really cheap, stupid, low-end devices to put in the wiring closets that just multiplex all ports onto trunks based on 10 GE technology. These trunks are then aggregated, per building for example, onto a wavelength-multiplexed backbone and fed to a redundant central high-end switch/router that does all the work.

This creates a setup where one (redundant) central switch/router has every port on campus directly connected (via these multiplexors). The rest of the network is just stupid, cheap, low-end devices.

Benefits:

Is this realistic? Yes, we believe it is. The example we were shown used only equipment that is available today or will be available soon, including cheap low-density wavelength multiplexors, 10 gigabit Ethernet equipment and terabit routers. In fact, the wait for cheap 10 GE equipment was seen as the major reason for not introducing this earlier. They now expect to have a beta system up and running next summer, and a system that handles around 15,000 end users connected at 100 Mbps to be generally available sometime around the summer of 2002.
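
A quick scale check of those numbers (our own arithmetic from the figures above, with a hypothetical oversubscription factor):

    users = 15000
    access_bps = 100e6                 # 100 Mbps per user
    aggregate = users * access_bps     # worst case: everyone at full rate
    print("worst-case aggregate: %.1f Tbps" % (aggregate / 1e12))   # 1.5
    print("10 GE trunks at line rate: %d" % (aggregate / 10e9))     # 150
    # With a (hypothetical) 10:1 oversubscription, 15 such trunks into
    # the central terabit-class switch/router would suffice.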

Recommendations

The technology described in the "Future" section above is seen as a strong future path for large campus networks. For that reason the technical reference group recommends that, if possible, you wait for more information on it (possibly until 2002) before making any major, costly upgrades to large campus networks.

For smaller campuses anything goes; gigabit and 10 gigabit Ethernet seem to be the most cost-effective solutions. Just make sure you get equipment that handles jumbo frames and has enough buffering to leverage the future multi-gigabit SUNET connections.

For more discussions and current design examples, see the next generation campus LAN design report.

Disclaimer

Neither SUNET, nor the individual members of the technical reference group, nor their organisations are to be held responsible for the use of the information in this document. The information and configuration guidelines must be tailored to the individual organisation!
Report from vendor visits 2000 / SUNET-TREF@SEGATE.SUNET.SE / 28 Dec 2000