During the spring of 2003, the LAN group of the SUNET Technical Reference Group visited several interesting companies to discuss LAN technologies and products believed to be available in the next 2-3 years. This document summarises our experience from the visits and additional research after the trip.
This year, the LAN group consisted of Börje Josefsson, Björn Rhoads, Magnus Höglund, Johan Sandfeldt and Kent Engström. We have visited the following companies:
Nortel Networks, Cisco Systems, Extreme Networks and Juniper Networks in the Bay Area (end of February)
PacketFront in Stockholm (April)
3com in London (May)
The LAN group made a similar trip in November 2000. The report (http://proj.sunet.se/lanng/lanng2000) from that trip consisted of a background part and a set of network designs. We have chosen not to include any designs in this report, as they would look quite similar to the 2000 designs, albeit with newer model numbers and higher performance. Instead, we have decided to concentrate our effort on describing trends and technologies.
Desktop and laptop computers are rapidly acquiring Gigabit Ethernet interfaces. Currently, the price difference (for the vendor) between a 10/100/1000 chipset and a 10/100 chipset is approximately 4 dollars. Conclusions:
Our users will demand to be able to connect to the campus LAN using Gigabit Ethernet when all their newly bought computers have a 10/100/1000 Ethernet card.
From the start, most users will not be able to utilise the bandwidth because of the lack of I/O performance. As personal computers get better I/O bandwidth this will change.
Bad TCP stack tuning will stop users from getting good performance over long-distance links. See http://proj.sunet.se/E2E for SUNET-related work in the end-to-end performance area.
The potential for denial-of-service attacks will be greater when personal computers running consumer operating systems are connected at Gigabit speed.
We need to think about the uplink capacity and overprovisioning. This might necessitate the use of rate limiting on the incoming traffic to the edge interfaces.
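Rate limiting on edge interfaces is commonly implemented with a token bucket. The sketch below is our own illustration (class name and parameter values are examples, not any vendor's implementation):

```python
class TokenBucket:
    """Token-bucket rate limiter: allow bursts up to `burst_bytes`,
    with a sustained rate of `rate_bps` bits per second."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # bytes per second
        self.burst = float(burst_bytes)
        self.tokens = float(burst_bytes)
        self.last = 0.0                 # timestamp of last update

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                 # forward the packet
        return False                    # drop (or queue) the packet

# Example: a 10 Mbit/s limit with a 15 kB burst allowance
tb = TokenBucket(rate_bps=10_000_000, burst_bytes=15_000)
print(tb.allow(1500, now=0.0))          # within the burst -> True
```

Applied on incoming edge traffic, this caps what a compromised Gigabit-attached host can inject into the campus backbone, while still allowing short bursts.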
10 Gigabit Ethernet was adopted as the IEEE 802.3ae standard in 2002. For the first time, Ethernet leaves its CSMA/CD roots as the standard provides for full duplex operation only.
Four different fibre versions are specified: 850 nm, 1310 nm, 1550 nm and 1310 nm CWDM, with a maximum reach of 40 km using 1550 nm over single-mode fibre. Some vendors believe that 10GBASE-LX4, capable of lengths up to 300 meters over existing multi-mode fibre using 1310 nm CWDM, will be the first version to become cheap enough to be commercially successful.
There is no copper version specified at all in the current standard. Two activities within the IEEE work on this:
802.3ak, 10GBASE-CX4 Task Force, which tries to develop a copper version for use in switch stacking, datacenter applications etc. It uses multiple twinaxial cables with a maximum length of 15 meters.
10GBASE-T Study Group, which studies the feasibility of a copper version over normal twisted-pair cabling for lengths up to 100 meters. This is a much more daunting project, but if it succeeds, it will most likely render 10GBASE-CX4 obsolete.
In addition to the normal LAN versions, a special so-called WAN PHY is specified. By adapting the rate to SONET/SDH specifications, 10 GbE using the WAN PHY can be transported transparently over SONET OC-192c or SDH VC-4-64c. More information on the different versions etc. can be found at http://www.10gea.org/
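The rate adaptation can be checked with a back-of-the-envelope calculation using the standard SONET/SDH rate hierarchy:

```python
# SONET/SDH rates are multiples of the 51.84 Mbit/s STS-1 building block.
STS1 = 51.84e6                       # bit/s
oc192_line_rate = 192 * STS1         # OC-192 / STM-64 line rate

# A VC-4-64c carries 64 C-4 containers of 149.76 Mbit/s each, which is
# the payload the 10 GbE WAN PHY must fit into.
C4 = 149.76e6
wan_phy_payload = 64 * C4

print(f"OC-192 line rate: {oc192_line_rate/1e9:.5f} Gbit/s")   # 9.95328
print(f"WAN PHY payload:  {wan_phy_payload/1e9:.5f} Gbit/s")   # 9.58464
# The WAN PHY therefore runs the MAC at ~9.58 Gbit/s rather than 10 Gbit/s.
```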
Almost all optics from now on will be modular. Traditional GBICs will be replaced by smaller form factor optics (SFPs). Also, 10 GbE optics will be modular, at first with a form factor similar to today's GBICs.
For the coming years, we think that 10 GbE will first be used for links between switches, to be followed by 10 GbE connections for high end servers. 10 GbE to the desktop is still far away, mostly because of the lack of a suitable copper version.
Questions to think about:
Will Storage Area Networking (SAN) over Ethernet be a huge success? If so, will this push for more 10 GbE deployment sooner than would otherwise have been the case?
More users are connected using wireless technologies instead of cables. Will this mean that the bandwidth race will slow down? Or will it just mean that more users will be online all the time, thereby using more network capacity?
As 10 Gigabit Ethernet has been standardised and is beginning to be deployed, the Ethernet standards people should be designing for the next magnitude of bandwidth increase.
Following the tradition, the next step would be 100 Gigabit Ethernet. However, designers are starting to encounter harder technical difficulties, so the next step may be 40 Gigabit Ethernet instead (to match the speed of OC-768 as Ethernet moves into the WAN area).
On the other hand, a fourfold increase in bandwidth may not be enough to entice LAN customers who can get more bandwidth by bonding together a number of 10 GbE channels. As a side note, bonding multiple 10 Gbps channels may also become popular for WAN connections, instead of going for higher speeds on a single channel.
The jury is still out on this one.
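Bonded channels normally preserve per-flow packet ordering by hashing each flow onto one member link. A minimal sketch of that technique (the addresses and link count are our own examples):

```python
import hashlib

def pick_member_link(src_ip, dst_ip, n_links):
    """Map a flow onto one of n bonded links. Hashing on the (src, dst)
    pair keeps all packets of a flow on the same link, preserving
    per-flow ordering without any per-packet state."""
    key = f"{src_ip}-{dst_ip}".encode()
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# Four bonded 10 GbE links: every flow maps deterministically to one.
link = pick_member_link("130.236.1.1", "192.36.125.2", 4)
print(f"flow uses member link {link}")
```

Note the limitation this implies: a single flow never gets more than one member link's worth of bandwidth, which is an argument for higher single-channel speeds even when bonding is available.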
The standardization of RPR as IEEE 802.17 is proceeding. The current time line calls for a final standard to be due in March 2004. See http://grouper.ieee.org/groups/802/17 for more information on the standard.
Cisco has been working with SRP, a proprietary version of the same concept, since the late 1990s, and has led and driven this market since 1999 with STM-4, -16 and -64 products. The 802.17 draft is roughly 75% SRP-based, with the major differences in the following areas: optional single transit buffer support (instead of the current two), three MAC transmit buffers (instead of the current two), a steering protection feature (in addition to wrapping), a new fairness algorithm (due to the different MAC buffer options), strict mode support, a new TLV-based topology discovery algorithm, and a different header/frame format.
Cisco has been successful in the last years with SRP. A number of products such as the 12000 series, the 7000 series, the 10720 and the 7600 currently support line cards based on SRP technology. In addition, Cisco has also been very active in standardising the technology through the IEEE 802.17 RPR working group. Besides Cisco, competitors also support their version of pre-standard RPR technology (both NT and Luminous have some proprietary deployments in the field). Cisco will provide a way for SRP customers to migrate from SRP to RPR.
Nortel Networks is also active in the RPR standardization and has products available based on pre-standard RPR, currently focused on the service provider market. More information can be found at http://www.nortelnetworks.com/corporate/technology/rpr.
Since RPR allows different transport layers, several vendors will most likely provide RPR applied on top of the Ethernet physical layers at Gigabit and 10 Gb speed to utilise cheap optics. This could make RPR a viable alternative to Ethernet for campus backbones.
Not surprisingly, Extreme Networks is not a big advocate of RPR. Their alternative is "normal" Ethernet in a ring topology with EAPS to provide fast protection switching (< 50 ms). See the section on EAPS below for more information.
Fibre to the desktop is currently not a hot issue. It may become hot again later, but not during the period we are looking at here.
One solution to future-proof the infrastructure would be to adopt an air-blown fibre cabling infrastructure, not only to the desktop but also between buildings. That way it is much faster, easier and more flexible to install the "right" fibre by blowing the exact amount of fibre, when and where needed, through a system of pre-installed tubes or tube cables. There are a variety of products on the market today that cover the range from a few air-blown fibres to the desktop (single tubes) up to hundreds of fibres between buildings (tube cables with up to 19 inner tubes). Unlike conventional fibre installations, this process can easily be repeated if the installed fibre becomes obsolete. Another big advantage is that it uses the conduit space much more efficiently.
Metro Ethernet is a term used to describe a set of standards developed by the Metro Ethernet Forum (http://www.metroethernetforum.org/) to facilitate the use of Ethernet technology in metropolitan area networking. Customers use standard Ethernet equipment instead of special purpose devices and cabling to connect to the service provider. This has a number of benefits, including:
Price. Ethernet equipment is cheap due to the large volumes produced and sold.
Ease of use. Customers are familiar with Ethernet.
Flexibility. E.g., the interfaces operate at a standard Ethernet speed such as 100 Mbps regardless of the bandwidth the customer is buying. Adding more bandwidth can be done using management software instead of an on-site visit.
The Metro Ethernet Forum focuses on the standards, service definitions etc. needed to build reliable metro area networks on top of the base Ethernet technologies.
Probably, Metro Ethernet will not be widely used in Sweden.
The three WLAN standards to be concerned about for the moment are:
802.11b, the current widely deployed standard operating at speeds up to 11 Mbps on the 2.4 GHz band, providing 3 non-overlapping channels out of the 13 overlapping channels available in Europe.
802.11g, the backwards compatible (with 802.11b) standard operating at speeds up to 54 Mbps on the 2.4 GHz band, still with 3 non-overlapping channels. Although the specification is due to be completed in the summer of 2003, products are already appearing.
802.11a, operating at speeds up to 54 Mbps on the 5 GHz band, with 8 non-overlapping channels. The regulation of this frequency band differs extensively between countries. It looks like Sweden will allow use of half the band (4 channels, 5150 - 5250 MHz) at a maximum power of 200 mW EIRP, for indoor use only.
Dual-band WLAN cards are already here and will probably become more popular. Dual-band access points are also already available. It is possible to design a WLAN network today using 802.11b and add 802.11a later on an as-needed basis (to provide for higher data rates or to provide smaller cells).
Things to think about:
VLAN-capable access points are available. Users can be sorted into VLANs based on their MAC address or 802.1X identity.
Today, users do not require uninterrupted network access while roaming. When the user moves to another layer-2 area, he is satisfied if he gets a new IP address using DHCP. As handheld devices with WLAN capability become more popular, demand for seamless roaming will increase. This will require Mobile IP or similar technologies.
WLAN security is getting better and more standardised (see below). However, using strong application level cryptography or VPN will still be a good idea.
Will laptop computers replace stationary computers, instead of being an extra device? Some high-tech companies in the US say so. Will that also be the case at Swedish universities?
A rule of thumb is that you need approximately 1 access point per 20 clients in the coverage area. This is more or less valid regardless of whether the standard is 802.11a, b or g.
802.11b access points will not be useless when 802.11a gets more popular. You can install 802.11a access points in the central high-usage areas and still use 802.11b in the less dense areas.
The 8 non-overlapping channels of 802.11a make it possible to separate cells better, compared to the 3 non-overlapping channels of 802.11b and 802.11g (see the cell maps).
[Cell maps illustrating channel reuse with 3 (802.11b/g) and 8 (802.11a) non-overlapping channels]
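The rule of thumb above can be turned into a rough dimensioning sketch. The 50% efficiency factor below is our own illustrative assumption (real-world 802.11 throughput is roughly half the nominal rate); adjust the figures to local conditions:

```python
import math

def plan_wlan(clients, clients_per_ap=20, nominal_mbps=11, efficiency=0.5):
    """Rough WLAN dimensioning: number of access points for a coverage
    area, and the average bandwidth each client can expect when a cell
    is fully loaded."""
    aps = math.ceil(clients / clients_per_ap)
    per_client = nominal_mbps * efficiency / clients_per_ap
    return aps, per_client

# Example: a lecture hall with 150 simultaneous 802.11b clients.
aps, per_client = plan_wlan(clients=150)
print(f"{aps} access points, about {per_client:.3f} Mbit/s per client")
```

The same calculation with 54 Mbps nominal rate shows why 802.11a/g are attractive in dense areas even though the per-AP client rule stays the same.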
Several vendors are in the process of releasing products that split the functionality of the traditional access point into two parts. The WLAN radio and antenna is placed in a small unit located where it is needed, while the intelligence and management is centralised into a "wireless switch".
For more information, see:
Extreme Network's "Wireless Ports", http://www.extremenetworks.com/libraries/prodpdfs/products/UnifiedWireless.asp
Symbol Mobius Wireless System, http://www.symbol.com/products/wireless/mobius_wireless_system.html
The deficiencies of the original WEP standard are well known. The protocol is vulnerable to several cryptographic weaknesses. Also, the fact that the access point shares the key with all the users makes WEP almost useless for providing WLAN access to a large group of users (such as all staff and students at a university). As a result, many providers of WLAN service use no security features at all, relocate the problem to another level (requiring the use of VPN, encrypted application protocols etc.) or use a proprietary solution.
IEEE working group 802.11i has been working to fix these issues, by defining two new encryption algorithms to replace WEP and by incorporating the use of 802.1X port based access control. The two new encryption algorithms are:
TKIP, Temporal Key Integrity Protocol. This quick fix should address all the weaknesses of WEP, but corners have been cut in order to make it implementable using existing WEP hardware, thus requiring only firmware/software upgrades.
CCMP, Counter-Mode with CBC-MAC Protocol. This is the longer-term solution, with more robust algorithms based on the AES encryption standard. The downside is that new hardware will be required to support it efficiently.
The full 802.11i standard is expected to be ratified in September 2003. A subset of the full standard has been standardised in advance by the Wi-Fi Alliance as WPA, Wi-Fi Protected Access. The subset lacks CCMP (providing only TKIP) and some other features. When the full 802.11i standard is available, a new compatible version of the WPA standard will be published.
To provide user authentication and individual cryptographic keys, the 802.1X standard is used. See the section on Port Based Network Access Control below. However, for environments without the infrastructure required by 802.1X, a pre-shared pass-phrase can be used with 802.11i/WPA. That pass-phrase has to be the same on the access point and all clients, though.
This is not a report on IP telephony per se. However, we believe that IP telephony will be deployed at more sites during the next years. Also, we believe that in the long term it is not a viable solution to build separate Ethernet networks for the telephony traffic. Our main campus LANs need to be able to support IP telephony. That means increased demand for:
Redundant connections.
Fast protection switching to be able to use the redundant connections, using techniques such as EAPS or newer versions of the spanning tree protocol.
High-availability features on network equipment (redundant components, hitless upgrade).
Quality-of-Service solutions.
Power over Ethernet.
Thin clients are useless when the network goes down. Thus, the trend of replacing traditional desktop PCs with thin clients will demand higher network availability and, to some extent, higher bandwidth.
As IP telephony begins to get integrated into our campus LANs, storage networking may be the next thing that demands entry into that domain. People designing campus LANs need to stay informed about storage networking.
Today at our campuses, storage is usually provided through:
Directly Attached Storage (DAS), a fancy term for the old practice of putting discs into the computers (using plain SCSI, IDE or something similar).
Network Attached Storage (NAS), which means that file servers provide access to storage using protocols like NFS, SMB, AFP, etc. The file servers can be general purpose servers running common operating systems or more specialised appliances.
Storage Area Networks (SAN), where several servers are connected to disc arrays and backup robots through Fibre Channel links and switches. Today, this is cost-effective only for high-end servers with large storage demands, as the equipment needed to interface to a SAN is expensive.
In the short term, network equipment vendors like Cisco are entering the SAN switch market. This can benefit both worlds:
The SAN world gets VLANs (called VSANs), EtherChannel-like bundling, ping/traceroute utilities etc. from the Ethernet/IP world.
The Ethernet/IP world benefits as switch manufacturers copy high-availability features from the SAN world.
There are possibilities for further convergence:
FCIP, being developed by the IETF, provides a way to tunnel Fibre Channel frames over TCP/IP. This makes it possible to use a normal IP network to provide connectivity between two SANs, for example to provide remote backup or to mirror data as a disaster recovery mechanism. The FCIP gateway looks like Fibre Channel to the SAN and like any other host to the IP network.
iSCSI, also developed by the IETF, does not depend on Fibre Channel. It provides a way to send SCSI commands and replies directly over TCP/IP. This could be more revolutionary, as it makes it possible to provide SCSI storage over normal Ethernet (GbE or even 100BASE-T). Using an iSCSI-to-FC translator, a lot of iSCSI-based servers could connect to a "normal" SAN disc array; the servers need no expensive Fibre Channel hardware.
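The core idea of iSCSI, carrying ordinary SCSI commands over a TCP byte stream, can be illustrated with a toy sketch. The READ(10) CDB layout below is standard SCSI, but the length-prefix framing is our own simplification; real iSCSI defines a much richer PDU format with tags, sequence numbers and data segments:

```python
import struct

def scsi_read10_cdb(lba, blocks):
    """Build a SCSI READ(10) command descriptor block (10 bytes):
    opcode 0x28, 32-bit logical block address, 16-bit transfer length."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def frame_for_tcp(cdb):
    """Toy framing only: prefix the CDB with its length so the receiver
    can delimit commands on the TCP byte stream."""
    return struct.pack(">I", len(cdb)) + cdb

# Read 8 blocks starting at logical block address 2048.
cdb = scsi_read10_cdb(lba=2048, blocks=8)
print(frame_for_tcp(cdb).hex())
```

The point is that nothing here requires Fibre Channel hardware: any host with an ordinary Ethernet interface and a TCP/IP stack can, in principle, issue block-level storage commands.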
Questions to think about:
Will iSCSI and FCIP become popular? Will they replace Fibre Channel? Storage vendors seem to be a bit reluctant, and are eager to point out that Fibre Channel will still have a place in the high-capacity datacenter SANs, especially when the 10 Gb version is available.
Will we build separate Ethernet/IP networks for iSCSI/FCIP, or will we run it on our normal backbones, together with IP telephony and our "normal" network traffic?
One consequence of a demand for higher network uptime is that using the normal Spanning Tree Protocol (STP) for management of backup links will not be acceptable; alternatives with faster convergence will be needed.
Cisco has a number of proprietary extensions to plain STP available:
UplinkFast is useful when an access switch has a primary and a secondary uplink to the core. STP blocks the secondary uplink. When UplinkFast is used, the secondary uplink will be put in forwarding mode immediately if the primary uplink fails. When the primary uplink becomes operational again, the secondary uplink will still forward packets until the primary uplink reaches forwarding mode. Also, special tricks are used to update the CAM table on other switches when switching to the secondary uplink. More information is provided at http://www.cisco.com/warp/public/473/51.html.
BackboneFast is a Cisco proprietary feature that, once enabled on all switches in a network, can save a switch up to 20 seconds when recovering from an indirect link failure, i.e. when a switch has to change the status of some of its ports because of a failure on a link that is not directly attached to it. The feature requires switches to deduce indirect link failures from the STP PDUs they hear and introduces a new PDU, called Root Link Query, used to verify that stored STP information for a port is still valid. See http://www.cisco.com/warp/public/473/18.html for more information.
PortFast causes a switch port to enter the STP forwarding state immediately, bypassing the listening and learning states. As opposed to the two techniques above, this is used on ports connecting end systems, not other switches.
The new IEEE 802.1w Rapid Spanning Tree Protocol includes most of Cisco's proprietary enhancements to the old 802.1d STP, such as the BackboneFast, UplinkFast and PortFast features outlined above, as well as protocol changes that make the old 802.1d timers obsolete in most cases (though they are still there as a backup). 802.1w can achieve fast convergence in a properly configured network, sometimes in the order of a few hundred milliseconds. The standard provides for interoperability with older switches using 802.1d.
For a Cisco-flavoured introduction, see http://www.cisco.com/warp/public/473/146.html. The standard itself is available at http://standards.ieee.org/getieee802/download/802.1w-2001.pdf.
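To see what 802.1w buys, consider the default 802.1d timers; this is a rough upper-bound sketch, and actual convergence depends on the topology:

```python
# Classic 802.1d recovery is bounded by its timers.
max_age = 20        # seconds before stored STP information expires
forward_delay = 15  # seconds spent in each of the listening and learning states

direct_failure = 2 * forward_delay              # listening + learning
indirect_failure = max_age + 2 * forward_delay  # old info must age out first

print(f"802.1d direct link failure:   up to {direct_failure} s")    # 30 s
print(f"802.1d indirect link failure: up to {indirect_failure} s")  # 50 s
# 802.1w replaces the timer wait with an explicit proposal/agreement
# handshake between switches, which is why it can converge in well
# under a second in a properly configured network.
```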
Extreme Networks has developed an Ethernet-based layer 2 ring technology called EAPS. It provides protection switching similar to the Spanning Tree Protocol (STP), but offers the advantage of sub-second (often less than 50 ms) convergence time when a link in the ring breaks.
EAPS is enabled by configuring an EAPS domain on a ring. On that ring, one switch is designated the master node, and the other switches are designated as transit nodes. One port on the master node is designated the master node's primary port to the ring and another port is designated as the master node's secondary port on the ring. A control VLAN is configured on all nodes in the ring. This VLAN is only used to send and receive EAPS messages. The EAPS domain is then configured to protect one or more data-carrying VLANs.
In normal operation the master node blocks the secondary port for all non-control traffic belonging to the EAPS domain. If the master node detects a break in the ring it opens the blocked secondary port, flushes its FDB and sends a "flush FDB" message to all transit nodes on the control VLAN. A break in the ring is detected either by receiving a "link down" message from a transit node, or by not receiving health-check packets (transmitted from the primary port) on the secondary port. When the broken link is restored the operation is reversed. The master node blocks its secondary port and sends out a "flush FDB" message on the control VLAN.
Traffic patterns can be engineered by configuring several EAPS domains on a single ring, protecting different VLANs. By having different master nodes for each domain, one can balance the load on the different segments of the ring. As the ring is Ethernet-based, one can also have different bandwidth on different segments of the ring and configure the EAPS domain to use the fat pipes when the ring is complete and fall back to the smaller pipes when the ring breaks. A link can also be used in more than one EAPS ring, creating a structure of several interconnected rings, and a single VLAN can be protected across such a structure of interconnected rings.
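The master-node behaviour described above can be sketched as a small state machine. The class and message names are our own simplification, not Extreme's implementation; the real protocol runs over a control VLAN with hello timers and link-down messages:

```python
class EapsMaster:
    """Toy model of an EAPS master node."""
    def __init__(self):
        self.secondary_blocked = True       # normal state: loop broken here
        self.fdb = {"host-a": "primary"}    # toy forwarding database

    def ring_fault(self):
        # A transit node reported link-down, or health-checks stopped
        # arriving on the secondary port: open the backup path.
        self.secondary_blocked = False
        self.fdb.clear()                    # flush our own FDB ...
        return "flush-fdb"                  # ... and tell all transit nodes

    def ring_restored(self):
        # The link is back: block the secondary port again before the
        # loop can form, and flush FDBs so traffic re-learns its paths.
        self.secondary_blocked = True
        self.fdb.clear()
        return "flush-fdb"

m = EapsMaster()
print(m.secondary_blocked)   # True: loop-free in normal operation
m.ring_fault()
print(m.secondary_blocked)   # False: traffic now uses the backup path
```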
EAPS is described briefly in http://www.ietf.org/internet-drafts/draft-shah-extreme-eaps-03.txt, which also states that "Extreme Networks Inc. has filed patent applications on and related to the technology described herein." A short whitepaper is available at http://www.extremenetworks.com/libraries/whitepapers/technology/EAPS_WP.asp.
RPR (and the pre-standard SRP) provides fast protection switching on the link level. See the section on RPR above.
Having alternative links and fast convergence protocols such as 802.1w or EAPS is of no use if switches fail or have to be taken down to be upgraded. We believe that vendors will focus more on redundancy, hitless upgrade and other high availability features. In doing so, they may borrow features and experiences from the storage area network technologies.
Quality-of-Service techniques will be more important in the future if we integrate IP telephony and storage networking into our campus LANs. As we get more familiar with QoS, we may also use it more frequently in places where enough bandwidth is not available, for example on some inter-campus links and for student dorm connections.
QoS features have traditionally been surrounded by a lot of mystical abbreviations and concepts. Cisco has recently introduced AutoQoS, primarily an intelligent macro feature at the CLI level that turns a single "set qos autoqos" into all the different settings needed to enable QoS using default settings. On the other hand, Extreme, Juniper and other vendors might argue that their QoS features are more convenient to begin with.
When using the DiffServ QoS framework, coordination of parameters between networks is needed. Should the need arise within the SUNET community, SUNET is willing to handle this coordination.
Power over Ethernet ("PoE", IEEE 802.3af) will make deployment of IP telephony easier. It can also be used to power access points, web cameras etc. Although it will be very useful to have Ethernet switches that provide this technology, there are other aspects to consider. The standard says that a single port should provide up to 15.4 watts of power (which leaves just under 13 watts of usable power for attached devices due to power loss in a worst-case scenario).
First of all, you have to check that the switches you are considering have PoE support on each and every port, as some products available today support PoE only on a limited number of ports.
A 24-port PoE-capable switch may also consume up to about 400 watts, compared to about 150 watts for a standard 24-port switch. Combining a UPS (Uninterruptible Power Supply) with PoE is attractive; however, it will generate even more heat.
Today, many wiring closets have been built with proper air circulation only and no extra cooling, if any arrangements have been made at all. The need for cooling in wiring closets will increase dramatically when equipment that takes advantage of PoE is deployed, and it has to be addressed at an earlier stage in the process of designing new buildings and when renovating old ones.
For more info about PoE, see http://www.poweroverethernet.com and http://grouper.ieee.org/groups/802/3/af.
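A back-of-the-envelope calculation with the figures quoted above shows why the power and cooling budget deserves attention (the 150/400 W figures are the rough ones mentioned earlier):

```python
# Worst-case 802.3af budget if every port sources the full 15.4 W.
ports = 24
per_port = 15.4                 # W sourced per port (802.3af maximum)
full_load = ports * per_port
print(f"Full PoE load: {full_load:.1f} W")   # on top of the switch itself

# A switch rated at roughly 400 W total cannot drive all 24 ports at
# the maximum simultaneously, which is one reason vendors may share a
# smaller power pool across ports (and why per-port PoE support and
# power budgets must be checked).
poe_budget = 400 - 150          # rough PoE share of the rated draw
print(f"Ports at full 15.4 W within that budget: {int(poe_budget // per_port)}")
```

Nearly all of this power ends up as heat: the switch's share in the wiring closet, and the roughly 13 usable watts per device out in the rooms.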
All the vendors were talking about their IPv6 support and hardware-based forwarding. They are taking IPv6 deployment seriously, although they were still not seeing a huge demand in the US.
We believe that the next generation SUNET core will run IPv6 natively. A next generation campus LAN should also be designed to handle IPv6 natively using a dual-stack approach. One should note that this might affect lower-layer equipment as well as routers: the switches that snoop on IPv4 IGMP (to learn about multicast groups) today must provide the same functionality in the IPv6 world. Management systems and infrastructure services (such as DNS servers) also need to become IPv6 aware. Of course, it is also important to verify the vendor claims about full IPv6 support in hardware.
Implementing IPv6 routing everywhere too early using a dual-stack approach might be risky, though; a bug in IPv6 handling could bring down the IPv4 core too. One intermediate alternative is the "router-on-a-stick": IPv6 traffic is VLAN-transported to IPv6 capable routers, separate from the IPv4 core.
See http://www.sunet.se/ipv6.html for information about ongoing SUNET IPv6 work.
Traditional campus LAN design assumes that a computer stays connected to its "correct" port, which can be manually configured to belong to the appropriate VLAN. In a more mobile environment where users and their computers move around, this traditional assumption has become a problem. The introduction of wireless LANs has of course made the problem even more evident. We have tried different solutions, such as:
Dynamic VLAN assignment using VQP/VMPS for switch ports, placing a computer on the correct VLAN based on the MAC address.
Captive portals (such as Bifrost Nomad and LiU Netlogon) for public switch ports and public wireless LANs, requiring users to log on before accessing the main campus LAN and the Internet.
VPN. If computers on untrusted networks are allowed to speak only to VPN termination equipment, the users need to establish a VPN connection to get beyond the untrusted network. Thus, the VPN system provides the same functionality as a captive portal, in addition to its normal function.
The problem with the first two solutions is that they rely on the MAC address (more or less). A user that is able to fake a MAC address can trivially fool VQP/VMPS and is able to take over a connection from a bona-fide user of a captive portal. Using a full-scale VPN solution can be complex and might be considered overkill for some applications.
IEEE 802.1X tries to provide a solution to the problem: a protocol directly on top of Ethernet or 802.11 that allows computers to log on to the switch port or the access point. Using 802.1X, the network device closest to a computer is able to verify its identity using methods stronger than just checking the MAC address.
There are a lot of protocols involved, as 802.1X reuses an extensible authentication protocol (EAP) that originated in the PPP world and provides the means (EAPOL) to transport EAP over the LAN link. Furthermore, to relieve network devices such as switches and access points from having to implement advanced authentication protocols, the EAP session is not between the computer and the network device, but between the computer and an authentication server. The EAP traffic is tunneled from the network device to the authentication server using RADIUS.
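The layering can be sketched as follows. The message shapes are purely illustrative, but they show the key division of labour: the switch relays EAP between EAPOL and RADIUS without interpreting it.

```python
def eapol_to_radius(eapol_frame):
    """Toy model of the 802.1X authenticator role: the switch does not
    take part in the EAP conversation, it only re-encapsulates it.
    (Message shapes are illustrative, not the real wire formats;
    EAP-Message is the actual RADIUS attribute used for tunneling.)"""
    eap = eapol_frame["eap"]                  # extract EAP from EAPOL
    return {"radius-attr": "EAP-Message",     # tunnel EAP inside RADIUS
            "eap": eap}

# Client -> switch: an EAP identity response carried over the LAN in EAPOL ...
frame = {"proto": "EAPOL", "eap": {"code": "response", "identity": "student1"}}
# ... switch -> authentication server: the same EAP payload inside RADIUS.
packet = eapol_to_radius(frame)
print(packet["eap"]["identity"])              # student1 passes through untouched
```

Because the authenticator is a dumb relay, new EAP methods can be deployed by upgrading only the clients and the authentication server, not every switch and access point.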
As EAP is an extensible authentication protocol, it can be used with different authentication methods. Some of the candidates are:
EAP-TLS (RFC 2716). The authentication is based on TLS (earlier versions of this protocol are known as SSL), with both the authentication server and the computer presenting X.509 certificates. Thus, each computer (user) needs a certificate, which mandates the existence of a working PKI structure. This method is supported by a lot of clients and authentication servers.
LEAP (proprietary Cisco solution). This method supports password-based authentication, which means that no PKI structure is required.
EAP-TTLS (Internet Draft). Like EAP-TLS, this method is based on TLS. However, only the authentication server needs a certificate. After the initial TLS session is set up, it provides protection as authentication information (password-based or other) is exchanged. The support is not as widespread as for EAP-TLS yet.
PEAP (Internet Draft). This method is similar to EAP-TTLS. After the initial TLS session is set up (no client certificate is needed here either), an inner EAP conversation takes place protected by the TLS session. This method is not widely used yet.
See http://www.oreillynet.com/pub/a/wireless/2002/10/17/peap.html for a comparison.
As a side effect of the authentication methods above, keying material is produced that can be used to provide a non-fixed key if the computer is connecting over a WLAN.
We are seeing initiatives to make network devices more manageable, compared to the current situation where a number of devices across the campus are managed more or less ad hoc using command-line interfaces or the current breed of network management tools. These initiatives range from centralising switching into a single switch (which also centralises the management) to more evolutionary management enhancements.
In the 2000 LANG report, we hinted at a new way of designing high speed reliable campus networks. The premises were:
In a campus environment where most of the traffic is going to or from some central equipment or the Internet (i.e., the 80/20 locality rule is not valid) it is quite stupid to put more and more intelligent and costly equipment into the wiring closet for the purpose of off-loading the central equipment. Most of the traffic will have to be handled by the central equipment anyway.
The campus has enough fibre that, at least when using cheap low-density wavelength multiplexing, it is no problem to make a really fat campus core network.
CWDM equipment and 10 Gigabit Ethernet equipment will become quite cheap.
The next generation high-end layer three switches/routers will be able to handle multi-terabit speeds.
The conclusion is then to stop adding complexity at the edge of the network. Instead, we should centralise the switching capacity into one huge router/switch with adequate redundancy. The wiring closet equipment would then be seen as a port extension to the centralised equipment.
We have come to the conclusion that this concept is closer to reality today than in 2000. This design concept should be kept in mind when planning for the future campus LAN.
Instead of centralising switching, the XRN technology from 3com aims at distributing switching and routing across a group of switches (called a distributed fabric). However, all the switches in a distributed fabric behave as a single switch for management purposes. Using XRN, 3com will be able to provide customers with the means to start with a single switch and then add more switches (for more capacity and higher availability) at a low cost, as compared to buying a huge redundant chassis switch at the start.
See http://www.3com.com/corpinfo/en_US/technology/tech_paper.jsp?DOC_ID=128070 for more information on XRN.
The Swedish company PacketFront has developed a framework for Ethernet-based broadband networks consisting of hardware components (chiefly the ASR4000 series of router switches) and BECS and SPECS management systems. Their main focus is on customers such as energy companies offering broadband services to their customers, and the technical features provided by their systems are not found on switches/routers designed for the enterprise market.
Universities that manage student residential area networks should take a look at their technology. The ASR4000 is also interesting as an example of a very rugged, fan-less device for deployment in harsh environments. The fact that it uses routing to the edge of the network is also worth noting.
See http://www.packetfront.se/ for more information.
Gazing into the crystal ball, trying to see the future, has never been an easy task. In this report, we have tried to share our thoughts about the things we have seen. We are confident that you have read this report with a critical mind, though.
We would like to end this report by thanking the companies and individuals who have helped us with information, insights and practical arrangements. Thanks!
Börje Josefsson