This report should be read with the information in the "travel report, vendor visits 2000" fresh in mind.
The general design is the same both for the current next-generation campus LAN proposal and for the LAN described in the "Future" section of the travel report - the proposed infrastructure works in both cases. By upgrading the building blocks, this infrastructure will last through several generations of campus LAN upgrades.
In short - one pair (two in the redundant case) of single-mode fibre from every wiring closet to the central equipment, using wavelength multiplexing between buildings if not enough fibre is available.
In the current proposal the building blocks interconnect with at least gigabit capacity. Gigabit ethernet, preferably with jumbo frame support (9 kB MTU), is seen as the most cost-effective technology to achieve this, with 10 gigabit ethernet around the corner.
Most users connect with full duplex fast ethernet, which is deemed sufficient for the majority. Power users and servers connect with gigabit ethernet. One gigabit user per 20 normal users is deemed a good design parameter. The wiring closet uplinks are also gigabit ethernet. One gigabit uplink per 200 users (10 power users) is deemed reasonable, with capacity growth by using gigabit etherchannel.
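As an illustration of these ratios, here is a minimal sizing sketch in Python (the figure of 400 ordinary users in a closet is an assumed example, not part of the proposal):

    # Illustrative sizing of one wiring closet from the ratios above.
    # The user count (400) is an assumed example value.
    import math

    normal_users = 400
    power_users  = normal_users // 20             # one gigabit user per 20 normal users
    uplinks      = math.ceil(normal_users / 200)  # one gigabit uplink per 200 users

    print(normal_users, "fast ethernet ports,", power_users, "gigabit ports,",
          uplinks, "gigabit uplink(s)")
    # -> 400 fast ethernet ports, 20 gigabit ports, 2 gigabit uplink(s)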
The equipment should be sized to handle full gigabit speed between the wiring closets and to the servers, with the SUNET/NORDUnet connection being at least 2.5 Gbps now, and maybe 10 Gbps within the lifetime of the equipment. The edge switches should preferably handle at least 200 ms of buffering of gigabit traffic, roughly 25 MB of shared packet memory, to handle a trans-Atlantic gigabit connection or a few within Sweden.
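The 25 MB figure is simply the bandwidth-delay product of such a path; a quick sanity check in Python:

    # Buffer needed to keep one TCP stream at full gigabit rate over a
    # long path: bandwidth * round-trip time, here 200 ms (roughly a
    # trans-Atlantic round trip).
    line_rate_bps = 1e9     # gigabit ethernet
    rtt_seconds   = 0.200

    buffer_bytes = line_rate_bps * rtt_seconds / 8
    print(round(buffer_bytes / 1e6), "MB")   # -> 25 MB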
If you have (or can deploy) enough fibre to connect the central equipment to each wiring closet directly with one (two in the redundant case) fibre pair - do so!
These connections are assumed to be full duplex gigabit ethernet, and will normally need to be single-mode fibre unless the distances are short. You will definitely need single-mode fibre to every wiring closet for future higher speeds, so start deploying now.
If you do not have enough fibre to connect every wiring closet directly to the central equipment, we recommend using WWDM (4-channel) or future cheap 16-channel DWDM (CWDM?) equipment to multiplex the connections to each building, reducing the physical fibre count by a factor of 4 or 16. In this case you could also get away with using multi-mode fibre within the building, distances permitting.
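To give a feel for the saving, a small sketch (the number of wiring closets per building is an assumed example):

    # Fibre pairs needed between one building and the central equipment,
    # with and without wavelength multiplexing. The closet count is an
    # assumed example value.
    import math

    closets = 12                          # wiring closets, one gigabit uplink each

    direct  = closets                     # one fibre pair per closet
    wwdm_4  = math.ceil(closets / 4)      # 4-channel WWDM
    wdm_16  = math.ceil(closets / 16)     # 16-channel (C/D)WDM

    print(direct, wwdm_4, wdm_16)         # -> 12 3 1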
Some of the products have back-haul protection switching, meaning that you could connect a multiplexer with two fibres taking different paths to the central equipment. If one of the fibres is cut, traffic will still flow uninterrupted. This gives a reasonable degree of protection in the not fully redundant case, as it is the fibres between buildings that are most likely to be cut.
Another alternative (discouraged as it will introduce an extra switch hop with associated latency/jitter, packet loss and buffering issues) is to use a switch with one 10 gigabit ethernet connection towards the central equipment, and multiple gigabit ethernets towards the wiring closets.
Note that the multiplex equipment should handle jumbo frames (9 kB MTU) and VLAN trunking.
Anyone who knows about cheap WWDM products or wavelength-specific GBICs - please contact us!
The switch group might be a cluster of fixed-configuration switches (with at least a one gigabit per second interconnect) or a larger modular chassis. The important thing is that the chosen switches have port buffers as large as possible, to handle a large bandwidth-delay product. Shared memory architectures are normally better than a fixed buffer per port, as not all ports will be used simultaneously at the edge. A 25 MB shared memory buffer at gigabit speed corresponds to roughly 200 ms, which is reasonable.
Avoid daisy-chaining switches to keep the number of switch hops as low as possible. Each switch hop increases delay and, most importantly, jitter and packet loss, and of course also lowers the overall MTBF.
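To illustrate the MTBF point: a chain of switches is down as soon as any switch in it is down, so the combined failure rate is the sum of the individual rates. A small sketch with an assumed (not vendor-specific) MTBF figure:

    # Series reliability: identical units in series give MTBF / n.
    # The 100 000 hour MTBF is an assumed example, not vendor data.
    single_mtbf_hours = 100_000

    for hops in (1, 2, 3):
        print(hops, "switch hop(s): combined MTBF",
              round(single_mtbf_hours / hops), "hours")
    # -> 100000, 50000, 33333 hours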
It is beneficial if the switches respect and can set IP priority (ToS, DiffServ), or at least IEEE 802.1p CoS, as we see a need for prioritizing, for example, voice over IP and network-based video end-to-end. Two to four different queues are deemed enough.
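As an illustration of how such a small number of queues could be used, here is a possible (locally decided, not mandated) mapping from traffic class to DiffServ and 802.1p values; the class names and queue policy are an assumed example, while the code points shown (EF = 46, AF41 = 34) are the conventional ones:

    # A possible three-queue mapping. The classes and queue names are an
    # assumed example policy; the DSCP/CoS code points are the standard ones.
    queue_map = {
        "voice over IP":       {"dscp": 46, "cos": 5, "queue": "high"},    # EF
        "network based video": {"dscp": 34, "cos": 4, "queue": "medium"},  # AF41
        "best effort":         {"dscp": 0,  "cos": 0, "queue": "normal"},
    }

    for traffic, mark in queue_map.items():
        print(traffic, "-> DSCP", mark["dscp"], "CoS", mark["cos"],
              "queue", mark["queue"])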
Most ports should be VLAN-capable full duplex fast ethernet. Any port on campus can be connected to any other. This simplifies deployment and moves, and gives better utilization of switch ports, leading to better economy. If you do not want to deploy campus-wide VLANs (for example if you prefer just one subnet per wiring closet) there is no problem doing that either, and you still have the possibility to add ports elsewhere.
There should be at least one full duplex gigabit ethernet (1000Base-T or GBIC) port per 20 normal users for power-user or server connections. One gigabit uplink per 200 fast ethernet connections (plus 10 gigabit ethernet connections) is deemed reasonable, with capacity growth by using gigabit etherchannel.
The uplinks will normally be 1000Base-LX/LH single- or possibly multi-mode fibre (see the multiplex section).
Note that if you deploy auto-sensing 10/100 switches you should configure the ports statically whenever possible, to avoid the all too common and hard to detect duplex mismatch problem. The building block interconnects, at least, should always be statically configured.
Do not use the spanning tree protocol; use etherchannel links and layer three redundancy instead.
Other switches like the Cisco Catalyst 35xx/29xx XL series and the Extreme Networks Summit48i have only a 4 MB buffer (roughly 30 ms at gigabit speed), which is way too little for a gigabit edge switch. The Cisco Catalyst 2980G/2948G are a little better at 8 MB, but this is still too little.
To get the needed performance you will need layer three hardware switching in the central equipment. You want hardware support for multicast too. This almost certainly implies that internal routing must be done by an integrated gigabit switch/router.
In the primary-only case the central equipment should have as much redundancy as possible, in particular dual switch engines that can be upgraded one at a time and that can take over service as seamlessly as possible, without having to reboot etc (minimum downtime). Of course you should also have dual power supplies, all modules should be hot-swappable, etc.
You should also have redundant routing in the central equipment, either by having redundant router modules similar to the switch engines (upgradeable one at a time, fast takeover) or by using VRRP (HSRP, ESRP...) between two completely separate router modules. The first alternative means less complex configuration; the second allows for load balancing between the routers.
Whether to let the same routers handle internal routing and peering with SUNET is a choice you must make. Using the same routers means less equipment, and since both internal and peering routers must handle routing at multiple-gigabit speeds (SUNET will be at least 2.5 Gbps now, going towards 10 Gbps), this makes economic sense, as such routers are fairly expensive. On the other hand, the router feature sets found on integrated switch/routers do not always include BGP4+ and in particular MBGP/MSDP support, and splitting internal routing from external peering might also be beneficial for configuration simplicity.
One possible alternative in the latter case might be to have a secondary switch/router with lower capacity, possibly with only fast ethernet connections to the wiring closets (discouraged), as it will only be used when the primary equipment fails. Note that you will still need gigabit capacity in this box for the backup SUNET connection.
As this case assumes box redundancy, the boxes themselves need not have high internal redundancy - a single switch engine in each, and one router in each chassis, using VRRP (HSRP, ESRP...) between the boxes for routing redundancy.
The SUNET peering in this case has the backup connection to/via the secondary box.
All comments for the "primary only" case are valid here too. The equipment should allow both VLAN (layer 2) and routing (layer 3) switching; hardware support is needed for layer 3 forwarding, multicast, etc.
NOTE THAT NOTHING IS DECIDED ABOUT THE FUTURE SUNET YET!
The fine print is - the information and configuration guidelines must be tailored for the individual organisation, and SUNET TRef will not be held responsible for how you use this information.