Main conclusions:

From a QoS point of view it may be beneficial to split port usage evenly between ASICs to make the best use of all buffer memory.

  • Egress buffer memory is shared between ports on the same ASIC.
  • On the WS-C3560E-48TD, ports 0-24 are on ASIC 1, ports 25-48 on ASIC 2, and the uplink ports are on ASIC 0.

The buffers are rather limited (about 1.9 MB in total per ASIC for the 3560-E). In the extreme case a single physical port can queue up to about 1.2-1.3 MB of this memory.
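
A rough back-of-the-envelope check in Python (not vendor data; it uses the ~800-frame per-port cap and the 256-byte blocks measured under "The gory details" below):

BLOCK = 256                            # buffer block size in bytes (measured)
MAX_FRAMES_PER_PORT = 800              # per-port frame cap (measured)
MAX_FRAME = 1518                       # maximum standard Ethernet frame size
blocks_per_frame = -(-MAX_FRAME // BLOCK)              # ceiling division: 6 blocks
print(MAX_FRAMES_PER_PORT * blocks_per_frame * BLOCK)  # 1228800 bytes, ~1.2 MB

This is consistent with the ~1.2-1.3 MB figure: one port hitting its frame cap with full-size frames ties up roughly that much memory.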

Avoid configuring buffer shares and queue thresholds extremely low or high.

  • Give every queue at least 10 % buffer space with thresholds of 40 %.
  • Anything much lower and the queue will barely be able to hold a single full-size frame.
  • Setting thresholds too high may result in one highly congested port stealing buffer memory from all the other ports.
  • Anything between 10 % buffers with a 40 % threshold and 70 % buffers with a 500 % threshold per queue should be safe when all queues are used (see the sketch after this list).
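
A small Python sketch of the arithmetic behind these bounds, using the ~200-block reserved pool per port and 256-byte blocks described under "The gory details":

def threshold_bytes(buffers_pct, threshold_pct, port_blocks=200, block=256):
    # A threshold sits at threshold_pct of the queue's share of the port's blocks.
    share = port_blocks * buffers_pct / 100
    return share * threshold_pct / 100 * block

print(threshold_bytes(10, 40))    # 2048.0 bytes: barely one full-size frame
print(threshold_bytes(70, 500))   # 179200.0 bytes: deep into the common pool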

On access ports, reserve plenty of buffers for the queues, but not 100 %.

  • If every queue reserves the maximum possible, very little will be left in the common buffer pool.
  • If a queue doesn't reserve anything, it may be starved by the other queues/ports.
  • 70-80 % on average should be a good tradeoff.

Don’t worry about the details.

  • Cisco doesn’t talk about megabytes, only percentages. This hides a lot of the complexity.
  • The actual size and distribution of the buffers may differ between models (even within the same family) and software release.

The gory details

To measure the size of the egress queues on the Catalyst 3560-E and 2960, two Netrounds measurement probes were used: one as the sender and one as the receiver. The sender is connected at a higher link speed than the receiver.

By controlling the Netrounds probes from the cloud server and sending a steady flow of frames at a fixed rate slightly higher than the egress port bandwidth, congestion occurs and the switch starts queueing frames. The increase in one-way delay as a frame travels through the switch is the queueing delay.

Queue size (in bits) = queueing delay (s) × egress bandwidth (bit/s).
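
For example (illustrative numbers, not actual measurement data):

def queue_size_bytes(queueing_delay_s, egress_bps):
    # The queue drains at egress_bps; the delay tells how much data sat in front.
    return queueing_delay_s * egress_bps / 8

# A 100 ms delay increase through a 100 Mbit/s egress port:
print(queue_size_bytes(0.100, 100e6))   # 1250000.0 bytes, i.e. ~1.25 MB queued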

The testing was done on a WS-C3560E-48TD with IOS 12.2(44)SE. We only tested the “access ports” of each switch. For the uplink ports there are fewer ports per ASIC, which means the amount of reserved buffer per port is probably higher, but the total buffer size should be the same (the switch reports all ASICs as the same model/version).

Buffer memory is divided into 256-byte “blocks”. This was verified by creating congestion on an egress port, configuring the Netrounds probes to generate different frame sizes, and measuring how this affected the queueing delay.

Every frame is stored in a multiple of 256 bytes. A 64 B frame uses just as much buffer space as a 250 B frame. The result is less than 100 % efficient use of the buffer memory, and it gets worse with small frames.
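
The rounding is easy to express (a sketch; the 256-byte block size is the measured figure):

import math

BLOCK = 256

def blocks_used(frame_bytes):
    return math.ceil(frame_bytes / BLOCK)

def efficiency(frame_bytes):
    return frame_bytes / (blocks_used(frame_bytes) * BLOCK)

print(blocks_used(64), efficiency(64))      # 1 block, 25 % of it used
print(blocks_used(250), efficiency(250))    # 1 block, ~98 % of it used
print(blocks_used(1518), efficiency(1518))  # 6 blocks, ~99 % of them used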

The buffers are shared within one ASIC. This was verified by creating congestion on two egress ports simultaneously: if the egress ports are on different ASICs, the combined queued frame count is much higher.

Use the command:

c3560e#show platform pm if-numbers

to show the port/ASIC layout. In the “port” column, 1/13 means port 13 on ASIC 1.
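
If you want to group ports by ASIC programmatically, the “port” column is trivial to split (a sketch, assuming the asic/port form shown above):

def parse_port_column(value):
    # "1/13" -> ASIC 1, port 13
    asic, port = value.split("/")
    return int(asic), int(port)

print(parse_port_column("1/13"))   # (1, 13)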

Each port has a “reserved pool” of up to ~200 “blocks”. The queues on each port may reserve 0-100 % of these blocks. Blocks that aren't reserved by queues count toward the “common pool” and may be used by other ports within the same ASIC.

Please note that even ports that are not in use (no link, administratively down, etc.) will reserve buffers.

Each physical port can queue a maximum of ~800 frames. No matter how much buffer memory is allocated to a queue, it can never hold more than 800 frames (independent of frame size), so a single port will never use all of the ASIC's buffer space. This limit is distributed between the queues by the buffers setting.

If one queue is allocated 10% of the buffers, that queue can hold no more than 80 frames.

There are a total of ~7400 of these “blocks” per ASIC. This is the maximum total number of blocks that could be used when creating egress congestion on several ports within one ASIC: 7400 * 256 bytes ≈ 1.9 MB.
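
This also puts numbers on the earlier advice about not reserving 100 %: every reserved block shrinks the common pool. A sketch, assuming all 24 access ports on one ASIC of the WS-C3560E-48TD (remember that even unused ports reserve buffers):

ASIC_BLOCKS = 7400   # total blocks per ASIC (measured)
PORT_BLOCKS = 200    # reserved pool per port (measured)
PORTS = 24           # access ports per ASIC on this model

def common_pool_blocks(avg_reserved_pct):
    reserved = PORTS * PORT_BLOCKS * avg_reserved_pct / 100
    return ASIC_BLOCKS - reserved

print(common_pool_blocks(100))  # 2600.0 blocks, ~0.67 MB left for bursts
print(common_pool_blocks(75))   # 3800.0 blocks, ~0.97 MB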

Configuration example
c3560e(config)#mls qos queue-set output 1 buffers 10 20 30 40
c3560e(config)#mls qos queue-set output 1 threshold 1 100 200 50 500
c3560e(config)#mls qos queue-set output 1 threshold 2 200 300 60 400

This will result in the following buffer allocations:

Queue1

threshold1 is at 100% of 10% of 200 “blocks” = 20 256B blocks
threshold2 is at 200% of 10% of 200 “blocks” = 40 256B blocks
threshold3 is at 500% of 10% of 200 “blocks” = 100 256B blocks
reserved pool for Queue1 is 50% of 10% of 200 “blocks” = 10 256B blocks
Queue1 may hold a total maximum of 10% of 800 frames = 80 frames

Queue2

threshold1 is at 200% of 20% of 200 “blocks” = 80 256B blocks
threshold2 is at 300% of 20% of 200 “blocks” = 120 256B blocks
threshold3 is at 400% of 20% of 200 “blocks” = 160 256B blocks
reserved pool for Queue2 is 60% of 20% of 200 “blocks” = 24 256B blocks
Queue2 may hold a total maximum of 20% of 800 frames = 160 frames
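
The same arithmetic as a short Python sketch, reproducing the numbers above (the 200-block reserved pool and the 800-frame cap are the measured per-port figures):

PORT_BLOCKS = 200
PORT_FRAMES = 800

def queue_allocation(buffers_pct, t1_pct, t2_pct, reserved_pct, max_pct):
    share = PORT_BLOCKS * buffers_pct / 100   # the queue's share of the port's blocks
    return {
        "threshold1 blocks": share * t1_pct / 100,
        "threshold2 blocks": share * t2_pct / 100,
        "threshold3 blocks": share * max_pct / 100,
        "reserved blocks":   share * reserved_pct / 100,
        "max frames":        PORT_FRAMES * buffers_pct / 100,
    }

print(queue_allocation(10, 100, 200, 50, 500))  # Queue1: 20/40/100 blocks, 10 reserved, 80 frames
print(queue_allocation(20, 200, 300, 60, 400))  # Queue2: 80/120/160 blocks, 24 reserved, 160 frames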