
Ethernet Packet Rate and Throughput Calculations

Discussion created by jsharma Employee on Sep 13, 2016
Latest reply on Nov 8, 2016 by sl_engin

We encounter packet rate, throughput, and bandwidth calculations on Ethernet links almost daily in our testing. We also set packet rates or bandwidth rates on the traffic generator for traffic- and throughput-related testing.
The following explanation will help you understand these calculations:

 

  • When we talk about link speeds in Mbps or Gbps, the multiplier is always 1000, not 1024. The memory storage multiplier is 1024, but the link speed multiplier is 1000.
  • For every packet transmitted on an Ethernet link, we send 7 bytes of preamble and 1 byte of SFD (start of frame delimiter) plus 12 bytes of IFG (inter-frame gap). Some implementations these days provide the flexibility to reduce
    the IFG to lower values such as 10 or 8 bytes to exploit the available bandwidth further. The primary purpose of the IFG is to allow receivers some minimum "breathing room" between frames to perform necessary housekeeping chores (posting interrupts, buffer management,
    updating network management statistics counters, and so on).
  • So, for every packet of X bytes (starting from the DMAC up to and including the CRC), the actual number of bytes transmitted on the link is (X+20).
  • The Ethernet link speed includes these overheads as well, so the actual useful traffic is always less than the stated link speed. The bigger the frame size, the smaller the percentage overhead, because the 20-byte overhead is fixed irrespective of the Ethernet frame size. So the data throughput for a 1518 byte packet is always higher than for a 64 byte packet.
  • Now the calculations:
    • Let’s calculate the maximum number of 64 byte packets that can theoretically be sent over a 1 Gbps link (see the Python sketch after this list):
      • Link speed in bits/sec = 1*1000*1000*1000 (say A)
      • Packet size in bits including the (Preamble + SFD + IFG) overhead = (64+20)*8 = 672 (say B) [multiplying by 8 converts bytes to bits]
      • Maximum packets/sec that can be sent on the link = A/B = 1488095.24 packets/sec (say C)
      • Now the maximum throughput for 64 byte packets on a 1 Gbps link = C*64*8 ≈
        761904762 bps = 761904762/(1000*1000) ≈ 761.90 Mbps [because the actual useful packet is
        still 64 bytes]
    • Maximum throughput for a Q-in-Q service
      • An extra VLAN tag gets added on the NNI side on top of the VLAN tagged packets
        sent by the user, so a packet of size X actually becomes (X+4) on the NNI.
      • Now suppose the NNI is also a 1 Gbps link, same as the UNI; the maximum packets per second
        from the user’s perspective will be lower, because a user packet now occupies (X+4+20)
        bytes on the wire.
      • So calculate the maximum packets per second using (X+4) and then multiply
        the packet rate by X*8 to get the actual maximum throughput possible (see the second sketch after this list).
    • Maximum throughput for an MPLS service
      • A total of 26 bytes is added by the MPLS transport to the actual user frame: 6 byte DMAC, 6
        byte SMAC, 2 byte protocol-type, 4 byte link VLAN, and 2 labels of 4 bytes each,
        so 6+6+2+4+4+4 = 26 bytes (Note: SAOS does not support creating IP interfaces
        directly over physical interfaces and allows them only over a VLAN)
      • If you have configured “egress-untag-vlan” to be the same as your link VLAN on the NNI,
        the link VLAN will not be sent on the link and the overhead will be 22 bytes
      • SAOS does not support the PW control word; otherwise 4 more bytes would be added,
        making the total overhead 30 bytes
      • Apply the same calculations as for Q-in-Q; only the overhead is larger in the MPLS case.
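
To make the arithmetic above easy to reuse, here is a minimal Python sketch (the function names are my own and illustrative only, not part of any SAOS or test-set tooling) that applies the fixed 20-byte preamble + SFD + IFG overhead to compute the theoretical maximum frame rate and useful throughput on a plain Ethernet link:

```python
# Illustrative sketch: theoretical maximum frame rate and throughput for a
# plain Ethernet link, using the fixed per-frame overhead described above.

ETH_OVERHEAD_BYTES = 7 + 1 + 12  # preamble + SFD + inter-frame gap = 20 bytes

def max_frame_rate(frame_bytes, link_bps):
    """Maximum frames/sec for a frame of frame_bytes (DMAC..CRC) on a link_bps link."""
    bits_on_wire = (frame_bytes + ETH_OVERHEAD_BYTES) * 8
    return link_bps / bits_on_wire

def max_throughput_bps(frame_bytes, link_bps):
    """Maximum useful throughput in bits/sec (frame bytes only, overhead excluded)."""
    return max_frame_rate(frame_bytes, link_bps) * frame_bytes * 8

if __name__ == "__main__":
    link = 1 * 1000 * 1000 * 1000  # 1 Gbps (multiplier of 1000, not 1024)
    for size in (64, 1518):
        pps = max_frame_rate(size, link)
        mbps = max_throughput_bps(size, link) / (1000 * 1000)
        print("%4d-byte frames: %.2f pps, %.2f Mbps" % (size, pps, mbps))
    # Expected output (approximately):
    #   64-byte frames: 1488095.24 pps, 761.90 Mbps
    # 1518-byte frames: 81274.38 pps, 987.00 Mbps
```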
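The Q-in-Q and MPLS cases differ only in how many encapsulation bytes the NNI adds on top of the user frame, so the same calculation can be parameterized. Again, this is an illustrative sketch; the 4/26/22-byte figures are the ones listed above, and the 512-byte user frame is just an example:

```python
# Illustrative sketch: useful throughput seen by the user when the NNI adds
# extra encapsulation bytes (Q-in-Q outer tag, MPLS transport header) on top
# of a user frame of X bytes.

ETH_OVERHEAD_BYTES = 7 + 1 + 12  # preamble + SFD + inter-frame gap = 20 bytes

def service_throughput_bps(user_frame_bytes, link_bps, encap_bytes):
    """Max useful throughput when each user frame carries encap_bytes of extra header."""
    wire_bits = (user_frame_bytes + encap_bytes + ETH_OVERHEAD_BYTES) * 8
    max_pps = link_bps / wire_bits            # frame rate is limited by the NNI link
    return max_pps * user_frame_bytes * 8     # count only the user's X bytes as throughput

if __name__ == "__main__":
    link = 1 * 1000 * 1000 * 1000   # 1 Gbps NNI
    frame = 512                     # example user frame size X in bytes
    cases = [
        ("Q-in-Q (+4 bytes outer VLAN tag)", 4),
        ("MPLS (+26 bytes: MACs, protocol-type, link VLAN, 2 labels)", 26),
        ("MPLS with egress-untag-vlan (+22 bytes)", 22),
    ]
    for name, encap in cases:
        mbps = service_throughput_bps(frame, link, encap) / (1000 * 1000)
        print("%s: %.2f Mbps for %d-byte user frames" % (name, mbps, frame))
```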

 

Note: These calculations can be used in your benchmark testing (RFC 2544) as well. Keep in mind,
when looking at the test results, whether the overhead is considered by the FPGA or not.
