VIAVI TestCenter: Why does setting Receiver Ready Delay (RRD) on one port affect the Port Average Latency on another port?

Knowledge Base - FAQ

• We'll use the following topology for our discussion:

[Topology diagram]

• The basic ideas: The Receiver Ready Delay (RRD) on STC FC ports is implemented as shown in the following ladder diagram (the same one we shared before).

[Ladder diagram]

• R_RDY is a buffer-to-buffer flow control mechanism, so it directly influences only the directly connected port (I used T11 FC-FS-3, rev. 0.90, section 19.4.6 as a reference). However, it can have an indirect effect on the source of the data transmission.
• The basic idea, using the port labels in the diagram and considering only the flow from A to B, is:

     if (B's R_RDY rate) >= (1's forwarding rate) then
          1 won't have to buffer any frames ==> no additional delay

     if (B's R_RDY rate) < (1's forwarding rate) then
          we see additional delay
          if (1's buffers begin to fill) then
               the fabric must tell 0 to also delay R_RDY
               in turn, A will have to slow down

• The latter case extends to the single-source, multi-destination case by generalizing the nested if statement to:

     if (any FC switch egress port buffers begin to fill)

  (A small sketch of this backpressure logic appears at the end of this article.)
• Note that STC puts the Tx timestamp in the Signature field the moment before the frame goes on the wire (i.e., after the port gets an R_RDY). This means that any additional delay must come from the SUT.

More details

• Pcaps can show the effect of RRD (please see the attached pcaps).
• We can see the effect of RRD by using capture. Here's the test sequence and results:
• Configure STC ports:
     Receive credits = 8
     RRD on B = 1000 usec
     RRD on C = 500 usec
• Log all F_Ports into the fabric
• Start capture (capture is set to stop when it's full)
• Start the stream block on A
• Stop capture
• B's capture shows the first 9 frames are received quickly due to sufficient BB_Credit_CNT. From frame 10 on, frames arrive at 1 msec intervals (i.e., the RRD value).

[B's capture screenshot]

• C's capture shows the first 9 frames are received quickly; from frame 10 on, frames arrive at 0.5 msec intervals.

[C's capture screenshot]

• R_RDY rate calculation:

     R_RDY rate = min(max frame rate for the frame size, 1/RRD), when RRD > 0

     Example:
          if RRD = 2 usec per R_RDY, then
               R_RDY rate = (2 usec)^-1 = 500,000 R_RDY/sec

• A's frame rate calculation (Port based scheduling mode): Port based scheduling is the simplest mode; frames are sent out in round-robin fashion. So:

     A's frame rate = (line rate * port load) / ((sum(frame lengths) + (32-byte IFG * numStreams)) * 8) * number of streams

     Example:
          50% load
          stream block 1 = 1 stream, 100 bytes
          stream block 2 = 3 streams, 252 bytes

          A's frame rate = (6,799,999,897 bps * 50%) / (((100 + 252 + 252 + 252) + (32 * 4)) bytes/frame * 8 bits/byte) * 4
                         = 1,727,642 fps

• The R_RDY and A's frame rate calculations are in the attached spreadsheet; the yellow cells are for user input. (Python sketches of both calculations also follow at the end of this article.)
• Please see the attached Word doc, which goes through an example of RRD at work. I believe this should explain what you're seeing (i.e., setting RRD on one port affecting the Port Average Latency on another port).
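To make the buffer-fill reasoning above concrete, here is a minimal Python sketch, assuming a single switch (ports 0 and 1) between source A and destination B. The function name, the rates, and the simple pacing model are my own illustration of the article's logic, not VIAVI/STC behavior or an STC API:

```python
# Minimal sketch: how a slow R_RDY rate on B paces A and adds delay.
# All names and numbers are hypothetical illustrations of the article's logic.

def added_delay_per_frame(src_frame_rate_fps: float,
                          dst_rrdy_rate_per_sec: float) -> float:
    """Extra per-frame delay (seconds) once the switch egress buffers fill.

    If B returns R_RDY at least as fast as port 1 needs to forward, port 1
    never buffers frames and there is no additional delay.  Otherwise port 1's
    buffers fill, the fabric withholds R_RDY toward A (via port 0), and A ends
    up paced to B's R_RDY rate instead of its own frame rate.
    """
    if dst_rrdy_rate_per_sec >= src_frame_rate_fps:
        return 0.0
    return (1.0 / dst_rrdy_rate_per_sec) - (1.0 / src_frame_rate_fps)


if __name__ == "__main__":
    # Example: A offers ~1.73 Mfps, but B's RRD = 1000 usec -> 1,000 R_RDY/sec.
    print(added_delay_per_frame(1_727_642, 1_000))  # ~0.000999 s added per frame
```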
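Here is a small sketch of the R_RDY rate calculation, mirroring the formula in the article (the function and its arguments are hypothetical helpers, not a VIAVI API):

```python
def rrdy_rate(max_frame_rate_fps: float, rrd_usec: float) -> float:
    """R_RDY rate = min(max frame rate for the frame size, 1/RRD), for RRD > 0.

    RRD is given in microseconds; the result is in R_RDY per second.
    """
    if rrd_usec <= 0:
        # No configured delay: R_RDYs are limited only by the frame rate.
        return max_frame_rate_fps
    return min(max_frame_rate_fps, 1.0 / (rrd_usec * 1e-6))


if __name__ == "__main__":
    # Article example: RRD = 2 usec per R_RDY -> 500,000 R_RDY/sec
    # (assuming the max frame rate for the frame size is higher than that).
    print(rrdy_rate(max_frame_rate_fps=1_000_000, rrd_usec=2))  # 500000.0
```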
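And a sketch of A's frame rate calculation in Port based scheduling mode, reproducing the worked example above (a back-of-the-envelope helper under the same assumptions as the attached spreadsheet, not VIAVI code):

```python
def port_based_frame_rate(line_rate_bps: float, port_load: float,
                          frame_lengths_bytes: list[int],
                          ifg_bytes: int = 32) -> float:
    """Port based scheduling: streams are serviced round-robin, so

        frame rate = (line rate * load)
                     / ((sum(frame lengths) + IFG * numStreams) * 8)
                     * numStreams
    """
    num_streams = len(frame_lengths_bytes)
    bits_per_round_robin_pass = (sum(frame_lengths_bytes)
                                 + ifg_bytes * num_streams) * 8
    return line_rate_bps * port_load / bits_per_round_robin_pass * num_streams


if __name__ == "__main__":
    # Article example: 50% load, 1 stream of 100 bytes + 3 streams of 252 bytes.
    fps = port_based_frame_rate(6_799_999_897, 0.50, [100, 252, 252, 252])
    print(round(fps))  # ~1,727,642 fps
```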