
RFC ideas

Defining Network Capacity (RFC 5136)

  • Capacity is not necessarily fixed, and consequently, a single measure of capacity at any layer may in fact provide a skewed picture (either optimistic or pessimistic) of what is actually available.
  • The closer to “instantaneous” a metric is, the more important it is to have a plan for sampling the metric over a time period that is sufficiently large.
  • Networks may treat packets differently (in terms of queuing and scheduling) based on their markings and classification.
  • Networks may also arbitrarily decide to flow-balance based on the packet type or flow type and thereby affect capacity measurements.
  • IP packets of different sizes can lead to a variation in the amount of overhead needed at the lower layers to transmit the data, thus altering the overall IP link-layer capacity.
  • The measurement of capacity depends not only on the type of the reference packets, but also on the types of the packets in the “population” with which the flows of interest share the links in the path.
  • Two approaches are proposed in RFC 5136 for measuring link/path capacity:
    • Providers' preferred approach: measure capacity using a broad spectrum of packet types (as generic as possible).
    • Application users' preferred approach: focus narrowly on the types of flows of particular interest.
  • Path capacity is defined as the smallest capacity of all the links along that path: C(Path,T,I) = min {1..n} {C(Link_n,T,I)}, where T is the start time of the capacity test and I the duration of the test.
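The path-capacity definition above can be sketched directly: the capacity of a path over interval [T, T+I] is the minimum of the capacities of its links over that interval. The link values below are illustrative, assuming per-link capacities have already been measured.

```python
def path_capacity(link_capacities):
    """C(Path, T, I) = min over links of C(Link_n, T, I)."""
    if not link_capacities:
        raise ValueError("a path must contain at least one link")
    return min(link_capacities)

# Example: a 3-link path is limited by its slowest (bottleneck) link.
links_bps = [1_000_000_000, 100_000_000, 500_000_000]
print(path_capacity(links_bps))  # 100000000
```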

A Framework for Defining Empirical Bulk Transfer Capacity Metrics (RFC 3148)

  • Problems/specifications when doing Bulk Transfer Capacity (BTC) tests (such as in the TCP iperf case):
    • The amount of data sent should only include the unique number of bits transmitted (i.e., if a particular packet is retransmitted the data it contains should be counted only once).
    • The legal diversity in congestion control algorithms creates a difficulty for standardizing BTC metrics, because the allowed diversity is sufficient to lead to situations where different implementations will yield non-comparable measures, and potentially fail the formal tests for being a metric.
    • There is also evidence that most TCP implementations exhibit non-linear performance over some portion of their operating region. It is possible to construct simple simulation examples where incremental improvements to a path (such as raising the link data rate) results in lower overall TCP throughput.
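The first rule above (count each unique byte once, even if it is retransmitted) can be sketched by merging the byte ranges actually transmitted before counting. Function and variable names are illustrative, not from the RFC.

```python
def unique_bytes(ranges):
    """Total bytes covered by (offset, length) ranges, duplicates counted once."""
    intervals = sorted((start, start + length) for start, length in ranges)
    total, cur_start, cur_end = 0, None, None
    for start, end in intervals:
        if cur_end is None or start > cur_end:      # disjoint range: flush
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = start, end
        else:                                       # overlapping range: merge
            cur_end = max(cur_end, end)
    if cur_end is not None:
        total += cur_end - cur_start
    return total

# 1460 bytes sent at offset 0, retransmitted once, then 1460 more bytes:
sent = [(0, 1460), (0, 1460), (1460, 1460)]
print(unique_bytes(sent))  # 2920, not 4380
```

Dividing this unique byte count by the transfer duration yields a BTC figure that is not inflated by retransmissions.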

Framework for TCP Throughput Testing (RFC 6349)

  • Methodology proposed in RFC 6349 for TCP throughput testing:
    • Variables involved in the test performance: BB (Bottleneck Bandwidth), RTT, send/receive socket buffer sizes, minimum TCP RWND, path MTU, and achievable TCP throughput when TCP is in the equilibrium state.
    • Provide a practical test approach that specifies tunable parameters (such as the MTU (Maximum Transmission Unit) and socket buffer sizes) and how these affect the outcome of TCP performance over an IP network.
    • Proposed metrics (as defined in RFC 6349) to identify the performance of TCP tests: Transfer Time Ratio / TCP Efficiency / Buffer Delay.
  • Attention: It is not possible to make an accurate TCP Throughput measurement when the network is dysfunctional. In particular, if the network is exhibiting high packet loss and/or high jitter, then TCP Layer Throughput testing will not be meaningful. As a guideline, 5% packet loss and/or 150 ms of jitter may be considered too high for an accurate measurement.
  • Attention: End-users with “best effort” access could use this methodology, but this framework and its metrics are intended to be used in a predictable managed IP network. No end-to-end performance can be guaranteed when only the access portion is being provisioned to a specific bandwidth capacity.
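The three RFC 6349 metrics listed above reduce to simple ratios over measured counters: actual vs. ideal transfer time, non-retransmitted vs. total transmitted bytes, and in-transfer vs. baseline RTT. The input values below are illustrative.

```python
def transfer_time_ratio(actual_s, ideal_s):
    """TCP Transfer Time Ratio: actual transfer time over ideal (BB-limited) time."""
    return actual_s / ideal_s

def tcp_efficiency(tx_bytes, retx_bytes):
    """TCP Efficiency (%): share of transmitted bytes that were not retransmissions."""
    return 100.0 * (tx_bytes - retx_bytes) / tx_bytes

def buffer_delay(avg_rtt_ms, baseline_rtt_ms):
    """Buffer Delay (%): increase of the average in-transfer RTT over the baseline RTT."""
    return 100.0 * (avg_rtt_ms - baseline_rtt_ms) / baseline_rtt_ms

print(transfer_time_ratio(12.0, 10.0))      # 1.2 (20% slower than ideal)
print(tcp_efficiency(1_100_000, 100_000))   # ~90.9% of bytes were unique
print(buffer_delay(27.5, 25.0))             # 10.0% RTT inflation (queuing)
```

A ratio near 1.0, an efficiency near 100%, and a buffer delay near 0% together indicate a transfer close to the ideal for the measured BB and baseline RTT.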
  • Steps for testing methodology:
    • Identify the path MTU by means of the Path MTU Discovery (PMTUD) [RFC4821] algorithm, to avoid fragmentation.
    • Identify the non-congested baseline Round-Trip Time and Bottleneck Bandwidth, to estimate the TCP RWND and send socket buffer size.
    • TCP connection throughput tests (with the above customized parameters): single- and multiple-TCP-connection throughput.
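The second step above (sizing the TCP RWND and send socket buffer from the measured BB and baseline RTT) comes down to the Bandwidth-Delay Product: the window must hold at least one BDP of data to keep the bottleneck link full. Values below are illustrative.

```python
def min_rwnd_bytes(bb_bps, rtt_s):
    """Bandwidth-Delay Product in bytes: BB (bit/s) * RTT (s) / 8."""
    return int(bb_bps * rtt_s / 8)

# A 100 Mbit/s bottleneck with a 50 ms baseline RTT needs ~625 KB of window:
print(min_rwnd_bytes(100_000_000, 0.050))  # 625000
```

If the configured RWND or send buffer is smaller than this, the measured throughput is window-limited rather than path-limited, and the test does not reflect the capacity of the network.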
  • Thus, a preparation phase shall make it possible to:
    • Modify TCP stack tunable elements on the end machines → possible with OpenBACH jobs.
    • Adjust send/receive socket buffer sizes → possible with the OpenBACH “tcp_buffer” job.
    • Measure RTT (RFC 6349 recommends dedicated TCP tools) → possible with the OpenBACH “hping/fping” jobs.
    • Attention: access control lists, routing policies, and other mechanisms may be used to filter ICMP packets or to forward packets with certain IP options through different routes.
    • Perform PMTUD → a job should be implemented in OpenBACH (based on the ping -M do option).
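A hypothetical sketch of what such an OpenBACH PMTUD job could do: binary-search the largest payload size that passes with fragmentation disallowed, which is what `ping -M do -s <size>` tests on Linux. The probe function is injected so the search logic is shown independently of how probes are actually sent (here it is simulated; a real job would invoke ping or send DF-marked packets).

```python
def discover_pmtu(probe, lo=68, hi=9000):
    """Largest size in [lo, hi] for which probe(size) succeeds, else None."""
    if not probe(lo):
        return None            # even the minimum size does not pass
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if probe(mid):
            lo = mid           # mid fits: search upward
        else:
            hi = mid - 1       # mid too big: search downward
    return lo

# Simulated path that passes packets up to 1500 bytes with DF set:
print(discover_pmtu(lambda size: size <= 1500))  # 1500
```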
openbach/exploitation/reference_scenarios/network/rate/rfc.txt · Last modified: 2019/06/11 16:21 (external edit)