==== Context ====
To assess the throughput/rate in a deployed network, we can exploit different ways of computing the available link rate. Following RFC recommendations (e.g. RFC 5136 "Defining Network Capacity", RFC 3148 "A Framework for Defining Empirical Bulk Transfer Capacity Metrics" or RFC 6349 "Framework for TCP Throughput Testing"), we have detailed in [[openbach:exploitation:reference_scenarios:network:rate:rfc|RFC ideas]] some interesting items regarding the ways to evaluate the network capacity.

We summarise below some of the specifications (a sketch of the resulting test matrix is given after the list):

  - Repetitive tests are needed, as well as different test durations.
  - The rate scenario shall include packets marked with different ToS values.
  - The rate scenario must include tests with different packet sizes.
  - The rate scenario must include single- and multiple-TCP-connection throughput tests.
  - The measurements shall be taken in the TCP equilibrium state (as defined in RFC 6349).
  - Follow the methodology of RFC 6349.
  - In addition to the metrics already available in the iperf/nuttcp jobs, it might be interesting to compute metrics such as the maximum MTU size allowed by the network (computed by the [[https://wiki.net4sat.org/doku.php?id=openbach:exploitation:jobs:stable_jobs:pmtud_1.0|PMTUd job]]), the RTT, the send/receive socket buffer sizes, etc.
  - The rate scenario shall include tests with different rate measurement jobs.
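
As an illustration of how these items combine, the short Python sketch below enumerates a possible test matrix. The concrete durations, ToS values, packet sizes and flow counts are assumptions for the sketch, not the values used by the reference scenario.

<code python>
from itertools import product

# Illustrative test matrix only: the values below are assumptions,
# not taken from the reference scenario.
durations_s = [10, 45, 120]        # repeated tests with different durations
tos_values = [0x00, 0x28, 0xB8]    # packets marked with different ToS
mss_bytes = [536, 1200, 1460]      # different packet sizes
parallel_flows = [1, 5]            # single and multiple TCP connections

for duration, tos, mss, flows in product(durations_s, tos_values, mss_bytes, parallel_flows):
    print(f"TCP test: {duration} s, ToS=0x{tos:02X}, MSS={mss} B, {flows} flow(s)")
</code>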
  
  
==== Objective ====

We recommend to compare at least two of the following OpenBACH jobs (iperf3 and nuttcp), which are based on active rate measurements (i.e. they perform measurements on their own generated traffic); a minimal sketch of such an active test is given after the list:

  * ''iperf3'' (server or client): generates TCP/UDP traffic and performs different kinds of measurements on this traffic. For TCP traffic, it tries to load the link (depending on the window size) and is capable of measuring the rate (b/s) and the data sent (bits). For UDP traffic, it is possible to specify the bit rate, and it is capable of measuring the rate (b/s), the data sent (bits), the packets sent, the jitter (ms), the loss and the PLR.
  * ''iperf2'' (server or client): uses version 2 of iperf. The configuration parameters and the metrics are the same as for the iperf3 job.
  * ''nuttcp'' (server or client): similar methodology and measurements to iperf3.
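
For reference, the sketch below shows the kind of standalone iperf3 run that such an active measurement boils down to, using Python only to launch the client and read its JSON report. The server address and port are the ones used in the example scenarios further down; the exact layout of the iperf3 JSON report depends on the iperf3 version, so treat the key names as an assumption.

<code python>
import json
import subprocess

# Assumes an iperf3 server is already listening on the other entity
# (equivalent to the server side of the job: iperf3 -s -p 2500).
SERVER_IP = "172.20.34.26"   # same address as in the example scenarios below

result = subprocess.run(
    ["iperf3", "-c", SERVER_IP, "-p", "2500", "-t", "45", "-i", "1", "-J"],
    capture_output=True, text=True, check=True)
report = json.loads(result.stdout)

# End-of-test summary; key names may differ slightly between iperf3 versions.
received = report["end"]["sum_received"]["bits_per_second"]
print(f"achieved TCP rate: {received / 1e6:.1f} Mb/s")
</code>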

Regarding the rate metrology, it is also possible to perform passive tests with jobs that measure the rate of the traffic generated by other components/jobs, such as the [[https://wiki.net4sat.org/doku.php?id=openbach:exploitation:jobs:core_jobs:ratemonitoring_1.0|rate monitoring]] job (based on iptables packet/byte counting). This is recommended for validation purposes, if you are not confident in the metrics reported by iperf3/nuttcp/iperf2.
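
To illustrate the principle behind this passive approach, here is a minimal Python sketch (an illustration under stated assumptions, not the rate_monitoring job itself) that polls the byte counter of an iptables rule every second and derives a rate from the difference. It assumes a counting rule was inserted beforehand (e.g. ''iptables -I INPUT -p tcp --dport 2500'') and that it runs with root privileges.

<code python>
import subprocess
import time

def monitored_bytes(chain="INPUT", dport="2500"):
    """Return the byte counter of the iptables rule matching the monitored port.

    The parsing assumes the usual 'iptables -L -v -x -n' column layout,
    where the second column is the byte counter."""
    output = subprocess.run(["iptables", "-L", chain, "-v", "-x", "-n"],
                            capture_output=True, text=True, check=True).stdout
    for line in output.splitlines():
        if "dpt:" + dport in line:
            return int(line.split()[1])
    return 0

previous = monitored_bytes()
while True:
    time.sleep(1)                      # same 1 s interval as in the scenarios
    current = monitored_bytes()
    print(f"rate: {(current - previous) * 8} b/s")
    previous = current
</code>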

We have prepared an OpenBACH reference scenario for the rate metrology:

  * [[openbach:exploitation:reference_scenarios:network:rate:network_rate|network_rate]], which launches iperf3/nuttcp in TCP mode and nuttcp in UDP mode. The scenario allows modifying different traffic parameters in TCP mode (such as the MTU size, the ToS and the number of parallel flows), with a post-processing phase that plots the time series of the throughput results per test and their CDF (a sketch of such a CDF computation is shown below).
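
The actual plots are produced by the scenario's post-processing jobs; the sketch below only illustrates, with assumed throughput samples and standard numpy/matplotlib calls, what computing such an empirical CDF amounts to.

<code python>
import matplotlib.pyplot as plt
import numpy as np

# Assumed throughput samples in Mb/s, standing in for values exported from the
# OpenBACH collector; they are illustrative only.
samples = np.array([182.4, 191.0, 187.3, 175.9, 190.2, 188.8, 179.5, 185.1])

sorted_samples = np.sort(samples)
cdf = np.arange(1, len(sorted_samples) + 1) / len(sorted_samples)

plt.plot(sorted_samples, cdf, drawstyle="steps-post")
plt.xlabel("Throughput (Mb/s)")
plt.ylabel("Empirical CDF")
plt.show()
</code>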

==== Example ====

It might be interesting to compare the traffic measured by the active jobs with a passive job. You can find below two example scenarios, one in TCP mode and one in UDP mode, each running for 45 s.
=== TCP mode ===
  
The JSON format for the TCP test is available below. You can import it into your project via the OpenBACH HMI (you need to change the entity names "Client1"/"Server1" in the .json to the names of your own client/server entities).
  
<code json rate_metrology_tcp.json>
{
  "description": "",
  "openbach_functions": [
    {
      "id": 139724458,
      "label": "server",
      "start_job_instance": {
        "iperf3": {
          "server_mode": "True",
          "exit": "True",
          "port": "2500"
        },
        "offset": 0,
        "entity_name": "Server1"
      }
    },
    {
      "wait": {
        "time": 11,
        "launched_ids": [
          139724458
        ]
      },
      "id": 44711595,
      "label": "client",
      "start_job_instance": {
        "iperf3": {
          "time": "45",
          "port": "2500",
          "client_mode_server_ip": "172.20.34.26",
          "interval": "1"
        },
        "offset": 0,
        "entity_name": "Client1"
      }
    },
    {
      "wait": {
        "launched_ids": [
          44711595
        ]
      },
      "id": 109991227,
      "label": "rate",
      "start_job_instance": {
        "rate_monitoring": {
          "protocol": "tcp",
          "destination_port": "2500",
          "chain": "INPUT",
          "interval": "1"
        },
        "offset": 0,
        "entity_name": "Server1"
      }
    },
    {
      "wait": {
        "finished_ids": [
          44711595
        ]
      },
      "id": 7144480,
      "stop_job_instances": {
        "openbach_function_ids": [
          109991227
        ]
      },
      "label": "stop_rate"
    }
  ],
  "constants": {},
  "name": "iperf3",
  "arguments": {}
}
</code>
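
The ordering of the four OpenBACH functions is driven by the ''wait'' blocks: the client is launched 11 s after the server has been launched, rate_monitoring starts once the client is launched, and it is stopped once the client has finished. As a small aid, the sketch below parses the file above (assuming it has been saved locally as rate_metrology_tcp.json) and prints those dependencies.

<code python>
import json

# Assumes the scenario above has been saved locally as rate_metrology_tcp.json.
with open("rate_metrology_tcp.json") as scenario_file:
    scenario = json.load(scenario_file)

for function in scenario["openbach_functions"]:
    wait = function.get("wait", {})
    print(function.get("label", function["id"]),
          "| waits for launch of:", wait.get("launched_ids", []),
          "| waits for end of:", wait.get("finished_ids", []),
          "| extra delay (s):", wait.get("time", 0))
</code>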
  
=== UDP mode ===
  
The JSON format for the UDP test (at a rate of 2 Mb/s) is available below.
<code json rate_metrology_udp.json>
{
  "description": "",
  "openbach_functions": [
    {
      "id": 139724458,
      "label": "server",
      "start_job_instance": {
        "iperf3": {
          "server_mode": "True",
          "exit": "True",
          "port": "2500"
        },
        "offset": 0,
        "entity_name": "Server1"
      }
    },
    {
      "wait": {
        "time": 11,
        "launched_ids": [
          139724458
        ]
      },
      "id": 44711595,
      "label": "client",
      "start_job_instance": {
        "iperf3": {
          "time": "45",
          "port": "2500",
          "interval": "1",
          "udp": "True",
          "client_mode_server_ip": "172.20.34.26",
          "bandwidth": "2M"
        },
        "offset": 0,
        "entity_name": "Client1"
      }
    },
    {
      "wait": {
        "launched_ids": [
          44711595
        ]
      },
      "id": 109991227,
      "label": "rate",
      "start_job_instance": {
        "rate_monitoring": {
          "protocol": "udp",
          "destination_port": "2500",
          "chain": "INPUT",
          "interval": "1"
        },
        "offset": 0,
        "entity_name": "Server1"
      }
    },
    {
      "wait": {
        "finished_ids": [
          44711595
        ]
      },
      "id": 7144480,
      "stop_job_instances": {
        "openbach_function_ids": [
          109991227
        ]
      },
      "label": "stop_rate"
    }
  ],
  "constants": {},
  "name": "iperf3",
  "arguments": {}
}
</code>

==== Limitations ====

The reference scenario associated with the rate metrology is currently limited to estimating a network with up to 650 ms of delay and 230 Mbps (for iperf3 and nuttcp) in TCP mode.
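
As a back-of-the-envelope check of why this operating point is demanding, the sketch below computes the bandwidth-delay product for those figures (assuming the 650 ms value is the round-trip time): a single TCP connection needs roughly that much window/socket buffer to fill the link, which is one of the RFC 6349 items listed in the Context.

<code python>
# Bandwidth-delay product at the stated limits of the reference scenario.
rtt_s = 0.650        # assumed to be the round-trip time
rate_bps = 230e6     # 230 Mb/s

bdp_bytes = rate_bps * rtt_s / 8
print(f"BDP: {bdp_bytes / 1e6:.1f} MB of in-flight data")   # about 18.7 MB
</code>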