
Complex Rate Scenario dev

The following scenario allows comparing different parameters (MTU size, ToS, number of parallel flows) in UDP/TCP mode, with a configurable number of iterations per test and a post-processing phase that plots the time series of the throughput results of each test as well as their CDF. This is done thanks to the scenario builder and data_access tools, as well as the auditorium scripts.

The scenario is created, launched and post-processed within the same script. You only need to configure it correctly (mainly your project name, your entity names and the configuration of your tests) and launch the script from the auditorium scripts directory as follows:

 # ./rate_metrology.py 

Below, we describe the different script parts:

Variables/constants

Declaration of the different parameters:

  • Related to the project/scenario names.
  • The configuration of the measurement jobs (iperf3/nuttcp): number of parallel flows, MTU size, ToS, UDP/TCP mode, number of iterations.
  • Initialisation of some variables for the post-processing.
import itertools
import matplotlib.pyplot as plt
from scenario_observer import ScenarioObserver
 
import scenario_builder as sb
 
SCENARIO_NAME = 'Rate_Metrology'
SCENARIO_DESCRIPTION = 'Rate metrology scenario measuring network bandwidth'
UDP_RATES = range(15000000, 17000000, 4000000)
NUTTCP_CLIENT_UDP_LABEL = 'nuttcp client: {} flows, rate {}, mtu {}b, tos {} (iter {})'
NUTTCP_SERVER_UDP_LABEL = 'nuttcp server: {} flows, rate {}, mtu {}b, tos {} (iter {})'
CLIENT_TCP_LABEL = '{} client: {} flows, mtu {}, tos {} (iter {})'
SERVER_TCP_LABEL = '{} server: {} flows, mtu {}, tos {} (iter {})'
PROJECT_NAME = 'rate_jobs'
POST_PROC = []
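These label templates identify each openbach-function, so that the post-processing phase can match the collected statistics back to the test that produced them. A small illustration of how such a label is built (the parameter values below are hypothetical, chosen only to show the formatting):

```python
NUTTCP_CLIENT_UDP_LABEL = 'nuttcp client: {} flows, rate {}, mtu {}b, tos {} (iter {})'

# Hypothetical test parameters: 1 flow, 15 Mb/s, 1400-byte MTU, ToS 0x00, first iteration
label = NUTTCP_CLIENT_UDP_LABEL.format(1, 15000000, 1400, '0x00', 0)
print(label)  # nuttcp client: 1 flows, rate 15000000, mtu 1400b, tos 0x00 (iter 0)
```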

The main function is composed of the following steps:

  1. Creation of the scenario by means of the ScenarioObserver, relying on the auditorium scripts, which also allow to start and monitor the scenario. Its post_processing method is then used to request the statistics of the desired jobs (by means of the labels of the openbach-functions).
  2. Building of the scenario with the scenario builder tool, which generates your scenario (several nuttcp/iperf3 clients and servers launched with different parameters). See the function build_rate_scenario() called in this part of the code.
  3. Launch of the scenario and wait for its completion.
  4. Post-processing of the data collected from the jobs: compute an average and plot the results by means of matplotlib. The functions used to recover the statistics of each test and to create and display the graphs are detailed below.
def extract_iperf_statistic(job):
    """Extract (timestamp, throughput) pairs from a single-flow iperf3
    test (statistics are indexed by the flow identifier 'Flow1')."""
    data = job.statistics_data[('Flow1',)].dated_data
    return [
            (timestamp, stats['throughput'])
            for timestamp, stats in data.items()
    ]
 
 
def extract_iperf_statistics(job):
    """Extract (timestamp, throughput) pairs from a multi-flow iperf3
    test (aggregated statistics)."""
    data = job.statistics.dated_data
    return [
            (timestamp, stats['throughput'])
            for timestamp, stats in data.items()
    ]
 
 
def extract_nuttcp_statistics(job):
    """Extract (timestamp, rate) pairs from a nuttcp test."""
    data = job.statistics.dated_data
    return [
            (timestamp, stats['rate'])
            for timestamp, stats in data.items()
    ]
 
def main(project_name):
    # Build a scenario specifying the entity names of the client and the server.
    scenario_builder = build_rate_scenario('client', 'server', udp=False)
    # The ScenarioObserver creates the scenario; post_processing() registers
    # the statistics to request from the desired jobs (by means of the labels
    # of the openbach-functions).
    observer = ScenarioObserver(SCENARIO_NAME, project_name, scenario_builder)
    for pp in POST_PROC:
        if pp[1] == 'iperf3':
            if pp[2] > 1:
                observer.post_processing(pp[0], extract_iperf_statistics, ignore_missing_label=True)
            else:
                observer.post_processing(pp[0], extract_iperf_statistic, ignore_missing_label=True)
        else:
            observer.post_processing(pp[0], extract_nuttcp_statistics, ignore_missing_label=True)
 
    # launch_and_wait() starts your scenario, waits for its end and returns
    # the results requested through post_processing().
    result = observer.launch_and_wait()
 
    # The plots: time series of the throughput, then its CDF.
    plt_thr = plt.figure(figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
    plt.ylabel('Throughput (b/s)')
    plt.xlabel('Time (s)')
    plt.title('Comparison of Throughput')
    for label, values in result.items():
        origin = values[0][0]
        x = [v[0] - origin for v in values]
        y = [v[1] for v in values]
        plt.plot(x, y, label=label, markersize=15, linewidth=3)
    plt.legend()
 
    plt_cdf = plt.figure(figsize=(12, 8), dpi=80, facecolor='w', edgecolor='k')
    plt.ylabel('CDF')
    plt.xlabel('Throughput (b/s)')
    plt.title('CDF of Throughput test')
    for label, values in result.items():
        y = [v[1] for v in values]
        plt.hist(y, 1000, density=True, cumulative=True, label=label)
    plt.legend()
 
    plt_thr.show()
    plt_cdf.show()
    input()
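Step 4 above mentions computing an average per test, which the excerpt does not show. A minimal sketch of that averaging step, assuming result maps each openbach-function label to a list of (timestamp, throughput) pairs as returned by launch_and_wait():

```python
def average_throughput(result):
    """Compute the mean throughput (b/s) of each test from its
    (timestamp, value) samples."""
    return {
        label: sum(value for _, value in samples) / len(samples)
        for label, samples in result.items()
        if samples  # skip tests that returned no statistics
    }


# Hypothetical data mimicking the structure returned by launch_and_wait()
example = {
    'nuttcp client: 1 flows, rate 15000000, mtu 1400b, tos 0x00 (iter 0)':
        [(0, 14.0e6), (1000, 15.0e6), (2000, 16.0e6)],
}
print(average_throughput(example))  # {'nuttcp client: ...': 15000000.0}
```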

The whole script can be found at https://forge.net4sat.org/openbach/openbach-extra/blob/master/scenario_examples/rate_scenario/rate_metrology.py. Some examples of results are shown below:

openbach/exploitation/reference_scenarios/network/rate/complex_scenario_dev.txt · Last modified: 2019/06/11 16:21 (external edit)