Description

Functional Benchmarking (FBM) Model

The central idea behind the Functional Benchmarking (FBM) model is to take a functional rather than an abstract system view when assessing a content distribution system. This implies that a system under test (SUT) is not generically defined (e.g. as a VoD system) but is instead characterised by a specific function or set of functions. For instance, a VoD system can be assessed by looking at its capability of delivering video to a user within certain latency requirements. It then needs to be further specified what the relevant functional elements are and how they can be defined in a system-independent manner. This is important to allow comparison between different systems that provide the same (or a similar) service. Thus, the system to be evaluated, as depicted in Figure 1, is characterised by the functions under investigation.

Figure 1: System-under-test approach

For the evaluation to stand up to rigorous scientific scrutiny, the results have to be reproducible. Therefore the test conditions need to be clearly specified and the workbench must provide a controlled (respectively controllable) environment. Further, the static and dynamic parameters (i.e. workload and input QoS) under which the test takes place need to be well-defined. Finally, the parameters that express the system behaviour against which the system will be assessed (i.e. output QoS) need to be well-specified.

Figure 2: Classification of input and output parameters

These considerations have led to the specification of the Content FBM model shown in Figure 2. At the centre of the model is the function (or set of functions) that will be benchmarked. Well-defined input parameters establish the conditions under which the evaluation takes place, while the output parameters express the specific system behaviour being examined: the input side covers the environment and workload, and the output side covers the performance and cost parameters.
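As a minimal sketch of this classification (the Python dataclasses and all parameter names below are illustrative assumptions, not identifiers from the FBM specification), the input and output sides of the model might be captured like this:

```python
from dataclasses import dataclass, field
from typing import Dict

# Illustrative value type: parameter name -> configured or measured value.
ParameterSet = Dict[str, float]

@dataclass
class InputParameters:
    """Conditions under which the evaluation takes place."""
    environment: ParameterSet = field(default_factory=dict)  # e.g. topology size, link capacity
    workload: ParameterSet = field(default_factory=dict)     # e.g. request rate, content popularity

@dataclass
class OutputParameters:
    """Observed behaviour of the function under test."""
    performance: ParameterSet = field(default_factory=dict)  # e.g. start-up latency, throughput
    cost: ParameterSet = field(default_factory=dict)         # e.g. bandwidth consumed, storage used

@dataclass
class BenchmarkedFunction:
    """The function at the centre of the FBM model, e.g. 'video delivery'."""
    name: str
    inputs: InputParameters
    outputs: OutputParameters
```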
Ideally it should be possible to develop a content benchmarking workbench that can be used for different systems providing similar functionality. In this case it should be possible to replace one SUT (defined by the functionality under test) with another SUT (providing the same functionality) while maintaining the same set of input parameters (i.e. environment and workload) and output parameters (i.e. cost and performance), as shown in Figure 3. Although systems may differ in their functional scope, it is important that in comparative benchmarking only the common functional elements are assessed. Therefore all systems undergoing such a test have to use the same environment and workload parameters and agree on a common definition of the performance and cost parameters.

Figure 3: Benchmarking Workbench model
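To illustrate the swappable-SUT idea, here is a small sketch building on the dataclasses above (the interface and class names are hypothetical, not part of any published workbench): the workbench fixes the input and output parameter definitions and can drive any SUT that exposes the benchmarked function.

```python
from abc import ABC, abstractmethod

class SystemUnderTest(ABC):
    """Any system exposing the benchmarked function can be plugged into the workbench."""

    @abstractmethod
    def run_function(self, inputs: InputParameters) -> OutputParameters:
        """Execute the function under test under the given environment and workload."""

class BenchmarkingWorkbench:
    def __init__(self, inputs: InputParameters):
        # The same input parameters are applied to every SUT under comparison.
        self.inputs = inputs

    def evaluate(self, sut: SystemUnderTest) -> OutputParameters:
        # The SUT is interchangeable; the input and output parameter
        # definitions stay fixed so that results remain comparable.
        return sut.run_function(self.inputs)
```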
In addition to the input and output parameters, each SUT might have a set of individual configuration parameters that "tune" its behaviour. In these cases the system performance might be improved, for instance, by adding or using resources in a specific manner. It is important to carefully distinguish the input and output parameters used to assess the functional capabilities of a system from these internal configuration parameters, through which a certain aspect of the system can be improved. For instance, a system that tries to improve content availability through caching should not be assessed by its caching points or number of replicas but by a measure of availability. The latter is then used both to assess how well the system performs and to compare it against a system that might use other mechanisms to improve content availability. However, where different configurations lead to different performance and costs, it is important to run these tests separately, indicating the changes in the output parameters caused by each configuration adaptation.
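One possible way to keep configuration parameters separate from the assessed output parameters, sketched under the same assumptions as the snippets above (all names are hypothetical), is to re-run the benchmark once per configuration and record only the resulting output parameters, never the configuration values themselves, as the assessment:

```python
from typing import Callable, Dict, List, Tuple

Configuration = Dict[str, float]  # SUT-internal tuning knobs, e.g. cache size, replica count

def benchmark_configurations(
    workbench: BenchmarkingWorkbench,
    sut: SystemUnderTest,
    configurations: List[Configuration],
    apply_configuration: Callable[[SystemUnderTest, Configuration], None],
) -> List[Tuple[Configuration, OutputParameters]]:
    """Run one separate benchmark per configuration.

    The configuration itself (e.g. number of replicas) is never used as an
    assessment metric; only the resulting output parameters (e.g. measured
    content availability) are recorded for each run.
    """
    results = []
    for config in configurations:
        apply_configuration(sut, config)   # tune the SUT internally
        outputs = workbench.evaluate(sut)  # assess via output parameters only
        results.append((config, outputs))
    return results
```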
To provide new input, comments, etc., please send an email to Piotr Srebrny, Gareth Tyson, Andreas Mauthe or Thomas Plagemann.