
Functional Benchmarking of Content Distribution

Recent years have seen an increase in the number of content distribution solutions available. This trend was observed early in the Content project, along with a significant lack of well-defined metrics to compare and evaluate such systems. This website presents the Functional Benchmarking (FBM) Framework developed by the Content NoE. The idea of this framework is to offer a collection of metrics, together with environmental and workload parameters, that can be used to evaluate (and later compare) content networking systems in a standardized way. This is illustrated by a set of instantiations, i.e. examples of how the Content FBM Framework can be applied in specific areas related to content distribution (i.e. content delivery, search and quality of experience – QoE). As such, the FBM Framework provides a standard approach together with a rule set that can be generally applied to evaluation tasks. Further, this site aims to serve as a repository for benchmarking and evaluation activities related to content distribution by providing links to related activities and pointing to suitable tools.

The benchmarking of content networks differs substantially from existing benchmarking approaches. A number of computing areas use benchmarking, with well-established standards such as TPC (http://www.tpc.org) for databases. Content networking, however, is much more diverse and consists of different approaches and services. It is therefore impossible to offer a single set of standard scenarios and metrics that can be used to benchmark and evaluate all such systems. The Content FBM Framework instead offers an abstract approach alongside a range of potential metrics that researchers can draw on. To make these metrics more tangible and useful, we use “instantiations” for specific areas related to content delivery. Instantiations apply the generic FBM model within a specific domain in order to provide concrete examples and guidelines on how to use the Content FBM Framework. This should make the basic concept easier to understand and easier to apply in a specific research context.

Further, suitable tools are being identified that can be used in the actual system evaluation. It is then up to the system developer to select how best to use these. To ease this process, this website also offers a range of tools and hints showing how best to carry out benchmarking.

Goal

The goal of the Content Functional Benchmarking (FBM) Framework is to provide a methodology and framework for evaluating and comparing content distribution solutions in a systematic and organized manner. A substantial part of the work has focused on peer-to-peer solutions, since they make up a large share of current content networks. This is mainly reflected in the discussion of the FBM model in the context of the content delivery instantiation. However, this does not restrict the generality of the approach or its applicability to other distribution schemes. Thus, the Content FBM Framework can be applied to evaluate content distribution solutions in general. In fact, we believe it to be generic enough to be applied in a number of system contexts beyond the content delivery domain.

The FBM model has been designed to provide the necessary information on availability, characteristics and issues concerning environments, workloads, and performance and cost metrics. Through this initiative we want to support correct system evaluation and increase awareness of underlying issues related to content delivery. This is intended to help create meaningful and comparable assessment results that focus on the most general issues with respect to system performance. It should also help to avoid common mistakes in the evaluation of content distribution systems and thus allow more objective discussion and analysis of content distribution solutions.

Approach

[Figure 1: The levels of the system architecture considered by the FBM model]

To evaluate and compare various content distribution schemes we adopt a functional approach, i.e. the function is at the centre of the evaluation. The assumption is that the goal of every content distribution system is to provide certain functionality related to content distribution effectively and efficiently (e.g. streaming video content to a number of users). In our approach this function is regarded as (a part of) the “system under test” (SUT).
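The functional view described above can be sketched as a minimal benchmark harness. This is an illustrative sketch only, not part of the FBM specification: all names (Benchmark, sut, run, the toy delivery function) are hypothetical, and latency stands in for whichever performance and cost metrics an instantiation actually defines. The key idea it demonstrates is that the function itself is wrapped as the SUT and exercised with a workload under given environmental parameters.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Benchmark:
    """Minimal FBM-style harness: the function is the system under test (SUT)."""
    sut: Callable[[object], object]        # the function under evaluation
    environment: Dict[str, object]         # environmental parameters (bandwidth, churn, ...)
    workload: List[object]                 # workload items fed to the SUT
    metrics: Dict[str, List[float]] = field(default_factory=dict)

    def run(self) -> Dict[str, float]:
        # Exercise the function with every workload item and record a metric.
        latencies = []
        for item in self.workload:
            start = time.perf_counter()
            self.sut(item)
            latencies.append(time.perf_counter() - start)
        self.metrics["latency"] = latencies
        # Aggregate into reportable performance metrics.
        return {"mean_latency": sum(latencies) / len(latencies),
                "max_latency": max(latencies)}

# Usage: benchmark a toy "content delivery" function under a fixed environment.
bench = Benchmark(sut=lambda chunk: bytes(chunk),
                  environment={"bandwidth_mbps": 10, "peers": 50},
                  workload=[16, 32, 64])
report = bench.run()
```

Because only the function's interface matters, two very different systems (e.g. a client-server and a peer-to-peer delivery scheme) could be compared by plugging each into the same harness with identical workload and environment parameters.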

The Content Functional Benchmarking (FBM) model is generic and can be applied at different levels of the system architecture. The levels that have been considered are shown in figure 1.

History

In the early stages of the Content project we recognized that there is no widely accepted set of metrics, environments and workloads for evaluating content distribution systems, which prevents an open and objective discussion of different solutions and approaches. Surveying the work in the field, we found that system developers use different environments and tools to assess their solutions, often not even designed for the specific requirements of content distribution. Furthermore, metrics vary from case to case, making it impossible to compare different approaches.

To address this problem we decided to devote considerable effort to establishing what a content networking evaluation and benchmarking framework should look like. Additional workshops dedicated to benchmarking issues were also organised to focus on the uncovered aspects. The Content project brings together researchers from different areas of content networking, which allows us to explore benchmarking issues from different perspectives. A common theme, however, is solutions based on, extending, or competing with the peer-to-peer paradigm, and this served as the starting point for our discussion. The work was then split: some worked on overlays (especially video on demand, live streaming and delivery); some worked on benchmarking of search mechanisms in P2P networks; and others worked on benchmarking of quality of experience (QoE). By applying the FBM model in such a wide variety of areas we gained confidence that it can be used in many different fields.

Having such a broad perspective, we divided our work into two essential parts. The first was to analyze existing work, collecting various metrics, environments, and workloads. The second was to organize the collected material in a systematic way that could be applied to the systems we studied.

During our initial workshop in Lancaster (May 2007), we decided to follow a layered model to organize sample metrics. Over time, however, we recognized that the mutual interdependencies between metrics made it impossible to keep the layered model, and during the Oslo workshop (January 2008) we switched to the functional model, which can accommodate all the interdependencies and can be applied to all the types of systems we studied.

The reason for moving away from systems benchmarking and assessment towards the Functional Benchmarking Model was the realisation that content distribution systems are so diverse and complex that it is next to impossible to capture them with a standardised systems-based approach. The issues uncovered relate to the diversity of requirements different systems try to address, and the resulting diversity of system aspects and approaches taken. Further, there are various dependencies (e.g. between approaches and addressed aspects), so that eventually every system is unique in the service it provides and how it provides it. By focusing on a specific function (or set of functions) we aim to reduce this complexity. Only some key aspects are examined, and comparison takes place only on the functional set under consideration. This may mean that not all features of a system are evaluated, but it offers a much clearer approach that can also be standardised.

After verifying the FBM framework in Oslo, we decided to expose it to external opinion by organizing an open benchmarking workshop in Dagstuhl.

Details, instantiations and resources

For more details on the benchmarking approach, the resources we have collected (tools, traces, and URLs), a discussion of specific instantiations, and an FAQ, please follow the links below. Instantiations in this context refer to a more detailed specification of the FBM model for specific areas within the content networking domain (i.e. content delivery, quality of experience and search). The different parameters are specified in more detail and examples are used to show how FBM can be applied in these cases.

To provide new input, comments, etc., please send an email to Piotr Srebrny, Gareth Tyson, Andreas Mauthe or Thomas Plagemann.