Network calculus is a theory designed to compute upper bounds on flow delays and server backlogs in networks. Before presenting how such bounds are computed, this chapter describes the modeling process by which a network calculus model is built as an abstraction of a real system. This includes modeling the data that circulate in a network (flows) and modeling the processing of these data (servers). The modeling process also justifies the hypotheses made in this book.
1.1. Modeling principles
Formal methods, as described in Figure 1.1, are used to establish properties of real systems (such as the absence of message loss): if a model is built from a system Σ, then any property P of the system must be translated into a formal property Φ of the model. Such a property can, for example, be that there is no buffer overflow.
In our context, we want to ensure that if the model states that Φ is satisfied, then P is also satisfied. We therefore want the modeling to be conservative: it must never happen that a good property (i.e. the system satisfies all of the desired requirements) holds in the model but not in the real system. Otherwise, the system's behavior could not be guaranteed.
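To make the notion of conservative modeling concrete, here is a minimal sketch in Python, with entirely hypothetical numbers: a real system is represented by a cumulative arrival trace at a server of constant rate, and the model over-approximates the arrivals by a token-bucket curve. Because the model dominates every behavior of the trace, a backlog bound proved on the model (Φ) also bounds the actual backlog (P). The trace, the rate `c`, and the parameters `b`, `r` are assumptions for illustration only, not part of the theory presented so far.

```python
# Hypothetical illustration of conservative modeling.
# Real system: cumulative arrivals A(0..9) at a server with constant rate c.
c = 2.0                                       # assumed server rate
arrivals = [0, 3, 3, 4, 6, 6, 7, 9, 9, 10]    # assumed cumulative arrival trace

# Model: token-bucket curve alpha(d) = b + r*d that upper-bounds every
# window of the trace, i.e. A(t) - A(s) <= alpha(t - s) for all s <= t.
b, r = 3.0, 1.0
assert all(arrivals[t] - arrivals[s] <= b + r * (t - s)
           for s in range(len(arrivals)) for t in range(s, len(arrivals)))

# Actual backlog of the real system: max over s <= t of A(t) - A(s) - c*(t - s).
actual_backlog = max(arrivals[t] - arrivals[s] - c * (t - s)
                     for s in range(len(arrivals)) for t in range(s, len(arrivals)))

# Backlog bound computed on the model: sup_{d >= 0} alpha(d) - c*d = b when r <= c.
model_bound = b

# Conservatism: whatever holds in the model ("backlog never exceeds model_bound")
# also holds in the real system, so a buffer of size model_bound cannot overflow.
assert actual_backlog <= model_bound
print(actual_backlog, model_bound)
```

The price of conservatism is visible here: the model's bound (3.0) is larger than the backlog actually reached by this particular trace (1.0), but the guarantee it provides covers every trace compatible with the model.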
Hence, we ...