DNM
The deterministic network modeling (DNM) module implements different network models for access control and delay guarantees, to be used by the routing module of the ECES framework. This allows the routing module to be used to find paths with strict delay guarantees in communication networks.
The logic of the models and their implementation rely on concepts from deterministic network calculus. See our technical report on the topic and the main reference defining and describing network calculus concepts.
This repository corresponds to the reference implementation for the Chameleon and DetServ models described in:
- Amaury Van Bemten, Nemanja Ðerić, Amir Varasteh, Stefan Schmid, Carmen Mas-Machuca, Andreas Blenk, and Wolfgang Kellerer. "Chameleon: Predictable Latency and High Utilization with Queue-Aware and Adaptive Source Routing." ACM CoNEXT, 2020, and
- Jochen W. Guck, Amaury Van Bemten, and Wolfgang Kellerer. "DetServ: Network models for real-time QoS provisioning in SDN-based industrial environments." IEEE TNSM, 2017,
and also implements the state-of-the-art QJump and Silo models described in:
- Matthew P. Grosvenor, Malte Schwarzkopf, Ionel Gog, Robert NM Watson, Andrew W. Moore, Steven Hand, and Jon Crowcroft. "Queues Don’t Matter When You Can JUMP Them!." USENIX NSDI, 2015, and
- Keon Jang, Justine Sherry, Hitesh Ballani, and Toby Moncaster. "Silo: Predictable message latency in the cloud." ACM SIGCOMM, 2015.
This mostly comes in the form of different Proxy subclasses (see the routing module for a description of what a proxy is) which implement different access control strategies.
The proxies require the existence of a NCRequestData instance attached to the same entity as the Request object.
Usage
The project can be downloaded from Maven Central using:
<dependency>
  <groupId>de.tum.ei.lkn.eces</groupId>
  <artifactId>dnm</artifactId>
  <version>X.Y.Z</version>
</dependency>
Implemented Models
Two proxies are currently implemented.
QJump Proxy
The QJumpProxy implements the access control of the QJump system.
The configuration of QJump (number of hosts, link rate, packet size and cumulative processing time) is assumed to be stored in a QJumpConfig object attached to the same entity as the Graph object on which routing is to be performed.
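To illustrate what this configuration determines, the following is a hedged sketch of the QJump latency bound from the NSDI paper: with n hosts, a maximum packet size P, a link rate R, and a cumulative processing time eps, a packet is delivered within 2*n*P/R + eps. The class and method names below are illustrative and are not the actual QJumpConfig API.

```java
// Hypothetical sketch of the QJump end-to-end latency bound; not the
// QJumpConfig API of this library.
public final class QJumpBoundSketch {
    // numHosts: n, maxPacketBytes: P, linkRateBytesPerSec: R, processingSec: eps
    public static double latencyBound(int numHosts, int maxPacketBytes,
                                      double linkRateBytesPerSec, double processingSec) {
        // Worst case: every host sends one maximum-size packet that must be
        // serialized before ours, on both the uplink and the downlink.
        return 2.0 * numHosts * maxPacketBytes / linkRateBytesPerSec + processingSec;
    }
}
```

For instance, with 144 hosts, 1530-byte packets, 10 Gb/s links and 5 us of cumulative processing time, the bound is a few hundred microseconds.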
DetServ Proxy
The DetServProxy implements the access control of the Chameleon and DetServ models.
It is based on a configuration assumed to be stored as an instance of the DetServConfig object attached to the same entity as the Graph object on which routing is to be performed.
The configuration object consists of the following elements:
- Access control model: this is either the multi-hop model (MHM) or the threshold-based model (TBM) - see the DetServ paper. Chameleon corresponds to the TBM. In a nutshell, the multi-hop model assigns a maximum burst and rate to each queue while the threshold-based model assigns a maximum delay to each queue.
- Cost model: cost function for a given queue. This can be defined using the classes deriving from CostModel. For example,
CostModel costFunction = new LowerLimit(
    new UpperLimit(
        new Division(new Constant(), new Summation(new Constant(), new QueuePriority())),
        1),
    0);
defines a cost function of 1/(1+p) bounded between 0 and 1 and where p is the priority of the queue.
- Burst increase model: along its path, a flow sees its burst increase. There are different ways of taking this into account: neglecting it, taking the worst-case delay (the request deadline) as the worst-case burst increase, or taking the real burst increase (routing then becomes sub-optimal; see our ICC paper about that). See the DetServ paper for more information on this.
- Input link shaping (ILS): whether or not ILS is used. See the DetServ paper for more information on this. This is a modeling change that makes the analysis less conservative; however, it increases runtime and makes the routing problem an M1 problem (see our ICC paper about that).
- Residual mode: under the assumption of sub-additive arrival curves and super-additive service curves, there are different ways of computing the residual rate-latency service curve from an arrival curve and a service curve. Since both curves can have several knee points, the residual service curve can also have multiple knee points; because the arrival curve (resp. service curve) is assumed to be sub-additive (resp. super-additive), the residual service curve is super-additive. There are then different ways of transforming this super-additive service curve into a rate-latency curve, depending on which slope of the curve is used for the rate-latency one: the highest slope, the least slope (and then also the least latency), or the real curve (which is then not a rate-latency curve). Note that, for networks with uniform link rates, this choice has no influence.
- Maximum packet size: max packet size in the network (this defaults to 1530 bytes).
- Resource allocation: the MHM and TBM need resources (either rate/burst or delay) to be allocated to each queue in the network. A SelectResourceAllocation object defines a given resource allocation algorithm (subclass of ResourceAllocation) per scheduler, i.e., per physical unidirectional link.
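To make the residual-mode item above concrete, here is a hedged sketch (not part of the DNM API) of the textbook network-calculus computation in the simplest case: for a rate-latency service curve beta(t) = R * max(0, t - T) and a token-bucket arrival curve alpha(t) = b + r*t with r < R, the residual service curve [beta - alpha]^+ is again rate-latency, with rate R - r and latency (b + R*T) / (R - r). In this single-knee-point case all residual modes coincide; they differ only when the curves have several knee points.

```java
// Illustrative network-calculus helper; class and method names are
// hypothetical and do not belong to this library.
public final class RateLatencySketch {
    public final double rate;    // R, e.g. in bytes/s
    public final double latency; // T, in seconds

    public RateLatencySketch(double rate, double latency) {
        this.rate = rate;
        this.latency = latency;
    }

    // Residual service curve left over after serving a token-bucket flow (r, b).
    public RateLatencySketch residual(double r, double b) {
        if (r >= rate)
            throw new IllegalArgumentException("arrival rate exceeds service rate");
        return new RateLatencySketch(rate - r, (b + rate * latency) / (rate - r));
    }

    // Worst-case delay bound for a token-bucket flow with burst b served by this curve.
    public double delayBound(double b) {
        return latency + b / rate;
    }
}
```

A flow admitted to a queue thus shrinks the residual curve seen by later flows, which is exactly the state the access control has to keep track of.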
The proxy also implements the Silo model. Indeed, Silo is a particular instance of the TBM with real burst increase computation, no input link shaping, a shortest-path cost function, and the TBMSiloDefaultAllocation default resource allocation for each scheduler.
Components used by the DetServ Proxy
For its implementation, the DetServ proxy attaches, to each queue, a service model (QueueModel) and an input model (ResourceUtilization). The former models the service offered by a queue (simply, its service curve) and the latter the traffic entering the queue (simply, its arrival curve).
For the MHM, the QueueModel is extended with the maximum token bucket that can be accepted at this queue (MHMQueueModel).
Both MHM and TBM use a simple TokenBucketUtilization component to model the arrival curve at a given queue. When ILS is enabled, both models then use a PerInEdgeTokenBucketUtilization component, which simply keeps track of the token-bucket arrival curves per incoming edge (if a flow starts at the given edge, this edge itself is used as the "incoming edge" label).
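The bookkeeping such a utilization component has to do is simple, because token buckets aggregate additively: when flows (r1, b1) and (r2, b2) share a queue, the aggregate arrival curve is again a token bucket with rate r1 + r2 and burst b1 + b2. The following is a hedged sketch of this idea; class and method names are illustrative and not the actual TokenBucketUtilization API.

```java
// Hypothetical sketch of a token-bucket utilization tracker; not the
// TokenBucketUtilization component of this library.
public final class TokenBucketSketch {
    private double rate;  // aggregate token rate (e.g. bytes/s)
    private double burst; // aggregate bucket depth (e.g. bytes)

    public void addFlow(double r, double b)    { rate += r; burst += b; }
    public void removeFlow(double r, double b) { rate -= r; burst -= b; }

    // alpha(t) = burst + rate * t bounds the traffic entering the queue
    // in any time window of length t.
    public double arrivalBound(double t) { return burst + rate * t; }
}
```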
DNM System
The DNM system is used by some specific model configurations to automate some actions and automatically update state information. For example, it automatically allocates resources when a new scheduler is created, and it automatically updates the service curves when a new flow is added.
Examples
The Silo model can be configured in the following way:
DetServConfig modelConfig = new DetServConfig(
    ACModel.ThresholdBasedModel,
    ResidualMode.LEAST_LATENCY,
    BurstIncreaseModel.NO,
    false,
    new Constant(),
    (controller, scheduler) -> new TBMSiloDefaultAllocation(controller));
That config (or any other) must then be attached to the subject graph and initialized with the used controller:
modelingConfigMapper.attachComponent(myNetwork.getQueueGraph(), modelConfig);
modelConfig.initCostModel(controller);
The routing algorithms in use must then be configured with a single proxy instance:
DetServProxy proxy = new DetServProxy(controller);
algorithm1.setProxy(proxy);
algorithm2.setProxy(proxy);
...
algorithmN.setProxy(proxy);
and then a traditional routing request with the additional NCRequestData object will trigger a routing + admission control + registration run:
Entity entity = controller.createEntity();
try (MapperSpace mapperSpace = controller.startMapperSpace()) {
    requestMapper.attachComponent(entity, new UnicastRequest(h1.getQueueNode(), h3.getQueueNode()));
    ncRequestDataMapper.attachComponent(entity, new NCRequestData(
        CurvePwAffine.getFactory().createTokenBucket(flowRate, flowBurst),
        Num.getFactory().create(deadline)));
    selectedRoutingAlgorithmMapper.attachComponent(entity, new SelectedRoutingAlgorithm(aStarPruneAlgorithm));
}
See tests for other simple examples.
See other ECES repositories using this library (e.g., the tenant manager) for more detailed/advanced examples.