
INVITED
PAPER
Layering as
Optimization Decomposition:
A Mathematical Theory of
Network Architectures
There are various ways that network functionalities can be allocated to different
layers and to different network elements, some being more desirable than others.
The intellectual goal of the research surveyed by this article is to provide a
theoretical foundation for these architectural decisions in networking.
By Mung Chiang, Member IEEE, Steven H. Low, Senior Member IEEE,
A. Robert Calderbank,
Fellow IEEE, and John C. Doyle
ABSTRACT | Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are also conducted through piecemeal approaches. Network protocol stacks may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of "layering" as "optimization decomposition," where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, leading to a choice of different layering architectures. This paper surveys the current status of horizontal decomposition into distributed computation, and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are summarized, and open issues discussed. Through case studies, it is illustrated how "Layering as Optimization Decomposition" provides a common language to think about modularization in the face of complex, networked interactions, a unifying, top-down approach to design protocol stacks, and a mathematical theory of network architectures.
KEYWORDS | Ad hoc network; channel coding; computer network; congestion control; cross-layer design; distributed algorithm; feedback control; game theory; Internet; Lagrange duality; medium access control (MAC); network utility maximization (NUM); optimization; power control; reverse-engineering; routing; scheduling; stochastic networks; transmission control protocol (TCP)/Internet protocol (IP); wireless communications
I. INTRODUCTION
A. Overview
1) Structures of the Layered Protocol Stack: Network
architecture determines functionality allocation: "who does what" and "how to connect them," rather than just resource
Manuscript received July 24, 2005; revised September 6, 2006. The works at
Princeton University and Caltech that are summarized in this paper were supported
by the National Science Foundation (NSF) Grants ANI-0230967, EIA-0303620,
CNS-0417607, CNS-0435520, CCF-0440443, CCF-0448012,
CNS-0427677, CNS-0430487, CCF-0635034, and CNS-0519880, by the
Air Force Office of Scientific Research (AFOSR) Grants F49620-03-1-0119
and FA9550-06-1-0297, by the ARO Grant DAAD19-02-1-0283, by the Defense
Advanced Research Projects Agency (DARPA) Grant HR0011-06-1-0008 and
CBMANET program, and by the Cisco Grant GH072605.
M. Chiang is with the Electrical Engineering Department, Princeton University,
Princeton, NJ 08544 USA (e-mail: chiangm@princeton.edu).
S. H. Low is with the Computer Science and Electrical Engineering Departments,
California Institute of Technology, Pasadena, CA 91125 USA (e-mail: slow@caltech.edu).
A. R. Calderbank is with the Electrical Engineering and Mathematics Departments,
Princeton University, Princeton, NJ 08544 USA (e-mail: calderbk@math.princeton.edu).
J. C. Doyle is with Control and Dynamical Systems, California Institute of Technology,
Pasadena, CA 91125 USA (e-mail: doyle@cds.caltech.edu).
Digital Object Identifier: 10.1109/JPROC.2006.887322
Vol. 95, No. 1, January 2007 | Proceedings of the IEEE 255
0018-9219/$25.00 © 2007 IEEE

allocation. It is often more influential, harder to change,
and less understood than any specific resource allocation
scheme. Functionality allocations can happen, for example,
between the network management system and network
elements, between end-users and intermediate routers, and
between source control and in-network control such as
routing and physical resource sharing. The study of network
architectures involves the exploration and comparison of
alternatives in functionality allocation. This paper presents
a set of conceptual frameworks and mathematical languages
for a foundation of network architectures.
Architectures have been quantified in fields such as
information theory, control theory, and computation
theory. For example, the source-channel separation principle is a fundamental result on architecture in information theory. Architectural choices are even more complicated in networking. For example, the
functionality of rate allocation among competing users
may be implemented through various combinations of the
following controls: end-to-end congestion control, local
scheduling, per-hop adaptive resource allocation, and
routing based on end-to-end or per-hop actions. However,
we do not yet have a mature theoretical foundation of
network architectures.
Layered architectures form one of the most fundamen-
tal structures of network design. They adopt a modularized
and often distributed approach to network coordination.
Each module, called a layer, controls a subset of the decision
variables, and observes a subset of constant parameters and
the variables from other layers. Each layer in the protocol
stack hides the complexity of the layer below and provides
a service to the layer above. Intuitively, layered architec-
tures enable a scalable, evolvable, and implementable net-
work design, while introducing limitations to efficiency
and fairness and potential risks to manageability of the
network. There is clearly more than one way to "divide and conquer" the network design problem. From a data-plane performance point of view, some layering schemes may be more efficient or fairer than others. Examining these choices of modularized design of networks, we would like to tackle the question of "how to" and "how not to" layer.
While the general principle of layering is widely rec-
ognized as one of the key reasons for the enormous success
of data networks, there is little quantitative understanding
to guide a systematic, rather than an ad hoc, process of designing a layered protocol stack for wired and wireless
networks. One possible perspective to understand layering
is to integrate the various protocol layers into a single
theory, by regarding them as carrying out an asynchronous
distributed computation over the network to implicitly
solve a global optimization problem modeling the network.
Different layers iterate on different subsets of the decision
variables using local information to achieve individual
optimality. Taken together, these local algorithms attempt
to achieve a global objective. Such a design process can be
quantitatively understood through the mathematical lan-
guage of decomposition theory for constrained optimization
[104]. This framework of "Layering as Optimization Decomposition" exposes the interconnections between protocol layers as different ways to modularize and distribute a centralized computation. Even though the design
of a complex system will always be broken down into
simpler modules, this theory will allow us to systematically
carry out this layering process and explicitly trade off design objectives.
The core ideas in "Layering as Optimization Decomposition" are as follows. Different vertical decompositions
of an optimization problem, in the form of a generalized
network utility maximization (NUM), are mapped to dif-
ferent layering schemes in a communication network. Each
decomposed subproblem in a given decomposition cor-
responds to a layer, and certain functions of primal or
Lagrange dual variables (coordinating the subproblems)
correspond to the interfaces among the layers. Horizontal
decompositions can be further carried out within one
functionality module into distributed computation and
control over geographically disparate network elements.
Since different decompositions lead to alternative layering
architectures, we can also tackle the question of "how and how not to layer" by investigating the pros and cons of
decomposition methods. Furthermore, by comparing the
objective function values under various forms of optimal
decompositions and suboptimal decompositions, we can
seek "separation theorems" among layers: conditions
under which layering incurs no loss of optimality. Robust-
ness of these separation theorems can be further char-
acterized by sensitivity analysis in optimization theory:
how much will the differences in the objective value
(between different layering schemes) fluctuate as constant
parameters in the generalized NUM formulation are
perturbed.
There are two intellectually fresh cornerstones behind "Layering as Optimization Decomposition." The first is "network as an optimizer." The idea of viewing protocols as a distributed solution (to some global optimization problem in the form of the basic NUM) has been successfully tested in the trials for transmission control protocol (TCP) [56]. The key innovation from this line of work (e.g., [64], [72], [73], [87], [89], [90], [96], [116], [125], and [161]) is to view the TCP/IP network as an
optimization solver, and each variant of congestion control
protocol as a distributed algorithm solving a specified basic
NUM with a particular utility function. The exact shape of
the utility function can be reverse-engineered from the
given protocol. In the basic NUM, the objective is to
maximize the sum of source utilities as functions of rates,
the constraints are linear flow constraints, and optimiza-
tion variables are source rates. Other recent results also
show how to reverse-engineer border gateway protocols
(BGPs) as a solution to the stable path problem [44], and
contention-based medium access control (MAC) protocols

as a game-theoretic selfish utility maximization [76], [78].
Starting from a given protocol originally designed based on
engineering heuristics, reverse-engineering discovers the
underlying mathematical problems being solved by the
protocols. Forward-engineering based on the insights
obtained from reverse-engineering then systematically
improves the protocols.
The second key concept is "layering as decomposition." As will be discussed in Section I-A2, generalized NUM problems can be formulated to represent a network design problem involving more degrees of freedom than just the source rates. These generalized NUM problems put the end-user utilities in the "driver's seat" for network design.
For example, benefits of innovations in the physical layer,
such as better modulation and coding schemes, are now
characterized by the enhancement to applications rather
than just the drop in bit-error rates (BERs), which the users
do not directly observe. Note that an optimal solution to a
generalized NUM formulation automatically establishes the
benchmark for all layering schemes. The problem itself does
not have any predetermined layering architecture. Indeed,
layering is a human engineering effort.
The overarching question then becomes how to attain
an optimal solution to a generalized NUM in a modular-
ized and distributed way. Vertical decompositions across
functional modules and horizontal decompositions across
geographically disparate network elements can be con-
ducted systematically through the theory of decomposition
for nonlinear optimization. Implicit message passing
(where the messages have physical meanings and may
need to be measured anyway) or explicit message passing
quantifies the information sharing and decision coupling
required for a particular decomposition.
There are many ways to decompose a given problem,
each of which corresponds to a different layering
architecture. Even a different representation of the same NUM problem may lead to a different decomposability structure, though the optimal solution remains the same. These decompositions have different characteristics
in efficiency, robustness, asymmetry of information and
control, and tradeoff between computation and commu-
nication. Some are Bbetter[ than others depending on the
criteria set by network users and operators. A systematic
exploration in the space of alternative decompositions is
possible, where each particular decomposition leads to a
systematically designed protocol stack.
Given the layers, crossing layers is tempting. As ev-
idenced by the large and ever growing number of papers on
cross-layer design over the last few years, we expect that
there will be no shortage of cross-layer ideas based on
piecemeal approaches. The growth of the "knowledge tree" on cross-layer design has been exponential. However, any piecemeal design jointly over multiple layers
does not bring a more structured thinking process than
the ad hoc design of just one layer. What seems to be
lacking is a level ground for fair comparison among the
variety of cross-layer designs, a unified view on how and
how not to layer, and fundamental limits on the impacts
of layer-crossing on network performance and robustness
metrics.
"Layering as Optimization Decomposition" provides a candidate for such a unified framework. It advocates a first-principles way to design protocol stacks. It attempts to shrink the "knowledge tree" on cross-layer design rather than grow it. It is important to note that "Layering as Optimization Decomposition" is not the same as the generic phrase "cross-layer optimization." What
is unique about this framework is that it views the network
as the optimizer itself, puts the end-user application needs
as the optimization objective, establishes the globally
optimal performance benchmark, and offers a common set
of methodologies to design modularized and distributed
solutions that may attain the benchmark.
There have been many recent research activities
along the above lines by research groups around the
world. Many of these activities were inspired by the
seminal work by Kelly et al. in 1998 [64], which initiated
a fresh approach of optimization-based modeling and
decomposition-based solutions to simplify our under-
standing of the complex interactions of network
congestion control. Since then, this approach has been
substantially extended in many ways, and now forms a
promising direction towards a mathematical theory of
network architectures. This paper¹ provides a summary
of the key results, messages, and methodologies in this
area over the last 8 years. Most of the surveyed works
focus on resource allocation functionalities and perfor-
mance metrics. The limitations of such focus will also be
discussed in Section V.
2) NUM: Before presenting an overview of NUM in this
section, we emphasize the primary use of NUM in the
framework of BLayering as Optimization Decomposition[
as a modeling tool, to capture end-user objectives (the
objective function), various types of constraints (the
constraint set), design freedom (the set of optimization
variables), and stochastic dynamics (reflected in the
objective function and constraint set). Understanding
architectures (through decomposition theory), rather
than computing an optimum of a NUM problem, is the
main goal of our study.
The Basic NUM problem is the following formulation [64], known as Monotropic Programming and studied since the 1960s [117]. TCP variants have recently been reverse-engineered to show that they are implicitly solving this problem, where the source rate vector $x \succeq 0$ is the only set of optimization variables, and the routing matrix $R$ and link capacity vector $c$ are both constants:

$$\begin{aligned} \text{maximize} \quad & \sum_s U_s(x_s) \\ \text{subject to} \quad & Rx \preceq c \end{aligned} \tag{1}$$

¹Various abridged versions of this survey have been presented in 2006 at the Conference of Information Science and Systems, IEEE Information Theory Workshop, and IEEE MILCOM. Two other shorter, related tutorials can be found in [85] and [105].
Utility functions U_s are often assumed to be smooth, increasing, concave, and dependent on local rate only, although recent investigations have removed some of these assumptions for applications where they are invalid.
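To make this concrete, here is a minimal dual-decomposition sketch that solves a small instance of (1) with logarithmic utilities (proportional fairness). The three-source, two-link topology, capacities, and step size are illustrative assumptions, not taken from the paper; the price update is the standard subgradient iteration on the Lagrange dual.

```python
import numpy as np

# A toy instance of the basic NUM (1): source 1 traverses both links,
# source 2 uses link 1 only, source 3 uses link 2 only.
R = np.array([[1, 1, 0],     # link 1 carries sources 1 and 2
              [1, 0, 1]])    # link 2 carries sources 1 and 3
c = np.array([1.0, 2.0])     # link capacities
lam = np.ones(2)             # link prices (Lagrange dual variables)
gamma = 0.05                 # subgradient step size

for _ in range(20000):
    q = R.T @ lam            # end-to-end price seen by each source
    x = 1.0 / q              # source's best response: argmax log(x) - q*x
    y = R @ x                # aggregate traffic on each link
    # Price update: raise the price on overloaded links, lower it on
    # underloaded ones; the floor keeps prices positive and rates finite.
    lam = np.maximum(lam + gamma * (y - c), 1e-6)

print(np.round(x, 2))        # converges to roughly (0.42, 0.58, 1.58)
```

Each source needs only its own path price, and each link needs only its own load, which is exactly the distributed structure that this framework attributes to congestion control.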
Many of the papers on "Layering as Optimization Decomposition" are special cases of the following generic problem [18], one of the possible formulations of a Generalized NUM for the entire protocol stack:

$$\begin{aligned} \text{maximize} \quad & \sum_s U_s(x_s, P_{e,s}) + \sum_j V_j(w_j) \\ \text{subject to} \quad & Rx \preceq c(w, P_e), \\ & x \in \mathcal{C}_1(P_e), \quad x \in \mathcal{C}_2(F) \ \text{or} \ x \in \Pi(w), \\ & R \in \mathcal{R}, \quad F \in \mathcal{F}, \quad w \in \mathcal{W} \end{aligned} \tag{2}$$
Here, x_s denotes the rate for source s and w_j denotes the physical layer resource at network element j. The utility functions U_s and V_j may be any nonlinear, monotonic functions. R is the routing matrix, and c are the logical link capacities as functions of both physical layer resources w and the desired decoding error probabilities P_e. For example, the issue of signal interference and power control can be captured in this functional dependency. The rates may also be constrained by the interplay between channel decoding reliability and other hop-by-hop error control mechanisms like Automatic Repeat Request (ARQ). This constraint set is denoted as C_1(P_e). The issue of rate-reliability tradeoff and coding is captured in this constraint. The rates are further constrained by the medium access success probability, represented by the constraint set C_2(F), where F is the contention matrix, or, more generally, the schedulability constraint set Π. The issue of MAC (either random access or scheduling) is captured in this constraint. The sets of possible physical layer resource allocation schemes, of possible scheduling or contention-based medium access schemes, and of single-path or multipath routing schemes are represented by 𝒲, ℱ, and ℛ, respectively. The optimization variables are x, w, P_e, R, and F. Holding some of the variables as constants and specifying some of these functional dependencies and constraint sets will then lead to a special class of this generalized NUM formulation. Utility functions and constraint sets can be even more general than those in problem (2), possibly at the expense of losing specific problem structures that may help with finding distributed solutions.
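As one concrete instance of such specialization (a sketch, not spelled out in the paper): holding the physical-layer, reliability, routing, and contention variables fixed collapses (2) back to the basic NUM.

```latex
% Fix w, P_e, R, and F as constants. Then c(w, P_e) is a constant
% vector c, the terms V_j(w_j) are constants that drop out of the
% argmax, the sets C_1, C_2 become fixed constraints on x, and (2)
% reduces to the basic NUM (1):
\max_{x \succeq 0}\ \sum_s U_s(x_s) \quad \text{s.t.}\quad Rx \preceq c
```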
A deterministic fluid model is used in the above
formulations. Stochastic network dynamics change the
NUM formulation in terms of both the objective function
and the constraint set. As will be discussed in Section V-D,
stochastic NUM is an active research area.
Whether modeled through a basic, general, or stochastic NUM, there are three separate steps in the design process of "Layering as Optimization Decomposition": first formulate a specific NUM problem, then devise a modularized and distributed solution following a particular decomposition, and finally explore the space of alternative decompositions that provide a choice of layered protocol stacks.
The following questions naturally arise: How to pick
utility functions, and how to guarantee quality-of-service
(QoS) to users?
First of all, in reverse-engineering, utility functions are
implicitly determined by the given protocols already, and
are to be discovered rather than designed. In forward-
engineering, utility functions can be picked based on any
combination of the following four considerations:
First, as in the first paper [122] that advocated the
use of utility as a metric in networking, elasticity of
application traffic can be represented through
utility functions.
Second, utility can be defined by human psycho-
logical and behavioral models such as mean opin-
ion score in voice applications.
Third, utility functions provide a metric to define
optimality of resource allocation efficiency.
Fourth, different shapes of utility functions lead to optimal resource allocations that satisfy well-established definitions of fairness (e.g., a maximizer of α-fair utilities parameterized by α ≥ 0, U(x) = (1−α)^{-1} x^{1−α} [96], can be proved to be an α-fair resource allocation).
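The α-fair family in the fourth consideration can be sketched directly; the helper names below are illustrative, and α = 1 is handled as the usual logarithmic limit.

```python
import math

# The alpha-fair utility family: alpha = 0 gives throughput
# maximization, alpha = 1 (taken as the limiting case) gives log(x)
# and proportional fairness, and alpha -> infinity approaches
# max-min fairness.
def alpha_fair_utility(x, alpha):
    if alpha == 1.0:
        return math.log(x)   # limit of ((x**(1-a)) - 1)/(1-a) as a -> 1
    return x ** (1.0 - alpha) / (1.0 - alpha)

# Marginal utility x**(-alpha): the larger alpha is, the more steeply
# marginal utility falls with rate, so optimal allocations penalize
# rate imbalance more strongly -- hence "fairer" allocations.
def marginal_utility(x, alpha):
    return x ** (-alpha)
```

Maximizing the sum of these utilities over the feasible rate region then yields an α-fair allocation, as stated above.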
In general, depending on who is interested in the
outcome of network design, there are two types of
objective functions: sum of utility functions by end users,
which can be functions of rate, reliability, delay, jitter,
power level, etc., and a network-wide cost function by
network operators, which can be functions of congestion
level, energy efficiency, network lifetime, collective es-
timation error, etc. Utility functions can be coupled across
the users, and may not have an additive structure (e.g.,
network lifetime).
Maximizing a weighted sum of all utility functions is
only one of the possible formulations. An alternative is
multiobjective optimization to characterize the Pareto-
optimal tradeoff between the user objective and the
operator objective. Another set of formulations, which is
not covered in this survey, is game-theoretic between users
and operators, or among users or operators themselves.
While utility models lead to objective functions, the
constraint set of a NUM formulation incorporates the
following two types of constraints. First is the collection of

physical, technological, and economic restrictions in the
communication infrastructure. Second is the set of per-
user, hard, inelastic QoS constraints that cannot be vi-
olated at the equilibrium. This is in contrast to the utility
objective functions, which may represent elastic QoS
demands of the users.
Given a generalized NUM formulation, we do not wish
to solve it through centralized computation. Instead, we
would like to modularize the solution method through
decomposition theory. Each decomposed subproblem con-
trols only a subset of variables (possibly a scalar variable),
and observes only a subset of constant parameters and
values of other subproblems’ variables. These correspond,
respectively, to the limited control and observation that
each layer has.
The basic idea of decomposition is to divide the original
large optimization problem into smaller subproblems,
which are then coordinated by a master problem by means
of signaling. Most of the existing decomposition techni-
ques can be classified into primal decomposition and dual
decomposition methods. The former is based on decom-
posing the original primal problem, whereas the latter is
based on decomposing the Lagrange dual of the problem.
Primal decomposition methods have the interpretation
that the master problem directly gives each subproblem an
amount of resources that it can use; the role of the master
problem is then to properly allocate the existing resources. In dual decomposition methods, the master problem sets the price for the resources to each subproblem, which has to decide the amount of resources to be used depending on the price; the role of the master problem is then to obtain the best pricing strategy.
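The contrast can be made concrete on a toy instance; the problem data, step sizes, and iteration counts below are illustrative assumptions. Consider maximizing log(x1) + 2 log(x2) subject to x1 + x2 ≤ 3, whose analytic optimum is x1 = 1, x2 = 2.

```python
c, gamma = 3.0, 0.01

# Dual decomposition: the master prices the shared resource, and each
# subproblem chooses its usage as a best response to the price.
lam = 0.5
for _ in range(20000):
    x1, x2 = 1.0 / lam, 2.0 / lam                 # best responses: argmax U_i(x) - lam*x
    lam = max(lam + gamma * (x1 + x2 - c), 1e-6)  # price moves toward demand = supply

# Primal decomposition: the master hands out resource shares directly,
# shifting resource toward the subproblem reporting higher marginal value.
t = 1.5                                  # share of user 1; user 2 gets c - t
for _ in range(20000):
    g1, g2 = 1.0 / t, 2.0 / (c - t)      # marginal utilities (subgradients)
    t = min(max(t + gamma * (g1 - g2), 1e-3), c - 1e-3)

print(round(x1, 2), round(x2, 2), round(t, 2))   # both routes reach x1 = 1, x2 = 2
```

The two iterations exchange different quantities (prices versus shares and marginals), which is precisely the architectural difference the text describes.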
Most papers in the vast, recent literature on NUM use a
standard dual-decomposition-based distributed algorithm.
Contrary to the apparent impression that such a decomposition is the only possibility, there are in fact many
alternatives to solve a given NUM problem in different but
all distributed manners [104], including multilevel and
partial decompositions. Each of the alternatives provides a
possibly different network architecture with different
engineering implications.
Coupling for generalized NUM can happen not only in constraints, but also in the objective function, where the utility of source s, U_s(x_s, {x_i}_{i ∈ I(s)}), depends on both its local rate x_s and the rates of a set of other sources with indices in the set I(s). If U_s is an increasing function of {x_i}_{i ∈ I(s)}, this coupling models cooperation, for example, in a clustered system; otherwise it models competition, such as power control in wireless networks or spectrum management in digital subscriber lines (DSL). Such coupling in the objective function can be decoupled through "consistency prices" [130].
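Schematically, the decoupling gives each source local copies of the rates it depends on and dualizes the resulting equality constraints; the multipliers are the consistency prices. A sketch in the notation of the paragraph above:

```latex
% Coupled objective: each U_s depends on other sources' rates.
\max_{x}\ \sum_s U_s\bigl(x_s, \{x_i\}_{i \in I(s)}\bigr)
% Introduce local copies x_i^{(s)} at source s, with consistency
% constraints tying each copy to the true rate:
\max_{x,\,\{x^{(s)}\}}\ \sum_s U_s\bigl(x_s, \{x_i^{(s)}\}_{i \in I(s)}\bigr)
\quad \text{s.t.}\quad x_i^{(s)} = x_i,\ \ \forall s,\ i \in I(s)
% Relaxing the equalities with Lagrange multipliers (the consistency
% prices) makes the Lagrangian separable: one subproblem per source,
% coordinated only through the prices.
```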
3) Key Messages and Methodologies: The summary list of
key messages in Table 1 illustrates the conceptual
simplicity in this rigorous and unifying framework, which
is more important than any specific cross-layer design
derived from this framework.
In Table 2, the summary list of main methods developed in many recent publications aims to popularize these analytical techniques so that future research can invoke them readily. Each method will be summarized in a
stand-alone paragraph at the end of the associated
development or explanation.
Sections II and III cover the reverse- and forward-
engineering aspects for both horizontal and vertical de-
compositions, as outlined in Table 3.
After presenting the main points of horizontal and
vertical decompositions, we turn to a more general dis-
cussion on decomposition methods in Section IV.
At this point, curious readers may start to raise
questions, for example, on the issues involving stochastic
network dynamics, the difficulties associated with non-
convex optimization formulations, the coverage of accu-
rate models, the comparison metrics for decomposition
alternatives, the engineering implications of asymptotic
convergence, and the justification of performance optimi-
zation in the first place. Some of these questions have
recently been answered, while others remain under-
explored. Indeed, there are many challenging open prob-
lems and interesting new directions in this emerging
research area, and they will be outlined in Section V.
In concluding this opening section, we highlight that,
more than just an ensemble of specific cross-layer designs
for existing protocol stacks, "Layering as Optimization Decomposition" is a mentality that views networks as
Table 1 Summary of 10 Key Messages

The signal-to-interference ratio forlink l is defined as SIRlðPÞ ¼ PlGll=ð Pk 6¼l PkGlk þ nlÞ for a given set of path losses Glk (from the transmitter onVol. 95, No. 1, January 2007 | Proceedings of the IEEE 283logical link k to the receiver on logical link l) and a given set of noises nl (for the receiver on logical link l).