SELF-ORGANIZED AND COOPERATIVE AD HOC NETWORKING

13.3 COOPERATION MODELING

13.3.1 From Closed Ad Hoc Networks to Spontaneous Networking
Ad hoc networks were first developed for military use. The current trend is to move from
these ad hoc networks, managed by some authority, toward more spontaneous networking
[18]. This progression from closed small networks managed by a single authority (e.g., a
military unit or a civil protection team) to open networks of reasonable size that spontaneously
form when two or more people want to exchange data for some reason will
change many rules. In fact, whereas in closed scenarios cooperation is not an issue, since
all the involved devices have the same purpose and belong to the same "community," in
future ad hoc networks it is fundamental to understand whether a user, who in principle
just wants to receive a service, should also work to provide it. Or, even better: if a user enters an
already working ad hoc network, why should he offer help if it is not necessary? Moreover,
cooperation does not come for free: processing and forwarding someone else's traffic
can be extremely costly in terms of spent energy, probably the most precious resource
in such a system. A node can for this reason be tempted to be selfish, that is, to be part
of the network without helping other nodes (Chapter 12).
Recent works [21, 24] have shown the severe impact that selfish nodes can have on
network performance: as the percentage of selfish nodes increases, the network through-
put decreases almost linearly, causing serious problems.
Moreover, the traditional tools used to model ad hoc networks seem ill suited to modeling
a high-level property like cooperation, and it is difficult to study how collaborative
behavior correlates with spent energy and received service. For this reason, it is quite
natural to look to disciplines like economics and the social sciences for models
and analytical tools to borrow. Here we present the theoretical aspects of this problem;
the systems and protocol aspects are presented in Chapter 12.

13.3.2 Game Theoretical Model(s)
During the last fifty years, researchers have been able to describe human society in great
detail at many levels, from the individual point of view, to economic entities, up to national
interactions. At every level, they have been able to predict behaviors and to choose optimal
strategies at critical moments, or have failed for lack of information or because of wrong
models, sometimes with consequences as severe as war. Such systems are composed of a
huge number of complex entities, each concerned with its personal status, and each with
limitations and unpredictable behavior. A very useful tool that has been extensively used,
developed, and refined during this time is game theory. If the reader is not familiar with
basic game-theoretical concepts, the simpler notions are presented in Section 13.5, but we
will try to keep the remainder of the section not too technical and understandable without
any knowledge of the subject. Paragraphs entitled "Formal Explanation" can be ignored if
the reader is not interested in technical details.

Basic Model. In this field, research has mainly concentrated on finding mechanisms
or protocols for stimulating cooperation and on studying the behavior of the proposed
mechanisms. Models based on game theory have not been explored much in the ad hoc
networks literature. There exist some works [6, 9, 22, 30] introducing strategies for cooperation
in ad hoc networks that are implicitly based on a game-theoretic model. However,
these models are usually not formally stated, and for this reason the current trend is to
look for a general, unifying model [30, 31]. The only background common to all the
proposed models is that, at a suitable level of abstraction, the nodes composing an ad hoc
network can be seen as interacting entities that can request or offer a service. A single
node can be, and usually is, both a user and a provider. For the sake of clarity, we will
consider just the packet-forwarding functionality. It is possible, at least in principle, to
extend all the considerations we will make to a large class of other services that need
global cooperation, like routing, even if, in practice, it can be difficult to deal with all the
services at the same level. Time is discrete (divided into slots), and in every round one or
more nodes ask some other node to forward messages for them. A potential provider can
accept the task or ignore it. In the latter case, ignored packets are lost, and the sources
have to fix the problem in some way. Given the infrastructureless nature of an ad hoc
network, nothing can work if no provider accepts relay requests. On the other hand, each
node is seriously concerned with energy consumption and would prefer that someone
else carry out the task, since there is no explicit payment for it.

Formal Explanation. This situation is reminiscent of the prisoner's dilemma: in a single
round, with no prospect of future play, nodes have to decide whether to accept or reject
any forwarding request from their neighbors. If they all accept, then the system payoff is
maximized; but from a personal point of view, it would be better not to accept and let
some other node forward the packet. On the other hand, a node that accepts forwarding
requests but sees its own forwarding request rejected is in deep trouble, whereas
if no node accepts, the payoff is very low both from a system point of view and
from a personal one, but it is always better than being the only node working. The payoff
matrix for a two-node network is the same as that presented in Section 13.5, with moves
labeled as Acc (accept to forward) and Rej (reject forward requests) and with a > b > c >
d:

          Acc      Rej
  Acc     b, b     d, a
  Rej     a, d     c, c
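The claim that mutual rejection is the only stable outcome of the one-shot game can be checked mechanically. The sketch below uses hypothetical numeric payoffs satisfying a > b > c > d and tests, for each move profile, whether either player could gain by deviating alone:

```python
# Hypothetical payoff values satisfying a > b > c > d, as in the matrix above.
a, b, c, d = 4, 3, 2, 1

# payoff[(row move, column move)] -> (row payoff, column payoff)
payoff = {
    ("Acc", "Acc"): (b, b), ("Acc", "Rej"): (d, a),
    ("Rej", "Acc"): (a, d), ("Rej", "Rej"): (c, c),
}
moves = ("Acc", "Rej")

def is_nash(m1, m2):
    """A profile is a Nash equilibrium if neither player gains by a unilateral deviation."""
    u1, u2 = payoff[(m1, m2)]
    row_ok = all(payoff[(alt, m2)][0] <= u1 for alt in moves)
    col_ok = all(payoff[(m1, alt)][1] <= u2 for alt in moves)
    return row_ok and col_ok

equilibria = [(m1, m2) for m1 in moves for m2 in moves if is_nash(m1, m2)]
print(equilibria)  # [('Rej', 'Rej')] -- mutual rejection is the only equilibrium
```

With any values respecting a > b > c > d the same single equilibrium (Rej, Rej) comes out, which is exactly the nonworking network discussed below.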

Packet forwarding in a network can be seen as an infinite repetition of such a game,
with forwarding requests made at every round and decisions taken at the beginning in the
form of a strategy (e.g., always accept, always reject, accept forwarding requests only in
even rounds, do to others what they are doing to you, do to others what you expect they
are going to do to you, and so on).

System Equilibria. Once a model is precisely given in terms of a game, the equilibria of
the system can be studied. These represent the set of stable network states at which nodes
will arrive, driven by their personal interests (which usually do not match a global
optimum), and which they cannot leave by unilaterally changing their behavior.
However, if in a repeated game a set of nodes agrees on some desired, specific
equilibrium (since, generally, there are many of them), it is possible to force all the nodes
to behave in that way by punishing the ones that deviate, thus making other strategies less
productive than the chosen one.

Formal Explanation. As has been well studied for the prisoner's dilemma, the unique Nash
equilibrium is the noncollaborative action, which in our case would imply a nonworking
network. However, since we are in the presence of repeated plays, if a certain percentage of
nodes agrees on a collaborative behavior, then it is possible to choose a strategy that
makes collaboration advantageous for every node. It is enough to "punish" nodes
that decide not to cooperate by excluding them from the network for a sufficient time,
that is, for a number of rounds large enough to make the payoff obtained after a defection
smaller than the losses due to network exclusion. If the future is important enough, then
well-known strategies like TIT FOR TAT are optimal in this sense [2].
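To see why a long enough punishment deters defection, consider this small numerical sketch; the payoff values, discount factor, and punishment length are our own illustrative choices, not taken from [2]:

```python
# Hypothetical values: a (temptation), b (mutual cooperation), c (punishment),
# with a > b > c, as in the payoff matrix of the basic model.
a, b, c = 4, 3, 2
delta = 0.9   # discount factor: how much a node values future rounds
k = 3         # punishment length, in rounds of network exclusion

def cooperate(rounds):
    """Discounted payoff of cooperating in every round."""
    return sum(b * delta**t for t in range(rounds))

def defect_once(rounds):
    """Defect at t=0 (gain a), suffer k rounds of punishment (payoff c), then cooperate."""
    total = a
    for t in range(1, rounds):
        total += (c if t <= k else b) * delta**t
    return total

# In a single round, defection is tempting...
print(defect_once(1) > cooperate(1))   # True
# ...but over a long horizon with punishment, cooperation pays more.
print(cooperate(50) > defect_once(50))  # True
```

If delta were small (the future barely matters), the comparison would flip, matching the caveat above that the future must be important enough.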
In addition to these simple considerations, many approaches have been proposed in recent
times, arriving at different solutions that stress different aspects and that are, in many
ways, incomparable. We present some of these approaches here, trying to understand what
they add to the basic model and what their possible weak points are.

A First Model: Rewarding Selfishness. In [21], which is the starting point for many
other works, Marti et al. are not concerned with cooperation itself, but with the offered
throughput of the network, which is, of course, affected by selfishness. They propose to
equip every node with a watchdog, a unit that listens to all the communications that arrive
at the node itself. In many cases, if the nodes are near enough, it is possible to overhear
service request messages and to understand whether the requested node does its task or not.
If some node is not cooperating, it will not be asked to do anything, but it will still be able
to use network services. The solution increases the performance of the network in terms
of the percentage of delivered packets, since fewer paths containing bad nodes are used, but it
also encourages selfishness, as long as a sufficient number of nodes ensure network
connectivity. Moreover, although the watchdog unit is a nice theoretical departure from the
local knowledge of network operations,1 it has been used under very strong hypotheses,
like an equal communication range for all the nodes, which is very unlikely to hold in
practice.


Formal Explanation. The watchdog unit is used to directly observe the moves played by
neighbor nodes (other players do not affect the payoff of a node). During a given time slot

1
One of the hypotheses in game theory is that moves are simultaneous and observable by other players, which,
of course, does not happen in ad hoc networks.


tk, node i observes a certain fraction of the packets that its neighbor node j has been asked
to forward [Rji(tk)] and the packets that it effectively forwards [Fji(tk)]. If Fji(tk) < Rji(tk), then,
assuming a perfectly working watchdog, node j is probably2 not forwarding traffic, and node
i can record this information, which is passed to a path-rater component. Every time i is
asked to find a path to some node z, it will not use routes containing nodes marked as
"bad" by its path-rater unit, assuring a higher probability that packets sent on the demanded
route will be delivered. Coming back to the model, Marti et al. propose this strategy: "Always
cooperate, and if someone is not doing the same, do not use him, since he is unreliable,"
which is, of course, very far from any equilibrium (since one who deviates is rewarded
instead of being punished). In terms of performance, the solution works, because
throughput tends to increase.
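The watchdog/path-rater interaction described above can be sketched as follows; the threshold, class names, and node labels are ours, not taken from [21]:

```python
# Hypothetical threshold: fraction of requested packets a neighbor must be
# observed forwarding in order to stay "good."
THRESHOLD = 0.8

class Watchdog:
    def __init__(self):
        self.requested = {}  # Rji: packets neighbor j was asked to forward
        self.forwarded = {}  # Fji: packets node i overheard j actually forwarding

    def observe(self, j, requested, forwarded):
        self.requested[j] = self.requested.get(j, 0) + requested
        self.forwarded[j] = self.forwarded.get(j, 0) + forwarded

    def bad_nodes(self):
        """Neighbors with Fji below THRESHOLD * Rji are marked bad."""
        return {j for j in self.requested
                if self.forwarded.get(j, 0) < THRESHOLD * self.requested[j]}

def rate_paths(paths, bad):
    """Path rater: keep only routes avoiding every misbehaving node."""
    return [p for p in paths if not bad.intersection(p)]

wd = Watchdog()
wd.observe("j1", requested=10, forwarded=10)  # cooperative neighbor
wd.observe("j2", requested=10, forwarded=2)   # selfish neighbor
good_paths = rate_paths([["i", "j1", "z"], ["i", "j2", "z"]], wd.bad_nodes())
print(good_paths)  # [['i', 'j1', 'z']]
```

Note how nothing here punishes j2: it is simply routed around, so it keeps its service while saving energy, which is the rewarded-selfishness problem discussed above.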

CONFIDANT: An Evolutionary Model. Buchegger and Le Boudec [6, 7] start from
the possibility of exploiting the previous solution, adopting an evolutionary approach:
they see nodes as an interacting population, and look for a strategy that yields more benefit
than any other strategy a newcomer node can adopt [12]. If all the nodes in a network
adopt such a mechanism (as if it were coded in their genes), then a node using a different
strategy (a mutation, arriving from another population in which evolution led to
other solutions) could not "survive," that is, receive more service than the preexisting
nodes, and would have to change its strategy (adopting the official one) or die, being
excluded from network use. In [2] it is shown that a strategy can be evolutionarily
successful only if it is adaptive. For this reason, nodes have to be equipped with a watchdog
unit, to observe the behavior of their neighbors and adapt to it. Moreover, they are also
organized in a friendship network (see Section 13.2). When a node discovers that one of its
neighbors is not cooperating, it starts warning all its friends, which mark the misbehaving
node as bad. At this point, the selfish node is cut off from network services, because bad
nodes are not served. Warning messages are surely the main weak point of the first proposed
solution [6]: they add overhead to the network, and they make it possible to spread fake
information, leaving an open door to denial-of-service attacks. For this reason, in [7] the
authors extended the protocol so that messages about unknown nodes are trusted only when
they arrive close together in time and from a large number of friends.
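The filtering rule added in [7] can be sketched roughly as follows; the minimum number of friends and the time window are hypothetical parameters of our own choosing:

```python
# Hypothetical parameters: how many distinct friends must report a suspect,
# and how recently, before a second-hand warning is believed.
MIN_FRIENDS, WINDOW = 3, 10.0

def believe_alerts(alerts, now):
    """alerts: list of (friend_id, suspect_id, timestamp) warning messages.
    Returns the suspects corroborated by enough recent, distinct friends."""
    recent = [(f, s) for f, s, t in alerts if now - t <= WINDOW]
    by_suspect = {}
    for f, s in recent:
        by_suspect.setdefault(s, set()).add(f)
    return {s for s, friends in by_suspect.items() if len(friends) >= MIN_FRIENDS}

alerts = [("f1", "x", 1.0), ("f2", "x", 2.0), ("f3", "x", 3.0), ("f1", "y", 2.0)]
print(believe_alerts(alerts, now=5.0))  # {'x'}: three friends agree; 'y' has only one
```

Requiring corroboration from several friends raises the bar for an attacker trying to frame a well-behaved node with fake warnings, at the cost of reacting more slowly to real misbehavior.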

Formal Explanation. The idea of this work is to find an evolutionary equilibrium of the
repeated game. Such a strategy would guarantee stability over time and the failure of any
other strategy, when adopted by small enough clusters of newcomer players. Again, every
node i can observe Rij(tk) and Fij(tk), and, when Fij(tk) < Rij(tk), punish node j by not serving
its requests. Moreover, to emulate instantaneous knowledge of a played move by
all the players, alert messages are sent to some trusted friends, which in turn will stop helping
node j. The strategy is an approximation of TIT FOR TAT: start by helping, and
treat nodes the same way they treat you. This strategy undoubtedly has good properties
(first of all, it is extremely simple), but it permits noncollaborative nodes, after one turn of
punishment, to be reintegrated into the network if they start helping. In the cited works,
however, there is no formal analysis of the model used.

CORE: A Reputation-Based Model. The solution proposed in [22, 23] by Michiardi
and Molva overcomes the problem of alert messages, and makes it explicitly possible to

2
It is not possible to deduce sure facts by pure observation, as the authors point out with some examples.


end a punishment when a node starts behaving well. For this reason, every node has local
knowledge of the reputation of other nodes, which it can modify in just two cases:

1. With local observation (again, using a watchdog unit), it can increase or decrease
other nodes' reputation, depending on how they are behaving. In this way, a previously
noncollaborative node can start helping and be reintegrated into the network,
even if very slowly.
2. With indirect deductions, which can only be positive. After the execution of every
cooperative functionality, a reply message must be produced containing the
names of all the nodes that participated (and that can therefore be considered good).
Nothing can be deduced about a node that is not in a reply message.
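A rough sketch of this two-channel reputation bookkeeping, with step weights that are purely our own assumptions, could look like this:

```python
# Hypothetical weights: direct watchdog evidence moves reputation both ways;
# indirect reports (from messages listing cooperating nodes) only raise it.
DIRECT_STEP, INDIRECT_STEP = 1.0, 0.5

class Reputation:
    def __init__(self):
        self.score = {}

    def direct(self, node, cooperated):
        """Local watchdog observation: positive or negative adjustment."""
        delta = DIRECT_STEP if cooperated else -DIRECT_STEP
        self.score[node] = self.score.get(node, 0.0) + delta

    def indirect(self, participants):
        """Indirect deduction: nodes listed as participants can only gain."""
        for node in participants:
            self.score[node] = self.score.get(node, 0.0) + INDIRECT_STEP

rep = Reputation()
rep.direct("j", cooperated=False)  # watchdog caught j dropping a packet
rep.indirect(["j", "k"])           # later, j and k appear as participants
print(rep.score)  # {'j': -0.5, 'k': 0.5}: j recovers, but only slowly
```

Because the indirect channel is positive-only, a slandering attack cannot lower anyone's reputation, which is how this scheme avoids the alert-message weakness of the previous model.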

The authors analyze their solution in game-theoretic terms, proving that if only half of the
nodes adopt it, the remaining nodes have to collaborate in order to use the network. In
principle, the model is made heavy by the presence of indirect reputation. It is not essential
to collect information about nodes outside the neighborhood, although from a practical
point of view, in the presence of mobility, it can be useful to have some data, positive or
uncertain, about a newcomer node. Moreover, the computation of indirect reputation can
in some cases be quite heavy in computational terms. For example, the authors claim
that when a route is established with the DSR protocol, it is possible, to a certain extent, to
infer that all the nodes in the path cooperated to build it.3 This means that at every route
establishment, the returned message (a list of nodes) has to be analyzed. However, this is a
theoretical weak point which, in practice, can be disregarded.

Formal Explanation. The analyzed papers were written at an embryonic stage of the
model, so many of the observations we make at this point may have been addressed by the
time of this writing. The base model is always the same game, but the authors do not try to
obtain instantaneous global information about the moves played: nodes know everything
about their neighbors, and something about the past moves of distant nodes, but only if
those nodes played collaboratively. For this reason, if a player is not cooperating, it is
punished by all its neighbors, which block traffic from and to it; this is enough to make
selfishness unattractive. Although the model is not precisely stated, the authors use ERC
theory [5] to model a multiplayer prisoner's dilemma with a fair share of resources. ERC
explains well the weird behaviors exhibited by humans when playing games in research
experiments: the observed tendency to depart from Nash equilibria is explained by
satisfaction deriving not only from the monetary payoff, but also from its distance from the
other players' payoffs. Ad hoc nodes, however, are probably selfish in an absolute way,
and it is difficult to compare them with humans, so a classical game-theoretic analysis
would have been more appropriate.

3
This is true if the routing protocol is robust against changes of established routes, that is, if it is not possible for
a node to insert itself into a route in which it is not present.

Nuglets: A Market Model. A radically different solution was proposed in [8, 9] by
Buttyan and Hubaux, who model the network as a market in which services are exchanged.
A virtual economy, based on a virtual currency called a nuglet (or bean), is then introduced,
forcing nodes to pay to have their packets forwarded and rewarding them when they
forward data for others. In this way, a selfish node soon uses up its nuglets and is
forced to cooperate in order to send further packets. For the first time, energy considerations
are made explicit, pointing out that the first goal of nodes is to survive, and finding the
optimal number of messages to send and forward, given the battery power of every node.
It is clear that nuglets should be managed in a tamper-proof part of the node, ensuring that
a malicious user cannot modify his device in order to steal, forge, or throw away nuglets.
Some of these problems were solved in [9] with the introduction of counters in place of
nuglets, still managed in a separate hardware module.
The proposed solution was further enriched by Zhong et al. in [34], who propose a
scheme in which no tamper-proof hardware is needed, but a centralized server acting as a
bank must exist, if not within the ad hoc network, then at least in another network
reachable by all the nodes.

Formal Explanation. This is one of the very first solutions proposed in the literature
and, interestingly enough, it is not watchdog-based; the watchdog is the main weak point of
all the other analyzed mechanisms. Every node has an initial account of C nuglets and a
battery capacity of B. Packets are generated at a constant average rate fo, and every node
receives packets to forward at rate fr. It is then possible to compute the number of packets
to send and forward in order to maximize throughput while not running out of battery
power. This operating point is clearly a fair optimum, since all the nodes are forced to
spend the same amount of energy on forwarding and on sending their own data, but it has
to be enforced in hardware with a tamper-proof module, since no node can observe what
its neighbors are doing. Moreover, if not all the nodes are equipped with such a module,
selfish nodes would have an extremely easy life, surrounded by self-limiting neighbors that
do not punish them. Four different dynamic strategies for managing nuglets are analyzed,
showing that the most generous one (forward all the packets until you reach your limit) is
also the best-performing one. Again, no formal analysis of the underlying model has been
done.
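A toy nuglet budget illustrates the mechanism; the hop count, energy cost, and initial endowments below are all invented for the example, not taken from [8, 9]:

```python
# Hypothetical economy: sending one own packet over HOPS hops costs HOPS nuglets
# (one per relay), forwarding one packet earns 1 nuglet, and every transmission
# drains the battery by ENERGY_PER_TX units.
HOPS = 3
ENERGY_PER_TX = 1

def rounds_until_broke(nuglets, battery, send_per_round, forward_per_round):
    """How many rounds a node can keep sending before nuglets or battery run out."""
    rounds = 0
    while battery >= ENERGY_PER_TX and nuglets >= HOPS * send_per_round:
        nuglets += forward_per_round - HOPS * send_per_round
        battery -= ENERGY_PER_TX * (send_per_round + forward_per_round)
        rounds += 1
    return rounds

selfish = rounds_until_broke(nuglets=30, battery=1000,
                             send_per_round=1, forward_per_round=0)
generous = rounds_until_broke(nuglets=30, battery=1000,
                              send_per_round=1, forward_per_round=3)
print(selfish, generous)  # 10 250: the forwarding node sustains sending far longer
```

The selfish node exhausts its account after a few rounds even with plenty of battery left, while the forwarding node keeps earning what it spends and is limited only by energy, which is exactly the incentive the nuglet economy is designed to create.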
