
GTFT: An Energy-Aware Model. Srinivasan et al. [30] propose a trade-off between
the existing solutions. Since energy is the authors' main concern, nodes are seen as parti-
tioned into energy classes (depending on their energy constraints), and communications
are arranged in sessions, each with an associated energy class, determined by the "weakest"
node in the chain of forwarders. Before starting a session, each node asks all the nodes
along the route that will be used whether they are going to support the session. Given
complete off-line network information, it is possible to compute optimal strategies for every
class, and the authors also propose an algorithm that allows nodes to reach the optimal
allocations (assuming these are known in advance) simply by recording how they are treated
by other nodes and keeping things in balance, thus substituting nuglets with explicit requests
to participate in sessions. The solution does not enforce cooperation in the sense that other
works do, since a node is never really cut off from network services and can continue to
communicate without forwarding.

Formal Explanation. Every node i is associated with an energy class En and has to de-
cide the ratio of sessions of type Em to accept, where Em is the energy class of the node
with the least energy (in absolute terms) taking part in the session. If the ratios are stationary,
that is, if they do not vary over time, it is possible to compute the Pareto efficient point
by solving a system of optimization problems in which users maximize throughput subject
to the battery lasting long enough (for example, an entire day). These problems involve a
great deal of off-line information about the network, like the probability that sessions con-
tain a certain number of nodes and the number of nodes in every energy class. Since this
is practically unfeasible, a mechanism based on TIT FOR TAT is also presented. Every
node records what fraction of the relay requests of type Ek it has accepted, and what fraction
of the sessions of the same type that it initiated were accepted by others. The GTFT
(Generous TIT FOR TAT) strategy consists of keeping these two values in (not strict)
balance with the previously computed optimal ratio. It is shown that this strategy constitutes
a Nash equilibrium of the repeated game (in an elegant way, without formally stating the
game), and that it assures the same throughput reached at the Pareto efficient point.
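
To make the bookkeeping concrete, here is a minimal Python sketch of how a GTFT-like
node might track and balance these ratios. It is an illustrative reading of the strategy rather
than the authors' exact algorithm from [30]: the class name, the generosity margin epsilon,
and the exact acceptance test are assumptions of ours.

from collections import defaultdict

class GTFTNode:
    """Illustrative GTFT-like relay policy (a sketch, not the algorithm of [30])."""

    def __init__(self, optimal_ratio, generosity=0.05):
        self.optimal_ratio = optimal_ratio   # class k -> optimal acceptance ratio (computed off-line)
        self.generosity = generosity         # epsilon: how much more than "fair" the node gives
        self.relay_req = defaultdict(int)    # class k -> relay requests received
        self.relay_acc = defaultdict(int)    # class k -> relay requests accepted
        self.own_req = defaultdict(int)      # class k -> sessions this node initiated
        self.own_acc = defaultdict(int)      # class k -> initiated sessions accepted by others

    @staticmethod
    def _rate(num, den):
        return num / den if den else 0.0

    def accept_relay(self, k):
        """Decide whether to relay a class-k session, then record the decision."""
        self.relay_req[k] += 1
        given = self._rate(self.relay_acc[k], self.relay_req[k])      # fraction of requests relayed so far
        received = self._rate(self.own_acc[k], self.own_req[k])       # fraction of own sessions accepted
        # Generous balance: keep relaying as long as what we give stays within epsilon of
        # both what we receive and the precomputed optimal ratio for this class.
        ok = (given <= received + self.generosity and
              given <= self.optimal_ratio.get(k, 0.0) + self.generosity)
        if ok:
            self.relay_acc[k] += 1
        return ok

    def record_own_session(self, k, accepted):
        """Bookkeeping for a class-k session initiated by this node."""
        self.own_req[k] += 1
        if accepted:
            self.own_acc[k] += 1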

A General Model. The authors of [31] do not present any cooperation-enforcement
mechanism; rather, they give a model meant to understand whether cooperation is really a
problem in every case and, if so, when it is possible to obtain it. In this paper, they pro-
pose an approach to cooperation based on Bayesian games, in order to model the uncer-
tainty of nodes about one another's characteristics, and they look for the equilibrium points
of the system, as well as for the class of enforceable cooperative behaviors.
Each node is then endowed with some information about its neighbors and their ac-
tions, which includes its neighborhood, the traffic it sent and has to send, and the traffic it
received. Prior to choosing its next action, a node has an opportunity to analyze the past
behavior of its neighbors and its own priorities in terms of energy consumption and through-
put, and to decide, consequently, how to act. In deciding whose packets to forward and
whose packets to discard, the node trades off the costs (energy consumption), the benefits
(network throughput), and the collaboration offered to the network by its neighbors. This
implicit incentive means that a node acts selfishly only when obliged to by energy
constraints, while tending to cooperate with collaborative nodes.
It is possible to show that noncooperation is always an equilibrium (even if this does
not necessarily imply that it will be the prevailing behavior), and that, in general, there is an
upper limit on the amount of traffic a generic node is willing to forward, which induces a
bound on network performance. However, this limit depends on the characteristics of the
nodes and on the knowledge they acquire: the less a node is trusted, the harder, in general, it
has to work to have its own packets forwarded. Moreover, the amount of cooperation that can
be enforced depends on the mobility of the nodes and on the network size: in very mobile sys-
tems it is difficult to locally punish selfish nodes, since they remain only for a short time near
the nodes they have damaged.

Formal Explanation. The authors of this paper model the life of an ad hoc network as
a discounted repeated game, with the discount factor depending on the mobility of the
network. They choose an abstract utility function for nodes that (indirectly) depends on the
energy spent and on the cooperation received, with the importance of each component given
by the node's energy class (as in [30]). The equilibria of the single-shot game depend on the
knowledge of the others' energy classes, but noncooperation is always among them. Under
repetition, the mobility-dependent discount factor prevents fully cooperative behavior from
being enforced in every case (by the Nash folk theorems), but it is still possible to obtain
"kind" behavior by adopting an evolutionary approach.
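
For reference, the overall payoff of a player in a discounted repeated game is the usual
discounted sum (a textbook formula rather than one reproduced from [31]; ui is the per-stage
utility, at the move profile played at stage t, and δ, with 0 < δ < 1, the discount factor, which
in this model shrinks as mobility grows):

   Ui = ui(a0) + δ ui(a1) + δ^2 ui(a2) + . . .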

Discussion and Future Directions. The common idea behind all the presented
works is a model (explicit or implicit) based on classical game-theoretic concepts: every
player knows everything about the others, about the rules, and about the payoffs. The main
concept is that, if node i is rational and has access to information such as the number of sent
packets Si(tk), the number of forwarded packets Fi(tk), the number of packets it was required
to forward Ri(tk), and the battery power B(tk), where tk is the kth time slot, then it is enough to
study a maximization problem for every node, of the form "maximize Si(tk)," with the
constraint that packets not forwarded are lost and do not count in the function to be
maximized. It is then possible to find optimal quantities of traffic to be forwarded by
each node (at least on average, under appropriate hypotheses on traffic generation and mo-
bility) that enforce a certain degree of cooperation determined by the communication needs
of every node (a do ut des concept). In [9, 30] a further, extremely important, constraint
has been added: the energy spent by every node to send and forward traffic has to
be less than the battery power available to the node itself. This consideration, present in
the mind of the authors of all the papers cited in this section but often not used in the mod-
els, has two interesting implications:

1. Selfishness has a formal explanation: if no energy considerations are made in the
model, there is no difference between helping others and not helping them.
2. Every node needs to find the best trade-off between throughput and consumed
energy, assuming that its neighbors are doing the same.
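
Putting this discussion into symbols, the per-node problem can be sketched roughly as
follows (a schematic formulation, not taken verbatim from [9, 30]; es and ef denote assumed
per-packet energy costs for sending and forwarding, and B the total battery budget):

   maximize (over the Fi(tk))   Σk Si(tk)
   subject to                   Σk [ es Si(tk) + ef Fi(tk) ]  ≤  B
                                0  ≤  Fi(tk)  ≤  Ri(tk)    for every slot tk

Here the objective should really count only the packets of node i that the other nodes agree
to forward (packets dropped along the route are lost), and it is exactly this dependence on the
other nodes' forwarding decisions that couples the per-node problems into a game.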

We believe that classical game theory can be a limited model for real scenarios in which
each node can be seen as an entity that barely knows its neighbors (before they start ex-
changing traffic, it knows just their names), and that after each communication receives
only imprecise feedback from the environment. It can monitor its neighbors in order to find
out what they are doing, but it is not practical to assume perfect knowledge of, or synchrony
with, their actions. Often, it is not possible to know the personal preferences of other nodes
at a given moment, nor the motivations that push them to act as they do.
Many works ([17], and [14] for the wired-network case) have tried to overcome these
limitations, with interesting results.


13.4 CONCLUSIONS

We have attempted to justify the claim that the qualitative structure of ad hoc networks is
fundamental in determining and studying both their structural and dynamical properties.
Ad hoc networks, when their characteristics as communities of nodes are highlighted, ex-
hibit very interesting properties. In particular, it has been shown how this approach influ-
ences the communication network and, therefore, the routing, and how it can be beneficial
for node cooperation. However, whereas in the case of routing there are several traditional
approaches that work equally well, for cooperation this approach is essential.
Intuitively, game theory exploits the main novel characteristics of an ad hoc network:
the rules of the game can be interpreted as the interaction, cooperation, or even competition
among nodes. Nodes can follow different strategies, and their effects will be reflected in the
network evolution. Nodes will try to optimize their own payoff, but since they also need a
functioning network, they adopt cooperative behavior to obtain better network per-
formance; if, instead, they are selfish or malicious, game theory helps to identify
punishment strategies. We presented a first model that formalizes these proper-
ties using game theory. However, research in this field has only just started, and all the
issues related to how to exploit the social and economic characteristics of ad hoc networks
represent interesting topics for further research.


13.5 ELEMENTARY CONCEPTS OF GAME THEORY

We present here a few basic concepts of game theory that will help the reader to better un-
derstand the first part of the chapter. Obviously, we do not pretend that this is exhaustive,
so the reader is strongly encouraged to get a deeper view from a book entirely devoted to the
subject (it is impossible for us to select just one or two of the many books available).
When n rational entities interact for some reason, they can be modeled as players in a
game. A strategic game has its rules, in terms of the moves every player can make, and every
player has personal preferences about the results of the game. Formally, the basic blocks
of a game4 are:

A set N of n players
A set Ai of moves for every player, defining a global action space:

   A* = A1 × A2 × · · · × An


A payoff function pi : A* → Z for every player, defining the global payoff:

   p*(a) = (p1(a), . . . , pn(a))


Every player knows the rules of the game and acts rationally in order to maximize her pay-
off.

Example 1. Probably the most classical problem is the so-called Prisoner's Dilemma (see
[13] for a history). Two suspects of a heinous crime are caught after a minor infraction.
They are kept in separate rooms, and each one has to decide autonomously whether or not to
confess to the big crime. If only one of them confesses, he will be freed and used as a
witness in the trial, whereas the other one will receive a sentence of ten years. If they both
confess, they will spend seven years in prison, whereas if neither does, they will pay with
"just" one year of imprisonment for the minor crime.
This can be seen as a game with two players, where each one has two possible moves:
confess and do not confess. The payoffs can be the number of years they are sentenced to,
taken with negative sign (since they are a cost).
A very useful way to represent two-player games is via a payoff matrix:

            C            D
   C     -7, -7       0, -10
   D    -10, 0       -1, -1

Rows (columns) correspond to the moves of Player 1 (Player 2), and position (i, j) contains
the payoffs obtained when Player 1 chooses her ith move and Player 2 plays his jth move.

Probably the most important concept concerning games is equilibrium: a move a* is an
equilibrium point if no player, with a unilateral deviation, can increase her payoff. In formal
terms, a* = (a1*, . . . , an*) is a pure equilibrium for the game G = (N, A*, p*) if (and only
if)

   for all i ∈ N and all ai ∈ Ai:   pi(a*) ≥ pi(a′)

where a′ is the same as a*, except that Player i plays ai instead of ai*.

4 We present here a simplified version of strategic games, in which user preferences are substituted with
payoffs. Although the two formulations are not equivalent in general, in many practical cases there is no
difference.


Example 2. The Prisoner's Dilemma has a unique pure equilibrium, one in which both
players confess!
The following game has no pure equilibrium:

            P            D
   P      1, -1       -1, 1
   D     -1, 1         1, -1

Example 2 shows that the concept of pure equilibrium is too weak, and for this reason it
has been generalized to the mixed (Nash) equilibrium [26]. Let G′ be the game derived from
G = (N, A*, p*) in the following way:

The set of players is the same.
The moves of player i are all the possible probability distributions over Ai.
The payoff functions are defined on lotteries over combinations of moves.

It is possible to prove that every game has at least one mixed equilibrium, even though the
problem of finding the equilibrium points currently has no known efficient solution.
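
As a concrete illustration of the pure-equilibrium definition given above, the following
Python sketch enumerates the pure equilibria of a two-player game represented by its payoff
matrices; applied to Examples 1 and 2 it finds the single "both confess" equilibrium of the
Prisoner's Dilemma and no equilibrium for the second game. The encoding and the function
name are ours, chosen only for illustration.

from itertools import product

def pure_equilibria(p1, p2):
    # p1[i][j], p2[i][j]: payoffs of Player 1 (rows) and Player 2 (columns).
    rows, cols = len(p1), len(p1[0])
    equilibria = []
    for i, j in product(range(rows), range(cols)):
        # (i, j) is a pure equilibrium if no unilateral deviation pays off.
        row_ok = all(p1[i][j] >= p1[k][j] for k in range(rows))
        col_ok = all(p2[i][j] >= p2[i][k] for k in range(cols))
        if row_ok and col_ok:
            equilibria.append((i, j))
    return equilibria

# Example 1 (Prisoner's Dilemma): move 0 = confess, move 1 = do not confess.
print(pure_equilibria([[-7, 0], [-10, -1]], [[-7, -10], [0, -1]]))   # [(0, 0)]
# Example 2: no pure equilibrium.
print(pure_equilibria([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))       # []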
Many variants of strategic games have been presented in the literature; we report on only
two basic forms here.
In repeated games, a strategic game is played repeatedly, either a finite or an infinite
number of times. The moves of the single-shot game are combined into strategies (action
plans) that can be more or less complex. The payoff is a combination of the single-game pay-
offs, in which earlier results are "heavier" than later ones, because there is always
the possibility that the game is interrupted, or that two players do not meet again (i.e., there
is a discount factor on subsequent outcomes). The equilibrium is defined over strategies,
and it is possible to prove the following facts:

Proposition 1. If the game is repeated a finite number of times, then the Nash equilibria
are sequences of Nash equilibria of the constituent game.

Proposition 2. It is possible to have Nash equilibria in infinitely repeated games that are
not sequences of Nash equilibria of the constituent game.

The first proposition is the consequence of a "backward induction" argument: if there is a
last move that will be played, then it must be a Nash equilibrium of the constituent game,
and so must the move before it, and so on back to the first one. The second proposition is
proven with the so-called Nash folk theorems: if all the players agree on a desired sequence
of moves, then it is possible to enforce it by punishing deviating players for a sufficiently
long time.
It is in fact sufficient to make the profit derived from a deviation smaller than the losses that
follow this decision, and a rational player will then avoid deviating.
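To make the last statement concrete: if c is the per-stage payoff along the agreed sequence,
d > c the one-shot payoff obtained by deviating, m < c the per-stage payoff while being
punished, and δ the discount factor, a deviation is unprofitable whenever (this is the standard
textbook condition, not a formula stated explicitly in this chapter)

   d + δm/(1 − δ)  ≤  c/(1 − δ),   that is,   δ  ≥  (d − c)/(d − m).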
In Bayesian games, every player has a secret (her type) that conditions her payoffs,
and holds a prior belief about the secrets of the other players, that is, a probability
distribution over the type of every player. The equilibrium is now defined in terms of lotteries
over personal beliefs: every player chooses her best response to the distribution of the
possible moves of the others.
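In symbols (a schematic rendering of this last sentence, with notation of our own: θi is
player i's type, p(θ-i | θi) her belief about the others' types, and σ-i the behavior she expects
from them), player i of type θi chooses

   ai*(θi) ∈ arg max over ai ∈ Ai of   Σ over θ-i of   p(θ-i | θi) pi(ai, σ-i(θ-i)).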


ACKNOWLEDGMENTS

This work was partially funded by the Information Society Technologies Programme of
the European Commission, Future and Emerging Technologies, under the IST-2001-
38113 MOBILEMAN project.


REFERENCES

1. The Internet Engineering Task Force Mobile Ad Hoc Networking (MANET) Working Group charter
page: http://www.ietf.org/html.charters/manet-charter.html.
2. R. Axelrod, The Evolution of Cooperation. Basic Books, New York, 1984.
3. L. Blazevic, L. Buttyan, S. Capkun, S. Giordano, J. P. Hubaux, and J. Y. Le Boudec, “Self-orga-
nization in mobile ad-hoc networks: The approach of terminodes,” IEEE Communications Mag-
azine, 39(6), June 2001.
4. L. Blazevic, S. Giordano, and J.-Y. Le Boudec, “Self-Organizing Routing,” Cluster Computing
Journal, 5(2), April 2002.
5. G. E. Bolton and A. Ockenfels, "ERC: A Theory of Equity, Reciprocity and Competition,"
American Economic Review, 90: 166–193, 2000.
6. S. Buchegger and J.-Y. Le Boudec, "Performance Analysis of the CONFIDANT Protocol (Co-
operation of Nodes: Fairness in Dynamic Ad Hoc Networks)," in Proceedings of the IEEE/ACM
Workshop on Mobile Ad Hoc Networking and Computing (MobiHOC), Lausanne, Switzerland,
June 2002.
7. S. Buchegger and J.-Y. Le Boudec, "The Effect of Rumor Spreading in Reputation Systems for
Mobile Ad Hoc Networks," in Proceedings of WiOpt 2003, pp. 131–140.
8. L. Buttyan and J. P. Hubaux, “Enforcing Service Availability in Mobile Ad-Hoc WANS,” in
