

lished a subcommittee, IEEE802.11, to standardize and unify techniques and technologies
to be used for wireless LANs. Since the subcommittee was established involving experts
from companies and academia, it was also aware of the need for infrastructureless com-
munications and was working in parallel to address both infrastructure-based and infra-
structureless needs.
The DoD never lost interest in ad hoc networking, and funded programs such as the
Global Mobile Information Systems (GloMo) and Near-term Digital Radio (NTDR), the
former addressing Ethernet-type connectivity, and the latter focusing on military applica-
tions (NTDR also became the first nonprototype, real ad hoc network in the world). By
1997, the IEEE802.11 subcommittee had approved its first WLAN standard, defining the
physical layer as well as the MAC and logical-link control layers for infrastructured and
infrastructureless communication.

Today, the prices for IEEE802.11-based technologies are within everybody's reach and,
since an infrastructureless mode is defined, it has become the premier choice for the un-
derlying bottom two layers (PHY and MAC) for most simulation, test-bedding, and even
commercial ad hoc networks and applications. Yet, one should not forget that it is the in-
frastructureless part of the specification, not the WLAN technology itself, that provides
for the ad hoc mode. Another factor to keep in mind is that most of the revenues are
generated from the technology being deployed in WLANs; thus, protocol issues that differ
significantly between WLAN and ad hoc scenarios will show a strong bias toward the
primary WLAN behavior.

2.1.2 Wireless LANs
In the strict sense of the word, WLANs are infrastructure-based wireless networks, in which
there is a need to deploy wireless access points ahead of time; these access points control
network usage in their respective transmission range or domain. A local area network's spa-
tial span is usually between 10 meters and a few hundred meters; thus, the same coverage
range is demanded from a wireless LAN. A node that wants to connect wirelessly to a
WLAN should (i) be in the transmission range of the access point, (ii) obtain or carry an IP
address from the same IP domain (assuming IP communication) that the access point is in,
and (iii) use the access point as a bridge or router for every packet it sends or receives.
Wireless bandwidth is one of the most important natural resources of a country; thus,
its usage is regulated by national regulatory bodies. In the United States, the regulatory
body in charge of the national radio frequency resources is the Federal Communications
Commission (FCC). In order for a frequency band to be used, the FCC has to issue licens-
es to devices using that band as well as a license to operate devices in that band. The FCC
has designated several frequency bands, commonly known as the ISM (Industrial, Scien-
tific, and Medical) and/or U-NII (Unlicensed-National Information Infrastructure) bands,
for which an FCC license is only needed for the device and not for the usage of the band.
WLANs take advantage of these ISM bands, so the operators do not have to request per-
mits from the regulatory bodies. The most common ISM bands for WLANs in order of
their importance are: 2.4 GHz–2.483 GHz, 5.15 GHz–5.35 GHz, 5.725 GHz–5.825 GHz
(United States) and 5.47 GHz–5.725 GHz (European Union), and 902 MHz–928 MHz
(not relevant).
Since WLANs rely on a centrally controlled structure, just like cells of cellular networks,
several access points can be used to create cellular-like WLAN structures. Some WLAN
technologies are more suited for such large-coverage, cellular-like WLANs, whereas others
may not perform well in such scenarios, as will be pointed out later in this chapter. The
term hot spot has recently come into frequent use, referring to an area covered by one
or more WLAN access points to provide Internet connectivity at a fraction of the cost of a
cellular data connection to users whose terminals are equipped with wireless network in-
terface cards. Providing hotspots is an extremely controversial issue; current cellular
providers are likely to lose revenue unless they are the ones providing the service.

2.1.3 Wireless PANs
The term wireless personal area networks came along with the appearance of its first rep-
resentative technology: Bluetooth. WPANs (or, in short, PANs) are very short range wire-
less networks with a coverage radius of a few centimeters to about 10 meters, connecting

devices in the reach of individuals, hence the name. WPANs do not necessarily re-
quire an infrastructure; they imply single-hop networks in which two or more devices are
connected in a point-to-multipoint "star" fashion. Although the communication distance is
shorter, so that the power requirements are lessened, Bluetooth provides a significantly
lower symbol rate than WLANs. Fortunately, this contradictory "feature" is currently be-
ing addressed, and it is likely that future WPAN technologies will provide users with op-
tions of significantly higher transmission speeds.

2.1.4 Digital Radio Properties
In order to fully comprehend the different aspects of medium-access control in WLAN
and WPAN standards and specifications, it is necessary to possess basic knowledge of the
behavior/terminology of digital radio transmissions. When using radio as the medium for
communication, the bit error rate (BER) due to undesired interfering sources can be-
come 8–10 orders of magnitude higher than in an optical or wired medium. The attenua-
tion of radio signals is proportional to at least the square of the distance and the square of
the carrier frequency in open propagation environments, in which there are no obstacles
(not even the earth's surface) reflecting the radio signal, and the receiver and transmitter
are in line of sight. In real environments, statistically, the received signal strength can
decay with as much as the fourth power of the transmitter–receiver distance, due to ob-
stacles absorbing and reflecting the radio signal. Additionally, in a mobile environment,
reflection and absorption of signals from obstacles cause fading effects that can be clas-
sified into small-scale and large-scale fading, depending on how far the receiver moves
away from the transmitter.
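The distance and frequency dependence described above can be sketched numerically. The following is a minimal log-distance path-loss model; the 100 m distance, the 1 m reference point, and the choice of path-loss exponents are illustrative assumptions, not values taken from the text:

```python
import math

C = 3.0e8  # speed of light, m/s


def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space loss: grows with the square of both the
    transmitter-receiver distance and the carrier frequency."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))


def path_loss_db(distance_m, freq_hz, exponent=2.0):
    """Log-distance model relative to a 1 m reference; in cluttered real
    environments the exponent can approach 4 (fourth-power decay)."""
    return (free_space_path_loss_db(1.0, freq_hz)
            + 10 * exponent * math.log10(distance_m))


open_env = path_loss_db(100.0, 2.4e9, exponent=2.0)   # open propagation
cluttered = path_loss_db(100.0, 2.4e9, exponent=4.0)  # obstacle-rich
print(round(open_env, 1), round(cluttered, 1))
```

Doubling the distance costs about 6 dB with exponent 2 but about 12 dB with exponent 4, which is why coverage shrinks so quickly in obstacle-rich environments.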
Rayleigh fading describes the fading of the signal when the transmitter–receiver dis-
tance varies on the order of the wavelength of the carrier signal (about 12 cm at 2.4 GHz). With
Rayleigh fading, one has to consider that a radio signal can be received through different
paths via obstacles reflecting the signal. Signals received from multiple paths travel dif-
ferent distances; thus, their phases can vary significantly at the receiver, causing amplifi-
cation and attenuation of each other. Rayleigh fading causes local signal strength minima,
or fading dips, that are about half a wavelength (about 6.25 cm at 2.4 GHz) away from
each other, strongly depending on the carrier frequency.
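The wavelength and dip-spacing figures quoted above follow directly from the carrier frequency; a one-line check (the 2.4 GHz carrier is the value used in the text):

```python
C = 3.0e8  # speed of light, m/s


def wavelength_m(freq_hz):
    """Carrier wavelength: lambda = c / f."""
    return C / freq_hz


def rayleigh_dip_spacing_m(freq_hz):
    """Rayleigh fading dips sit roughly half a wavelength apart."""
    return wavelength_m(freq_hz) / 2.0


print(wavelength_m(2.4e9))            # 0.125 m, i.e. about 12.5 cm
print(rayleigh_dip_spacing_m(2.4e9))  # 0.0625 m, i.e. about 6.25 cm
```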
Log-normal fading describes the fading effect when the signal strength's variation is
measured over large-scale (much greater than the wavelength of the carrier) movements.
With log-normal fading, the strengths of the different reflecting and line-of-sight components can
vary on the order of the sizes of the obstacles (buildings, etc.) absorbing the energy of the
signal; log-normal fading dips are thus 2–3 orders of magnitude farther apart than those of
Rayleigh fading.
Thus, the received signal strength depends not only on the approximate distance
from the transmitter, but also strongly on the exact distance (see Figure 2.1) and loca-
tion, and on the exact carrier frequency used; that is, it is possible to produce a significant
change in the received signal strength just by moving the receiver a few centimeters or by
changing the carrier frequency by a few kilohertz.
Time dispersion is yet another problem to address: signals bouncing back from obsta-
cles arrive with a time shift comparable to the duration of bit times. Time dispersion can cause
the reception of contradicting information, called intersymbol interference (ISI).
Since transmission and reception cannot occur at the same time on the same fre-
quency at a single node, and because most building blocks of receivers and transmitters

Figure 2.1. Received signal strength [dB] versus distance (log scale), showing the path loss pattern overlaid with the Rayleigh fading pattern.

are the same, it makes economic sense to use time-division duplexing to provide only a
single radio unit per device that can be switched between reception and transmission
modes. Additionally, the consecutive reception and transmission events of the radio do
not necessarily have to take place at the same frequency carrier in order to reduce the
risk of being in a fading dip. Unfortunately, it takes significant time (up to a few hun-
dred microseconds) to switch radios between transmission and reception modes (with or
without changing the frequency) while waiting for all the transients to settle. This time is
sometimes referred to as the radio switch-over time or radio turn-over time, during which
the radio is unusable.
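The cost of the switch-over time can be made concrete with a small duty-cycle calculation; the 625 µs slot and 200 µs turnaround below are illustrative assumptions (the turnaround taken from the "few hundred microseconds" figure above), not values from any particular standard:

```python
def tdd_efficiency(slot_s, switch_over_s):
    """Fraction of airtime usable for actual transmission when every
    RX<->TX turnaround wastes switch_over_s of dead time."""
    return slot_s / (slot_s + switch_over_s)


# A 625 us transmission slot with a 200 us turnaround leaves only about
# 76% of the air time usable; the rest is lost to switching transients.
print(round(tdd_efficiency(625e-6, 200e-6), 3))
```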
Since radio frequency is a scarce resource, it needs to be used wisely. In order to in-
crease the capacity of a system, the same frequency (band) may be reused at some dis-
tance where the interfering signal becomes sufficiently weak. In order to reduce the reuse
distance without increasing the interference, systems are sometimes required to implement
radio-power control, with which the transmission power to different clients can be dynami-
cally adjusted depending on the reading of the received signal strength.
To reduce the average transmission energy over small frequency bands and to provide
better protection against fading dips, spread-spectrum (SS) technologies are employed (in
fact, the FCC requires SS to be used in the ISM bands). The most well known and widely
used SS technologies are (Fast) Frequency Hopping (FH or FFH), Direct Sequence
Spreading (DSS), and the novel Orthogonal Frequency Division Multiplexing (OFDM)
and Ultra Wide Band (UWB). With FFH, the frequency band is divided up into several
narrower bands (using a central carrier frequency in each of these narrow bands). An FFH
transmission will use one of the narrow bands for a short period of time, then switch to
another, and, again, another, cyclically. The time spent at each carrier frequency is called
the dwell time. In DSS, the signal to be transmitted is multiplied by a high-speed chip-
code or pseudorandom noise (PN) sequence, essentially spreading the energy of the signal
over a larger band (resulting in less spectral efficiency). With OFDM, just like with FFH,
several frequency carriers are defined, but, unlike FFH, more than one carrier may be
used at the same time to transmit different segments of the data. As will be shown, Blue-

tooth employs FFH, whereas IEEE802.11b and IEEE802.11a employ DSS and OFDM,
respectively, and UWB is in its infancy.
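The FFH scheme described above, carving the band into narrow carriers and cycling through them for one dwell time each, can be sketched as follows. The 79 carriers with 1 MHz spacing and the 20 ms dwell time echo the IEEE802.11 FHSS parameters discussed later in this chapter, while the 8-entry hop pattern is a made-up illustrative value:

```python
def hop_schedule(carriers, pattern, dwell_time_s, duration_s):
    """Cyclic FFH schedule: visit the carriers named by `pattern` in
    order, dwelling dwell_time_s on each, wrapping around at the end."""
    hops = round(duration_s / dwell_time_s)
    return [carriers[pattern[i % len(pattern)]] for i in range(hops)]


# 79 carriers spaced 1 MHz apart starting at 2.402 GHz; hypothetical
# 8-entry hop pattern for illustration only.
carriers = [2.402e9 + i * 1e6 for i in range(79)]
pattern = [0, 23, 62, 8, 43, 16, 71, 5]

# A 20 ms dwell time yields 50 hops per second.
one_second = hop_schedule(carriers, pattern, 0.02, 1.0)
print(len(one_second))  # 50
```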
The reader interested in more details of digital radio signal propagation and fading ef-
fects and their mitigation is referred to [39, 40]. Additionally, [44] provides a good
overview of differences in propagation for TDMA/FDMA and CDMA systems.
The rest of this chapter is organized as follows: Section 2.2 introduces WLAN tech-
nologies and outlines why they can/cannot be used for ad hoc networking. Section 2.3
deals with WPAN technologies, focusing mostly on Bluetooth, and outlines the problems
researchers are facing before Bluetooth can be used for ad hoc networking. Section 2.4
concludes the chapter.


As described in the previous section, the history of WLANs starts with the ALOHA sys-
tem. In the early 1990s, radio technologies became mature enough to enable the pro-
duction of relatively inexpensive digital wireless communication interfaces. The first
generation of WLANs operated in the 900 MHz ISM band, with symbol rates of around
500 kbps, but they were exclusively proprietary, nonstandard systems, developed to pro-
vide wireless connectivity for specific niche markets (e.g., military or inventorying).
The second-generation systems came along around 1997, enjoying a strong standardiza-
tion effort. They operated in the 2.4 GHz range and provided symbol rates of around 2
Mbps. The IEEE802.11 Working Group (WG) and its similarly named standard were the
most successful of the standardization efforts. People did not have to wait long for an in-
expensive third-generation (2.4 GHz band, 11 Mbps symbol rate) WLAN standard and
equipment, as the IEEE802.11b Task Group (TG) was quick in standardizing it, and, due
to increased need, products rolled out extremely quickly. Although the IEEE802.11a TG
was formed at the same time as IEEE802.11b TG and its standard was available at ap-
proximately the same time, it took longer for the first IEEE802.11a products to appear.
IEEE802.11a operates in the 5.2 GHz band with speeds up to 54 Mbps (or 108 Mbps
in a non-standardized “turbo” or dual mode) and represents the fourth generation of
WLANs. The Wireless Ethernet Compatibility Alliance (WECA) was established by
companies interested in manufacturing IEEE802.11b and IEEE802.11a products. WECA
coined the now widely accepted term Wi-Fi (Wireless Fidelity) to replace the user-
unfriendly IEEE802.11 name. WECA is known today as the Wi-Fi Alliance and provides
certification for 2.4 GHz and 5.2 GHz products based on the IEEE802.11b and
IEEE802.11a standards, respectively.
While the IEEE802.11 WG was working on IEEE's WLAN standard, the European
Telecommunication Standards Institute (ETSI) was working on another standard known
as HiperLAN (High Performance Radio LAN). HiperLAN was released at about the same
time as the first IEEE802.11 standard in 1998 but has received less attention due to its
more stringent manufacturing requirements (which also implied better quality). HiperLAN
operates in the 5.2 GHz band with data rates up to 20 Mbps. The ETSI updated the Hiper-
LAN standard in 2000, releasing HiperLAN 2, which provides data rates similar to those of
IEEE802.11a while enabling easy architectural integration into 3G wireless networks
(UMTS) and providing quality of service (QoS) provisioning.
In this section the readers will be introduced to the standardization efforts of the differ-
ent IEEE802.11 Task Groups as well as to the technology of HiperLAN 1 and 2. It will be

shown how these standards provide for not only WLAN usage scenarios but also for ad
hoc networking.

2.2.1 IEEE802.11 Technological Overview
The IEEE802.11 Working Group was formed in 1990 to define standard physical (PHY)
and medium-access control (MAC) layers for WLANs in the publicly available ISM
bands. The original goal was to have data rates of 2 Mbps, falling back to 1 Mbps in the
presence of interference or if the signal became too weak. Originally, three different phys-
ical layer options were provided: (i) infrared, (ii) frequency hopping spread spectrum
(FHSS) at 2.4 GHz, and (iii) direct sequence spread spectrum (DSSS) at 2.4 GHz. To
serve both anticipated needs, two operation modes were also defined: a client-server, regular
WLAN mode that received the name IM-BSS (Infrastructure Mode Basic Service Set),
and an ad hoc operational mode called IBSS (Independent Basic Service Set). A Basic
Service Set (BSS) is nothing but a group of at least two nodes or stations (STA) cooperat-
ing via the wireless interface.
The infrared PHY layer did not catch on and was subsequently neglected. The
FHSS PHY used 79 different carrier frequencies with 22 different hopping patterns,
defining 22 virtual channels with a dwell time of 20 ms (50 hops/s). Although most of the
research comparing the DSSS PHY and the FHSS PHY showed that the interference re-
sistance and resilience of the FHSS PHY layer was superior, the FHSS PHY slowly lost
the interest of the IEEE802.11 group and more emphasis was put on the DSSS PHY, main-
ly due to the fact that increasing the rate was hardly possible using the FHSS PHY. The
DSSS PHY divided up the available 80 MHz band at the 2.4 GHz range into three
nonoverlapping channels, each of them having around 20 MHz of bandwidth, thus en-
abling interference-free operation of three different networks in the same spatial area.
The 1 or 2 Mbps stream was used to modulate a so-called Barker sequence, a well-de-
fined PN (pseudorandom noise) sequence, to spread the information over the respective
20 MHz band. The original MAC and PHY specifications of the IEEE802.11 were re-
leased in 1997.
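The Barker-sequence spreading can be illustrated with a toy bit-level model. The 11-chip code below is one common binary representation of the Barker sequence associated with the IEEE802.11 DSSS PHY, but the XOR/majority-vote arithmetic is a simplification: the real PHY applies the chips at RF via DBPSK/DQPSK modulation, so treat this as a sketch of the principle:

```python
# One common binary form of the 11-chip Barker sequence.
BARKER_11 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]


def spread(bits):
    """Spread each data bit by XOR-ing it with the 11-chip Barker code,
    turning a 1 Mbps bit stream into an 11 Mchip/s chip stream."""
    return [b ^ chip for b in bits for chip in BARKER_11]


def despread(chips):
    """Correlate each 11-chip group against the Barker code; a majority
    vote decodes the bit even if a few chips were corrupted."""
    bits = []
    for i in range(0, len(chips), 11):
        group = chips[i:i + 11]
        agree = sum(c == chip for c, chip in zip(group, BARKER_11))
        bits.append(0 if agree > 11 // 2 else 1)
    return bits


tx = spread([1, 0, 1, 1])
tx[3] ^= 1  # flip one chip, e.g. narrowband interference
print(despread(tx))  # the original bits [1, 0, 1, 1] are still recovered
```

The spreading gain is what makes narrowband interferers survivable: corrupting a single chip out of eleven still leaves a clear majority for the correct bit.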
Two different MAC channel access methods were defined. The first method, Distrib-
uted Coordination Function (DCF), to be used either in the Infrastructure Mode or in the
IBSS ad hoc mode employing the Carrier Sense Multiple Access“Collision Avoidance
(CSMA/CA) MAC protocol, was first proposed in [25]. The second (optional) access
method is the Point Coordination Function (PCF), to be solely used in the Infrastructure
Mode, based on a MAC polling scheme. Only a few products have the capability to work
with a PCF method, and since the PCF is not defined for the ad hoc mode, further descrip-
tion of it is omitted in this chapter.
According to the IEEE802.11 standard, all stations (STA) have to be able to work with
the DCF. The goal of the 802.11 group was to provide over the radio in-
terface a service similar to that defined for wired LANs in the IEEE802.3 (Ethernet) standard;
that is, best-effort access with high probability but no QoS guarantees. The IEEE802.11
MAC protocol is described in Chapter 3 of this book together with analyses of its perfor-
mance in ad hoc environments.
Providing security was a major concern of the IEEE802.11 group, whose goal was to
provide at least the same level of security as the wired Ethernet. IEEE802.11 defines its
own privacy protocol called WEP (Wired Equivalent Privacy). Since in IEEE802.11 pack-
ets are broadcast over radio, it is relatively easy to intercept messages and to get attached

