
Figure 2.22 Multipath autocorrelation peaks (multipath profile plotted against time)




Figure 2.23 Rake receiver (each finger comprises a PN code generator, a channel estimator and a delay equaliser; the finger outputs are summed)

which is correlated against the expected code to give an autocorrelated peak. This is fed
into a channel estimator, which drives a phase adjuster to rectify the phase of the signal
to be closer to that originally transmitted. This is needed since the phase of the different
paths will have been altered, depending on the path they have taken, and the objects off
which they have been reflected. Each finger has a delay equalizer so that the resolved
peaks can be time aligned before passing to a summing unit where they are combined.
This process is known as maximal ratio combining (MRC).
Because this combined signal is stronger, it is possible that the BTS may tell the mobile
device to reduce its transmitting power. Any process of combining multiple versions of
the same signal to provide a more powerful, better quality signal is known as diversity.
In CDMA, this multipath diversity is referred to as microdiversity.
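
As a rough numerical illustration of the combining step, the sketch below (Python, with invented channel values) weights each time-aligned finger output by the conjugate of its channel estimate before summing, which is the essence of MRC. It assumes despreading and time alignment have already been done; it is not the implementation of any particular receiver.

# Minimal sketch of maximal ratio combining: each time-aligned finger
# output is weighted by the conjugate of its channel estimate and summed,
# so strong, correctly-phased paths dominate. All channel values and the
# noise level are made up for illustration.
import numpy as np

def mrc_combine(fingers, channel_estimates):
    # Conjugate weighting cancels each path's phase rotation and scales
    # each finger in proportion to its strength.
    return np.sum(np.conj(channel_estimates) * fingers)

# Three fingers carrying the same symbol (+1) over paths with different
# attenuations and phase shifts, plus additive noise.
h = np.array([0.9 * np.exp(1j * 0.3),
              0.5 * np.exp(-1j * 1.2),
              0.3 * np.exp(1j * 2.0)])
noise = 0.1 * (np.random.randn(3) + 1j * np.random.randn(3))
fingers = h * 1.0 + noise

combined = mrc_combine(fingers, h)
print("decision:", 1 if combined.real > 0 else -1)   # recovers +1
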
Further improvements may also be made at a base station by use of multiple antennas,
separated in space, known as spatial diversity. Each antenna will receive the same signal,
but with a small time shift compared to the other antennas, thus enabling combination
of these signals. In WCDMA, up to four such antennas may be used to improve the
signal quality.


2.8.1 Soft handover
A key advantage of CDMA systems is the principle of soft handover. Since each cell
operates at the same frequency, it is possible for the mobile device to communicate
simultaneously with more than one cell. Thus when a handover is required, the connection
to the target cell can be established before the original connection is dropped. This is in
contrast to traditional cellular TDMA/FDMA systems, where a handover requires that
the connection is first dropped and then established at the target cell, since the cells
are at different frequencies. In a CDMA device, during an active call, typically the rake
receiver uses one finger to make measurements of surrounding cells at the same frequency
as potential candidates for handover. Originally, soft handover was seen as advantageous
since it resulted in fewer dropped calls, because the user is never disconnected from the
network. However, now it is also used to provide diversity where the multiple active
connections can be combined to improve the quality of the received signal. It is usual for
a mobile device to be able to connect to up to three cells concurrently.
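
As a purely illustrative sketch of how such a set of concurrent connections might be chosen, the code below keeps up to three cells whose pilot strength lies within a margin of the strongest cell. The 4 dB margin and the measurement values are invented for the example; they are not parameters taken from any standard.

# Hypothetical sketch of active set selection for soft handover: keep the
# strongest cells, up to three, whose pilot strength is within a margin of
# the best. The margin and measurements are invented values.
MAX_ACTIVE_SET = 3
ADD_MARGIN_DB = 4.0

def select_active_set(pilot_strength_db):
    best = max(pilot_strength_db.values())
    within = [c for c, p in pilot_strength_db.items()
              if p >= best - ADD_MARGIN_DB]
    within.sort(key=lambda c: pilot_strength_db[c], reverse=True)
    return within[:MAX_ACTIVE_SET]

print(select_active_set({"A": -7.5, "B": -9.0, "C": -14.0, "D": -10.5}))
# -> ['A', 'B', 'D']; cell C falls outside the margin and is excluded
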


2.8.2 Fading and power control
The CDMA system needs a power control mechanism to overcome the effects of multiple
users with different propagation characteristics transmitting simultaneously. This is often
referred to as the near-far problem, where a remote user can easily be drowned out by
a user that is physically much closer to the base station. Power control endeavours to
ensure that signals arriving at the receiver are almost equal in power, and at a level that
meets the quality requirements in terms of SIR.
The three main features here are:

• attenuation due to increase in distance from the receiver;
• fading variations due to specific features of the environment;
• fading variations due to the movement of the mobile device.

Radio waves propagating in free space are modelled by an inverse square law, whereby each doubling of the distance between the transmitter and receiver reduces the received power to a quarter of its previous value. Thus in the equation below, a is generally regarded to be of value 2 and x indicates the distance in metres:

P_receive = P_transmit / x^a
This is not necessarily the case in a cellular system, where the terrain and buildings can have a major effect on the propagation model, and thus a is usually considered to be greater than 3. For example, in metropolitan areas, a = 4 is used for planning purposes.
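
A quick numerical check of this model, using the exponents quoted above (a = 2 in free space, a = 4 in metropolitan areas); the transmit power is an arbitrary example value:

# Sketch of the inverse-power path loss model above. Powers are illustrative.
def received_power(p_transmit, distance_m, a):
    return p_transmit / distance_m ** a

p_tx = 10.0  # transmit power in watts (made-up value)
for a in (2, 4):
    at_100 = received_power(p_tx, 100.0, a)
    at_200 = received_power(p_tx, 200.0, a)
    print(f"a={a}: P(100 m)={at_100:.2e} W, P(200 m)={at_200:.2e} W")
# Doubling the distance costs a factor of 4 when a = 2, but 16 when a = 4.
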
As a user moves around, the power level at the receiver will fluctuate. These fluctuations can
be broken down into two general categories: slow and fast fading.
Slow fading, or shadow fading, is a result of obstructions, which cause changes in received power level. Multiple versions of the same signal will form constructive and destructive interference at the receiver as the relative time shifts vary due to different path lengths and reflection/refraction characteristics of the surrounding environment. It is more pronounced in urban areas, with significant changes in received signal strength occurring
over tens of metres.
Fast fading, or Rayleigh fading, is due to the Doppler shift, whereby the apparent wavelength of the transmitted signal decreases as the mobile device moves towards the receiver and increases as the device moves away from it. This appears at the receiver as a change of phase of the transmitted signal. Generally a number of paths with different Doppler shifts will arrive at the receiver with changed phase
shifts. As these multipaths are combined at the receiver, the signal will exhibit peaks and
troughs of power corresponding to signals that are received in phase, and thus reinforce
each other, and out of phase, where they cancel each other out. These variations are
much faster than those occurring with environmental factors and can cause significant
differences in power levels over relatively short distances. Consider the WCDMA sys-
tem, where the transmit/receive frequency is in the 2 GHz range. The wavelength of this
is 150 mm, and thus relatively small movements of the mobile device of the order of
75 mm will result in a different interference pattern, and consequently a different power
level. This is why power control must be performed, and performed rapidly, in the system
to attempt to maintain an ideal, even received power level. In the WCDMA system, as will be seen in Chapter 6, power control is performed 1500 times per second. In the IS-95 CDMA system, it is performed 800 times per second.
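
The half-wavelength spacing of the fades can be checked with a simple two-path model. The sketch below (illustrative geometry only) sums a direct path and a reflected path arriving from the opposite direction as the receiver moves, and locates the resulting power minima:

# Two-path interference sketch at the WCDMA carrier: a direct path and a
# reflected path arriving from the opposite direction form a standing-wave
# pattern with fades roughly half a wavelength (75 mm) apart. The path
# geometry and amplitudes are invented for illustration.
import numpy as np

f = 2e9                                   # carrier frequency, Hz
lam = 3e8 / f                             # wavelength: 0.15 m
x = np.linspace(0.0, 0.5, 1000)           # receiver positions over 0.5 m

direct = np.exp(-2j * np.pi * x / lam)          # path arriving from ahead
reflected = 0.8 * np.exp(2j * np.pi * x / lam)  # weaker path from behind
power_db = 20 * np.log10(np.abs(direct + reflected))

interior = power_db[1:-1]
minima = x[1:-1][(interior < power_db[:-2]) & (interior < power_db[2:])]
print(f"wavelength = {lam * 1000:.0f} mm")
print(f"fade spacing ~ {np.diff(minima).mean() * 1000:.0f} mm")  # ~75 mm
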



2.9 PROTECTING THE DATA
Despite the shift to data being transferred in digital format, there are still major problems
in sending data across the air. In a fixed-line communications system, most of the problems of data transfer and 'data loss' are down to such issues as congestion, where data is stuck in a traffic jam, or buffer overflow, where a network device is being asked to process too much data. What is no longer considered to be a problem is the reliability of the medium over which the data is travelling. Consider a fibre optic cable, which can now be regarded as the standard for data transfer once out of the local loop. Fibre cables cite bit error rate figures of the order of 10^-20, and generally bit errors that do occur are bit inversions, that is, a 1 that should be a 0 and vice versa. When this order of error
rate is achieved, one can assume that the medium is completely reliable. In fact, many
high-speed communications systems use this to their advantage; for example, as will be
seen later, ATM provides no error protection whatsoever on data, and does not require
a destination to acknowledge receipt of data. In general, fixed-line schemes provide, at
best, an error checking mechanism on data, usually in the form of a cyclic redundancy
check (CRC). Should data arrive with errors, a rare occurrence, the sender is asked to
retransmit, if that level of reliability is required. For example, Ethernet transmits frames
of 1500 bytes of payload over which there is a 4-byte CRC, which introduces a relatively
low overhead on the data.
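
For a sense of how light this overhead is, the short sketch below computes a CRC-32 check value over a 1500-byte payload using Python's standard library (the payload here is just zero filler); the 4 check bytes amount to roughly 0.27% of the frame:

import zlib

payload = bytes(1500)           # illustrative 1500-byte frame payload
crc = zlib.crc32(payload)       # 4-byte (32-bit) check value
print(f"CRC-32 = {crc:#010x}")
print(f"overhead = {4 / (1500 + 4):.2%}")   # about 0.27% of the frame
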
However, a wireless communications system is notorious for corrupting data as it
travels across the air. To date, cellular systems have focused on voice transmission, which is extremely tolerant of errors. Typically, a voice system can sustain an error rate of about 1% before the errors become audible. With the introduction of mobile data solutions, more often the
information being carried across the air is data, such as an IP packet. Unfortunately,
data systems are very intolerant of errors, and generally require error-free delivery to an
application. For that reason, cellular systems must now implement more rigorous error
control mechanisms.
If a simple error checking scheme were introduced, there would be too much retransmission, and the system would spend the majority of the time retransmitting data, thus
lowering the overall throughput. A better and more reliable scheme is required. The
solution is to implement forward error correction (FEC). With this, a correction code is
transmitted along with the data in the form of redundant bits distributed throughout the
data, which allows the receiver to reconstruct the original data, removing as many errors
as possible. For an efficient and robust wireless communications system, it is essential
that a good FEC scheme is used to improve the quality of transmissions.
A problem common to all FEC schemes is the amount of overhead required to correct
errors. If a very simple FEC scheme is considered, in which each bit is merely repeated
to make the channel robust, then, as shown below, the amount of information to be
transmitted is doubled. However, what is lost in bandwidth, by increasing the amount of
information to be sent, is gained in the quality of the signal that is received.

Data:         101010110101001001001
Transmission: 110011001100111100110011000011000011000011

The standard terminology is that the data coming from a user application is quantified in bits per second. However, the actual transmission is quantified as symbols per second,
since this transmission consists of data plus FEC bits. In the case above, one bit is
represented by two symbols.
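
The repetition scheme above is simple enough to express directly. The sketch below reproduces the transmission shown and decodes it by majority vote over each symbol pair:

# Sketch of the rate-1/2 repetition scheme above: every data bit is sent
# twice, so one bit becomes two symbols.
def repetition_encode(bits, n=2):
    return [b for b in bits for _ in range(n)]

def repetition_decode(symbols, n=2):
    # Majority vote over each group of n symbols (with n = 2, a corrupted
    # pair can be detected but not reliably corrected).
    return [1 if 2 * sum(symbols[i:i + n]) >= n else 0
            for i in range(0, len(symbols), n)]

data = [int(b) for b in "101010110101001001001"]
tx = repetition_encode(data)
print("".join(map(str, tx)))            # matches the transmission above
print(repetition_decode(tx) == data)    # True
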



2.9.1 Convolution coding
A popular FEC scheme is convolution coding with Viterbi decoding. Convolution coding
is referred to as a channel coding scheme since the code is implemented in a serial stream,
one or a few bits at a time. The Viterbi algorithm used to decode convolutional codes was introduced by Viterbi (1967).
Convolution coding is described by the code rate, k/n, where k is the number of bits presented to the encoder, and n the number of symbols output by the encoder. Typical code rates are 1/2 rate and 1/3 rate, which will double and triple the quantity of data respectively.
For example, to transmit a user application which generates a data rate of 144 kbps with 1/2 rate convolution coding, the transmission channel will be operating at 288 ksps.
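
To make the rate concrete, the sketch below implements a textbook rate-1/2 convolutional encoder with constraint length 3 and generator polynomials 7 and 5 (octal). This is a generic classroom example, not the specific code mandated by WCDMA or IS-95:

# Generic rate-1/2 convolutional encoder sketch (constraint length 3,
# generators 7 and 5 octal): each input bit produces two output symbols.
def conv_encode(bits, generators=(0b111, 0b101)):
    state = 0                                  # current bit plus two bits of memory
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111     # shift the new bit in
        for g in generators:
            out.append(bin(state & g).count("1") % 2)  # parity of the tapped bits
    return out

data = [1, 0, 1, 1, 0]
symbols = conv_encode(data)
print(len(data), "bits ->", len(symbols), "symbols")   # 5 -> 10
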
At the receiver, the data is restored by a Viterbi decoder. This has the advantage of a fixed decoding time and can be implemented in hardware, introducing minimal latency into the system. At the time of writing, commercial Viterbi decoders can decode data at rates in excess of 60 Mbps.
By implementing convolution coding, as already mentioned, there is a tradeoff in that the bandwidth is either doubled (1/2 rate) or tripled (1/3 rate). However, the upside is that a good convolution coding scheme will provide a 5 dB gain across the air interface for a binary or quadrature phase shift keying (BPSK or QPSK) modulation scheme. This means that a coded signal can be received with the same quality as an uncoded signal, but with 5 dB less transmit power.
Turbo coding is an advanced form of convolution coding which uses parallel concatenation of two convolutional codes. Turbo coding, developed in 1993 at the research and development centre of France Telecom, provides better results than standard convolution coding. Turbo coding is recommended for error protection of higher data rates, where it will typically provide bit error rates of the order of 10^-6.
Both codes are designed to reduce the interference effects of random noise, or additive
white Gaussian noise (AWGN). In CDMA systems, the source of most of this noise is
other wideband user signals.
There are many other FEC schemes available, such as Hamming codes and Reed-Solomon codes.


2.9.2 Interleaving
Despite the dramatic improvements that a FEC scheme such as convolution coding intro-
duces to the wireless system, it is not specifically designed to eliminate burst errors.
Unfortunately, across the air interface errors usually occur in bursts where chunks of data
are lost. Some additional protection is required to cope with the reality of the air interface.
To solve this, blocks of data are interleaved to protect against burst errors. Consider
the transmission of the alphabet. To transmit, it is first split into blocks, as shown in
Figure 2.24. These blocks are then transmitted column by column.

abcdefghijklmnopqrstuvwxyz

abc   jkl   stu
def   mno   vwx
ghi   pqr   yz

Transmitted: adgbehcfijmpknqlorsvytwzux

Figure 2.24 Principle of interleaving (a burst error in the transmitted stream is distributed throughout the data once the interleaving is reversed)
If, subsequently, there is a burst error in the data, once the interleaving process is
reversed, this error is distributed through the data, and can then be corrected by the
convolution coding mechanism. This concept is illustrated in the lower part of Figure 2.24.
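
The 3x3 example of Figure 2.24 can be expressed directly. The sketch below interleaves a block, corrupts three consecutive symbols to simulate a burst error, and shows that de-interleaving spreads the errors apart:

# Sketch of the 3x3 block interleaver of Figure 2.24: write row by row,
# read column by column; the receiver reverses the mapping. A burst of
# three corrupted symbols ends up scattered after de-interleaving.
def interleave(block, rows=3, cols=3):
    return [block[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(block, rows=3, cols=3):
    return [block[c * rows + r] for r in range(rows) for c in range(cols)]

tx = interleave(list("abcdefghi"))
print("".join(tx))                         # adgbehcfi, as in the figure
corrupted = tx[:3] + ["*"] * 3 + tx[6:]    # a three-symbol burst error
print("".join(deinterleave(corrupted)))    # a*cd*fg*i (errors dispersed)
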



2.10 SUMMARY
This chapter addresses the basic concepts of both packet switched networks and cellular
systems. Crucial to these is the transport of voice over a packet network, and the basic
issues with regard to this are highlighted. For any cellular system, a multiple access
mechanism must be present to allow many subscribers to share the resources of the
network, and the main methods used in cellular are described. Arguably the most complex
aspect of 3G is the use of CDMA as the air interface of choice, and the key principles
of CDMA are described here, as well as the mechanisms to address problems of loss of
data in radio transmission.



FURTHER READING

A. S. Tanenbaum (2003) Computer Networks, 4th edn. Prentice Hall, Upper Saddle River,
NJ.
