Welcome to our site!

Electro Tech is an online community (with over 170,000 members) who enjoy talking about and building electronic circuits, projects and gadgets. To participate you need to register. Registration is free. Click here to register now.


Cellular network

Status
Not open for further replies.

EngIntoHW

Member
Hi,

I began to learn about the cellular network.
I read that if you decrease the power (radius) of each cell by a factor of 2, then the number of channels increases 4 times.

Could someone please explain that quadratic relation between power and the number of channels?

Thank you.
 
It may be related to the standard inverse-square relation between the radiated power of a transmitter and the power at a distance.
 
Hey,

Yeah, I read about it.

I just can't manage to relate it to the number of channels.
Can you see how it all relates?
 
If the cells transmit with less power, they cannot reach as far, so there have to be more cells, so more channels in a given area.

Of course that costs more to install because there are more cell towers.

Cellular systems have always had to have variable cell sizes. They need large numbers of channels in heavily populated areas, so lots of cell towers, while in sparsely populated areas the cost of lots of cell towers would be too large, so they need high power and a large area from each tower.

Cell phones interact with the towers to increase their power when the cell tower is far away. They also compensate for the propagation delay caused by the distance from the cell tower: Timing advance - Wikipedia, the free encyclopedia
 
It's like people standing in a room all having conversations with each other. If everyone talks loudly no one will be able to carry out their own private conversations with the person they are standing next to. So everyone has to talk quietly so that everyone can carry on their own private conversations. The more people you have in one room having conversations, the quieter everyone has to speak.

Volume is analogous to the power level, and the total number of conversations that can be carried on is analogous to the channel quantity.
 
Hi guys,
Thanks a lot.

I read much about it and I understand what you said: in order to add more cells to a region (to get more channels), you need to decrease the coverage area (radius) of each cell, and you do that of course by reducing the TX output power of each cell.

What I don't understand is why there is a quadratic relation between the output power of each cell and the total number of channels.

Why is it that if you use 1 km radius cells, you get 100 times more channels than when using 10 km radius cells?

Why 10^2 and not 10?
 
Because 1 km radius cells have an area of 3.14 square km, while 10 km radius cells have an area of 314 square km. That is simple geometry.

However, to go from 1 km to 10 km radius, you would normally need 100 times the power, due to the inverse square law.
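To see the geometry and the power scaling side by side, here is a quick Python sketch (my own illustration; the 10,000 square km region is an arbitrary number, not from the thread):

```python
import math

def cells_needed(region_area_km2, cell_radius_km):
    """Approximate number of cells needed to cover a region, ignoring overlap."""
    cell_area_km2 = math.pi * cell_radius_km ** 2
    return region_area_km2 / cell_area_km2

region = 10_000  # hypothetical 10,000 square km service region

big = cells_needed(region, 10)   # 10 km radius cells
small = cells_needed(region, 1)  # 1 km radius cells

# Shrinking the radius 10x multiplies the cell count (and channel count) by 100.
print(round(small / big))  # -> 100

# Inverse square law: covering radius r takes power proportional to r^2,
# so each 1 km cell needs ~100x less transmit power than a 10 km cell.
print((10 / 1) ** 2)  # -> 100.0
```

Both the 100x channel gain and the 100x power reduction come out of the same r² factor.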
 
Hey,
Thanks! :)

Because 1 km radius cells have an area of 3.14 square km, while 10 km radius cells have an area of 314 square km. That is simple geometry.

So you're actually saying that if you decrease the radius by a factor of 10, then you can increase the number of cells by a factor of 100?

However, to go from 1 km to 10 km radius, you would normally need 100 times the power, due to the inverse square law.
So if you decrease the radius by 10 times, you need to decrease the power by 100 times, in order to not increase the interference level?
 
Hi Diver,
Thank you so much for your help.

Cell phones interact with the towers to increase their power when the cell tower is far away. They also compensate for the propagation delay caused by the distance from the cell tower: Timing advance - Wikipedia, the free encyclopedia

I carefully read the Timing Advance article.
I'd like to ask you a few questions please about it :)

1. Does the TA value set how far from the end of the time slot the mobile phone will start transmitting a burst of traffic?
Meaning that for TA = 0 (which is not reasonable, but just as an example), the phone would transmit a burst of traffic right when the time slot is closing?

2. How is the distance between a cell phone and a base station measured, so that TA can be adjusted accordingly? Can you please refer me to the right term for that? (I'd be happy to read about it; it's so interesting.)

3. Where did you read that cell phones can dynamically change their TX output power when they see that the base station is far away?

Thank you very much! :)
 
(1) I'm not entirely sure on the details of TDMA in cellular systems. My class seemed to focus more on the CDMA spread-spectrum method where key codes are used instead and so each cellular phone can transmit whenever it wants. But I googled "TDMA cellular phone synchronization" and it led me to this wiki page
https://en.wikipedia.org/wiki/Time_division_multiple_access
2G systems
Most 2G cellular systems, with the notable exception of IS-95, are based on TDMA. GSM, D-AMPS, PDC, iDEN, and PHS are examples of TDMA cellular systems. GSM combines TDMA with frequency hopping and wideband transmission to minimize common types of interference.

In the GSM system, the synchronization of the mobile phones is achieved by sending timing advance commands from the base station which instructs the mobile phone to transmit earlier and by how much. This compensates for the propagation delay resulting from the light speed velocity of radio waves. The mobile phone is not allowed to transmit for its entire time slot, but there is a guard interval at the end of each time slot. As the transmission moves into the guard period, the mobile network adjusts the timing advance to synchronize the transmission.

Initial synchronization of a phone requires even more care. Before a mobile transmits there is no way to actually know the offset required. For this reason, an entire time slot has to be dedicated to mobiles attempting to contact the network (known as the RACH in GSM). The mobile attempts to broadcast at the beginning of the time slot, as received from the network. If the mobile is located next to the base station, there will be no time delay and this will succeed. If, however, the mobile phone is at just less than 35 km from the base station, the time delay will mean the mobile's broadcast arrives at the very end of the time slot. In that case, the mobile will be instructed to broadcast its messages starting nearly a whole time slot earlier than would be expected otherwise. Finally, if the mobile is beyond the 35 km cell range in GSM, then the RACH will arrive in a neighboring time slot and be ignored. It is this feature, rather than limitations of power, that limits the range of a GSM cell to 35 km when no special extension techniques are used. By changing the synchronization between the uplink and downlink at the base station, however, this limitation can be overcome.
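The 35 km figure in the quoted text can be sanity-checked from two standard GSM numbers: a bit period of 48/13 µs and a 6-bit timing advance field (values 0 to 63). A rough Python check (my own arithmetic, not from the thread):

```python
C = 299_792_458           # speed of light, m/s
BIT_PERIOD = 48 / 13e6    # GSM bit period, about 3.69 microseconds

# The delay being corrected is round-trip (downlink sync plus uplink burst),
# so one timing-advance step of one bit period buys half a bit of distance.
metres_per_ta_step = C * BIT_PERIOD / 2
max_range_km = 63 * metres_per_ta_step / 1000  # TA field runs 0..63

print(round(metres_per_ta_step))  # -> 553 (roughly 550 m per TA step)
print(round(max_range_km, 1))     # -> 34.9 (the ~35 km GSM cell limit)
```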

(2) and (3) The distance isn't measured. The base station monitors the power level from each handset and constantly tells it to adjust its power level. This is because the base station is where all the handset signals meet up and are received, and if any one is stronger than the others it drowns them out. I know they do this with CDMA spread-spectrum systems because it is a key requirement in the way they work: all handsets share the same frequencies simultaneously, and any one signal that is too powerful when it arrives at the base station will drown out the others. It may or may not be done with TDMA, because TDMA is not as sensitive to this issue since each handset has a dedicated slot in time at a particular frequency channel.

You can surmise from (2) and (3) that cellular systems require the handset to not be moving too fast, or else contact would be lost in the time interval between timing adjustments and power adjustments.
 

Thank you very much!

It's a great post.
I'll be reading about CDMA and TDMA and will then read your post again which would make it much clearer for me.

Thanks a lot :)
 
The cell phones listen for signals from the cell tower before replying. The cell phone synchronises its transmission with the start of the time slot. When negotiating a connection, the burst from the cell phone is short, so that up to 35 km from the tower, the whole burst arrives before the end of the time slot.

The cell tower measures the delay from the start of the time slot to the start of the burst arriving. It then tells the cell phone what this delay is, and that is the timing advance, or TA. When the cell phone is transmitting data, it sends the data burst early, before the start of the time slot. The amount of time by which it sent early is the TA, so that the data burst arrives at the correct time. That allows a full-length data burst to be sent during a call or during data transfer.
 
I read about asynchronous CDMA systems, which use pseudo-noise codes that are supposed to be orthogonal for arbitrary random starting points, am I correct?
Are there really codes that are orthogonal for random starting points?

I learned, as dknguyen said, that all signals need to be received at the receiver at the same power level, in order to not fall into the noise of the ADC when the receiver's AGC (automatic gain control) adjusts the gain so that the ADC won't get saturated.

How come this isn't mentioned for synchronous CDMA?
Don't synchronous CDMA systems use an ADC when receiving a user's signal?
 
The power thing is what it is. Lots of details about cellular communications aren't mentioned in texts and lectures and stuff because there's $#%@&@$ loads of it. You think about the inconsistencies and problems and pry the solution out of your professor, and lots of times they won't know the answer either.
------------
The criteria for orthogonality when everything is synchronized are already so strict that making codes orthogonal no matter how you phase-align them is a massive ask. But I did ask my prof about this when I was learning about it and he asked someone else. My understanding of the explanation was this:

The basestation has no difficulty transmitting out a bunch of spread spectrum signals simultaneously that are aligned so the codes work properly.

The handset, however, uses a different code for transmission. It's much, much longer than the transmission codes used by the base station, and its length causes it to be mostly orthogonal with the codes of other cellular phones, which deals with different handsets transmitting in an unsynchronized fashion. The website below refers to something that has "similarities" but is not the same. But like I said, what I just said was my understanding of second-hand information my professor got from someone who actually knew the answer.
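To make the synchronized case concrete: Walsh-Hadamard codes are a standard family of mutually orthogonal spreading codes for chip-aligned transmitters (I'm using them as an illustration; the thread doesn't name the actual codes used). A Python sketch showing that the orthogonality only holds when the codes are aligned:

```python
import itertools

def hadamard(n):
    """Build the 2^n Walsh-Hadamard codes of length 2^n recursively."""
    h = [[1]]
    for _ in range(n):
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

codes = hadamard(3)  # eight length-8 codes

# Chip-aligned, every pair of distinct codes has a zero dot product.
for u, v in itertools.combinations(codes, 2):
    assert sum(x * y for x, y in zip(u, v)) == 0

# Shift one code by a single chip and orthogonality can break, which is why
# the unsynchronized uplink needs long pseudo-noise codes instead.
a, b = codes[6], codes[4]
shifted_b = b[1:] + b[:1]
print(sum(x * y for x, y in zip(a, shifted_b)))  # -> 4, no longer orthogonal
```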
------------------
What do you mean by ADCs for encoding? It would make slightly more sense if you said DAC, though I still wouldn't understand your question.

My understanding of encoding is that a data signal "0" and a signal "1" are mixed/modulated with the code. The code consists of a sequence of bits (in our example let's say the code is 128 bits long). So in the time of a single data signal bit, the actual RF transmission sends 128 bits. The result is that the bit frequency of the RF transmission is 128x higher than the data signal's bit frequency. This mathematically "smears" the bandwidth of the data signal across a wide spectrum.

A crude conceptual example is just saying that:
0 -> 01101010
1 -> 11100110

So that a data sequence of 0110 is transmitted as 01101010 11100110 11100110 01101010. This is just a conceptual example and has a bunch of problems. The biggest is that one signal requires knowledge of two codes, and orthogonal or nearly orthogonal codes are scarce! In reality I'm pretty sure they just use one code per signal that is carefully chosen to be orthogonal or nearly orthogonal to the others and use it to modulate the bits of the data sequence. Maybe with logic operators like XOR, or some other completely different method. I don't know.
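The one-code-per-signal XOR idea guessed at above can be sketched directly. This toy version (my own, with an arbitrary 8-chip code) XORs every data bit with the whole code, so the chip rate is 8x the bit rate and a single code serves for both spreading and despreading:

```python
CODE = [0, 1, 1, 0, 1, 0, 1, 0]  # arbitrary 8-chip spreading code

def spread(bits, code):
    """XOR each data bit with every chip of the code."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """Undo the XOR per chip, then recover each bit by majority vote."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        votes = sum(chips[i + j] ^ code[j] for j in range(n))
        out.append(1 if votes * 2 > n else 0)
    return out

data = [0, 1, 1, 0]
tx = spread(data, CODE)
print(len(tx))             # -> 32: chip rate is 8x the bit rate
print(despread(tx, CODE))  # -> [0, 1, 1, 0]
```

The majority vote is what gives the spread signal its noise tolerance: a few flipped chips per bit still despread correctly.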

Read this well:
**broken link removed**
But the part directly applicable to what you are inquiring about is under the heading "Generating Pseudo-Random Codes" and some of it has been covered in my post such as very long codes to deal with the synchronization problem, modulating a data bit sequence into the spread spectrum bit sequence.
 
WOW, thank you very much dknguyen.

dknguyen said:
The basestation has no difficulty transmitting out a bunch of spread spectrum signals simultaneously that are aligned so the codes work properly.
Having the base station (BTS) transmit the spread-spectrum signals simultaneously indeed makes sure that each handset receives the whole bunch of spread-spectrum signals at the same time; therefore they're in phase and orthogonality is achieved.

dknguyen said:
What do you mean by ADCs for encoding? It makes slightly more sense if you said DAC, though I still wouldn't understand your question.
Even though we discuss digital communication, the wirelessly received data is analog, isn't it?

The near-far problem discusses that:
The near-far problem is a condition in which a strong signal captures a receiver making it impossible for the receiver to detect a weaker signal.[1]

The near-far problem is particularly difficult in CDMA systems where transmitters share transmission frequencies and transmission time.
In contrast, FDMA and TDMA systems are less vulnerable.

The issue of the dynamic range of one or more stages of a receiver limiting that receiver’s ability to detect a weak signal in the presence of strong signal has been around for a long time.
The near-far problem usually refers to a specific case of this in which ADC resolution limits the range of signals a receiver can detect in a direct sequence spread spectrum (DSSS) system such as CDMA.

The receiver’s AGC must reduce its gain to prevent ADC saturation.
This causes the weaker signal to fall into the noise of the ADC.
This is different from a condition of one signal interfering with another because if the ADC had sufficient resolution it would be possible to recover both signals.
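The ADC-resolution point can be put in numbers with the usual ~6 dB-per-bit rule for an ideal converter. A sketch with hypothetical figures (the 60 dB near-far spread is just an example, not from the text):

```python
import math

def adc_dynamic_range_db(bits):
    """Ideal ADC dynamic range: 20*log10(2^bits), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

near_far_db = 60  # hypothetical: near handset arrives 60 dB above the far one

# If the AGC pins the strong signal at full scale, the weak one must still
# sit above the quantization noise floor, so the ADC needs > 60 dB of range.
for bits in (8, 12, 16):
    ok = adc_dynamic_range_db(bits) > near_far_db
    print(f"{bits}-bit ADC: {adc_dynamic_range_db(bits):.1f} dB, recoverable: {ok}")
```

An 8-bit ADC (~48 dB) loses the weak signal, while 12 bits (~72 dB) or 16 bits (~96 dB) would keep both above the noise, matching the "if the ADC had sufficient resolution" remark above.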

Therefore I wonder whether a synchronous CDMA BTS also uses an ADC to interpret the received signals, and if the BTS does use an ADC, then it also needs all received signals to be at the same power level when they arrive at the BTS.

dknguyen said:
So that a data sequence of 0110 is transmitted as 01101010 11100110 11100110 01101010. This is just a conceptual example and has a bunch of problems. The biggest is that one signal requires the knowledge of two codes
Each handset needs to know only one code in order to decode the transmitted signal and detect the data signal, or to encode the data signal as I did right below.
In your example:
Data_signal = [0, 1, 1, 0]
Encode_signal = [-1, 1, 1, -1] (bit 0: symbol -1; bit 1: symbol 1)
Code_signal = [1 , -1] (Say we use a 2-bit code, not 128-bit code)
Transmitted_signal = [-1, 1, 1, -1, 1, -1, -1, 1]

As you can see, the transmitted vector was constructed by the knowledge of just one code.
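The vector arithmetic above can be checked in a few lines of Python (my own verification), including the despreading step, correlating each 2-chip block with the same single code, to show one code really is enough in both directions:

```python
data_signal = [0, 1, 1, 0]
encode_signal = [-1 if b == 0 else 1 for b in data_signal]  # bit 0 -> -1, bit 1 -> 1
code_signal = [1, -1]  # the 2-chip code from the example

# Spread: each symbol multiplies every chip of the code.
transmitted = [s * c for s in encode_signal for c in code_signal]
print(transmitted)  # -> [-1, 1, 1, -1, 1, -1, -1, 1]

# Despread: correlate each 2-chip block with the same code, take the sign.
n = len(code_signal)
recovered = []
for i in range(0, len(transmitted), n):
    corr = sum(transmitted[i + j] * code_signal[j] for j in range(n))
    recovered.append(1 if corr > 0 else 0)
print(recovered)  # -> [0, 1, 1, 0]
```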

dknguyen said:
Read this well:
**broken link removed**
It's a great link, I'll read it with pleasure, thank you :)
 