
multiplexing, channel etc.


PG1995

Active Member

Attachments

  • duplexing.jpg
Q1: In many systems, you don't want what you transmit to end up coming back on your receiver. If you transmit with an antenna through the air, your transmit signal is very strong and is basically present on your receiver. In order for the receiver to reject this strong signal, you need to be receiving on another frequency.

In the case of a directional medium, like a fiber optic cable, the direction of transmission and the method of multiplexing the transmitted and received light will tend to minimize the crosstalk between transmit and receive. However, there may still be small reflections and cross-coupling in the directional signals, and this may be undesirable.

However, there can be cases where this is not a problem. Optical systems can be designed with isolators, circulators, and antireflection techniques to keep crosstalk at acceptable levels.

Anyway, the simplest case is similar to air transmission: it is much easier to just separate the transmit and receive frequencies, because directional multiplexing is difficult or impossible in most cases.

Q2: To me, what you said makes sense and seems to indicate you understand it.

Q3: I'm not exactly sure myself, but I interpret this to mean that if you actually get into the system design, you will start to uncover complications that might not be obvious at first. You may then need to implement special design features and/or accept design tradeoffs with this method. Maybe someone else has actual experience with this, or can see an obvious issue I'm missing.
 
Hi

Could you please help me with these queries: Q1, Q2, Q3, Q4 (below), Q5 and Q6?

Q4: What's the difference between multiplexing and modulation? In my opinion, one of the main differences is that multiplexing is used in wired communication and modulation in wireless communication.

For Q5 and Q6, **broken link removed** was used.

I would request that you please first go through all of the queries, because some of them are closely related; that way you will be able to understand my confusion better. I'm very grateful for your help, your time, and your patience.

Regards
PG
 

Attachments

  • cs_quantization.jpg
  • cs_telephone.jpg
  • cs_pots.jpg
Q1: Looks right to me. Did you double check their calculation? If theirs is correct too, there may be a numerical issue with accuracy. There's not a big difference in the answers anyway.

Q2: I seem to remember 56K, but either way it's faster than expected. Actually, those modems didn't always attain the maximum possible speed, because noise was higher than specified much of the time. Surely noise has an effect, but they also did some neat tricks to maximize bandwidth. I can't remember them exactly, but they probably related to modulation tricks and/or pulse shapes and detection methods.

Q3: Higher sample rate only helps so much. There are benefits to going above the Nyquist rate because nothing is ideal, but beyond a certain point, the extra samples don't really help the quality. However, finer resolution for quantization then becomes the way to improve the signal.

Q4: Multiplexing just means combining more than one channel or signal. Modulation is a transformation of the signals to a new frequency band. It has nothing to do with wired or wireless. You can do both or either with either approach.
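To make the distinction concrete, here is a small numerical sketch (Python with NumPy; the tone and carrier frequencies are made up): two baseband signals are each modulated to their own band, and frequency-division multiplexing then combines them into one composite signal on a single "wire".

```python
import numpy as np

fs = 8000                                  # sample rate, Hz (arbitrary)
t = np.arange(0, 1, 1 / fs)                # one second of samples
voice1 = np.sin(2 * np.pi * 200 * t)       # two baseband "voice" tones
voice2 = np.sin(2 * np.pi * 300 * t)

# Modulation: move each signal to its own band (works on wire or air alike)
ch1 = voice1 * np.cos(2 * np.pi * 1000 * t)
ch2 = voice2 * np.cos(2 * np.pi * 2000 * t)

# Multiplexing (FDM here): combine the channels into one composite signal
composite = ch1 + ch2

# The composite spectrum shows both channels sitting in separate bands
spec = np.abs(np.fft.rfft(composite))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[spec > len(t) / 8].astype(int).tolist())   # [800, 1200, 1700, 2300]
```

So modulation moved each signal to a new band, and multiplexing combined them; either can be done with or without the other, on any medium.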

Q5: First, the send and receive are just combined as one signal. Did you ever notice that you can hear yourself when you talk? People don't usually talk at the same time, but if they do, their signals just add. The human ear is very good at picking out individual voices. The other issue is a major flub on your part. If you don't modulate, the lower sideband is the negative frequencies that mirror-image the positive frequencies. No extra bandwidth is required for the negative frequencies, because (as we talked about) they are the same thing as the positive frequencies. It is not until you modulate that those negative frequencies slide up into the positive frequency band and add to the bandwidth. So there is no need to multiply by 2 or 4, and the 3000 Hz is enough.
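A quick numerical sketch of this point (Python with NumPy; the 300 Hz tone and 2 kHz carrier are arbitrary choices): the spectrum of a real baseband signal is mirror-symmetric about zero frequency, and only after modulation does the mirrored half slide up and double the occupied band.

```python
import numpy as np

fs = 8000                                  # sample rate, Hz
t = np.arange(0, 1, 1 / fs)                # one second of samples
baseband = np.sin(2 * np.pi * 300 * t)     # a 300 Hz "voice" tone

# A real signal's spectrum is conjugate-symmetric: the negative-frequency
# half mirrors the positive half, so it carries no extra bandwidth
spectrum = np.fft.fft(baseband)
assert np.allclose(abs(spectrum[300]), abs(spectrum[-300]))

# Modulating by a 2 kHz carrier slides both halves up: energy now sits at
# 2000 - 300 and 2000 + 300 Hz, so the occupied (positive) band has doubled
modulated = baseband * np.cos(2 * np.pi * 2000 * t)
mod_spec = np.abs(np.fft.fft(modulated))
freqs = np.fft.fftfreq(len(t), 1 / fs)
peak_bins = freqs[np.argsort(mod_spec)[-4:]]
print(sorted(np.abs(peak_bins).astype(int).tolist()))  # [1700, 1700, 2300, 2300]
```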

Q6: Bandwidth is often expressed in bits/s for digital communication channels. Yes, there is a conversion between the analog bandwidth and the digital bandwidth, but normally we specify whatever is more appropriate for the way the channel is used.
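As a rough sketch of the analog/digital bandwidth relationship, the Shannon-Hartley formula bounds the bit rate that a given analog bandwidth and signal-to-noise ratio can support (the 3000 Hz band and 40 dB SNR below are assumed, illustrative figures):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A ~3000 Hz phone channel with a hypothetical 40 dB signal-to-noise ratio
snr = 10 ** (40 / 10)          # 40 dB -> 10000 (linear)
c = shannon_capacity(3000, snr)
print(round(c))                # about 39.9 kbit/s, in the 56k-modem ballpark
```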
 
PCM

Thank you very much for the help, Steve.

Could you please also help me with this query? Thank you.

And as you can see, if I start from x[0] then I get five values, i.e. x[0], x[1], x[2], x[3], x[4]. Should I include x[0], or should it be excluded?

Regards
PG
 

Attachments

  • cs_sampling.jpg
https://www.princeton.edu/~achaney/tmve/wiki100k/docs/Nyquist–Shannon_sampling_theorem.html

PG, take a look at the above reference. I think this will answer your question. In particular, this excerpt is relevant.

"In essence, the theorem shows that a bandlimited analog signal that has been sampled can be perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency in the original signal. If a signal contains a component at exactly B hertz, then samples spaced at exactly 1/(2B) seconds do not completely determine the signal, Shannon's statement notwithstanding."

There is nothing wrong with starting the sampling at n=0, and that is typically what is done. Note that Matlab does not allow arrays to start at index 0 (unlike C which does allow indexing at 0). Hence, one can use a separate variable (e.g. x0 and the array x) to hold the initial value x0 separate from the sampled values x. Alternatively, indexing with j=n+1 can be done.
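A small sketch of the indexing point, in Python (which, like C, indexes from 0; the sampling rate and frequency are arbitrary):

```python
import numpy as np

fs = 10.0                                # sampling rate, samples/s (arbitrary)
n = np.arange(5)                         # sample indices n = 0, 1, 2, 3, 4
x = np.sin(2 * np.pi * 1.0 * n / fs)     # x[n] = sin(2*pi*f*n/fs) with f = 1 Hz

# Python, like C, indexes from 0, so x[0] is the n = 0 sample directly
print(x[0])                              # 0.0 -- the t = 0 sample is included

# MATLAB arrays start at 1, so sample n must be stored at index j = n + 1;
# the same shift, expressed here, maps sample k to position j - 1
for k in range(5):
    j = k + 1                            # MATLAB-style index for sample k
    assert x[j - 1] == x[k]
```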
 
Thank you.

"In essence, the theorem shows that a bandlimited analog signal that has been sampled can be perfectly reconstructed from an infinite sequence of samples if the sampling rate exceeds 2B samples per second, where B is the highest frequency in the original signal. If a signal contains a component at exactly B hertz, then samples spaced at exactly 1/(2B) seconds do not completely determine the signal, Shannon's statement notwithstanding."

This is the first text I have seen which requires 'sampling rate greater than 2 times the highest frequency in the sampled signal' because most other sources just require that the sampling rate be at least 2 times the highest frequency contained in the signal.

Do you mind telling me what it really means where it says "If a signal contains a component at exactly B hertz"? For instance, in the query in my previous post I had a sinusoidal signal consisting of a single frequency. In general terms I understand the point being made in that text. Thank you.

Regards
PG
 
Do you mind telling me what it really means where it says "If a signal contains a component at exactly B hertz"? For instance, in the query in my previous post I had a sinusoidal signal consisting of a single frequency. In general terms I understand the point being made in that text. Thank you.

Your example was a case that has only one component, and that component is at exactly frequency B. Basically, we are talking about superposition here. Remember that Fourier analysis is for linear systems, and the full spectrum can be resolved into a sum of sinusoidal signals. Hence, your example, or your example with any additional sinusoidal signals of frequency less than B added, would qualify.

By the way, I also never understood why the Theorem isn't stated as > rather than >=.
 
Thank you.

Your example was a case that has only one component, and that component is at exactly frequency B. Basically, we are talking about superposition here. Remember that Fourier analysis is for linear systems, and the full spectrum can be resolved into a sum of sinusoidal signals. Hence, your example, or your example with any additional sinusoidal signals of frequency less than B added, would qualify.

Okay. The sinusoidal signal x(t) = 4sin(4πt) from my previous example has a frequency of 2 Hz and angular frequency ω = 4π. If we find its Fourier transform, we will get a spike at 4π along the frequency (ω) axis, like the one shown here for cosine. For example, we can see here that the signal is composed of three different sinusoidal functions and that the highest frequency component of the signal has a frequency of 10 Hz. This means that if we want to sample it, the sampling rate should exceed 20 samples/s, right?

One should note that being bandlimited is another requirement for application of the Nyquist theorem. A signal is said to be bandlimited if all of its frequency components are zero above a certain finite frequency.
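Here is a small sanity check of that conclusion (Python with NumPy; the component amplitudes are arbitrary): sampling a three-component signal, whose highest component is at B = 10 Hz, at 25 samples/s (above 2B = 20), the FFT recovers exactly the 2, 5, and 10 Hz components.

```python
import numpy as np

# Bandlimited signal: components at 2, 5 and 10 Hz, so B = 10 Hz and the
# Nyquist criterion asks for a sampling rate above 2 * B = 20 samples/s
fs = 25                                   # 25 S/s > 20 S/s (arbitrary choice)
t = np.arange(fs) / fs                    # one second of samples
x = (np.sin(2 * np.pi * 2 * t)
     + 0.5 * np.sin(2 * np.pi * 5 * t)
     + 0.25 * np.sin(2 * np.pi * 10 * t))

# The spectrum of the sampled signal shows exactly the three components
spec = np.abs(np.fft.rfft(x))
found = np.fft.rfftfreq(len(x), 1 / fs)[spec > 1e-6]
print(found.tolist())                     # [2.0, 5.0, 10.0]
```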

steveB said:
Remember that Fourier analysis is for linear systems, and the full spectrum can be resolved into a sum of sinusoidal signals.
I understand your point. Actually, that's the main point of the Fourier transform or series. It tells us that if we add up all the sinusoids with the right frequencies and amplitudes, we can construct the signal under consideration.

steveB said:
By the way, I also never understood why the Theorem isn't stated as > rather than >=.
Actually, I would say stating the theorem with >= is slightly wrong, because stated with > it is comprehensive and accurate in all applicable cases. Thanks.

Regards
PG
 

Attachments

  • cs_Fourier.jpg
  • cs_Fourier_Expansion_Example.jpg
This means that if we want to sample it, the sampling rate should exceed 20 samples/s, right?

Yes.


Actually, I would say stating the theorem with >= is slightly wrong, because stated with > it is comprehensive and accurate in all applicable cases. Thanks.

I agree. I think it never becomes an issue for engineers because we normally need to sample significantly higher than 2B for practicality reasons related to designing filters. Reconstruction with 2B requires idealized math that is not possible to achieve in practice. I've seen anywhere from 2.2B to 20B used for oversampling.
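A quick numerical illustration of why > (rather than >=) matters (Python with NumPy; B = 10 Hz is an arbitrary choice): a sine at exactly B sampled at exactly 2B lands on its zero crossings, so the samples carry no information at all, while even slight oversampling avoids the degenerate case.

```python
import numpy as np

B = 10                                  # highest frequency in the signal, Hz

def x(t):
    return np.sin(2 * np.pi * B * t)    # a component at exactly B hertz

# Sampling at exactly 2B samples/s: every sample lands on a zero crossing,
# so the samples do not determine the signal at all -- hence the strict ">"
t_exact = np.arange(20) / (2 * B)
print(np.allclose(x(t_exact), 0))       # True: the sine disappears entirely

# Oversampling even slightly (here 2.5B) avoids the degenerate case
t_over = np.arange(25) / (2.5 * B)
print(np.allclose(x(t_over), 0))        # False: the samples now see the sine
```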
 
Thanks a lot, Steve.

channel
"In telecommunications and computer networking, a communication channel, or channel, refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel."
- Wikipedia

The definition is good, but in my humble view it's not all-inclusive. Let me clarify. If there are three radio stations active in a region, then each of them is using a different carrier, or in other words a specific bandwidth. In this case there is no physical transmission medium, and neither is there any multiplexed connection. Each radio station modulates its assigned carrier frequency and transmits it, and the medium is, we can say, vacuum or space (forget the air, and I understand there is a difference between space and vacuum, but let's forget that too). If multiplexing were used by the three stations, a single composite signal would need to be created, which isn't done in this case. Now it's clear that multiplexing is not used, because each station transmits its own individual transmission.

So, don't you think the definition should be stated somewhat as follows?

"In telecommunications and computer networking, a communication channel, or channel, refers either to a physical transmission medium such as a wire, virtual connection through space, or to a logical connection over a multiplexed medium such as a radio channel."

Please help me with this. Thank you.

Regards
PG
 

Attachments

  • cs_fdm.jpg
PG,

Don't forget that multiplexing can come in several different flavors. You can time-division-multiplex and make one signal that gets modulated and put on a channel. You can also frequency division multiplex to create several channels in the medium. Hence, the latter case seems to conform to the Wiki article.

Personally, I don't worry too much about these types of definitions. It seems everyone comes up with different words, and then others can find holes in the chosen wording. I know at the learning stage, it's important to use definitions to get a handle on all of the confusing ideas, but eventually, you just know what something is. Many things are hard to define well, yet we know it when we see it. My recommendation for you in these cases is to read many many definitions from various sources. You will like some definitions better than others, and a few you might call inferior. However, this is a good way to start to understand the general usage of the nomenclature. Keep in mind that sometimes different fields use the same term to mean something different, and sometimes a different word is used to mean the same thing.
 
Once again, thanks.

Don't forget that multiplexing can come in several different flavors. You can time-division-multiplex and make one signal that gets modulated and put on a channel. You can also frequency division multiplex to create several channels in the medium. Hence, the latter case seems to conform to the Wiki article.

Yes, you are correct. But time-division multiplexing is used in the case of digital transmission, right? So, in that example of radio stations it wasn't relevant to mention.

Personally, I don't worry too much about these types of definitions. It seems everyone comes up with different words, and then others can find holes in the chosen wording. I know at the learning stage, it's important to use definitions to get a handle on all of the confusing ideas, but eventually, you just know what something is. Many things are hard to define well, yet we know it when we see it. My recommendation for you in these cases is to read many many definitions from various sources. You will like some definitions better than others, and a few you might call inferior. However, this is a good way to start to understand the general usage of the nomenclature. Keep in mind that sometimes different fields use the same term to mean something different, and sometimes a different word is used to mean the same thing.

I do understand your point. But it's always a good thing to ask someone with good knowledge, because that way you can be sure of your thinking process. Even before asking you, I knew there was some weight in what I was saying, but I still went on to seek your nod. Thank you.

Regards
PG
 
Hi Steve

I'm going back to discuss post #3 and your replies to it. Please help me with the follow-on queries. Thanks a lot.

Q1: Looks right to me. Did you double check their calculation? If theirs is correct too, there may be a numerical issue with accuracy. There's not a big difference in the answers anyway.

They had the answer wrong.

Q2: I seem to remember 56K, but either way it's faster than expected. Actually, those modems didn't always attain the maximum possible speed due to noise being higher than specified much of the time. Surely noise has an effect, but they also did some neat tricks to maximize bandwidth. I can't remember them exactly, but it probably related to modulation tricks and/or pulse shapes and detection methods.

Yes, it was 56K, which stands for 56000 bps. You are right in saying that they used some tricks to achieve higher data rates. My Q6 was incomplete, which I noticed once you had replied. Actually, in Q6 I wanted to ask how they achieved such a high data rate of 1.5 Mb/s on copper phone wire, which has a limited 'analog' bandwidth of almost 3 kHz. But now I understand that to achieve such a high data rate they used some techniques in this case too. These references are quite useful here. They also explain how the baud rate plays an important role in achieving high data rates.

Q3: Higher sample rate only helps so much. There are benefits to going above the Nyquist rate because nothing is ideal, but beyond a certain point, the extra samples don't really help the quality. However, finer resolution for quantization then becomes the way to improve the signal.

So, there is a limit to quality improvement using a higher sampling rate. Is there also a limit in the case of using more quantization levels? What I have noted is that using more quantization levels is suggested to give more improvement in quality than a higher sampling rate, because there comes a point where increasing the sampling rate doesn't help the quality. Please note that a short answer will do, if that's possible.

Q4: Multiplexing just means combining more than one channel or signal. Modulation is a transformation of the signals to a new frequency band. It has nothing to do with wired or wireless. You can do both or either with either approach.

Thank you. I think this diagram explains the difference between modulation and multiplexing well.

First, the send and receive are just combined as one signal. Did you ever notice that you can hear yourself when you talk? People don't usually talk at the same time, but if they do, their signals just add. The human ear is very good at picking out individual voices.

Sometimes I can even hear myself back on a cell phone! This Wikipedia article also talks about echo suppressors. Suppose, in this case, both Carl and Monica are speaking at the same time. Won't their voice signals interfere with each other? How can ears separate individual signals when both persons share the same spectrum? Please help me with this. Thank you.

If you don't modulate, the lower sideband is the negative frequencies that mirror image the positive frequencies. No extra bandwidth is required for the negative frequencies, because (as we talked about) they are the same thing as the positive frequencies.

Let's talk about negative frequencies a little as a side topic. What I say might be somewhat incorrect but if the incorrectness has potential to hamper my understanding in future then kindly guide me. This is my somewhat immature attempt at understanding concept of negative frequencies.

I kind of understand the reason why you call negative frequencies just a mirror of positive frequencies. In other words, even if negative frequencies are ignored in this case, it won't cause any issue. Why? Suppose a motor-and-pulley system is used to lift a mass. The system looks like this, where the force is applied by a motor. It doesn't matter if the motor rotates counterclockwise (+ve cycles) or clockwise (-ve cycles); the object will be raised. In short, negative frequencies or cycles don't matter. Just a crude example.

It is not until you modulate that those negative frequencies slide up into the positive frequency band and add to the bandwidth. So, no need to multiply by 2 or 4 and the 3000 Hz is enough.

We use a modification of the system used to lift an object, but this time we have two motors, motor #1 and motor #2, which work in tandem. Suppose motor #1 has a fixed rotation rate of 500 cycles per second, and motor #2 has a variable rotation rate from 0 to 50 cycles per second with the capability of rotating in both directions, i.e. counterclockwise and clockwise. Moreover, rotation cycles in the clockwise direction are considered negative. Imagine that when both motors are rotating counterclockwise, the gear network makes the belt pulley rotate counterclockwise, and this belt pulley is used to lift an object.

When motor #2 is not rotating, the pulley rotates counterclockwise at rate of 500 cycles per second because it is being driven by only motor #1.

When motor #2 is rotating at rate of 30 cycles per second counterclockwise, the pulley rotates at rate of 530 cycles per second counterclockwise.

When motor #2 starts rotating clockwise at rate 30 cycles per second, the pulley rotates at rate of 470 cycles per second counterclockwise.

In the above example we can say motor #1 is equivalent to the carrier and motor #2 to the modulating signal. And we can see that now negative frequencies or cycles do matter.

I understand that the example above has got many issues but I hope it gets my point across.

Suppose someone asks me what happens if a copper phone wire is subjected to very high frequencies. In other words, why can't a copper phone wire handle higher frequencies? What happens to it? What should be my straightforward and short reply? Kindly help me. Thanks.

Regards
PG


Helpful links:
1: https://books.google.com/books?id=L...ATnzoGoAg&ved=0CFQQ6AEwBw#v=onepage&q&f=false
2: https://books.google.com/books?id=b...h_oHwAg&ved=0CCoQ6AEwADge#v=onepage&q&f=false
3: https://books.google.com/books?id=5...d=0CDsQ6AEwAzgy#v=onepage&q=baud rate&f=false
4: https://en.wikipedia.org/wiki/56_kbit/s_modem
5: **broken link removed**
6: **broken link removed**
7: https://resonanceswavesandfields.blogspot.com/2007/08/phasors.html#basic-phasor-1
8: **broken link removed**
 

Attachments

  • cs_56k_single.jpg
  • cs_carl.jpg
  • cs_rotating_phasor.jpg
So, there is a limit to quality improvement using a higher sampling rate. Is there also a limit in the case of using more quantization levels? What I have noted is that using more quantization levels is suggested to give more improvement in quality than a higher sampling rate, because there comes a point where increasing the sampling rate doesn't help the quality. Please note that a short answer will do, if that's possible.

Yes, of course there is a limit to improvement with higher quantization resolution. Let's use the audio example. A typical 16-bit quantization is very good and will yield good high-fidelity audio reproduction. However, there are (supposedly) people with exceptional audio/music skill/talent/ability who can hear the improvement in going to 24-bit quantization. I doubt anyone can hear the difference if you go higher, though.

You also want to compare the quantization steps with noise levels. Even for digital audio, eventually you have amplifiers and speakers etc. that introduce some noise. Quantizing far below the noise floor is not really going to help, I don't think. Common sense and experimentation can help get a handle on this in a particular application.
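As a rough numerical check of the quantization-resolution point (Python with NumPy; the simple mid-tread quantizer and full-scale test sine are illustrative assumptions), the measured SNR follows the well-known rule of thumb SNR ≈ 6.02·N + 1.76 dB, so each extra bit buys about 6 dB:

```python
import numpy as np

def quantize_snr_db(n_bits, n_samples=100000):
    """Quantize a full-scale sine to n_bits and measure the resulting SNR."""
    t = np.arange(n_samples)
    x = np.sin(2 * np.pi * 0.01 * t)      # full-scale sine in [-1, 1]
    step = 2.0 / 2 ** n_bits              # quantization step size
    xq = np.round(x / step) * step        # simple mid-tread uniform quantizer
    noise = x - xq                        # quantization error
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

# Rule of thumb: SNR ~ 6.02 * N + 1.76 dB for a full-scale sine
for bits in (8, 16):
    print(bits, round(quantize_snr_db(bits), 1))   # roughly 50 dB and 98 dB
```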


Sometimes I can even hear myself back on a cell phone! This Wikipedia article also talks about echo suppressors. Suppose, in this case, both Carl and Monica are speaking at the same time. Won't their voice signals interfere with each other? How can ears separate individual signals when both persons share the same spectrum? Please help me with this. Thank you.
Depends on what you mean by interfere. Interference of similar frequency waves is possible and you would hear beat frequencies in these cases. But this does not typically happen with voice unless two singers deliberately try to sing the same note and one person is slightly off pitch. With talking, there is a broad spectrum and the signals combine without interference.

I don't really know why humans can separate voices so well. Clearly it is a useful ability. Many abilities are hard-coded in the brain, and I assume that this is one of them; we also have lots of experience listening, so we learn to focus. In the real world, we can use our two ears and the time delay to identify the position of the people talking and in this way filter the information better. On the phone, we don't have this aid, but we can still do it fairly well.



Let's talk about negative frequencies a little as a side topic. What I say might be somewhat incorrect but if the incorrectness has potential to hamper my understanding in future ... I understand that the example above has got many issues but I hope it gets my point across.
I'll have to try to understand this description with some careful thought. It sounds interesting, but I don't follow it yet. I will say that if it helps you and you feel more comfortable after thinking in this way, it is a good thing.

Suppose someone asks me what happens if a copper phone wire is subjected to very high frequencies. In other words, why can't a copper phone wire handle higher frequencies? What happens to it? What should be my straightforward and short reply? Kindly help me. Thanks.

It's not shielded, so it picks up noise, and it is not a good waveguide so it has high loss. Also, as frequency goes up, it becomes a good antenna and can radiate energy away. There is also the issue of impedance matching at interfaces and imperfection points. This causes reflections that result in energy loss and interference with the signal.

Copper wire becomes better if you run two wires together, or a wire over a ground plane, because it forms a waveguide that allows lower loss. The line impedance becomes consistent and impedance can be matched at interfaces. There are other effects like bending and perturbations along the length that cause coupling losses.

If you twist the wires, you get a shielding effect, and if you form a coaxial line, you make an even better waveguide with shielding.
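The impedance-matching point can be sketched numerically (the 50 Ω line and 75 Ω load below are hypothetical example values):

```python
def reflection_coefficient(z_load, z_line):
    """Voltage reflection coefficient at an impedance discontinuity."""
    return (z_load - z_line) / (z_load + z_line)

# Matched line: no reflection, all power is delivered to the load
print(reflection_coefficient(50.0, 50.0))      # 0.0

# A 75-ohm load on a 50-ohm line reflects part of the wave back
gamma = reflection_coefficient(75.0, 50.0)
print(round(gamma, 3))                         # 0.2
print(round(gamma ** 2, 3))                    # 0.04 -- 4% of power reflected
```

Every mismatched interface or imperfection along the wire contributes a reflection like this, which is one source of the loss and interference mentioned above.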

That's a quick answer, but quick answers are not always the best answers. There is a whole science behind this and quick answers usually make the most sense to those that already know the details.
 
Thank you very much, Steve.

For this post I have uploaded the images to an external source so if there is a problem you can use the username: imgshack4every1 and password: imgshack4every1. We will try to wind up this discussion now because it's getting us into many different topics which are difficult to cover in a single thread. Thanks.

Yes, of course there is a limit to improvement with higher quantization resolution. Let's use the audio example. A typical 16 bit quantization is very good and will yield a good hi fidelity audio reproduction. However, there are (supposedly) people with exceptional audio/music skill/talent/ability that can hear the improvement in going to 24 bit quantization. I doubt anyone can hear the difference if you go higher though.

So, I conclude from this discussion that the quality depends more on quantization levels than it does on sampling rate; as far as the sampling rate is concerned, one must satisfy the Nyquist criterion. For example, **broken link removed** use a 44.1 kHz sampling rate (number of samples per second), where the highest frequency in an audio signal is taken to be 20 or 22 kHz, and therefore using a sampling rate of 70 kHz wouldn't help in any way. Also note that a CD is a digital storage medium.

Could you please help me with this query about bitrate? This link might be useful here. Thanks.
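For what it's worth, the CD figures above give the raw bitrate directly, assuming the standard 16-bit stereo format:

```python
# CD audio bitrate: samples per second * bits per sample * channels
sample_rate = 44100        # Hz
bits_per_sample = 16
channels = 2               # stereo
bitrate = sample_rate * bits_per_sample * channels
print(bitrate)             # 1411200 bits/s, i.e. about 1.4 Mbit/s
```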

You also want to compare the quantization steps with noise levels. Even for digital audio, eventually you have amplifiers and speakers etc. that introduce some noise. Quantizing far below the noise floor is not really going to help, I don't think. Common sense and experimentation can help get a handle on this in a particular application.

I don't really get the overall point. Perhaps, you are saying that as quantization levels are increased, the noise increases. If possible, kindly elaborate a little. Thanks.


In post #4 you said, "First, the send and receive are just combined as one signal. Did you ever notice that you can hear yourself when you talk? People don't usually talk at the same time, but if they do, their signals just add".

Please don't mind my asking but are you sure that the signals get added up? What you say does make sense but I'm just confirming because one can hear oneself back for several reasons.

The links given in sets #1 and #2 are quite useful up to this point.

Depends on what you mean by interfere. Interference of similar frequency waves is possible and you would hear beat frequencies in these cases. But this does not typically happen with voice unless two singers deliberately try to sing the same note and one person is slightly off pitch. With talking, there is a broad spectrum and the signals combine without interference.

Different types of interference can occur: destructive, constructive, and intermediate, and beat frequencies are just one of the possibilities.

I think we can imagine an experiment here. Suppose we have two telephone sets located some distance apart. Instead of two persons, we have two speakers connected to signal generators set to sine-wave mode placed at the mouthpieces, and two sensitive microphones connected to oscilloscopes placed at the respective earpieces. Suppose the amplitude set on each signal generator is the same. What will happen when both signal generators are in phase? Will the oscilloscopes show twice the amplitude? Will the displayed amplitude be zero when the signal generators are out of phase?
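A minimal numerical version of that thought experiment (Python with NumPy; the 440 Hz tone and time base are arbitrary), treating the line as simply adding the two generator signals:

```python
import numpy as np

fs = 10000                                # sample rate, Hz (arbitrary)
t = np.arange(0, 0.1, 1 / fs)
f = 440                                   # both generators at the same frequency

a = np.sin(2 * np.pi * f * t)             # generator 1
b_inphase = np.sin(2 * np.pi * f * t)             # generator 2, in phase
b_outphase = np.sin(2 * np.pi * f * t + np.pi)    # generator 2, 180 deg out

# In phase: the line just adds the signals, so the peak amplitude doubles
peak = np.max(a + b_inphase)
print(round(peak, 2))                     # 2.0 -- constructive, amplitudes add

# Out of phase: equal and opposite, the sum cancels essentially to zero
cancel = np.max(np.abs(a + b_outphase))
print(cancel < 1e-9)                      # True -- destructive, complete cancel
```

So under the simple superposition model, yes: in phase the scopes would show twice the amplitude, and out of phase they would show (ideally) zero.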

For the discussion above links in set #3 can be used.

I don't really know why humans can separate voices so well. Clearly it is a useful ability. Many abilities are hard-coded in the brain, and I assume that this is one of them; we also have lots of experience listening, so we learn to focus. In the real world, we can use our two ears and the time delay to identify the position of the people talking and in this way filter the information better. On the phone, we don't have this aid, but we can still do it fairly well.

I think the simple reason is that a **broken link removed** can do something similar to Fourier analysis, and further, it can synthesize related components into one signal.

At this point links given in set #4 are useful.

I'll have to try to understand this description with some careful thought. It sounds interesting, but I don't follow it yet. I will say that if it helps you and you feel more comfortable after thinking in this way, it is a good thing.

Yes, it does help somewhat. When you have time, please try to give it a proper look.

It's not shielded, so it picks up noise, and it is not a good waveguide so it has high loss. Also, as frequency goes up, it becomes a good antenna and can radiate energy away. There is also the issue of impedance matching at interfaces and imperfection points. This causes reflections that result in energy loss and interference with the signal.

You have introduced many important terms (in blue), but I won't ask questions relating to them here because they are separate topics in themselves. For further reference, one can use set #5 in the helpful links below. For the queries below, short answers will do; I understand that providing short and straightforward answers can be a tough job in itself, so if a short answer is not possible, you can skip the query(ies) and we can discuss them some other time separately.

I don't see why you consider a copper wire a waveguide. An EM waveguide is mostly a hollow structure and extends over short distances. An optical fiber is an optical waveguide (not hollow). So, could you please let me know the reason for using the term waveguide for a copper phone wire? Perhaps you just used the term loosely.

Why doesn't a copper wire act like a good antenna at low frequencies?

A twisted pair cable and coaxial cable provide good shielding and they don't become good antennas at high frequencies but I think they also suffer from issues of impedance matching and imperfection points, don't they? In my view, line impedance and imperfection points are dependent upon the consistency and uniformity of the alloys used to manufacture these materials.

Copper wire becomes better if you run two wires together, or a wire over a ground plane, because it forms a waveguide that allows lower loss. The line impedance becomes consistent and impedance can be matched at interfaces. There are other effects like bending and perturbations along the length that cause coupling losses.

I think you have this arrangement in mind when you say "run two wires together".

What do you really mean by "ground plane"?

I don't see what bending and perturbations along the length have to do with coupling losses, because coupling loss, also known as connection loss, is the loss that occurs when energy is transferred from one circuit, circuit element, or medium to another. Perhaps what happens is that bending and perturbations along the length cause the signal to leak out into the air or surrounding space in significant quantity.

Thanks a lot for the help and your time.

Regards
PG


Helpful links:
set1:
http://www.swri.org/10light/cd.htm
http://electronics.howstuffworks.com/cd.htm/printable
http://electronics.howstuffworks.com/analog-digital.htm/printable
http://en.wikipedia.org/wiki/Compact_disc

set2:
http://electronics.howstuffworks.com/telephone.htm/printable
http://en.wikipedia.org/wiki/Duplex_(telecommunications)#Echo_cancellation

set3:
http://www.school-for-champions.com/science/sound_beat_frequencies.htm
**broken link removed**
**broken link removed**

set4:
http://en.wikipedia.org/wiki/Hearing#Mathematics
**broken link removed**
http://hyperphysics.phy-astr.gsu.edu/hbase/sound/timdem.html

set5:
http://en.wikipedia.org/wiki/Signal_reflection
http://en.wikipedia.org/wiki/Newton's_cradle
http://en.wikipedia.org/wiki/Time_domain_reflectometer
http://en.wikipedia.org/wiki/Voltage_standing_wave_ratio
http://en.wikipedia.org/wiki/Antenna_tuner
http://en.wikipedia.org/wiki/SWR_meter
http://en.wikipedia.org/wiki/Impedance_matching
http://en.wikipedia.org/wiki/Coupling_(electronics)
http://en.wikipedia.org/wiki/Coupling_loss
http://en.wikipedia.org/wiki/Waveguide_(electromagnetism)
http://en.wikipedia.org/wiki/Twisted_pair
 
So, I conclude from this discussion that the quality depends more on quantization levels than it does on sampling rate;
I don't like to say one is more than the other when two things are important. It's like saying the heart is more important than the liver for a person to live. Both are important for living, and both must work sufficiently well for good quality of life. Likewise, you need sufficient bitrate and sufficient quantization resolution. I think you understand the importance of both.
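To put some illustrative numbers on "both matter": the figures below use standard CD-audio parameters (not numbers from this thread), along with the well-known ideal quantization SNR formula for a full-scale sine wave.

```python
# Rough sketch: both sample rate and quantization depth feed into quality.
# CD-audio parameters are used purely as an example.

fs = 44100        # samples per second
bits = 16         # quantization bits per sample
channels = 2      # stereo

bitrate = fs * bits * channels   # uncompressed data rate, bits/s
snr_db = 6.02 * bits + 1.76      # ideal quantization SNR, full-scale sine

print(bitrate)             # 1411200
print(round(snr_db, 2))    # 98.08
```

Halving either the sample rate or the bit depth halves the data rate, but the two cuts degrade the signal in very different ways (lost bandwidth vs. added quantization noise), which is why neither can simply substitute for the other.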

Could you please help me with this query about bitrate?
The MPEG format is a compressed format, so it's not possible to extract quantization just from the bitrate.

I don't really get the overall point. Perhaps, you are saying that as quantization levels are increased, the noise increases. If possible, kindly elaborate a little. Thanks.
Yes and no. You are correct that quantization errors can be viewed as noise. But, my point was that you need to compare this quantization noise to other noise sources in your system. Thermal noise, shot noise and 1/f noise are examples of noise in a system. If these noise sources are creating variations much larger than the quantization resolution, it will be hard to hear (or see or measure) the quantization noise. That's all. Just make a common sense comparison. Why improve quantization resolution from 100 μV down to 10 μV, if you have a system with 100 mV of thermal noise?

In post #4 you said, "First, the send and receive are just combined as one signal. Did you ever notice that you can hear yourself when you talk? People don't usually talk at the same time, but if they do, their signals just add." Please don't mind my asking, but are you sure that the signals get added up? What you say makes sense, but I'm just confirming because one can hear oneself back for several reasons.
You can hear yourself back for more than one reason, but clearly I'm talking about when you hear your voice from the microphone go onto the line and hear it back through the earpiece. Did you ever have a microphone on your side break? You hear the other person, but something doesn't sound right because your voice does not come through the speaker in your ear. Obviously, you hear your voice through the air and through the bones of your skull, but still the phone sounds not quite right. Then you hear the other person say, "hello? ... hello? ... are you there? .... ".

So, am I sure the signals add? That's kind of a strange question because I think we "assume" that they add. Based on long history and experience from many people over many years, we find that the principle of superposition works well in many cases. When people speak in the air, all the voices add up according to superposition. The separate acoustic waves seem to add nicely. On a telephone line, the two microphones are connected and drive the line, and the voltages seem to add nicely. Is it perfect addition? No, but it's a good assumption.

Different types of interference can occur: destructive, constructive, and intermediate, and beat frequencies are just one of the possibilities. I think we can imagine an experiment here. Suppose we have two telephone sets located some distance apart. Instead of two persons, we have two speakers, connected to signal generators set to sine-wave mode, placed at the mouthpieces, and two sensitive microphones, connected to oscilloscopes, placed at the respective earpieces. Suppose the amplitude set on each signal generator is the same. What will happen when both signal generators are in phase? Will the oscilloscopes show twice the amplitude? Will the displayed amplitude be zero when the signal generators are out of phase?
Yes of course. Signals can cancel or reinforce. You can also make two laser beams cancel out or reinforce with phase shifting. Still, two light bulbs don't usually create darkness.
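The thought experiment above is easy to check numerically. This is a small Python sketch (unit amplitude assumed) that adds two equal sine waves in phase and 180° out of phase:

```python
import math

# Numerical version of the two-generator experiment: two equal-amplitude
# sine waves added in phase (constructive) and out of phase (destructive).

N = 1000
xs = [2 * math.pi * n / N for n in range(N)]

in_phase  = [math.sin(x) + math.sin(x) for x in xs]            # same phase
out_phase = [math.sin(x) + math.sin(x + math.pi) for x in xs]  # 180 degrees apart

peak_in  = max(abs(v) for v in in_phase)
peak_out = max(abs(v) for v in out_phase)

print(round(peak_in, 3))   # 2.0  : twice the amplitude
print(peak_out < 1e-12)    # True : essentially zero
```

So, by superposition, the in-phase case doubles the amplitude and the out-of-phase case cancels, exactly as asked.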

I think the simple reason is that a **broken link removed** can do something similar to Fourier analysis and, further, can synthesize the related components into one signal.
Your guess is as good or better than mine. I really don't know too much about this. But it seems reasonable that the human brain might very well be doing many approximate calculations that emulate real mathematical calculations. When we catch a ball, or throw one, we are mentally calculating trajectories as if our brains know Newton's laws and calculus. But the details of what is actually being calculated are not something I can talk about with confidence.

I don't see why you consider a copper wire a waveguide. An EM waveguide is mostly a hollow structure and extends over short distances. An optical fiber is an optical waveguide (not hollow). So, could you please let me know the reason for using the term waveguide for a copper phone wire? Perhaps you just used the term loosely.
First, I don't consider a copper wire to be a waveguide. I consider any transmission line to be one example of a waveguide. In special cases, a wire might make a reasonable transmission line, but it would need something else (a ground plane for example) to make it a transmission line. Two wires used in a highly controlled geometry makes a better transmission line.

There are people who don't like the terminology "waveguide" for transmission lines. So, if it's just a terminology issue, then substitute "transmission line" for "waveguide" in whatever I say. I consider them to be basically the same thing because I have done extensive calculations on RF, microwave and optical waveguides of various types. They all involve exactly the same physics (Maxwell's equations) and mathematics.

Why doesn't a copper wire act like a good antenna at low frequencies?
Actually, didn't you already do calculations related to this before? At low frequency, the wire would need to be very very long, but that's ok. The same physics is at work, so it can act like a good antenna, with the right geometry.
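A back-of-envelope sketch of why the wire "would need to be very very long": a quarter-wave antenna for a 3 kHz audio-band signal (frequency chosen here only for illustration) comes out at kilometres.

```python
# Back-of-envelope sketch: quarter-wave antenna length at an audio-band
# frequency. The 3 kHz frequency is assumed for illustration.

c = 3.0e8        # speed of light, m/s (rounded)
f = 3000.0       # 3 kHz
wavelength = c / f
quarter_wave = wavelength / 4

print(quarter_wave)  # 25000.0 metres, i.e. 25 km
```

An ordinary phone wire is a tiny fraction of that length, so at low frequencies it radiates very inefficiently; scale the frequency up and the same wire becomes a respectable fraction of a wavelength.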

A twisted-pair cable and a coaxial cable provide good shielding and don't become good antennas at high frequencies, but I think they still suffer from impedance-matching issues and imperfection points, don't they? In my view, the line impedance and imperfection points depend on the consistency and uniformity of the alloys used to manufacture these cables.
Yes, absolutely correct. But, I would tend to say that alloys are not the real issue. Consistency of geometry is more likely to be the issue nowadays.

I think you have this arrangement in mind when you say "run two wires together".
Yes, I do. I was actually thinking of my old 300 Ohm TV twin cable that was popular for attaching to antennas before coax cable became more popular. But, that's exactly what it used to look like.
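For what it's worth, the characteristic impedance of a two-wire line follows directly from its geometry, which is why "consistency of geometry" matters so much. The sketch below uses the standard air-dielectric formula Z0 = (η0/π)·acosh(D/2a); the dimensions are assumed for illustration, and real 300 Ω twin-lead has a plastic dielectric that lowers the impedance somewhat.

```python
import math

# Sketch: characteristic impedance of a two-wire (twin-lead) line in air,
# Z0 = (eta0/pi) * acosh(D / (2a)). Dimensions are assumed for illustration.

eta0 = 376.73    # impedance of free space, ohms
a = 0.5e-3       # wire radius, m (assumed)
D = 6.13e-3      # centre-to-centre spacing, m (assumed)

z0 = (eta0 / math.pi) * math.acosh(D / (2 * a))
print(round(z0, 1))   # roughly 300 ohms
```

Note that Z0 depends only on the ratio D/2a, so any wobble in spacing or wire diameter along the run shows up directly as an impedance perturbation, i.e., an "imperfection point".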

What do you really mean by "ground plane"?
Any large-area, flat conducting surface. It could be the earth ground itself, water, a large copper area on a circuit board, a wide area of chassis, etc.

I don't see what bending and perturbations along the length have to do with coupling losses, because coupling loss, also known as connection loss, is the loss that occurs when energy is transferred from one circuit, circuit element, or medium to another. Perhaps what happens is that bending and perturbations along the length cause the signal to leak out into the air or surrounding space in significant quantity.
That last sentence is correct, but that is an example of coupling losses. Basically, guided modes become coupled to backward guided modes for reflections, or to forward radiating modes for scattering loss. Don't worry about this terminology as it would not make much sense until you get into detailed design/analysis of waveguides/transmission lines. Then, you would get into mode-coupling theory.
 
Hi Steve

This is about post #5. I believe you agreed there that the values for x[0], x[1], x[2], etc. are going to be zero if the sampling rate is exactly twice the frequency, 2f, of the sinusoidal signal. But I was just experimenting with Matlab and found that it produced nonzero values for the samples at the 2f sampling rate. What is going on?

Code:
t = 0:0.01:1;        % fine time grid for the "continuous" reference signal
y = 4*sin(4*pi*t);   % 2 Hz sinusoid, i.e. f = 2 Hz, so 2f = 4 Hz
T = 0:0.25:1;        % sample instants at exactly fs = 4 Hz = 2f
Y = 4*sin(4*pi*T);   % mathematically these samples should all be zero
subplot(211);
plot(t, y);          % plot against time, not against sample index
subplot(212);
stem(T, Y);

Regards
PG
 

Attachments

  • cs_nyquist.jpg
PG,

Look at the order of magnitude of those numbers: powers of ten to the minus 15. With double-precision numbers, you have about 16 decimal places of precision, so you can't expect to get exactly zero from complicated calculations. This is a limitation of numerical methods. Hence, you need to learn to recognize the strange results you can get with numerical calculations. Sometimes errors will explode into ridiculous answers. Other times, like now, you need to interpret the answer. When one number is a factor of 1e-16 smaller than another, it is effectively zero for most practical problems. You will usually not know whether it should be exactly zero mathematically, although in cases like this one you can know that by other means.
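The same effect can be seen in one line of Python (shown here instead of Matlab, but double-precision arithmetic behaves identically): mathematically sin(π) is exactly zero, yet the stored value of π is rounded, so the result is a tiny residue rather than zero.

```python
import math

# Mathematically sin(pi) == 0, but in double precision pi itself is
# rounded, so the result is a tiny nonzero residue.

residue = math.sin(math.pi)
print(residue)               # on the order of 1e-16, not 0.0
print(abs(residue) < 1e-12)  # True: effectively zero
```

This is exactly the situation with the Matlab samples: values around 1e-15 next to signal amplitudes of order 1 should be read as zero.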
 
Hi

I'm going back to post #11. Although this is not really an important point, there could be a possibility that I'm missing a subtle significant point. Please bear with me. Thanks.

I'm assuming you have re-read the post.

The definition was:

channel
"In telecommunications and computer networking, a communication channel, or channel, refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel."
.

Previously I was saying that the definition is not all-inclusive, but now it seems it's not really correct, in my humble opinion. What makes me say this is the phrase "such as a radio channel". A radio channel does not use multiplexing, as I pointed out previously. Each radio channel transmits its own signal separately from the other channels, which means there is no composite signal. But perhaps, in space or vacuum, the waves transmitted by radio channels interfere with each other 'naturally' to generate a composite signal. I'm stressing the 'composite signal' part because FDM has the generation of a composite signal as one important stage. If that's the case (meaning 'natural' interference gives rise to a composite signal), then the given definition is correct and all-inclusive. But if this does not happen, then how would you re-word the definition? I know I'm just reading too much into that definition. Sorry.
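The "composite signal" idea can be sketched numerically: two carriers at different frequencies are summed into one composite waveform, and a single DFT bin at each carrier frequency recovers each component. All parameters below (sample rate, carrier frequencies, amplitudes) are assumed purely for illustration.

```python
import math

# FDM sketch: two carriers summed into one composite signal, then each
# recovered by evaluating a single DFT bin. All parameters are assumed.

fs = 8000                  # sample rate, Hz
N = 800                    # 0.1 s of samples (bin spacing = fs/N = 10 Hz)
f1, a1 = 1000, 1.0         # "channel" 1: 1 kHz carrier, amplitude 1.0
f2, a2 = 3000, 0.5         # "channel" 2: 3 kHz carrier, amplitude 0.5

composite = [a1 * math.sin(2 * math.pi * f1 * n / fs)
             + a2 * math.sin(2 * math.pi * f2 * n / fs)
             for n in range(N)]

def amplitude_at(freq):
    """Single-bin DFT: amplitude of the component at `freq` Hz."""
    k = freq * N // fs
    acc = sum(composite[n] * complex(math.cos(2 * math.pi * k * n / N),
                                     -math.sin(2 * math.pi * k * n / N))
              for n in range(N))
    return 2 * abs(acc) / N

print(round(amplitude_at(1000), 3))  # 1.0
print(round(amplitude_at(3000), 3))  # 0.5
print(round(amplitude_at(2000), 3))  # 0.0 (no channel at 2 kHz)
```

Whether the summation happens in an FDM multiplexer or "naturally" as waves superpose in a shared medium, the receiving side faces the same composite signal and separates the channels by frequency in the same way.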

Regards
PG
 