
Scrambling operation - B.P.Lathi

Status
Not open for further replies.

elexhobby

New Member
I have been referring to Modern Digital & Analog Communication Systems by B.P. Lathi.
I was reading Chapter 7, Principles of Digital Data Transmission.
In Sec. 7.4 he discusses scrambling, where he says that it is used to prevent unauthorized access to the data and to remove long strings of 0s and 1s.
Can somebody explain intuitively, or prove mathematically, why this is always guaranteed?
The explanation in the book is followed by a numerical example, but I can't see why long strings of 1s and 0s will always be removed in general.
 
I'm using the same book for my communications class. I have some PDFs of the solutions manual that I could send you if that would be of any help :)
 
A long string of 1s or 0s causes the DC level of the transmission line to wander off. The phenomenon is called "DC wander".

Due to the capacitance of the line, the DC level is removed and the signal may start becoming bipolar instead of remaining unipolar.

So the best solution is to use bipolar signals during transmission. The PCM code to be transmitted is encoded in different ways before it is put on the line, viz. Manchester encoding, Miller encoding, bipolar return-to-zero, bipolar non-return-to-zero, unipolar NRZ, unipolar RZ, etc.

These prevent the DC level of the signal from wandering and help with clock recovery at the receiver.

Refer to the book "Advanced Electronic Communications Systems" or "Electronic Communications Systems: Fundamentals Through Advanced", both by Wayne Tomasi, or any other book on the topic.
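As a quick illustration of how a line code fixes both problems at once, here is a minimal sketch of Manchester encoding (the 0 → high-low, 1 → low-high mapping is the IEEE 802.3 convention; some books use the opposite polarity):

```python
def manchester(bits):
    """Manchester encode: each data bit becomes a half-bit pair with a
    guaranteed mid-bit transition (0 -> high-low, 1 -> low-high)."""
    table = {0: (1, 0), 1: (0, 1)}
    out = []
    for b in bits:
        out.extend(table[b])
    return out

encoded = manchester([1, 1, 1, 1, 0, 0, 0, 0])
# Even for a long run of identical bits, the encoded stream alternates:
# its average (DC) level stays at 1/2, and every bit cell contains an
# edge the receiver can use for clock recovery.
```

Exactly half the half-bits are high regardless of the data, which is why the DC level cannot wander, at the cost of doubling the signalling rate.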

For scrambling for data security, the data is almost always first compressed to remove patterns and then encrypted. Encryption has a certain element of randomness: in public-key/private-key schemes, the data is typically encrypted with a randomly generated symmetric key, so the ciphertext of the same data encrypted twice is different.

Does this solve your problem?
 
Does this solve your problem?
I doubt it.
Not if I understand the original post correctly.
When transmitting data for digital satellite TV, for example (where they don't use Manchester or bi-phase encoding), the explanatory diagrams always show the data bits being "scrambled" to prevent the occurrence of a long string of zeros or ones. I believe the data stream is sometimes shown being XORed with some pseudo-random bit stream.
The question is, how can one be certain that the actual data and the pseudo-random bit stream will not XOR together to create a long sequence of zeros or ones?
It is a question that I have wondered about in the past. Surely any scrambling scheme runs the risk of producing a long sequence of zeros or ones, unless extra bits are added.
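The worry can be made concrete with a toy sketch (the 32-bit length and the fixed seed are arbitrary choices for the demo): if the payload happens to match the scrambling sequence bit for bit, the XOR output really is all zeros.

```python
import random

random.seed(1)
prbs = [random.randint(0, 1) for _ in range(32)]  # stand-in for the scrambler's bit stream
adversarial_data = prbs[:]                        # payload that matches the sequence exactly
scrambled = [d ^ p for d, p in zip(adversarial_data, prbs)]
# scrambled is all zeros: the pathological long run the thread asks about
```

So no XOR scrambler can *guarantee* the absence of long runs; it only makes them astronomically unlikely for data that is uncorrelated with the pseudo-random sequence.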
 
JohnBrown said:
The question is, how can one be certain that the actual data and the pseudo random bit stream will not XOR together to create a long sequence of zeros or ones?
It is a question that I have wondered about in the past. Surely any scrambling scheme runs the risk of producing a long sequence of zero or ones, unless extra bits are added.

I can't say I've ever looked into how digital TV does it; there's not much point from a servicing point of view. But CD players have the same problem (if not more so?). They use a system called Cross-Interleaved Reed-Solomon Code (CIRC); you might take a look at http://www.csie.nctu.edu.tw/~cmliu/audio/error.pdf for details.
 
I was speaking of the DS-1 in the North American Digital Hierarchy.

The string of zeros is complemented to get a series of ones, and the ones are returned to zero, so there is no DC wander.

What if a series of ones occurs? That possibility is rare. It is difficult to keep a channel high for a long time, unless someone is screaming into the telephone line without a break. This is from the point of view of telephone systems. Also, the telephone-system PCM DS-0 signals are multiplexed, so the adjacent channels are interleaved, which prevents really long series of ones or zeros. Those could only occur if all the customers were either quiet or all screaming at the same time...
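For reference, the bipolar coding used on T-carrier spans can be sketched like this. This is plain AMI (alternate mark inversion) as a generic illustration, not the exact DS-1 line format with its zero-substitution rules:

```python
def ami(bits):
    """Alternate Mark Inversion: zeros stay at line level 0, while
    successive ones alternate between +1 and -1 pulses, so the marks'
    contributions to the DC level cancel in pairs."""
    level, out = 1, []
    for b in bits:
        if b == 0:
            out.append(0)
        else:
            out.append(level)   # send the mark at the current polarity
            level = -level      # flip polarity for the next mark
    return out

line = ami([1, 0, 1, 1, 0, 1])  # alternating +1/-1 marks, zero net DC
```

A long run of ones is harmless under AMI (the pulses alternate), which is why the remaining problem on such lines is long runs of zeros, handled by substitution codes.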

JohnBrown, are you sure they are not using the words "scramble" and "line encoding" synonymously?
 
CD players use a system whereby each 8 bits are translated to a 14-bit code; the 14-bit codes are picked such that they have a similar number of ones and zeros, which makes the data slicing easy.
Please note that I made up the 8 and 14 figures in the above sentence because I can't be bothered to check them; they may be right or they may not, but the principle is right.
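The 8 and 14 figures are in fact right: CD audio uses eight-to-fourteen modulation (EFM). The codewords are chosen for a run-length constraint (at least 2 and at most 10 zeros between successive ones) rather than strictly for equal numbers of ones and zeros, with separate merging bits handling DC balance. A brute-force count shows exactly 267 of the 16384 fourteen-bit words satisfy the constraint, just enough to assign one to each of the 256 byte values:

```python
def efm_legal(word):
    """True if a 14-bit word meets the EFM run-length rule: every pair of
    consecutive ones is separated by 2..10 zeros, and no zero run anywhere
    in the word (including the ends) exceeds 10."""
    bits = [(word >> i) & 1 for i in range(13, -1, -1)]
    ones = [i for i, b in enumerate(bits) if b]
    if any(b - a - 1 < 2 for a, b in zip(ones, ones[1:])):
        return False                       # ones too close together
    run = longest = 0
    for b in bits:
        run = run + 1 if b == 0 else 0     # track current zero run
        longest = max(longest, run)
    return longest <= 10                   # no over-long zero run

count = sum(efm_legal(w) for w in range(1 << 14))  # 267 legal codewords
```

Fourteen is thus the smallest width that yields at least 256 usable codewords under this constraint, which is presumably why it was chosen.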
 
The question is, how can one be certain that the actual data and the pseudo random bit stream will not XOR together to create a long sequence of zeros or ones?
It is a question that I have wondered about in the past. Surely any scrambling scheme runs the risk of producing a long sequence of zero or ones, unless extra bits are added.

JohnBrown, you are right. This is what my actual doubt was.
How do you explain that XORing will not create a long string of zeros or ones?
 
I think I know.
They feed the data bits into a shift register which has feedback taps, a bit like a linear-feedback shift register pseudo-random-number generator. The shift register is "re-primed" with a fixed pattern after so many packets have been encoded. I think the fact that there is a primed shift register means that there is, in effect, extra data added.
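That mechanism can be sketched as an additive scrambler: a free-running LFSR generates a pseudo-random sequence that is XORed in at the transmitter and XORed out again at the receiver. The taps and seed below follow, as I understand the spec, the DVB energy-dispersal scrambler (generator 1 + x^14 + x^15, register reloaded with a fixed pattern periodically); the register orientation is one conventional choice:

```python
def prbs(nbits, seed=0b100101010000000):
    """Pseudo-random bit sequence from a 15-stage LFSR with generator
    1 + x^14 + x^15. Loading the fixed seed is the "re-priming" step,
    so transmitter and receiver stay in step at every frame start."""
    reg = seed                                  # 15-bit shift register
    out = []
    for _ in range(nbits):
        fb = ((reg >> 14) ^ (reg >> 13)) & 1    # XOR of the last two stages
        out.append(fb)
        reg = ((reg << 1) | fb) & 0x7FFF        # shift left, feed back
    return out

data = [0] * 64                                 # worst case: a long run of zeros
scrambled = [d ^ p for d, p in zip(data, prbs(64))]
descrambled = [s ^ p for s, p in zip(scrambled, prbs(64))]  # same XOR undoes it
```

Because the receiver runs the same register from the same seed, XORing twice recovers the data exactly. The pathological input is now the PRBS itself rather than plain zeros or ones, which real payloads essentially never match, so long runs become vanishingly unlikely rather than impossible.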
 
Though I haven't read the original work cited, I imagine that it is because nearly every form of standard BCD encoding is very wasteful. "About 1/6 of the available memory is wasted, even in packed BCD," to quote the page on BCD at Wikipedia (http://en.wikipedia.org/wiki/Binary_coded_decimal).

Digital communications scrambling typically uses some form of data compression for increasing data density, such as the CCITT standards (**broken link removed**), or better BCD algorithms such as Chen-Ho encoding (http://en.wikipedia.org/wiki/Chen-Ho_encoding) or the proposed densely packed decimal encoding (http://www2.hursley.ibm.com/decimal/DPDecimal.html), or even both. Nearly all of these techniques are based upon the principle that BCD requires extra, unneeded 1s and 0s. Once scrambled, the data takes up less bandwidth.

An extensive write up on data compression used in stored or communicated data can be found here -- http://www.ics.uci.edu/~dan/pubs/DataCompression.html
 
JohnBrown said:
???? Have I missed something ????
What has BCD got to do with anything?

Like I said, I haven't read the book. However, it isn't entirely unusual for digital data transmission and storage, which the original topic is about, to use BCD or something similar. Compression is relevant whether BCD encoding is used or not.
 