It can be used as a stand-alone receiver, or as a basis for a more fully functional UART. No claim is made that this is a professional design, and no guarantee is made that it will satisfy any professional requirements. It should, however, provide very good basic communications between the PC and a development board, as its functionality is virtually identical to that of more advanced UART receive channels.
That should be simple enough for anyone.
Why don't you take your static gun and point it to that place the sun doesn't shine and see how many errors you get?
Every real UART I’ve looked at uses the same sampling as my simple receiver does.
Oh, but you are sooooo much more knowledgeable than everyone else! Don't expect anyone to fall on their knees in front of you, chanting, "How great UART!" LOL!
By: Ubergeek63
You might try looking at the TI UART page and explain to me what the "Baud Rate (max) at Vcc = 2.5V and with 16X Sampling (Mbps)" column means, in the context of your never having seen a UART that samples more than yours does. The fact is that 19 out of the 26 UARTs are 16x oversampled, and yet the only clue in the data sheet that they are is NOT in the pictures but in the 16x clock rate... other than that, the fact is not mentioned; it is assumed you know it.
It is, in fact, the only reason RS232 can go at a reasonable speed. The plain fact is that MOST of the UARTs you have seen are most likely 16x oversampling and you just did not know it, since they do not show it in the cartoons.
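As a quick sanity check on what that datasheet column implies (my own back-of-the-envelope arithmetic, not numbers from any TI sheet): with 16x oversampling, the receiver's sampling clock runs at 16 times the baud rate, so the maximum baud rate is just the available sampling clock divided by 16.

```python
def max_baud(sampling_clock_hz: float, oversample: int = 16) -> float:
    """Highest baud rate a receiver can support at a given sampling clock.

    With 16x oversampling, each bit period must contain 16 sampling-clock
    ticks, so max baud = clock / 16. Illustrative only.
    """
    return sampling_clock_hz / oversample

# The classic 1.8432 MHz UART clock with 16x oversampling gives 115200 baud:
print(max_baud(1_843_200))  # 115200.0
```

This is also why the "16X Sampling" column doubles as a statement of the part's clocking: quoting a max baud rate at 16x sampling pins down the internal clock even when the block diagram never shows the oversampling.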
I can easily explain that to you. But you haven't said what you think it means yet. I'll bet it doesn't mean what you think it does.
To recap, this module is a simple receiver for connecting to a computer’s RS-232 port and communicating with an FPGA development board that includes an RS-232 comm port. It can be used as a stand-alone receiver, or as a basis for a more fully functional UART. No claim is made that this is a professional design, and no guarantee is made that it will satisfy any professional requirements. It should, however, provide very good basic communications between the PC and a development board, as its functionality is virtually identical to that of more advanced UART receive channels.
sr_rec_prepare:begin
case (lcr[/*`UART_LC_BITS*/1:0]) // number of bits in a word
2'b00 : rbit_counter <= #1 3'b100;
2'b01 : rbit_counter <= #1 3'b101;
2'b10 : rbit_counter <= #1 3'b110;
2'b11 : rbit_counter <= #1 3'b111;
endcase
if (rcounter16_eq_0)
begin
rstate <= #1 sr_rec_bit;
rcounter16 <= #1 4'b1110;
rshift <= #1 0;
end
else
begin // stay in this state and keep counting down
rstate <= #1 sr_rec_prepare;
rcounter16 <= #1 rcounter16_minus_1;
end
end
sr_rec_bit : begin
if (rcounter16_eq_0)
rstate <= #1 sr_end_bit;
if (rcounter16_eq_7) // read the bit
case (lcr[/*`UART_LC_BITS*/1:0]) // number of bits in a word
2'b00 : rshift[4:0] <= #1 {srx_pad_i, rshift[4:1]};
2'b01 : rshift[5:0] <= #1 {srx_pad_i, rshift[5:1]};
2'b10 : rshift[6:0] <= #1 {srx_pad_i, rshift[6:1]};
2'b11 : rshift[7:0] <= #1 {srx_pad_i, rshift[7:1]};
endcase
rcounter16 <= #1 rcounter16_minus_1;
end
sr_end_bit : begin
if (rbit_counter==3'b0) // no more bits in word
if (lcr[`UART_LC_PE]) // choose state based on parity
rstate <= #1 sr_rec_parity;
else
begin
rstate <= #1 sr_rec_stop;
rparity_error <= #1 1'b0; // no parity - no error :)
end
else // else we have more bits to read
begin
rstate <= #1 sr_rec_bit;
rbit_counter <= #1 rbit_counter - 1'b1;
end
rcounter16 <= #1 4'b1110;
end
Actually you are boring me, and I was getting tired of looking for something that Big Business (BB) does not want you to know... particularly when it is cutting into my time with my wife, my games, and my job...

Now, show me where 16 samples are being collected and used in any error-reduction scheme. If you can't, then your whole tirade has been nothing more than an attempt to rip up a perfect design. Also, I'm still waiting for you to explain to me how a UART cannot satisfy a standard that you've claimed does not cover the design of the UART?
Oh, and one more thing: you're the one who started calling people 'fools', so don't cry when someone ruffles your feathers.
1. At the falling edge of the start bit, an internal timer starts counting at the 16X clock. At the 8th 16X clock, approximately the middle of the start bit, the logic level is sampled. If a logic 0 is detected, the start bit is validated.
2. The validation logic continues to detect the remaining data bits and stop bit to ensure the correct framing. If an error is detected, it is reported in LSR[4:2].
3. The data frame is then loaded into the RBR and the Receive FIFO pointer is incremented. The error tags are updated to reflect the status of the character data in RBR. The data ready bit (LSR[0]) is set as soon as a character is transferred from the shift register to the Receive FIFO. It is reset when the Receive FIFO is empty.
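To make that three-step sequence concrete, here's a quick behavioral model in Python (not RTL). The 8N1 framing, the mid-bit sample indices, and the `encode_byte` helper are my own assumptions for illustration, not details from the datasheet:

```python
def receive_byte(samples):
    """Behavioral model of the 16x receiver described above (8N1 framing).

    `samples` is a list of 0/1 line samples taken at 16x the baud rate,
    starting at the falling edge of the start bit. Returns the received
    byte, or None if the start bit fails validation or the stop bit is
    missing (a framing error).
    """
    if samples[8] != 0:               # step 1: mid-start-bit must still be 0
        return None
    byte = 0
    for bit in range(8):              # step 2: sample each data bit mid-bit
        center = 16 * (bit + 1) + 8
        byte |= samples[center] << bit  # LSB arrives first
    if samples[16 * 9 + 8] != 1:      # stop bit must read 1 (framing check)
        return None
    return byte

def encode_byte(byte):
    """Produce a clean 16x-oversampled 8N1 frame for testing the model."""
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    return [s for bit in bits for s in [bit] * 16]

print(receive_byte(encode_byte(0x55)))  # 85
```

Note that even this datasheet-style receiver takes only one sample per bit at the 16x clock's mid-bit count; the 16x clock's job here is to locate the center of each bit, not to collect 16 votes.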
What? That is 16 samples PER BIT, not per BYTE... and were it per byte, you would not be getting everything, as there are a minimum of 10 bits per full byte (1 START, 8 DATA, 1 STOP).

So, you're not telling me anything I didn't already know. Your claim is that in order for a UART to be a real UART, it must take multiple samples. Now you've found one that does, but that doesn't mean it's the only way for a real UART to be made. It also said the data may be sampled by 7 or 3 of the 16x clocks. But you said above that all 16 samples are used, and that that's how real UARTs work, and anything else isn't reliable. Seven samples would seem to me to be the maximum that could reliably be captured from a 16x clock; otherwise, the samples would be taken from unreliable portions of the data period.
I never said you did not get it working; I said it was not robust and would be unreliable in a noisy environment.

Sorry, you've still failed to prove any claim you made about my little design. Finding one UART that uses that sampling technique does not mean that's what a real UART must use. You chose to ignore the real UART that I posted above, and that's OK. But my point is proved: it's still a real UART if it uses the sampling technique that I've used, and it's a reasonably reliable way to communicate.
What I said was that it is more common than not for oversampling to be used to filter out noise. What I described was only the beginning of what would be used in a mission-critical military application; does that mean that I expect you to implement it? Certainly not. All that was really needed was the realization that it would indeed be more reliable. None of it is needed if you are on a PC board, a short cable, or running at low speed... your current situation is the latter two in a low-EMI environment.

Far from boring me, you've provided me with much entertainment over the last couple of days, as you've tried to distort and make up phony requirements, and even attempted to show an over-complicated, unnecessary scheme that wouldn't work. I've had plenty of smiles reading your desperate posts, including this one, in which you think you've proved that all real UARTs must use multiple samples/majority voting just because you've found one that does. BTW, that's still way simpler than the convoluted scheme you described that I would need to implement before I can feel proud of my creation.
On the contrary, manufacturers do all that they can to claim and maintain perceived superiority. A more common example would be in jewelry... QVC's "Diamonique" is just cubic zirconia, and HSN's "Technibond" is just vermeil (gold-plated silver). They do all they can to maintain the perception that they are better... if that means coming up with some cock-and-bull trade name, that is what they will do. Case in point is National Semiconductor's "Solar Magic": all sorts of demonstrations and gymnastics, all to keep out of the literature what it really is: Maximum Power Point Tracking.

I don't buy the "trade secrets", since majority voting and such are well-known techniques. Manufacturers don't hide features, they market them! That they are used in some UARTs does not prove they must be used in all real UARTs.
Perhaps I will sit down this weekend and puke out the code... As to the complexity, it starts with two 4-bit counters clocked off the 16x clock, with the second one gated by the incoming data. The first clears the second on overflow and clocks a shift register whose input is bit 3 of the second counter. Bit quality is also ridiculously simple: an EXOR gate on bits 2 and 3 of the second counter. A third counter tracks bit position to verify the start and stop bits. This last one could be made loadable for a programmable word length, which it would normally be, since the parity bit might not be there and there could be 2 stop bits as well...
Which brings us back to the beginning: you were in error when you titled this thread a Verilog RS232 receiver. RS232 ONLY specifies the physical layer. It has nothing to do with the data format, length, or synchronization. Additionally, Verilog has nothing to do with a PHY, ONLY with the data format, length, and synchronization in this case.
Actually it would work fine... the second counter simply counts when the input data is high... the end result is that bit 3 is high only when the input is predominantly high. Now my bit-quality detect, on further thought, needs to take in bits 3..1 as opposed to just bits 3..2, since that leaves 8 counts out of 16 flagged as unreliable. Adding bit 1 reduces that to 4 counts out of 16 flagged as unreliable, all the while "sampling" dead center of the incoming data bit.

I doubt very seriously that will work. The majority of samples will be taken past the margins of the correct, stable bit value. Early samples are guaranteed to be incorrect, as they will be taken at the transition of the signal. As the bit rate goes up, the signal eye gets smaller, and errors accumulate. This is the purpose of using 16x in the first place: so that the samples can be collected at the optimum point of the data interval. There are most likely diminished returns for sampling with more than 3 samples, as the extra samples move further into the margins. In this case, advanced downstream methods are most likely a better choice.
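The "sample only at the optimum point" counter-argument can be sketched the same way in behavioral Python. Picking 16x counts 7, 8, and 9 as "dead center" is my assumption; the thread only says the middle of the bit:

```python
def center_vote(samples16):
    """Majority vote over three samples taken at the center of the bit
    period (16x counts 7, 8, and 9 here). Edge samples, where the signal
    may be in transition, are ignored entirely rather than counted.
    """
    picks = [samples16[7], samples16[8], samples16[9]]
    return 1 if sum(picks) >= 2 else 0

# A high bit with garbage at both edges, where the line transitions:
noisy = [0, 1, 0] + [1] * 10 + [0, 1, 0]
print(center_vote(noisy))  # 1
```

The design choice here is exactly the dispute in the thread: a full-period count lets transition-region samples dilute the tally, while a decoded center window only ever looks where the eye is widest.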
Technically it IS an AR (Asynchronous Receiver), but no one would recognize "AR" as meaning anything.

Fair enough. I think it's OK in literature to speak of RS232 systems, though, and I am trying to help other members communicate with their boards through their computers' RS232 ports. Though technically outside the standard, I maintain it's part of the comm system. I could have called it a UART, but someone would have griped anyway, because it's not a complete one. Do a Google search for "RS232" and "verilog" and see how many hits you get. If you want to go on a crusade to make sure everyone knows the difference, better get busy.
Yes, but what actually happens is that company B markets the features as what they are, while company A tacks on a trade name, obscuring the fact that they are indeed identical, and lies to the customer that it is better.

As for the marketing, I think manufacturers would be running a risk of insulting engineers if they tried to give them a tapdance. There are catchy names, like, oh, I dunno, 3-D Now? But they actually mean something, and engineers are prone to look under the hood. What I was saying is, for example, consider this:
Company A and B have products that are identical in operation.
Company A markets the features of its product that reduce errors and increase its robustness.
Company B does not market those features, although they are well-known techniques not covered by patents or company secrets.
Company B has put itself at a disadvantage in the marketplace. I just don't think they would want to do that.
I see what is confusing things... I am counting EVERY sample (the input data is the clock enable of the sample counter) and flagging the totals that are 50% +/-13% as being statistically unreliable. The end result is that the apparent sample point is dead center, but the entire bit width is taken into account. In a fault-tolerant configuration, if only one bit is flagged as such and there is a parity bit, it can be corrected with 100% certainty.

You're only counting early samples and discounting late ones. You need to decode the 16x counter to ensure the samples come from the dead center of the bit.
I have known many that it would work on... A wise old prof said that it would be better to hire a "C" student than an "A" student. Working for a "C" yields a better understanding than getting an "A" does. Unfortunately, it is getting more common these days that neither is worth bothering with; more often than not, basic transistor theory is not taught, it being assumed that you will not be using basic elements anymore unless you take specialized courses.

In all of the presentations I've sat in on over the years, I've never seen that strategy work successfully. An engineer worth his salt won't fall for gimmicks.
The indeterminate data would not bother it at all... Metastability, on the other hand, would... a synchronizer would have to be added to the incoming data stream to ensure there were no metastability problems...

That's clever, but it's fraught with peril. You're sampling indeterminate data at the edges. You might think you're masking out those samples, but that depends on the "good" samples being correct, which can't be assumed in a noisy system. Better not to sample the data at the edges, as it does you no good and can cause more havoc.
But that is where the bit quality comes in. If you already KNOW there is ONLY one questionable bit in the frame, thanks to the sample counter, you can verify and correct it. It is not like a simple parity bit that says this frame MIGHT be corrupt. You know from the quality of the samples exactly which bits, if any, are questionable.

Also, you can't use parity to correct bits. Parity can't locate a single-bit error; it only reports that some undetermined odd number of bits is in error. Even numbers of errors aren't reported at all.
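A short worked example makes the parity limitation concrete (plain Python; the `even_parity` helper and the data pattern are illustrative):

```python
def even_parity(bits):
    """Even parity over a list of 0/1 data bits: 0 if the count of ones
    is even, 1 if it is odd."""
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 1, 0, 0]
p = even_parity(data)  # the parity bit sent alongside the data

# Single-bit error: the recomputed parity no longer matches, so the error
# is DETECTED - but nothing says WHICH bit flipped, so it can't be fixed
# by parity alone.
one_err = data.copy()
one_err[3] ^= 1
print(even_parity(one_err) != p)   # True - detected

# Double-bit error: the parity matches again, so it goes UNNOTICED.
two_err = data.copy()
two_err[1] ^= 1
two_err[6] ^= 1
print(even_parity(two_err) != p)   # False - missed entirely
```

This is also why the "correct it with the quality flag" idea needs the flag to be trustworthy: parity can only confirm or deny an odd-count discrepancy; the location information has to come entirely from elsewhere.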
Well, statistically it should not matter, but if you insist, the input data could be overridden at the beginning and end of each bit... It takes far less logic to override a couple of counts at the beginning and end than it does to do math after the fact. Hence my 50% +/-13% dead zone; the only thing required is to ignore the last bit of the count.

It's best to take a single sample, or only a few, at the optimum data point. There are much better methods of detecting and correcting data errors in those critical situations. In all of the systems I've worked on over the years, I've never seen one that required more than a single sample.
LOL, actually the point is more that a grade is just a number and a degree is just a piece of paper... they only say that you have completed something, not what you are capable of. Comprehension and creativity are far more important; unfortunately, personnel weenies tend to be self-absorbed twits with no clue of actual company needs.

I wouldn't know about that. I was a "B" student, so I guess I missed out on all accounts.
It actually does a little of both, but you are right in that the serial data is not repeatable... In fact, the old transfer protocols and even the modems would request retransmission of bad packets. Synchronizers are only effective at higher data rates... Actually, I expected the indeterminate data at the edges to come through; the only thing that would "break" it would be the eye diminishing to the point that it was over 50% noise.

The indeterminate data at the edges of each bit is gonna hose the counter method if you choose to count those samples, I guarantee that. Metastability isn't going to be your problem; rather, your counter won't discriminate between valid samples taken during the stable data interval and invalid data taken at the edges where the signal is in transition. Synchronizers won't help the situation at all. Even if you come up with a better scheme, you still cannot guarantee you'll only have good or damaged bits. Multiple samples can only mitigate the corruption effects of noise, not eliminate them. So no correction method using the parity bit will be 100% effective. If data delivery is that critical, I would opt for proven methods that greatly diminish the chance of bad transmissions.
16X oversampling in DVDs does not reduce errors. It's not comparable to serial communication. It's a whole different function, involving restoring inter-sample information, or really faking data, and it's not applicable to serial data in any way.