Can't get enough samples of a sine wave... why?

hello...
My simple code takes input on channel 0 of the ADC and sends it to HyperTerminal, but when I apply a sine wave I am not getting enough samples to reconstruct it, even at low frequencies.
I am really lost trying to find the reason. Please help me...
Code:
unsigned short temp_res;
char buf[6];
int i;

void main() {
   UART1_Init(9600);     // Initialize UART: 9600 baud, 8 data bits, 1 stop bit
   ADCON0.ADCS0 = 0;     // ADC clock = Fosc/32 (external 20 MHz crystal)
   ADCON0.ADCS1 = 1;
   CMCON  = 0x07;        // Comparators off
   TRISA  = 0xFF;        // PORTA is input
   TRISC  = 0;           // PORTC is output
   TRISB  = 0;           // PORTB is output

   do {
      // Read ADC channel 0; keep the upper 8 of the 10 result bits
      temp_res = ADC_Read(0) >> 2;
      PORTB = temp_res;             // Show the byte on PORTB
      WordToStr(temp_res, buf);     // 5 characters plus terminating '\0'
      for (i = 0; i < 5; i++) {     // Send the 5 digits, not the '\0'
         UART1_Write(buf[i]);
      }
      UART1_Write(13);              // Carriage return
   } while (1);                     // Endless loop
}
 
I am not really good with code, but:
Try a faster UART rate. I think you are spending too much time sending data at 9600 baud.
 
The fastest I have ever been able to read was just under 60 kHz; the on-board ADC physically will not sample any faster. You need at least 4 samples to reconstruct a square wave, and I would say about 8 for a sine wave, so your maximum sine wave will be on the order of 7 kHz. To send and display, you need to sample about 128 points, send them to the PC, then let the PC reassemble the waveform while you sample the next 128, etc. As Ron said, bang the UART up to 115200 baud and try to use interrupts to shift out the data while you resample.
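To put rough numbers on Ron's point (assuming 8-N-1 framing, so 10 bits on the wire per byte): 9600 baud is about 960 bytes/s. The posted loop sends roughly 7 bytes per sample (5 digit characters, a spare character, and a CR), so about 960 / 7 ≈ 137 samples/s; at 8 samples per cycle that reconstructs a sine wave of only about 17 Hz. At 115200 baud the same ASCII format reaches about 200 Hz, and one raw byte per sample reaches roughly 1.4 kHz, which is why bursting 128 samples out of RAM is how you get near the ADC's own limit.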
 
I do not know your compiler...
My compiler wants to:
Tell the ADC to convert; this takes 1/60 kHz of time.
Then do something with the data at 9600 baud, and only when that is done move on.
_______
To run faster (see the sketch below):
Get the old data from the last conversion and save it.
Start the next conversion.
While the ADC is converting, send out the saved data.
Test: is the ADC done? If so, loop (do it again).

First: there is no reason to wait for the ADC to finish when there is work to be done. (easy fix)
Second: there is no reason to wait for one byte to be sent by the UART when some work could be done. (harder to fix)
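A minimal sketch of that loop in MikroC-style C, assuming a 16F877A-class PIC at 20 MHz where the GO/DONE bit in ADCON0 starts a conversion and reads back 0 when it finishes, and ADRESH holds the upper 8 result bits when left-justified (register and bit names may differ for your chip and compiler):

Code:
unsigned char last_sample;

void main() {
   UART1_Init(115200);           // fast UART, per the advice above
   CMCON  = 0x07;                // comparators off
   TRISA  = 0xFF;                // PORTA is input
   ADCON1 = 0x00;                // left-justified result, port A analog
   ADCON0 = 0b10000001;          // ADCS = 10 -> Fosc/32, channel 0, ADC on

   Delay_us(20);                 // acquisition time before the first conversion
   ADCON0.GO_DONE = 1;           // start the first conversion
   while (ADCON0.GO_DONE) ;      // wait once for the first result
   last_sample = ADRESH;

   do {
      ADCON0.GO_DONE = 1;        // start the NEXT conversion...
      UART1_Write(last_sample);  // ...and send the PREVIOUS sample while it runs
      while (ADCON0.GO_DONE) ;   // usually already done: the ~87 us byte time
      last_sample = ADRESH;      //   covers the ~19 us conversion plus acquisition
   } while (1);
}

Here the UART byte time is longer than the conversion, so the loop runs at the UART's pace, about 11.5k samples per second, and it sends raw bytes rather than ASCII digits.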
 
I am using MikroC. You are saying to save the sampled data and then send it, right?
Meaning: save the samples in RAM, then send them to the terminal?
 
The ADC takes time to make a measurement. Most ADC functions are not built for speed; they waste time. Inside the function, they start the conversion, do nothing until the conversion is done, then return with the value.

Make a function:
Start the conversion.
(Do not waste 16 us waiting for the conversion to end.)
Process the data from the conversion before this one (send out the data).
By the time the UART is done, the next sample is ready to grab and hold in RAM.
Loop.

Most compilers: (time to convert A to D) + (time to send data) + overhead = total time
I want: (time to convert and time to send data happening at the same time) + overhead = total time

Also: set the UART to fast, and set the ADC conversion time to fast. I am a hardware guy; I can see that the ADC speed can be set much like the UART speed is set with UART1_Init(9600). I do not know what the default ADC speed is. (A config sketch follows.)
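For the "set both to fast" point, a sketch (same 16F877A-style assumptions as above; check your datasheet's Tad table): the ADC clock comes from the ADCS bits, and at Fosc = 20 MHz the datasheet wants Tad >= 1.6 us, so Fosc/32 is the fastest legal choice:

Code:
UART1_Init(115200);    // ~12x the original 9600 baud
ADCON1 = 0x00;         // left-justified result, port A analog
ADCON0 = 0b10000001;   // ADCS = 10 -> Fosc/32: Tad = 1.6 us at 20 MHz,
                       //   channel 0 selected, ADC module on
// One conversion = ~12 Tad = ~19 us, plus ~20 us of acquisition:
// call it 25k-50k samples/s as the hardware ceiling.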
 
I was thinking about non-interrupt code, but this holds for interrupt code too.

Assuming your UART can send data faster than the ADC can convert, the ADC time is the limiting factor. Assuming post 3 is right, you should be running at 60 kHz (more or less), and assuming the ADC is set to run fast. You get an interrupt on ADC-done-converting: 1) get the data, 2) start the next conversion. Make this code small and fast! (Sending the data to the UART can be done here or in the background.) Return from interrupt.

The time from the interrupt to starting the next conversion needs to be FAST. Something like the sketch below.
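A rough MikroC-style sketch of that shape, again assuming a 16F877A-class part (the interrupt plumbing and bit names vary by device, so treat this as the shape, not gospel). The ISR grabs the result and immediately restarts the conversion; main() captures a burst into RAM and then sends the block, as post 3 suggested:

Code:
#define NSAMP 128

volatile unsigned char samples[NSAMP];
volatile unsigned char count = 0;

void interrupt() {                  // MikroC mid-range PIC ISR
   if (PIR1.ADIF) {                 // ADC conversion finished?
      PIR1.ADIF = 0;                // clear the flag
      samples[count] = ADRESH;      // 1) get the data (upper 8 bits)
      count++;
      if (count < NSAMP)            // 2) start the next conversion NOW
         ADCON0.GO_DONE = 1;        //    (add a short delay if accuracy suffers)
   }
}

void main() {
   unsigned char i;

   UART1_Init(115200);
   CMCON  = 0x07;
   TRISA  = 0xFF;
   ADCON1 = 0x00;                   // left-justified, port A analog
   ADCON0 = 0b10000001;             // Fosc/32, channel 0, ADC on

   PIE1.ADIE   = 1;                 // enable the ADC interrupt
   INTCON.PEIE = 1;                 // peripheral interrupts on
   INTCON.GIE  = 1;                 // global interrupts on

   while (1) {
      count = 0;
      Delay_us(20);                 // acquisition time
      ADCON0.GO_DONE = 1;           // kick off the burst
      while (count < NSAMP) ;       // ISR fills the buffer at ADC speed
      for (i = 0; i < NSAMP; i++)   // then send the whole block and re-arm
         UART1_Write(samples[i]);
   }
}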
 
I do not have MikroC!
I am talking about programming and knowing what is slow and what is fast. Many micros will convert 8 bits faster than 10 bits.
What you send to the PC makes a difference.
0111 1111b = 2.5V
You could send a byte (01111111)
You could send 8 bytes 0 1 1 1 1 1 1 1
You could send 2 bytes 7 F
or 3 bytes 2 . 5
or "2.5 volts"
or "two point five Volts (CR) (LF)" (very slow)
If your speed problem is the UART, then find a way to send less data and let the PC do the math.
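For instance, building on the code in post 1 (just an illustration): instead of WordToStr and six or seven ASCII characters per sample, send the raw byte and let the PC scale it:

Code:
temp_res = ADC_Read(0) >> 2;   // keep the top 8 of the 10 bits
UART1_Write(temp_res);         // 1 byte per sample instead of ~7
// PC side: volts = byte * 5.0 / 255   (assuming a 5 V reference)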
 
Not to be a wet blanket, but doing this kind of real-time stuff pretty much requires assembler. Even a compiled language makes a lot of assumptions that cripple execution speed. Print the assembler-level listing and see how much time is wasted in push/pop of registers you aren't using, etc. Good hunting... <<<)))
 
I agree. Get out the datasheet and get really acquainted with the ADC block, the UART, and the timers that run them.

Understanding what happens in an interrupt is no small project. Some of the PICs are difficult because of memory bank switching: on interrupt you need to know what your RAM banking looks like and get back to the right bank before the return-from-interrupt. I know the compiler handles that, but it can be very slow. I found some code examples where the RAM pointer was saved and restored in a strange but fast way; I just cut-and-pasted without fully understanding the code.
 
You are right... interrupts are hard to understand. They will make my code more complex, which I don't want.
 
Interrupts are a good tool. It is one more piece to learn.
I do not know what micro you are using or what you want to do, so I cannot advise on the interrupt question. I am not saying use C or assembly; I just want you to know there are options, and each has speed and complexity trade-offs.

Compilers are made so you don't have to know how the hardware works; you can switch from one computer to another and it is just "C".
If you want to push your computer very fast, the compiler will get in the way, and not knowing how the hardware works becomes a real issue. The real speed is governed by hardware (supply voltage, clock speed, timer values, 8-bit/10-bit ADC, etc.). It is very easy to write a program that runs at 1/10 the speed of the hardware because the flow is not good, the computer is doing a job that is not needed, the compiler was set to "save memory" instead of "run fast" mode, etc. We are just trying to point out places where microseconds can be saved.
 
A simple distinction: for standard human-interface-speed stuff, a high-level language is entirely sufficient.

At the other end of the spectrum:

When high-speed machine/electronic signal tracking is required, a high-level language is acceptable for the user interface, BUT well-crafted assembler is recommended (required) for speed. If it is really fast: use dedicated hardware as a recorder, then replay at a speed the system can accept (e.g. an IR sequencer).

P.S. Interrupts and tracking/sampling vs. the user interface gets VERY interesting... BTDT and still do... G.H. <<<)))

P.P.S. What is your upper tracking frequency?
 