So I've been getting quite good at microcontrollers recently, but the one thing I don't quite understand is: how can I calculate how fast a task is carried out? Take the following code for example:
Code:
#define F_CPU 1000000UL /* 1 MHz CPU clock */
#include <avr/io.h>

int main(void)
{
    DDRC = 0xFF;          // set port C to outputs
    while (1)
    {
        PORTC = 0x00;     // turn LED off
        PORTC = 0xFF;     // turn LED on -- notice there was no delay
    }
    return 0;
}
So, assuming the 1 MHz clock, how would one calculate how fast the LED will turn on and off? It can't be 1 MHz, can it?
Obviously the number 17500 had to come from somewhere... so how was it calculated? How did someone know that it would take F_CPU divided by 17500 loops of decrementing ms to equal 1 ms?
Do you have to look at the compiled assembly and add up all the instructions, or what? Let's assume compiler optimizations are turned on, if it matters (which I'm sure it will). For the record, I did not write the examples I've posted. Here is where I got most of them: Project 1 - Basic Blinking. I was too lazy to write my own code.
In ASM, one instruction generates one machine instruction. As long as all instructions take the same number of machine cycles you can just count them, but that is not universally true.
You already know that any compiler is free to generate whatever machine code it wants. There is no way of knowing without examining the output.
My logic analyzer is an HP 1631D. I only know how to use the timing setting. But now that you've said that, I think I understand it. In timing mode, the logic analyzer samples the inputs when it feels like it (when its internal clock tells it to), but in state mode, it samples based on an external clock, correct?
Since I have never used the state mode on my LA, I'm not sure if it will tell you the time between two signals, but assuming it does, does it just compare the external clock to the internal clock, whose period it knows, and mathematically calculate the time between any two points on the signal?
For C, consider using the simulator to time operations. BoostC, for example, has a timing plug-in. However, if there are branches in the code, or dependencies on external events, then things become more complicated.
Hmm. Well, I'm using AVR-GCC (under Eclipse 3.5 Galileo). I'm not sure if a simulator is included in that. If not, I can always pull up AVR Studio on my window$ partition.
I guess I never thought of looking it up in the manual.
But my question is: how does the HARDWARE work? How can the analyzer accurately measure a rising edge, then a falling one? Obviously a very well funded company made these beasts, but it can't be magic, so there has to be a secret.
Another question: how can the analyzer accept ±40 volts on the probes? What kind of circuitry would allow that?