Understanding Microcontroller Timings


Marks256

New Member
So I've been getting quite good with microcontrollers recently, but the one thing I don't quite understand is how to calculate how fast a task is carried out. Take the following code, for example:

Code:
#define F_CPU 1000000UL  /* 1 MHz CPU clock */
#include <avr/io.h>

int main (void)
{
        DDRC = 0xFF;            // set port C to outputs
        while (1)
        {
                PORTC = 0x00;   // turn LED off
                PORTC = 0xFF;   // turn LED on. Notice there was no delay.
        }
        return (0);
}

So, assuming the 1 MHz clock, how would one calculate how fast the LED will turn on/off? It can't be 1 MHz, can it?


What about this code?

Code:
void delay_ms(uint8_t ms)
{
        uint16_t delay_count = F_CPU / 17500;   // loop passes per millisecond
        volatile uint16_t i;

        while (ms != 0) {
                for (i = 0; i != delay_count; i++)
                        ;                       // empty busy-wait loop
                ms--;
        }
}

Obviously the number 17500 had to come from somewhere... so how was it calculated? How did someone know that F_CPU divided by 17500 passes of that empty loop would come out to 1 ms?
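(Working backwards from the constant, purely as arithmetic and not anything the original author documented, 17500 seems to encode an assumed cost of roughly 17.5 clock cycles per pass of that empty loop:)

Code:
/* Back-of-the-envelope reading of the constant (an assumption, not the
 * original author's stated reasoning):
 *
 *   1 ms at F_CPU Hz          = F_CPU / 1000 clock cycles
 *   passes needed per ms      = delay_count = F_CPU / 17500
 *   implied cycles per pass   = (F_CPU / 1000) / (F_CPU / 17500) = 17.5
 *
 * So whoever wrote it presumably measured, or counted in the disassembly,
 * roughly 17-18 cycles per iteration of the empty volatile loop with their
 * particular compiler and settings, and folded that into the 17500.
 */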


Do you have to look at the compiled assembly and add up all the instructions, or what? Let's assume compiler optimizations are turned on, if it matters (which I'm sure it will). For the record, I did not write the examples I've posted. Here is where I got most of them: Project 1 - Basic Blinking. I was too lazy to write my own code :)

Thanks!
 
So it is not possible to understand timings in C? So basically, timing-critical operations need to be written in ASM?

Oh, the posted source code is for AVRs.


Another question: how do logic analyzers calculate timing? As in, how can they figure out that it is 500 ms from one peak of a signal to another?
 
So it is not possible to understand timings in C? So basically, timing-critical operations need to be written in ASM?
In ASM, one instruction generates one machine instruction. As long as all instructions take the same number of machine cycles, you can just count them, but that is not universally true.

As you already know, any compiler is free to generate machine code however it wants. There is no way of knowing the timing without examining the output.
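For instance, if the compiler happened to turn that blink loop into nothing but two single-cycle out instructions and a two-cycle rjmp back to the top (something you would have to confirm by disassembling the .elf, for example with avr-objdump -d), the arithmetic would look roughly like this:

Code:
while (1)
{
        PORTC = 0x00;   /* assumed: one "out" instruction, 1 cycle */
        PORTC = 0xFF;   /* assumed: one "out" instruction, 1 cycle */
}                       /* assumed: one "rjmp" back, 2 cycles      */

/* ~4 cycles per pass -> at F_CPU = 1 MHz that is about 4 us per
 * period, i.e. the LED pin toggles at roughly 250 kHz (and spends
 * only ~1 cycle low versus ~3 cycles high). Different optimization
 * settings can change this completely, which is why you count from
 * the actual disassembly, not from the C source.
 */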


Another question: how do logic analyzers calculate timing? As in, how can they figure out that it is 500 ms from one peak of a signal to another?

LAs have two basic modes: state and timing. In state mode they sample once per clock period.

What LA are you using ?

It has been a long time, and I spent a lot more hours using one than configuring it.

This is what logic tells me.

You could input the frequency or period.

You could provide a clock input.

There is also the possibility of an autobaud method, but I think that would only work in timing mode.

3v0
 
My logic analyzer is an HP 1631D. I only know how to use the timing setting. But now that you've said that, I think I understand it. In timing mode, the logic analyzer samples the inputs when it feels like it (when its internal clock tells it to), but in state mode it is based on an external clock, correct?


Since I have never used the state mode on my LA, I'm not sure if it will tell you the time between two signals, but assuming it does, does it just compare the external clock against its internal clock (whose period it knows) and mathematically calculate the time between any two points on the signal?
 
For C, consider using a simulator to time operations. BoostC, for example, has a timing plug-in. However, if there are branches in the code or dependencies on external events, things become more complicated.
 
Hmm. Well, I'm using AVR-GCC (under Eclipse 3.5 Galileo). I'm not sure if a simulator is included in that. If not, I can always pull up AVR Studio on my Window$ partition.
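One thing worth knowing about the avr-gcc toolchain, regardless of IDE: avr-libc ships a <util/delay.h> header whose _delay_ms()/_delay_us() macros are expanded at compile time into busy-wait loops counted from F_CPU, so you can get reasonably accurate delays in plain C without hand-counting cycles. A minimal blink using it might look like this (a sketch only, not tested here):

Code:
#define F_CPU 1000000UL         /* must match the real clock, before the include */
#include <avr/io.h>
#include <util/delay.h>         /* avr-libc busy-wait delays, computed from F_CPU */

int main(void)
{
        DDRC = 0xFF;            /* port C as outputs */
        while (1)
        {
                PORTC = 0x00;   /* LED off */
                _delay_ms(250);
                PORTC = 0xFF;   /* LED on */
                _delay_ms(250);
        }
        return 0;
}

Note that the avr-libc documentation says _delay_ms() is only accurate when optimization is enabled (-O1 or higher) and the argument is a compile-time constant.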
 
I am fairly sure in state mode you need a clock from your circuit.

There should be a setting to capture data on the rising edge, falling edge, or both.

I expect that it measures between clock edges to determine the period so it can provide the time between points.
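In other words (a rough sketch of the arithmetic only, not the 1631D's actual internals): once the analyzer knows its sample period, whether from its own internal timebase in timing mode or from measuring your external clock in state mode, the time between two points is just a count of samples multiplied by that period.

Code:
/* Illustrative only -- not the HP 1631D's firmware. The names and the
 * structure here are made up for the sake of the example. */
double time_between_points(unsigned long sample_a,
                           unsigned long sample_b,
                           double sample_period_s)      /* e.g. 1 / sample_rate */
{
        return (sample_b - sample_a) * sample_period_s; /* seconds */
}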

It is time for you to start reading the manual, too. :)

3v0
 
This may be of some help to you :)
 

Attachments

  • 1631D_manual.pdf
    6.6 MB · Views: 204
Hey thanks gaspode42! :)


I guess I never thought of looking up the manual for it :)

But my question is how does the HARDWARE work? How can the analyzer accurately measure a rising edge, then a falling one? Now, obviously a very well-funded company made these beasts, but it can't be magic, so there has to be a secret.

Another question: how can the analyzer accept ±40 volts on the probes? What kind of circuitry would allow that?
 
How can the analyzer accurately measure a rising edge, then a falling one?
It is done with logic gates. Very fast logic gates.

Another question: how can the analyzer accept ±40 volts on the probes? What kind of circuitry would allow that?

Not sure what they are doing. I expect there are a few ways to do it.

3v0

 