
delay question

Status
Not open for further replies.

wejos

Member
This code was taken from Nigel's tutorial (lesson 7.5) about RS232.

Code:
Start_Delay MOVLW   0x0C
            MOVWF   Delay_Count
Start_Wait  NOP
            DECFSZ  Delay_Count , f
            GOTO    Start_Wait
            RETURN

Bit_Delay   MOVLW   0x18
            MOVWF   Delay_Count
Bit_Wait    NOP
            DECFSZ  Delay_Count , f
            GOTO    Bit_Wait
            RETURN
I'm using a 16F84.

Using 9600 bps, I have read that 1 bit should take 104 µs, and half of that is 52 µs. I'm confused why 0x0C and 0x18 were used, since converting these two to decimal gives 12 and 24. What's going on?
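For reference, the 104 µs and 52 µs figures follow directly from the baud rate; a quick Python check (an illustrative sketch, not part of the tutorial code):

```python
# At 9600 bps one bit lasts 1/9600 s; a software receiver samples mid-bit,
# so it needs a full-bit delay and a half-bit delay.
bit_time_us = 1_000_000 / 9600   # microseconds per bit
half_bit_us = bit_time_us / 2    # delay to land in the middle of the start bit

print(round(bit_time_us))   # 104
print(round(half_bit_us))   # 52
```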
 

Pommie

Well-Known Member
Most Helpful Member
The loop takes 4 cycles to execute and is executed 24 times = 96 cycles. Plus 2 cycles for the initialisation, plus 4 for the call/return, and I assume some overhead in the calling routine; it will all add up to around 104 cycles.

Mike.
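Pommie's arithmetic can be sanity-checked in a few lines of Python (a sketch added for illustration; the cycle figures are the ones quoted above):

```python
# Rough cycle budget for Bit_Delay, following Pommie's estimate.
loop_iterations = 0x18        # 24 decimal, loaded into Delay_Count
loop_cycles = 4               # NOP (1) + DECFSZ (1) + GOTO (2)
init_cycles = 2               # MOVLW + MOVWF, 1 cycle each
call_return_cycles = 4        # CALL (2) + RETURN (2)

total = loop_iterations * loop_cycles + init_cycles + call_return_cycles
print(total)  # 102; caller overhead brings it to roughly 104 cycles
```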
 

wejos

Member
The loop takes 4 cycles to execute and is executed 24 times = 96 cycles. Plus 2 cycles for the initialisation, plus 4 for the call/return, and I assume some overhead in the calling routine; it will all add up to around 104 cycles.

Mike.
4 cycles because of the 4 lines, or because of the 4 MHz crystal?

Code:
Start_Wait  NOP
            DECFSZ  Delay_Count , f
            GOTO    Start_Wait
            RETURN
And where did you get "executed 24 times"?

How do you know all this?

TY
 

Pommie

Well-Known Member
Most Helpful Member
It's 4 cycles because the first 2 instructions take 1 cycle each and the GOTO takes 2. It is executed 24 times because Delay_Count is previously loaded with 0x18, which is 24 decimal. I know this because I studied the data sheet.

Mike.
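For the curious, the loop can also be stepped through instruction by instruction. The short Python model below is an illustrative sketch; it assumes the standard mid-range PIC timings (1 cycle per instruction; 2 cycles for GOTO, CALL, RETURN, and for a skip that is taken), and it shows the last pass is shorter because the final DECFSZ skips the GOTO:

```python
def delay_cycles(count):
    """Cycle-accurate model of the MOVLW/MOVWF + NOP/DECFSZ/GOTO delay loop."""
    cycles = 2                 # MOVLW + MOVWF, 1 cycle each
    while True:
        cycles += 1            # NOP
        count -= 1
        if count == 0:
            cycles += 2        # DECFSZ with skip taken: 2 cycles, GOTO not run
            break
        cycles += 1            # DECFSZ, no skip
        cycles += 2            # GOTO back to the loop top
    cycles += 2                # RETURN
    return cycles

print(delay_cycles(0x18))      # 99; adding 2 for the CALL gives 101 cycles
print(delay_cycles(0x0C))      # 51; adding 2 for the CALL gives 53 cycles
```

At 4 MHz (1 µs per instruction cycle on the 16F84) that makes Bit_Delay about 101 µs against the ideal 104.17 µs, and Start_Delay about 53 µs against the ideal 52 µs; close enough for an asynchronous receiver.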
 

wejos

Member
Mike, TY for the speedy response. I'm googling the data sheet now to read about them. You have answered my question; thank you again.
 

wejos

Member
Mike, is this the overhead you're talking about?

Code:
Rcv_RS232   BTFSC   SER_PORT, SER_IN      ;wait for start bit
            GOTO    Rcv_RS232
            CALL    Start_Delay	          ;do

Code:
Start_Delay MOVLW   0x0C
            MOVWF   Delay_Count
Start_Wait  NOP
            DECFSZ  Delay_Count , f
            GOTO    Start_Wait
            RETURN

Bit_Delay   MOVLW   0x18
            MOVWF   Delay_Count
Bit_Wait    NOP
            DECFSZ  Delay_Count , f
            GOTO    Bit_Wait
            RETURN

I counted 1 extra cycle, 105... is this acceptable?
 

wejos

Member
Now I count 103 cycles without the GOTO Rcv_RS232, since I realized this part is skipped when SER_IN becomes low.
 

nike6

Banned
I would not put too much effort into hard-wired code, unless you want a reference design or want to save costs.

Normally a PIC with an integrated serial port would be used, or at least the timing would be derived from an interrupt/TIMER register.

Hard-wired code is difficult to maintain. It may work in the end, but unless you know exactly what you are doing you make your life more difficult than necessary.

Personally, I almost never use a hard-wired delay, except to insert a few NOPs.

90% of all PIC programs do not consider* the exact number of cycles (while of course they consider the number of TIMER cycles, and interrupts); or at least, a majority of advanced users' programs don't.

*this figure is made up, but even on the PC the only language using such delays was old MS-DOS based BASIC, now replaced with software timers.

However, you should feel free to use whatever technique you want, and not treat my advice as legally binding.
 

wejos

Member
TY nike, thanks for your advice. The moment I get my hands on more sophisticated PICs I will use them; right now I'm holding on to my 16F84.
 

nike6

Banned
I've read everything and have, I hope, understood the type of question that you have.

Initialization of the serial port control registers is also complicated; I have questions myself, as there are many different modes.

So if your hard-coded program works, good. I just follow one thread or another.
 

Nigel Goodwin

Super Moderator
Most Helpful Member
I would not put too much effort into hard-wired code, unless you want a reference design or want to save costs.

Normally a PIC with an integrated serial port would be used, or at least the timing would be derived from an interrupt/TIMER register.

Hard-wired code is difficult to maintain. It may work in the end, but unless you know exactly what you are doing you make your life more difficult than necessary.

Personally, I almost never use a hard-wired delay, except to insert a few NOPs.

90% of all PIC programs do not consider* the exact number of cycles (while of course they consider the number of TIMER cycles, and interrupts); or at least, a majority of advanced users' programs don't.

*this figure is made up, but even on the PC the only language using such delays was old MS-DOS based BASIC, now replaced with software timers.
I would disagree entirely. By using software delays and software UARTs you get far more choice: you're not restricted to particular pins for a UART, and you can have multiple UARTs if you need them. As for software delays, they are far more versatile and often more accurate than using timers; you can adjust software delays to give timing to one instruction cycle, which you can't do with timers.

Using built-in hardware has its uses, and for some purposes a hardware UART is absolutely essential. But getting 'hung up' on using the hardware when it offers no advantage (and often is a disadvantage) isn't a good idea.

Evaluate each task individually, and decide which method provides the best solution for that particular task.
 

nike6

Banned
Yes, I give in; you're right. I did not consider multiple, or additional, software UARTs; at least, not UARTs based on a fixed baud rate.

Software UARTs can also work supported by additional clock signals, if only to simplify programming.

It is maybe just a personal choice that I do not want to hard-wire too much code to a certain frequency.

Even my I2C code can work over a broad frequency range, with some minimal adjustment. I never rely on a certain execution frequency.

For a baud-rate based software UART, of course, somehow you'd have to.
 

Nigel Goodwin

Super Moderator
Most Helpful Member
Even my I2C code can work over a broad frequency range, with some minimal adjustment. I never rely on a certain execution frequency.
As I2C is synchronous, of course it works over a wide range; there's no internal clock.

Regardless of how you do timing delays, it's entirely dependent on clock frequency; it doesn't matter if it's hardware or software.
 

nike6

Banned
Yes, Nigel Goodwin, but a DELAY will lock up the PIC, while a TIMER won't. That's what I wanted to say.

If you must refresh a display (for instance) you cannot delay too long, not even 1/100 of a second. Some approaches do not use 20 MHz or even 4 MHz but just 500 kHz, for instance in order to save power.

As you explain, each case should be considered carefully.
 

Nigel Goodwin

Super Moderator
Most Helpful Member
Yes, Nigel Goodwin, but a DELAY will lock up the PIC, while a TIMER won't. That's what I wanted to say.

If you must refresh a display (for instance) you cannot delay too long, not even 1/100 of a second. Some approaches do not use 20 MHz or even 4 MHz but just 500 kHz, for instance in order to save power.

As you explain, each case should be considered carefully.
Yes, but the obvious solution to that is a timer interrupt. Simple delays, even using a timer, sit in a loop waiting for it to time out; in that case there's no advantage in using a timer, and plenty of disadvantages.
 

nike6

Banned
You maybe don't know: I always crunch random numbers.

I never use a deadlocking GOTO loop; never used one anywhere, anytime.

Also, I almost always use a TIMER-based skeleton for the main loop.
 

wejos

Member
Guys, what does the "BSF STATUS,C" do here?

And the "RRF Rcv_Byte,f"?

Code:
            BTFSS   PORTA,0x00
            BCF     STATUS,C
            BTFSC   PORTA,0x00
            BSF     STATUS,C
            RRF     Rcv_Byte,f
            DECFSZ  Bit_Cntr,f           ;test if all done
            GOTO    Next_RcvBit
            CALL    Bit_Delay
            MOVF    Rcv_Byte,W
            RETURN
 

wejos

Member
Please correct me.

Code:
            BTFSS   PORTA,0x00
            BCF     STATUS,C
            BTFSC   PORTA,0x00
            BSF     STATUS,C
            RRF     Rcv_Byte,f

Let's just say RA0 was high. So the next instruction, "BCF STATUS,C", is skipped. Since RA0 is high, the "BTFSC PORTA,0x00" instruction causes execution of the next line, which sets STATUS bit 0 (the carry) to 1. And this is the part I don't get: the line "RRF Rcv_Byte,f".

The way I see it, the Rcv_Byte value is eight zeros (00000000). When the RRF line gets executed, the value of C is appended to Rcv_Byte, starting from the left.

10000000

If C becomes 0 in the next round,

Rcv_Byte becomes

01000000

and if C is 1 the next round, it becomes

10100000

Is this what's going on?

And what does the ,f do in "RRF Rcv_Byte,f"?

Thank you.
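The trace above can be checked with a small Python model of RRF (rotate right through carry): the old carry is shifted into bit 7, and the old bit 0 falls into the carry. This is an illustrative sketch, not PIC code. (On the ,f question: the f destination writes the result back into the file register Rcv_Byte itself, whereas a W destination would put it in W and leave the register untouched.)

```python
def rrf(value, carry):
    """Model of the PIC RRF instruction: rotate an 8-bit value right through carry."""
    new_carry = value & 1                    # old bit 0 drops into the carry
    new_value = (value >> 1) | (carry << 7)  # old carry enters at bit 7
    return new_value, new_carry

rcv_byte = 0b00000000
for bit in (1, 0, 1):            # serial bits arriving LSB-first, as in the trace
    carry = bit                  # BSF/BCF STATUS,C copies the input pin into C
    rcv_byte, carry = rrf(rcv_byte, carry)

print(f"{rcv_byte:08b}")  # 10100000, matching the trace above
```

After eight such shifts the first bit received ends up in bit 0, which is why the routine assembles the byte LSB-first.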
 