This is just a "how to" question. For background, I am playing with an I2C interface on a PIC16F1829. There are several bit-banged examples around, but I have decided to try using the MSSP module on my chip. Microchip's AN734C seems right on point. The sample code I have been studying is "I2C Master Driver version 2 (assembly)" by Chris Best, included with that application note. It makes sense to me, but it uses a lot of conditional jumps based on flag bits in the ISR.
Instead of using a separate register for each flag bit, I set up one register (flag0) for the write flag bits and another (flag1) for the read flag bits. I suspect one could calculate an offset from those registers to reach the right subroutine without all of the conditional jumps. However, one would need to know the PCL (and maybe PCLATH too) for each subroutine. Hard-coding those values from a disassembly listing seems like asking for disaster, so I tried this:
Code:
TEST_AREA
        nop
        movlw   2               ; table index (2 selects Set_Read)
        call    Table           ; returns with low byte of target address in W
        movwf   PCL             ; computed goto: jump to the selected routine

Table                           ; simply a list of subroutine labels
        brw                     ; add W to PC, landing on the selected entry
        dt      LongDelay       ; dt assembles each entry as a retlw instruction
        dt      Main
        dt      Set_Read
        dt      Read_Sequence
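For comparison, here is the same dispatch-by-index idea sketched in C with a function-pointer table. The handler names are hypothetical stand-ins for the routines above, and each just reports which routine ran; the bounds check is something the raw `movwf PCL` approach does not give you, which is part of what makes me nervous about it:

```c
#include <string.h>

/* Hypothetical handlers standing in for LongDelay, Main, Set_Read,
 * Read_Sequence; each just reports which routine was selected. */
static const char *long_delay(void)    { return "LongDelay"; }
static const char *main_task(void)     { return "Main"; }
static const char *set_read(void)      { return "Set_Read"; }
static const char *read_sequence(void) { return "Read_Sequence"; }

/* The array plays the role of the brw/dt jump table: index in, routine out. */
static const char *(*const dispatch[])(void) = {
    long_delay, main_task, set_read, read_sequence,
};

/* Bounds-checked dispatch: an out-of-range index is caught here, whereas a
 * bad value written to PCL would jump somewhere arbitrary. */
static const char *run(unsigned idx)
{
    if (idx >= sizeof dispatch / sizeof dispatch[0])
        return "bad index";
    return dispatch[idx]();   /* like movlw idx / call Table / movwf PCL */
}
```

Calling `run(2)` selects the third entry, just as `movlw 2` followed by `brw` lands on the third `dt` line.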
The low address byte for label "Set_Read" (actually of the first instruction after the label) is returned in W, and the simulator jumps there. Is there a better or more accepted way to do this? Are there any obvious risks in doing it this way?
Regards, John