
uart bit banging

Status
Not open for further replies.
Nigel Goodwin has good simple tutorials. Short and to the point!!
 
Trying to get this working on a non-UART pin (RC1) for an LCD NHD-0216K3Z-NSW-BBW-V3
with RS-232 (TTL) protocol: 8-bit data, 1 stop bit, no parity, no handshaking, 9600 baud.

I am getting results but they are a "bit" or two off. I think my problem is with the delay time, or maybe an inversion issue.
Maybe I need to tweak my clock? Or I set the clock up wrong? Maybe I'm not properly factoring in the time all the if-checking takes?
I've tried lots of small adjustments but I'm just not getting it..
Please advise if I'm being over- or under-sensitive about these things!
thanks!

bad function:
Code:
void sendUart(unsigned char datdat)
{
    unsigned char cnt = 0;           // bit counter (must be local here -- a cnt declared in the ISR is not visible in this function)
    PORTC |= 0b00000010;             // start bit -- note this drives the line HIGH; a standard idle-high UART start bit is LOW
    __delay_us(99);                  // ~1 bit time (9600 baud nominal is 104us)
nextstep:
    if (datdat & 1) { PORTC |= 0b00000010; } else { PORTC &= 0b11111101; }   // set output, LSB first
    __delay_us(99);
    datdat = datdat >> 1;
    if (cnt < 7) { cnt++; goto nextstep; }   // do 8 times (the original cnt < 8 looped 9 times)
    PORTC &= 0b11111101;             // stop bit -- note this drives the line LOW; a standard stop bit is HIGH
    __delay_us(99);
}

setup code:

Code:
#include <xc.h>
#define _XTAL_FREQ 8000000

void main(void)
{
    OSCCONbits.SCS = 0;      // clock source defined by FOSC<2:0> of the Configuration Word
    OSCCONbits.IRCF0 = 1;    // IRCF<2:0> = 111 -> 8 MHz internal oscillator
    OSCCONbits.IRCF1 = 1;
    OSCCONbits.IRCF2 = 1;
    OSCTUNE = 0b00000000;    // centre frequency, no tuning offset
    INTE = 1;                // enable the INT pin interrupt (GIE must also be set for the ISR to run)
    while (1) { }
}


void interrupt INTERRUPT(void)
{
    if (INTF == 1 && INTE == 1)      // INT pin interrupt (logical &&; the original bitwise & happens to work here but is fragile)
    {
        unsigned char dat = 0xAA;    // test byte to send to LCD
        unsigned char cnt = 0;       // note: local to the ISR, so NOT visible inside sendUart
        sendUart(dat);
        delayMS(1000);               // long pause
        INTF = 0;                    // clear the flag before returning
    }
}


Hi,

Try doing this in asm instead of a higher level language. It's almost certain it will work if you calculate your delays right and check any frequency calibration byte needed.
I've done this using the mid range PIC chips and it always works even using the internal tuned RC oscillator.

With the PIC chips you can use MPLAB and you don't need anything else except the programmer board. You can type asm right into the IDE, and if you set the clock to your real-life clock you can actually test the timing before you even program the chip, using the built-in simulator. If the timing comes out right in the sim, then it will be right in the real-life chip.

For me the only way to do this is in asm :)
 
Try doing this in asm instead of a higher level language. It's almost certain it will work if you calculate your delays right and check any frequency calibration byte needed.
C, asm, pascal, basic... It will work..... However!! If there is an OSCTUNE register and you don't calculate an offset, there is little chance of getting a specific speed... There are little programs that will generate the offset required to get the clock accurate..

Here is an example... I have just completed a wireless radio unit at 9600 baud... I used the internal OSC of a PIC18F4520... Nope!!! Put a 4MHz crystal on..... Works like a charm..

Just to throw in another spanner... If the timing is a little out AND there is another little bug!! You'll be there forever... As there is nothing worse than two unknown issues.... Get the timing right and then you'll see if something else is screwing up the works..
 
C, asm, pascal, basic... It will work..... However!! If there is an OSCTUNE register and you don't calculate an offset, there is little chance of getting a specific speed... There are little programs that will generate the offset required to get the clock accurate..

Here is an example... I have just completed a wireless radio unit at 9600 baud... I used the internal OSC of a PIC18F4520... Nope!!! Put a 4MHz crystal on..... Works like a charm..

Just to throw in another spanner... If the timing is a little out AND there is another little bug!! You'll be there forever... As there is nothing worse than two unknown issues.... Get the timing right and then you'll see if something else is screwing up the works..

Hi,

Sounds good Ian.

What i meant by the calibration byte is that OSCTUNE value. That's the byte that gets the internal oscillator at the right frequency. I never did a design where i did not check that right away and make sure it was right. Once done, i never had a unit fail to communicate and that is after using some 20 different packages which spanned two different models of the PIC chip (mid range class).
There are issues though, about the value changing during programming. I can't remember what this was about, but the way i did it was to check the frequency and find the right value, then use that value as one of the first things done in the program.

With RS232 there should not be a problem if the value is set right because then the frequency has to be right too. If the timing is right then the comm works as planned. That is of course if the timing is done right.

There's also the trick of sending small data streams instead of everything at once. That helps the receiver stay sync'd to the transmitter frequency.

But back to the main point, i had not seen even one failure after properly 'tuning' the on chip oscillator (finding the right value using a program that measures the frequency). And the timing was done all in asm. Maybe that is part of why i am so stuck on asm for some things :)
 
Here is an example... I have just completed a wireless radio unit at 9600 baud... I used the internal OSC of a PIC18F4520... Nope!!! Put a 4MHz crystal on..... Works like a charm..

Doesn't the 18F4520 come with the correct OSCTUNE value already loaded? Most PICs do, which is why you have to ensure you don't delete or alter it (unless you mean to).

Internal oscillators should be fine for 9600 baud, which is one reason it's my normal choice :D

Having said that, I've happily used the internal oscillator for 115,200 baud on 16F1827's in order to change the default baudrate down to 9600 baud on SMS modem chips.

Initially I tried to be 'too clever': I wrote code to do the change only the first time the program runs, then code to check whether it's already set to 9600 and change it only if not. But then I realised it didn't matter - simply send the change-baudrate command at 115,200 every time it runs; if it's already set to 9600 baud the command will be ignored.
 
Doesn't the 18F4520 come with the correct osctune value already loaded?,
No... It was set to zero.. I didn't check whether it was stored elsewhere... I have checked four or five of them... I couldn't be bothered messing so I just put a crystal on...

It's far more stable anyway.... I'm using more and more extended-range PICs and the internal OSC is always spot on.... The PIC16F1825 is very good...
 
Just re-read the datasheet.

datasheet said:
2.6.5.1 Compensating with the USART
An adjustment may be required when the USART begins to generate framing errors or receives data with errors while in Asynchronous mode. Framing errors indicate that the device clock frequency is too high; to adjust for this, decrement the value in OSCTUNE to reduce the clock frequency. On the other hand, errors in data may suggest that the clock speed is too low; to compensate, increment OSCTUNE to increase the clock frequency.
 
Hi,

Now i remember one of the reasons i liked asm for this. That's because with asm we can calculate the time delays precisely, knowing the frequency of the oscillator - provided the oscillator is running at the right frequency. That eliminates the code errors right off the bat.
The code timing can be checked to perfection in MPLAB too, so there comes a time when the code timing just has to be perfect, as long as the oscillator is calibrated. Using the simulator built into MPLAB, we can run the code that generates the timing, and if we need say a 1ms pulse and it comes out as 1ms plus one instruction cycle, we know we still don't have it right. So we can get the code down to where the timing is accurate to the instruction cycle time, which may or may not be the exact time, but we can get really close if we can't find an integer number of cycles that meets the exact timing spec.

For example, with a 4MHz oscillator we get a timing resolution of 1us (one instruction cycle). That means if we generate a delay of exactly 1000 instruction cycles we have exactly 1ms. If we simulate in MPLAB and find it is really 1001 cycles, we know right away it is not right yet and we have to get rid of that extra cycle. Before long it's perfect, and all we have to do then is make sure the frequency cal byte is actually producing the right frequency.

I think what i was referring to was actually the OSCCAL value? It's been a while since i had to do this so i forgot exactly which register it was, but one example chip was the 12F675 and that had the value i was talking about.
 
I think what i was referring to was actually the OSCCAL value? It's been a while since i had to do this so i forgot exactly which register it was, but one example chip was the 12F675 and that had the value i was talking about.

Certainly those come pre-aligned, and you need to save and restore its value when you program the chip - 'hopefully' MPLAB does that automatically?
 
Oops.. up till my attempts today, I have been cranking the clock to max, then OSCTUNE to max - faster = better, right?
Then I run this code to find what baud delay I need:
(the debug code is commented out here; baud is the value I ended up needing)
Code:
void delayUS(unsigned int t)    // note: each unit of t burns 10us, so this actually delays t*10 us
{
    while (t>999){__delay_us(10000);t-=1000;}
    while (t>99){__delay_us(1000);t-=100;}
    while (t>9){__delay_us(100);t-=10;}
    while (t>0){__delay_us(10);t-=1;}
}

void LCDascii(unsigned char dat)
{  
    unsigned char cnt;
    unsigned char datdat;
    unsigned int baud=0x133;
//    for (baud=0x1;baud<2000;baud++){
//    dat = 0xAA;
//    unsigned char datdat;
        datdat =  dat;
        cnt=0;
    PORTC &= 0b11111101;
    delayUS(baud);
    nextstep:
    if (datdat&1){PORTC |= 0b00000010; }else{PORTC &= 0b11111101;}
    delayUS(baud);
    datdat = datdat>>1;    
    if (cnt<8){cnt++;goto nextstep;}    // note: this loops 9 times (cnt 0..8), so an extra data bit is sent
    PORTC |= 0b00000010;    // stop bit (the original set an undeclared 'dummy' variable and copied it to PORTC)
    delayUS(baud);    delayUS(baud);    delayUS(baud);    delayUS(baud);    delayUS(baud);
    delayUS(baud);    delayUS(baud);    delayUS(baud);    delayUS(baud);    delayUS(baud);
//    eeprom_write(1, ((baud>>8)));
//    eeprom_write(2, (baud)&0xFF);
    delayMS(100);
//    }  
}

My attempt with this post was to do things properly and programmatically, without so much guesswork:
first: #define _XTAL_FREQ 8000000
2) set up OSCCON
2.5) postscaler = 1:1
3) put a pulse on the scope and configure OSCTUNE
4) missing things I "didn't know"/"forgot" about?
5) now __delay_us(10) actually is a 10us delay!

I am a little surprised by the responses about glitchy compilers, not so much about how I am stalling the clock - I see now why we like asm so much.. good sample code @Nigel's, I am almost able to see how the calls work...
but I need a more fundamental understanding, and that's where it hit me: the top line of the datasheet says only 35 WORDS to learn... could these be asm words? Table of contents... yup! Chapter 12: Instruction table, of course~!

so here is my first attempt to add:
ADDWF f,d

... scratch that, going by a snippet here:
movlw 0xff ; move literal to register W
movwf PORTA ; move register W to PORTA
movwf PORTB
... so a literal is a hard-coded value?

I can see f & W are registers I want to work with, but what is b or d... an output value?
.... maybe it would help if there were an explanation somewhere of how/what I am working with in terms of under the hood - the registers and how it all fits together, as opposed to what each instruction does?

i.e. do I move valA into reg1, valB into reg2, and reg3 = output after the operation? I assume with asm that means single-step operations..?
 
:cool: still works! , code posted: osctune =0; max clock, 1:1 , 2ms = delayUS(0x133) ..... just interesting to see how things are done the right way sometimes!
 
Certainly those come pre-aligned, and you need to save and restore its value when you program the chip - 'hopefully' MPLAB does that automatically?

Hi,

Yes i think that comes preset at the factory, but i always checked it anyway and reset it with a small Windows program from MC that does that.

One of the first things i had to do in every program was this:

call h'3FF'     ; the factory calibration value sits at the top of program memory as a RETLW, returning it in W
movwf OSCCAL    ; load it into OSCCAL

and that sets the frequency as it starts up.
 
I have discovered another mistake in my code!
CMCON = 0; but I have just discovered that the AN4 I use is on C2IN+, and the rest of the comparator pins are used for digital I/O... should CMCON be 0 or 7?


I am looking at Nigel's sample and I see the line: Loop .... where does the machine convert that to an address - does the assembler capture the program counter somehow?
I notice in the video he has buttons for auto or single stepping the clock, entering programming mode, resetting and such... how does the PICkit send these?
Which part turns the asm, i.e. bcf, into a binary instruction (is that the linker file)?
Is microstepping always the same for all... can I mostly forget about it?
About the control word, I see how it connects to each register - is there only 2 pins for each, read/write?
Is this bus different from a PCI bus, or is there a header on my PC motherboard that I can wire into this bus?


And I also wonder, how different are the architectures? If my PIC16F688 (no LATs) has 35 instructions, does that change with PIC18? ... and what does that mean about x86? ... and Arduino? Different instructions/hex mapping.. the hex code changes between devices, right?


Sorry... I've never seen under the hood like this before.... It's BEAUTIFUL!!!
 
And I also wonder, how different are the architectures? If my PIC16F688 (no LATs) has 35 instructions, does that change with PIC18? ... and what does that mean about x86? ... and Arduino? Different instructions/hex mapping.. the hex code changes between devices, right?

PICs are RISC processors so have very few instructions, but as a result the few they have execute very quickly. The PIC18 is still a RISC processor, but introduces further extra instructions, and the 24 series (and 33, 32) introduces still more.

AVRs have more instructions, and full microprocessors (like x86) have a great many instructions.

Machine code between devices is completely different, and it's the assembler that converts the assembly code to machine code.

My first programming experience was on the 6502, manually converting to machine code, and manually entering the hex bytes one by one :D
 
manually entering the hex bytes one by one
Welcome to our world... Hitachi H8/500 16-bit.... The developer couldn't understand S-records, so that's how he did it!!! Nightmare!!!
 
Welcome to our world...

It was a long time ago :D

I've still got the computer, a Tangerine Microtan65.

Initially a single board with 1K of RAM, video output, and a HEX keypad.

From that I added the second board, called Tanex, then plugged an extra EPROM into that for a single-pass assembler, and extra RAM as well, plus a full ASCII keyboard.

Later on I added more EPROMs to add a 10K Microsoft BASIC.
 
Hi,

For a really, really good example of a non-RISC processor from the same era, see the Z80. Instructions, instructions, instructions. I loved that thing and made a microcontroller board out of one :)
 