
unsigned char data type


Parth86

Member
Hello
What is the use of the unsigned char data type in microcontroller programming? Does it store only characters? If it stores characters, how does the 8051 understand them? That seems impossible, because the 8051 understands only binary. Can anyone help me with an example to understand when and why we use unsigned char?

C:
#include <reg51.h>
int main(void)
{
    unsigned char a;
    /* Note: a is 8-bit (0..255), so a < 1000 is always true
       and this loop never terminates. */
    for (a = 0; a < 1000; a++)
        ;               /* empty body: a bare busy loop */
    return 0;
}
 
- Use the char data type only when you want to represent characters. They are binary values that are interpreted as actual characters when you send them to a device that can show them - usually a terminal (PuTTY) or an LCD display.
- Use the standard integer types for other kinds of variables.

https://www.electro-tech-online.com/threads/variables-101-char.134416/
https://www.nongnu.org/avr-libc/user-manual/group__avr__stdint.html
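
A minimal sketch of that idea (assuming a hosted compiler on a PC so printf is available; on the 8051 you would send the byte to a UART or LCD instead):

C:
#include <stdio.h>

int main(void)
{
    unsigned char c = 65;   /* to the CPU this is just the byte 0x41 */

    printf("%c\n", c);      /* interpreted as a character: A  */
    printf("%u\n", c);      /* interpreted as a number:    65 */
    return 0;
}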
I have seen that link. What is the difference between these two examples?
unsigned int i = 12;
unsigned char i = 12;
I don't see any difference, because both store the decimal value 12.
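
A quick sketch of the difference (assuming a hosted compiler for printf; the sizes are compiler-dependent, e.g. int is 2 bytes on Keil C51 and usually 4 on a PC):

C:
#include <stdio.h>

int main(void)
{
    unsigned char c = 12;
    unsigned int  i = 12;

    printf("char is %u byte(s), int is %u byte(s)\n",
           (unsigned)sizeof c, (unsigned)sizeof i);

    c = 255;        /* the largest value a char can hold */
    c = c + 1;      /* wraps around to 0                 */
    i = 255;
    i = i + 1;      /* an int has room: becomes 256      */
    printf("c = %u, i = %u\n", c, i);   /* c = 0, i = 256 */
    return 0;
}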
 
I'm beginning to hate the terminology...

char, int, long.... It's all wrong.... An integer is a whole number, 0 ~ infinity... Programmers are now realizing this and redefining the data types..

INT8
INT16
INT32
and
UINT8
UINT16
UINT32

The reason int is used as 16 bit comes primarily from the oiks in programming circles at the dawn of the 16 bit processor..
An int has always been the size of the working register, but the way to naff it right up is by taking the then-called char (7 bit ASCII) and using it as an unsigned 8 bit variable, extending the ASCII character set to 255... (That's why most terminal programs still support 7 bit protocols and also provide the mask for compatibility)..

They should have dropped the char and just made it a signed or unsigned 8 bit integer..
 
Int is 16 bit (so 0-65535), char is 8 bit (so 0-255).
Does that mean unsigned int is used to store integer values and unsigned char is used to store ASCII values? If the register is 16 bit it can store an integer (0-65535), and if unsigned char is 8 bit it stores ASCII values (integer values 0-255) to display letters and symbols.

I think if I create a delay I will use the unsigned int data type, and if I want to display letters or a message I will use unsigned char in the program. Suppose I want to display the letter 'a'; then I need to assign the ASCII value 97, for example unsigned char a = 97;
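
A small sketch of that (hosted compiler assumed for printf; 'a' and 97 are literally the same byte):

C:
#include <stdio.h>

int main(void)
{
    unsigned char a = 97;    /* ASCII code for 'a'                 */
    unsigned char b = 'a';   /* same value, written as a character */

    printf("%c %c\n", a, b); /* prints: a a   */
    printf("%u %u\n", a, b); /* prints: 97 97 */
    return 0;
}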
 
Think of it this way...

If you have a C program and you have functions that return a value... On an 8 bit micro, it's a whole lot faster to return a char (byte) or (8 bit int), as the return value will normally be in the working register, which is... well, 8 bit..
To return a 16 bit value you need to start stacking... The same goes for the first value passed to a function..

If you only need 0 ~ 255 then use a byte (char) (8 bit) number... If you are going to use math on the variable, then use larger variables.... If you define an unsigned char, fill it with 255 and add 5, you won't get 260!!!
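
A minimal sketch of that overflow (assuming a hosted compiler for printf):

C:
#include <stdio.h>

int main(void)
{
    unsigned char x = 255;

    x = x + 5;               /* 260 doesn't fit in 8 bits: 260 % 256 = 4 */
    printf("x = %u\n", x);   /* prints: x = 4, not 260                   */
    return 0;
}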
 
They should have dropped the char and just made it a signed or unsigned 8 bit integer..
Agreed, and the fact that some compilers define words as different things can make porting really annoying.

I tend to use u8, u16, u32, s8, s16, s32 etc. - unambiguous. Then I use a "types.h" file to define each, which sorts out the potential portability hurdle.
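
One possible sketch of such a types.h (the base types here are assumptions - they must be checked against each compiler, e.g. int is 16 bit on Keil C51 but 32 bit on a typical PC compiler):

C:
/* types.h - fixed-width aliases; verify against your compiler */
#ifndef TYPES_H
#define TYPES_H

typedef unsigned char  u8;    /* 8 bits                   */
typedef signed   char  s8;
typedef unsigned int   u16;   /* 16 bits on e.g. Keil C51 */
typedef signed   int   s16;
typedef unsigned long  u32;   /* 32 bits                  */
typedef signed   long  s32;

#endif /* TYPES_H */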
 

Yes, it is good to know your compiler, especially when working with microcontrollers. For example, avr-gcc does not support the standard double: both float and double are treated as a 4-byte float. The reason I use avr-gcc (and Atmel microcontrollers) is that the documentation is really good and it is free.
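
A quick way to check what your compiler actually does (just a sketch; classic avr-gcc prints 4 for both, while a typical PC compiler prints 4 and 8):

C:
#include <stdio.h>

int main(void)
{
    printf("float:  %u bytes\n", (unsigned)sizeof(float));
    printf("double: %u bytes\n", (unsigned)sizeof(double));
    return 0;
}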
 
All C compilers that support C99 have stdint.h, which defines the fixed-width integer types (uint8_t, int8_t, ...). These should be used when possible; they are the new standard.
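
A short sketch using them (any C99 compiler):

C:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  small = 255;    /* exactly 8 bits:  0..255    */
    uint16_t wider = 65535;  /* exactly 16 bits: 0..65535  */
    int8_t   neg   = -128;   /* exactly 8 bits:  -128..127 */

    printf("%u %u %d\n", (unsigned)small, (unsigned)wider, (int)neg);
    return 0;
}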
 