PICASM / XC8 PIC C compiler bug

Status
Not open for further replies.

dougy83

Well-Known Member
Most Helpful Member
The numeric processor somewhere in the PICC / XC8 compiler toolchain produces different values from the preprocessor. This means that a defined value can be, e.g., 7 as far as the preprocessor is concerned, and 0 (zero) from the compiler/assembler's point of view.

The PICC assembler is known to be weak at evaluating numeric expressions, as it pays no heed to operator precedence. **broken link removed** I'm not sure if this has been fixed in subsequent releases, but when I raised a support ticket with HiTech in 2005, they weren't concerned that the assembler was incorrect, and said that the C preprocessor should be used for any calculations of compile-time constants (which of course isn't possible).

The following code snippet should not compile if MIN_MISSED_PULSES != 7, yet it does, and it compares the variable missedPulseCount against 0 rather than 7.
Code:
    while(1)
    {
        missedPulseCount++;

#if MIN_MISSED_PULSES != 7
#error Preprocessor broken
#endif
        if(missedPulseCount == MIN_MISSED_PULSES)
        {
            missedPulseCount--;                    // value of missedPulseCount is zero, when it should be 7
            missedPulse = 1;
        }
    }
From a few quick tests, it appears that the preprocessor evaluates expressions at a different precision than the compiler/assembler does.

The attached code illustrates the bug. The variable on line 29 will have a value of 0, when it should be 7. Uncommenting line 11 and commenting line 12 will provide correct program function.

The preprocessor also cannot handle expressions containing floating-point numbers, which is pretty lame, although not a bug.
 

Attachments

  • bug.c (1,016 bytes)
The preprocessor also cannot handle expressions containing floating-point numbers, which is pretty lame, although not a bug.

Isn't XC8 based on GNU C? Your example works fine with gcc on Linux.

By the C standard, the preprocessor can only handle integer values. Maybe it's only 16-bit values in XC8, since int is 16-bit? I wouldn't rush to declare this a bug without looking into the details of the C standard.
 
XC8 is based on Hi-Tech PICC.

So you're saying that the preprocessor and compiler are allowed to have different-sized ints?
 
Just define your large numbers (larger than 8 bits) as:
#define PULSE_FREQ 10000UL

I'm with Northguy. Not sure if the problem is a bug at all.
 
I had this same problem with the C18 compiler. Microchip in their wisdom decided to assume constants are 8-bit, and so this,
Code:
#define PulseLength 10
#define NumPulses 60
   
    var = NumPulses * PulseLength;
results in var containing 88 and not 600!!

There's a thread on here somewhere where I mentioned it.

Edit, found the thread and I'm still amazed that people defended it!! https://www.electro-tech-online.com/threads/can-anyone-explain-this-c18-feature-bug.125890/

Mike.
 
Last edited:
Microchip in their wisdom decided to assume constants are 8-bit, and so this,
Edit, found the thread and I'm still amazed that people defended it!! https://www.electro-tech-online.com/threads/can-anyone-explain-this-c18-feature-bug.125890/

I agree, you should be able to use the preprocessor to calculate accurate constants, including floating point. The preprocessor should cast the constant to the right type and size as the last step... not before.
For example: (400/1.6) should give you an 8-bit constant 250!

EDIT:
Of course this gives me an error because floating points are not supported (in expressions). I'm actually surprised that gcc does not have a switch to break this standard :)
#define NUMBER (400/1.6)

But at least this works with gcc:
#define NUMBER (4000/16) // result is 250

Could you try that simple expression with your compiler, dougy?
 
Last edited:
results in var containing 88 and not 600!!
So the standard integer promotion was not adhered to.
For example: (400/1.6) should give you an 8 bit constant 250!
This won't be evaluated by the preprocessor unless you try to use it in a preprocessor conditional statement (which would fail). The compiler would be responsible for evaluating it.
Could you try that simple expression with your compiler, dougy?
You mean 4000/16? I can try when I get home, though there's no reason that it won't be 250.
 
I'm with Northguy. Not sure if the problem is a bug at all.
So why is it acceptable to have two standards within the same compiler? The preprocessor arrived at the correct answer, the compiler/assembler did not.
 
According to the manual, numeric constants should be evaluated as 32-bit signed:

6.4.5.1 NUMERIC CONSTANTS
The assembler performs all arithmetic with signed 32-bit precision.
The default radix for all numbers is 10. Other radices can be specified by a trailing base

So if you need to specify the "1L", it must be a bug... Incidentally, XC8 is supposed to assume signed unless specified... I know for a fact it doesn't. We had some code yesterday..
C:
char x=10;
while(x-- >0);
didn't work..
C:
signed char x=10;
while(x-- >0);

Did work!!!
 
According to the manual, numeric constants should be evaluated as 32-bit signed:
So if you need to specify the "1L", it must be a bug... Incidentally, XC8 is supposed to assume signed unless specified... I know for a fact it doesn't. We had some code yesterday..
C:
char x=10;
while(x-- >0);
didn't work..
C:
signed char x=10;
while(x-- >0);
Did work!!!
That quote you provided is about the assembler, so the compiler is doing what it is supposed to in my case regarding the integer promotion. The preprocessor is going by a different set of rules though -- is this normal, or allowed by any C standard? EDIT: The compiler is not necessarily performing integer promotion in my example, because values >255 are included in the expression, so it may or may not be complying.
XC8 uses unsigned chars by default, ref section 2.4.7.2. There was (is?) a flag under the compiler settings to default to signed chars.
That code you provided should work for either case shown. The number of instructions may be different, however.
 
Last edited:
From the manual (ref **broken link removed**)
The type and conversion of numeric values in the preprocessor domain is the same as
in the C domain. Preprocessor values do not have a type, but acquire one as soon as
they are converted by the preprocessor. Expressions can overflow their allocated type
in the same way that C expressions can overflow.
Overflow can be avoided by using a constant suffix. For example, an L after the number
indicates it should be interpreted as a long once converted.
So, for example
#define MAX 1000*1000
and
#define MAX 1000*1000L
I think I've shown in my initial post that this is in fact false. The type and conversion of numeric values in the preprocessor domain and the C domain are clearly different.
 
Of course this gives me an error because floating points are not supported (in expressions). I'm actually surprised that gcc does not have a switch to break this standard :)
#define NUMBER (400/1.6)
But at least this works with gcc:
#define NUMBER (4000/16) // result is 250
Could you dougy try that simple expression with your compiler?
MisterT, both work fine.
Code:
#define NUMBER1 (400/1.6)
#define NUMBER2 (4000/16)
volatile uint8_t num1, num2;
Code:
294:                       num1 = NUMBER1;
   022    30FA     MOVLW 0xfa
   023    00A6     MOVWF 0x26
295:                       num2 = NUMBER2;
   024    00A7     MOVWF 0x27
 
So why is it acceptable to have two standards within the same compiler? The preprocessor arrived at the correct answer, the compiler/assembler did not.

In ANSI C, #if evaluation by the preprocessor is different from evaluation by the compiler. Here's what it says about #if evaluations:

"For the purposes of this token conversion and evaluation, all signed integer types and all unsigned integer types act as if they have the same representation as, respectively, the types intmax_t and uintmax_t defined in the header <stdint.h>."

So,

C:
#define WITHOUT_L 1000*1000
#define WITH_L 1000*1000L

Now

C:
#if WITHOUT_L < SOMETHING

is the same as

C:
#if WITH_L < SOMETHING

but

C:
if (WITHOUT_L < something)

and

C:
if (WITH_L < something)

might be different.

What is in stdint.h?

Another thought is that they don't necessarily have to adhere to ANSI C unless they claim full compliance.
 
I thought that #define was simply text substitution and doesn't have any type associated with it..

Mike.
 
I thought that #define was simply text substitution and doesn't have any type associated with it..

It sure is. The problem here is not with #define, but with #if. The OP's original test could be re-written without #define:

C:
#if (10000 * 70 * 33 / 300 / 10 / 1000) != 7
 
In ANSI C, #if evaluation by pre-processor is different from the compiler.
That's a bit of a trap. Thanks for pointing that out. I'll have to be careful in future when using the preprocessor to check compile-time constant values in avr-gcc, and potentially 32-bit gcc. The XC8 manual says both should behave the same, however.
What is in stdint.h?
intmax_t is 32-bit for XC8.
 
From the ANSI standpoint, it should be OK, right? The int size in XC8 would be 16 bits, so the compiler would process all 8- and 16-bit values as 16-bit (which it appears to do). It's just the preprocessor that uses the intmax_t type.

From the xc8 standpoint, it conflicts with its own manual.
 
From the ANSI standpoint, it should be OK, right? The int size in xc8 would be 16 bits, therefore the compiler would process all 8- and 16-bit values as 16-bits (which it appears to do). It's just the preprocessor that uses the intmax_t type.

IMHO, from the ANSI C standpoint, the #if statement which compares to 7 should work OK. That is, the preprocessor should use intmax_t and thus evaluate the macro as 7. However, when the macro is used later inside the C code, the compiler should use 16-bit arithmetic, so it should be replaced with 0.
 