Well, I tried 2013 instead of 20136 (divided by 10), so I'm not exceeding 32 bits in any step of the calculation, but I still got a wrong result.
Why is that?
The result should be 0x3F57 rounded. According to my calculator it's 0x3f57.e381c596c2, so the problem may be in the divide routine. If it returns a float, maybe it's not converting it properly.
Thanks,
So how can I overcome this problem?
In a simple calculation like 0x5/0x2 it returns 0x2, which is as expected (since it truncates results). But how can I make it return a correctly rounded integer?
> Well, I tried 2013 instead of 20136 (divided by 10), so I'm not exceeding 32 bits in any step of the calculation, but I still got a wrong result.
> Why is that?
You are using 16-bit math. 0x123456 exceeds 16 bits, and so does the result of the multiplication. The constants are truncated to a 16-bit int and then multiplied/divided, hence the 'wrong' result.
If you want 32-bit math, use a suitable data type (unsigned long int) and make sure that neither the constants nor the intermediate results of operations overflow.
Well, I corrected it as you said:

typedef unsigned long uint32;   // uint32 can hold a 32-bit number
uint32 *Pointer;
*Pointer = (0x123456 * 2013) / 0x169AF3;   // 0x123456 * 2013 is a 32-bit number

And I still got a wrong result - 0xFB-20 - instead of the correct result, 0x655.
How do you explain it, please?
> The result should be 0x3F57 rounded. According to my calculator it's 0x3f57.e381c596c2, so the problem may be in the divide routine. If it returns a float, maybe it's not converting it properly.
The C language requires that this calculation be done with integers, specifically integers of the same size as 'int'. After the calculation is complete, the value is converted to the target type (unsigned short). The actual value is therefore dependent on what the size of an 'int' is with this compiler.
> Well, I corrected it as you said:
>
> typedef unsigned long uint32;   // uint32 can hold a 32-bit number
> uint32 *Pointer;
> *Pointer = (0x123456 * 2013) / 0x169AF3;   // 0x123456 * 2013 is a 32-bit number
>
> And I still got a wrong result - 0xFB-20 - instead of the correct result, 0x655.
> How do you explain it, please?
In the statement you show above, the evaluation of the value is done using integer variables of size 'int', not 'long'. The conversion to 'long' doesn't take place until the assignment. If you want the calculation to be done with 'unsigned long' variables, then you need something like:

*Pointer = ((unsigned long)0x123456 * 2013) / 0x169AF3;
The cast I show is required for the IAR compiler even if 'int' is 32 bits. This is because 'int' is signed and 0x123456 * 2013 doesn't fit in 31 bits. (It is compiler dependent what happens with this - some compilers will get the right answer if 'int' is 32 bits, some won't.)
That depends on the compiler. Some support a 'long long' (or 'unsigned long long') integer type which is usually 64 bits. You could also consider using 'double' which is usually a 64-bit floating point type. If you are working with integers, converting to 'double' to do calculations and then back to integers should not lose any precision if the results fit in 32 bits and the intermediate values are only 36 bits.
Alright, I did the following:

double temp;
temp = (double)0x123456 / 0x169AF3;
*Pointer = (unsigned long)(temp * 2013 * 10);

It worked, but I still don't understand why it returned zeros before.