Why do you optimize for size? Do you often find yourself in a situation where your program doesn't fit in the MCU?
The same question can be asked about optimizing for speed: "Do you often find yourself in a situation where your MCU is not fast enough?"
I want to keep my code tight. During the debug phase I use a lot of UART communication to report things back, and those strings add up. Sometimes I need to pull in a floating-point library just for debugging, etc. I want to use memory efficiently even when I have plenty.
And I do keep the resource manager always in view. I want to know how much memory each module "wastes".
And my modules are meant to be re-usable. Someday I may use one in a very small device with very limited memory. If some "very general purpose" library gets "unreasonably large", I usually try to find out why, and try to re-design it.
Anyway.. has anybody actually measured how much larger the code gets when optimizing for speed, or how much faster it gets compared to optimizing for size? I have, and the size/speed difference is usually irrelevant; the choice between optimization levels is more psychological than practical (when compiling the "final product", that is; debugging is a different case).
Again I say that the design of your program, the "software architecture", is what really matters.
Example: Calculating a function using series expansion and fixed point can make very tight code, but it is not very fast compared to look-up tables. Look-up tables, however, use a lot of memory. That kind of design choice makes a difference no compiler optimization can match. Maybe you want to use look-up tables and optimize for size so that you have more memory for the tables. Or maybe you want to use series expansion and fixed point and optimize for speed.