It's either really early, or I'm still not following. Where is 102.3V coming from (other than 1/10th of 1023)? What relevance does it have to the 0-25V input signal?
It's scaling the input to match the ADC's resolution - design the attenuator so that 102.3V at its input would give full scale (1023) at the analogue input. This gives you a nice display, with no scaling required (other than inserting the decimal point in the correct place).
When you input 25V the output of the ADC will be 250, and you simply stick the decimal point between the 5 and the 0 to give the correct reading of 25.0. No maths involved, and it gives you 0.1V resolution.
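A minimal sketch of the direct-readout idea, assuming a 10-bit ADC (0-1023) and an attenuator sized so that 102.3V at its input gives full scale; each count is then exactly 0.1V, so the "scaling" is just inserting a decimal point:

```c
#include <stdio.h>

/* adc_count: raw 10-bit reading, 0..1023. One count = 0.1 V under the
 * assumed 102.3 V full-scale attenuator, so no arithmetic is needed
 * beyond splitting the count into whole volts and tenths. */
static void show_volts(unsigned adc_count)
{
    printf("%u.%u V\n", adc_count / 10, adc_count % 10);
}

int main(void)
{
    show_volts(250);   /* 25 V in -> count 250 -> prints "25.0 V" */
    show_volts(123);   /* 12.3 V in -> count 123 -> prints "12.3 V" */
    return 0;
}
```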
I have never understood the reasoning put forth on ADC scaling to drop digits within the displayed value. I find it odd that this topic only ever seems to come up on the forums. I have never had this discussion in the workplace or with any customers.
When you use your multimeter, does the last digit only display some values? I thought not.
If you don't understand the basic principles, it's unlikely you would be discussing it with customers - or, indeed, that you should be talking to customers at all.
Why do you keep mentioning 1%? Where does the 1% come from?
Because it's difficult to manage 1% accuracy (and calibration), and my suggestion gives better than 1% accuracy (depending on the rest of the circuit). In fact, for your 25V requirement it gives 0.4% resolution: one ADC count is 0.1V, and 0.1V out of 25V is 0.4%.
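A quick check of that 0.4% figure, assuming the 102.3V full-scale scheme described above:

```c
#include <stdio.h>

int main(void)
{
    double step = 102.3 / 1023.0;        /* volts per ADC count = 0.1 V */
    double pct  = step / 25.0 * 100.0;   /* one count as a % of a 25 V reading */
    printf("step = %.1f V, which is %.1f%% of 25 V\n", step, pct);
    return 0;
}
```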
P.S. There are multibillion-dollar companies that don't think a last digit (LSD) that only ever reads 0 or 5 looks horrible.
Pretty crappy companies, then - and bear in mind that's only ONE example of poor scaling, and the most usable one. If you're scaling by other values it gets worse.
As I said before, get your calculator out and do some sample readings. As we're talking about a 10-bit conversion, try the 13 values from 500 to 512, apply your scaling maths to them, and see what results you get - perhaps then you'll understand the issues.
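That exercise sketched in C. The thread doesn't spell out the exact "scaling maths" being criticised, so this assumes one typical approach: count × 25.0 / 1023, displayed to two decimal places. Running it shows the last digit jumping by 2 or 3 between consecutive ADC codes, so many display values can simply never appear:

```c
#include <stdio.h>

int main(void)
{
    /* The 13 consecutive ADC codes suggested above. */
    for (unsigned count = 500; count <= 512; count++) {
        double volts = count * 25.0 / 1023.0;   /* hypothetical scaling maths */
        printf("count %u -> %.2f V\n", count, volts);
    }
    return 0;
}
```

With the 102.3V full-scale scheme there is no such gap: every consecutive count changes the display by exactly one step of 0.1V.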