The voltage "scaling" factor (like 1V/meter or 1V/degree of temperature) is usually pre-determined by the sensor, and sometimes it can be tuned to fit your needs, but only up to a limit. In practice this factor is often on the scale of millivolts per unit. You can also add amplifiers and other signal-conditioning circuitry to help make the sensor's output better fit the voltage range the microcontroller can read.
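To make that concrete, here's a rough sketch with made-up numbers (not from any real datasheet): a hypothetical temperature sensor putting out 10mV/°C only spans 0-1V over 0-100°C, so an amplifier with a gain of 5 stretches it to fill a 0-5V ADC input.

```c
/* Illustrative numbers only -- not from any particular sensor's datasheet. */
#include <stdio.h>

int main(void)
{
    const double sensor_scale_v_per_degC = 0.010; /* 10 mV/degC (assumed)   */
    const double amp_gain = 5.0;                  /* external amplifier     */

    double temp_degC = 72.0;                      /* example reading        */
    double sensor_v  = temp_degC * sensor_scale_v_per_degC; /* 0.72 V       */
    double adc_input = sensor_v * amp_gain;                 /* 3.60 V       */

    printf("sensor output: %.2f V, after amplifier: %.2f V\n",
           sensor_v, adc_input);
    return 0;
}
```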
How you interpret that voltage is something you decide when you write the software that runs on the microcontroller. You can interpret the voltage from the sensor however you want (you could treat 1V = 5m even if the sensor outputs 1V = 1m; it's just not going to give you accurate results).
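Here's a sketch of what that interpretation looks like in firmware, assuming a 10-bit ADC with a 5V reference and a sensor that really does output 1V per metre (all values are illustrative):

```c
/* The scale factor is whatever your code says it is -- the hardware
 * doesn't care.  Assumed: 10-bit ADC, 5 V reference, 1 V per metre. */
#include <stdint.h>

#define ADC_MAX        1023.0   /* 10-bit ADC: codes 0..1023   */
#define ADC_REF_VOLTS  5.0      /* reference / supply voltage  */

/* Correct interpretation: 1 V = 1 m, matching the sensor. */
double distance_metres(uint16_t adc_code)
{
    double volts = (adc_code / ADC_MAX) * ADC_REF_VOLTS;
    return volts * 1.0;         /* 1 m per volt */
}

/* Nothing stops you from writing this instead (1 V = 5 m) --
 * it compiles and runs fine, it's just wrong for this sensor. */
double distance_metres_wrong(uint16_t adc_code)
{
    double volts = (adc_code / ADC_MAX) * ADC_REF_VOLTS;
    return volts * 5.0;
}
```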
A general-purpose microcontroller (the specific term for a computer on one chip: processor, memory and everything else on a single chip) can usually only handle about 5V maximum (that's the chip's supply voltage, and for simplicity, 5V is the voltage that represents a digital 1 and 0V represents a digital 0).
So you can see that practicalities come into play here. If you want a sensor that can measure a very long range, that whole range has to be squeezed into 5V, which means dividing the 5V into many very small slices. If you don't need to measure such a long range, the slices can be bigger. The smaller the slices are, the harder and more expensive it is to measure them accurately (noise also makes very small slices harder to measure). ADCs also have a minimum resolution, so if the slices get too small, the ADC simply can't tell them apart (i.e. if the slices are down to 1mV but the ADC can only resolve 5mV steps, then 1, 2, 3 and 4 mV all look the same to it).
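To put rough numbers on the "slices" idea (all values assumed, purely for illustration): a 10-bit ADC over 0-5V gives 1024 levels of about 4.9mV each, and if a sensor has to squeeze a 50m range into those 5V, each metre only gets 100mV, so one ADC step is worth roughly 5cm.

```c
/* Back-of-envelope resolution math for the example above. */
#include <stdio.h>

int main(void)
{
    const double full_scale_v = 5.0;
    const int    adc_levels   = 1024;     /* 10-bit ADC (assumed)       */
    const double range_m      = 50.0;     /* sensor's full range (assumed) */

    double volts_per_step  = full_scale_v / adc_levels;        /* ~4.9 mV  */
    double volts_per_metre = full_scale_v / range_m;            /* 100 mV   */
    double metres_per_step = volts_per_step / volts_per_metre;  /* ~0.049 m */

    printf("ADC step size:        %.4f V\n", volts_per_step);
    printf("volts per metre:      %.3f V\n", volts_per_metre);
    printf("distance resolution:  %.3f m per ADC step\n", metres_per_step);
    return 0;
}
```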
So you can either reduce the sensor's maximum range, or use a higher voltage so the signal is easier to read accurately, but then you trade off range (and too high a voltage can't be used by computer-type electronics).