Hi,
Let's do a simple experiment...
We'll take three ideal voltage sources of exactly 3 volts each and connect a 1 ohm resistor in series with each source. We'll then connect all three of these sets in parallel. This is our rough starting model of three 3v LEDs in parallel.
Now we take a fourth voltage source of 6v, connect it in series with a 50 ohm resistor, and connect the other end of the 50 ohm resistor to the junction of all the 1 ohm resistors. What we have now is similar to three LEDs being powered by a 6v source through a 50 ohm resistor.
Now we measure the current through each LED, and we find that each LED has very nearly 20ma through it, and a total of nearly 60ma flows through the 50 ohm resistor. This is what we wanted.
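If you want to check these numbers without a simulator, the node equation can be solved directly. Here is a small Python sketch of the ideal model above (the function and variable names are just mine, nothing standard):

def junction_voltage(v_supply, r_supply, led_voltages, r_branch=1.0):
    # KCL at the junction: (v_supply - V)/r_supply = sum of (V - vk)/r_branch
    g = 1.0 / r_supply + len(led_voltages) / r_branch
    i = v_supply / r_supply + sum(vk / r_branch for vk in led_voltages)
    return i / g

leds = [3.0, 3.0, 3.0]
v = junction_voltage(6.0, 50.0, leds)
for k, vk in enumerate(leds, 1):
    print("LED %d: %5.1f ma" % (k, (v - vk) / 1.0 * 1000))   # branch current through the 1 ohm
print("Total:  %5.1f ma" % ((6.0 - v) / 50.0 * 1000))

With all three sources at 3v this prints about 19.9ma per LED and about 59.6ma total, matching the figures above.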
Now we change the 'characteristic' voltage of one of the LEDs to 2.9 volts, which means we change one 3v source to 2.9v. That's a change of -0.1 volts. Measuring the LED currents again, we find that the current in the two LEDs we did not change went DOWN to around 13ma, while the current in the LED we did change went UP to about 87ma.
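Plugging leds = [3.0, 3.0, 2.9] into the sketch above reproduces this: the junction settles near 2.99v, the changed branch carries about 87ma, and the two unchanged branches show about 13ma flowing the 'wrong' way, back into the ideal sources. That reversed current is the clue that the ideal-source model is too generous, which is what the next step fixes.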
This is an extreme case, however, used just to illustrate that the lower voltage source now draws much more current. We would not see that much change in a real circuit, because the two remaining 3v sources would really be LEDs and could not contribute current to the third LED. So we modify the circuit a little: we add a 1N4001 diode in series with each 1 ohm resistor, and change each LED voltage to 2.3 volts (so each branch still drops about 3 volts at 20ma, with the diode making up the difference). The current in each LED is again nearly 20ma and the total current nearly 60ma.
Now we reduce one of the LED voltages to 2.2 volts, again a change of -0.1 volts. Measuring the currents in the two untouched LEDs, we now see about 7ma through each one, while the current through the changed LED measures about 46ma.
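For the diode version the node equation becomes nonlinear, but it can still be solved with a little bisection instead of a simulator. Here is a rough Python sketch; the diode parameters IS and N below are only assumed, 1N4001-ish values, so the exact milliamp figures depend on the diode model you use, even though the pattern comes out the same (a few ma in the untouched branches and most of the current in the changed one):

import math

IS, N, VT = 14e-9, 2.0, 0.02585   # assumed saturation current, ideality factor, thermal voltage

def branch_current(u, r=1.0):
    # Current through 1 ohm + diode for a drive voltage u across the pair.
    # Solves i*r + N*VT*ln(i/IS + 1) = u by bisection; the diode blocks if u <= 0.
    if u <= 0:
        return 0.0
    lo, hi = 0.0, u / r
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mid * r + N * VT * math.log(mid / IS + 1.0) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def solve_junction(v_supply, r_supply, led_voltages):
    # Bisect on the junction voltage until the supply current equals the sum of branch currents.
    lo, hi = 0.0, v_supply
    for _ in range(60):
        v = 0.5 * (lo + hi)
        if (v_supply - v) / r_supply > sum(branch_current(v - vk) for vk in led_voltages):
            lo = v
        else:
            hi = v
    return 0.5 * (lo + hi)

for leds in ([2.3, 2.3, 2.3], [2.3, 2.3, 2.2]):
    v = solve_junction(6.0, 50.0, leds)
    print(leds, ["%.1f ma" % (branch_current(v - vk) * 1000) for vk in leds])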
So we see that the LED with the lowest characteristic voltage draws the most current, and in doing so it usually steals current from the other LEDs.
The effect is actually a little more complicated than that, because the LED drawing the extra current heats up more, so its characteristic voltage drops further (about -3mv to -5mv per degree C for white LEDs), and that in turn may heat up some of the others and lower their voltages too. Either way, we can see this is not a good thing.
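To get a feel for the size of that effect: at, say, -4mv per degree C, a 25 degree C rise in one LED lowers its characteristic voltage by about 0.1 volts, which is the same shift used in the experiments above.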
Doing the same experiment with four 'LEDs' in parallel (which requires raising the supply voltage by 1 volt, to 7 volts), we see the 20ma in each LED change to about 10ma for the three untouched LEDs and slightly over 52ma for the one that has its voltage reduced by 0.1 volts. If we instead keep the 6v supply and change the 50 ohm series resistor to 38 ohms (to power four LEDs at 20ma from 6v), we see about the same results, so that doesn't change the outcome very much: slightly under 52ma in the changed LED.
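The four-LED case is just the same diode-model sketch with a fourth entry in the list, for example leds = [2.3, 2.3, 2.3, 2.3] and then [2.3, 2.3, 2.3, 2.2], fed from either 7v through 50 ohms or 6v through 38 ohms. The 38 ohms presumably just comes from wanting about 80ma total with roughly 3 volts across the parallel bank: (6 - 3) / 0.08 is about 37 ohms.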
This experiment, however, is based on only one LED changing, which is probably not the best way to look at it statistically. But from my experience the flashlight fails one LED at a time, which would seem to give this view some validity. It could be that thermal runaway trumps the other effects, so one LED fails, then the next, then the next, and so on.
It should be noted that this too was an extreme case, as real LEDs have more series resistance than 1 ohm. But with several LEDs in series strings, and the strings in parallel, the LEDs have a better chance of survival because each string has more series resistance (the sum of all the series LEDs' resistances). More overhead resistance is always better from a lifetime point of view, but of course not from an efficiency point of view.
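As a rough rule of thumb, the extra current the low-voltage branch can grab is on the order of the voltage mismatch divided by the branch's total series resistance: a 0.1 volt mismatch across 1 ohm of branch resistance can shift the current by up to 100ma, while the same mismatch across 10 ohms of string resistance can shift it by only about 10ma.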
