If I did that, wouldn't I get a misrepresented result on highly varied images? I'm trying to track brightness changes. If I skip every X pixels, then if an "object" (like a brightness patch or something) moves between the sampled pixels, I lose it, don't I? Brightness changes between adjacent pixels are lost because the adjacent pixels themselves are dropped.
Like, if the sensor simply had fewer pixels, wouldn't each pixel kind of "average out" on its own what it saw over its area? Whereas if I skip pixels, I see exactly what that one pixel saw but lose everything else around it?
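To make the contrast I mean concrete, here's a minimal sketch (assuming a NumPy grayscale frame; the array name and downsample factor are just for illustration) of skipping pixels versus averaging blocks of pixels:

```python
import numpy as np

# Hypothetical grayscale frame (H x W), e.g. one camera image
frame = np.random.rand(480, 640).astype(np.float32)

step = 4  # downsample factor (assumed)

# Skipping pixels (decimation): keep only every 4th pixel in each axis.
# A brightness change confined to the skipped pixels never shows up here.
decimated = frame[::step, ::step]

# Averaging (binning): each output pixel is the mean of a 4x4 block,
# so a small bright patch still nudges the block's value.
h, w = frame.shape
binned = (frame[: h - h % step, : w - w % step]
          .reshape(h // step, step, w // step, step)
          .mean(axis=(1, 3)))

print(decimated.shape, binned.shape)  # (120, 160) (120, 160)
```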
Doing it the way you mentioned also has the issue that the pixels in the image aren't captured at the same point in time, which matters if I'm trying to track brightness changes.