I don't have time to work through that program line by line, but the basic principle of binary division in a computer is similar to one of the "long division" methods using pencil and paper.
You shift the divisor left by one less than the number of digits you are working with, and see if it can be subtracted from the dividend.
If it can, you add 1 (shifted by the same number of places) to the result.
Shift the divisor and the "1" bit right one place and repeat until the shift offset is zero; then you are done.
e.g. just using four bits:
1100 / 0010
Result 0000
00001100 - 00010000 (initial shift 3 bits, "1" bit register set to 1000); can't subtract (and stay positive), so no change to the result.
Result 0000
Shift and repeat:
00001100 - 00001000, "1" bit 0100; subtraction works, add bit to result:
0100
Shift and repeat:
00000100 - 00000100, "1" bit 0010; subtraction works, add bit to result:
0110
Shift and repeat:
00000000 - 00000010, "1" bit 0001; subtraction does not work, result unchanged:
0110
The shift or bit offset is now zero, so the operation is complete. Anything still in the dividend register is the remainder.
(And 0110 is 1100 divided by two, 0010).
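In C, that shift-and-subtract loop looks roughly like this. It's only a sketch of the method above, assuming 8-bit operands; the function name divide8 and the types are my own choices for illustration, not taken from the program you posted.

#include <stdint.h>
#include <stdio.h>

/* Shift-and-subtract division: an 8-bit sketch of the method described above. */
static uint8_t divide8(uint8_t dividend, uint8_t divisor, uint8_t *remainder)
{
    uint8_t quotient = 0;

    /* Start with the divisor shifted left by (number of digits - 1) places. */
    for (int shift = 7; shift >= 0; shift--) {
        uint16_t shifted = (uint16_t)divisor << shift;   /* widened so the shift can't overflow */

        /* If the subtraction stays non-negative, do it and set the matching quotient bit. */
        if (shifted <= dividend) {
            dividend -= (uint8_t)shifted;
            quotient |= (uint8_t)(1u << shift);
        }
    }

    *remainder = dividend;   /* whatever is left in the dividend register */
    return quotient;
}

int main(void)
{
    uint8_t rem;
    uint8_t q = divide8(12, 2, &rem);        /* 1100 / 0010 */
    printf("%u remainder %u\n", q, rem);     /* prints "6 remainder 0" */
    return 0;
}

(A real routine would also have to decide what to do when the divisor is zero; this sketch just ignores that.)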
Your example uses 16 bits, so two bytes or more for each value, resulting in more complex additions and subtraction tests etc., but it seems to be pretty much the same concept.
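To show why the multi-byte case gets fiddlier, here is a rough sketch (again just an illustration with made-up names, not your listing) of the "can I subtract?" test done a byte at a time, low byte first with a borrow carried into the high byte:

#include <stdint.h>

/* Compare-and-subtract for a 16-bit value held as two 8-bit bytes.
   Returns 1 and performs the subtraction if the result stays non-negative,
   otherwise returns 0 and leaves the operand untouched. */
static int sub16_if_fits(uint8_t *hi, uint8_t *lo, uint8_t sub_hi, uint8_t sub_lo)
{
    /* The "would it go negative?" test now needs both bytes. */
    if (*hi < sub_hi || (*hi == sub_hi && *lo < sub_lo))
        return 0;

    int borrow = (*lo < sub_lo);             /* low byte first, note any borrow */
    *lo = (uint8_t)(*lo - sub_lo);
    *hi = (uint8_t)(*hi - sub_hi - borrow);
    return 1;                                /* caller then sets the quotient bit */
}

The division loop itself is unchanged; only the compare/subtract (and the shifts, which also have to carry a bit between the two bytes) grow with the operand size.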