I think I got the answer... it's pretty hacky the way I did it, though. I made two versions of the equation M(t):
Equation 1. Shifted the output by making M(t) -> M(t+σ). This just changes the boundaries of integration; I did no further work on this formula.
Equation 2. Shifted the input by making u -> u+σ and plugging that back into M(t). Then I worked to make Equation 2 look like Equation 1. The key step was showing d(u+σ) = du + dσ = du, since σ is a constant. From there I integrated, expanded the result to evaluate it between the integration boundaries, then repackaged it back into a bounded integral to recover Equation 1.
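To make the substitution step concrete, here's the derivation written out for a running-average system, which I'm assuming as the form of M(t) since it's the standard example (swap in your actual integrand and limits if they differ):

$$M(t) = \int_{t-a}^{t+a} f(u)\, du$$

Equation 1 (shifted output):
$$M(t+\sigma) = \int_{t+\sigma-a}^{t+\sigma+a} f(u)\, du$$

Equation 2 (shifted input), applying the system to $f(u+\sigma)$ and substituting $v = u+\sigma$, so $dv = d(u+\sigma) = du$ and the limits shift by $\sigma$:
$$\int_{t-a}^{t+a} f(u+\sigma)\, du = \int_{t+\sigma-a}^{t+\sigma+a} f(v)\, dv = M(t+\sigma)$$

The two expressions match, which is exactly the equality you're after.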
If Equation 1 (shifted output) equals Equation 2 (shifted input), then shifting the output t in M(t) has the same effect as shifting the input u in f(u), which means the system is time invariant.
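You can also sanity-check this numerically. Below is a minimal sketch assuming M(t) is a running average over [t-a, t+a] (the original post doesn't state the exact form, so the window and the test signal `cos` are my assumptions); it compares shifting the output against shifting the input:

```python
import numpy as np

def integrate(y, x):
    # Simple trapezoidal rule over sample points x with values y
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def M(f, t, a=1.0, n=2000):
    # Hypothetical system: M(t) = integral of f(u) from t-a to t+a
    u = np.linspace(t - a, t + a, n)
    return integrate(f(u), u)

f = np.cos        # sample input signal (assumption)
sigma = 0.7       # shift amount
t = 2.0

shifted_output = M(f, t + sigma)              # Equation 1: shift the output
shifted_input = M(lambda u: f(u + sigma), t)  # Equation 2: shift the input

# For a time-invariant system the two agree (up to numerical error)
print("difference:", abs(shifted_output - shifted_input))
```

The difference comes out at machine-precision level, which is the numerical counterpart of the two equations being identical.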