Many people do not realize that the notation [n] is shorthand for (nT), where T is the sample time. Parentheses indicate a functional dependence, for example f(t). The sampled signal is f(nT), and the notation f[n] is shorthand for that.
The key thing here is to understand the definition and its physical meaning. Then you can translate back and forth to the "real world" power and energy as needed.
Thank you, Steve.
I didn't know that there was a separate delta function called the Kronecker delta. So, I was simply trying to confirm the given definition/relationship from my previous post against my own informal understanding; I was treating δ[n] as a Dirac delta function for discrete signals. If you ask me, I still think I was doing it right, although you can argue that I'm complicating the matter.
Thank you for the advice. I appreciate your concern.
Actually, forget the "ε". Say you have a rectangle with height 1/a and width a; the width is symmetric about the origin, that is, it extends from -a/2 to +a/2, so width = |-a/2| + (a/2). The signal is zero everywhere except on that rectangle at n=0. Now, as a→0, the area a · (1/a) = 1. Thanks.
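A quick numerical sketch of that limit (illustrative values of a chosen by me): the rectangle's area a · (1/a) stays 1 no matter how small a gets, which is the idea behind the unit-area impulse.

```python
# Sketch: a rectangle of width a and height 1/a keeps unit area
# as a shrinks toward 0 -- the limiting picture behind the delta.
for a in [1.0, 0.1, 0.01, 0.001]:
    height = 1.0 / a
    area = a * height
    print(f"a = {a}: height = {height}, area = {area}")
```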
Regards
PG
For Q2,
Did you miss the fact that the given definition is for a periodic signal? The definition includes the parameter T, which is the period. If the signal is not periodic, T has no meaning.
The idea of an average value can be extended to non-periodic signals simply by defining limits for the average. One can take any bounded signal and ask, "what is the average value of this signal from t1 to t2?". Once this is done, the limits of integration are clear, and the integral is divided by (t2-t1) instead of T. This looks as follows.
[latex]A=\frac{\int_{t_1}^{t_2} x(t)\cdot dt}{t_2-t_1}[/latex]
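As a sanity check of that formula, here is a minimal numerical sketch (the test function x(t) = t² and the interval [0, 2] are my own illustrative choices; the exact answer is (1/2)·∫₀² t² dt = 4/3).

```python
# Numerical sketch of A = (integral of x(t) from t1 to t2) / (t2 - t1),
# using a midpoint Riemann sum.
def average_value(x, t1, t2, n=100000):
    dt = (t2 - t1) / n
    total = sum(x(t1 + (k + 0.5) * dt) for k in range(n)) * dt
    return total / (t2 - t1)

print(average_value(lambda t: t**2, 0.0, 2.0))  # ~ 4/3
```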
This is just the extension of the common everyday idea of an average for a finite collection of numbers, as follows.
[latex]A=\frac{\sum_1^N x(n)}{N}[/latex]
where x(1), x(2), x(3), ... x(N) are finite numbers
In your specific example, the idea is to average the signal power, which means averaging the square of the signal, not the signal itself. But don't let this fact throw you off the basic idea of what an average is.
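To make the distinction concrete, here is a small sketch of averaging the *square* of a signal over one period (the choice of x(t) = sin(t) is mine; its average value over a period is 0, but its average power is exactly 1/2).

```python
import math

# Average power = average of the squared signal over one period T.
def average_power(x, T, n=100000):
    dt = T / n
    return sum(x((k + 0.5) * dt) ** 2 for k in range(n)) * dt / T

print(average_power(math.sin, 2 * math.pi))  # ~ 0.5
```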
But I can say that if a function has a definite area over an unbounded integration interval, then it could have average power. For example, you can see at **broken link removed** that the function has a definite area over an unbounded integration interval.
I would recommend that you actually work similar examples out and see what you get.
What happens for y=exp(-x), y=exp(-|x|), and y=exp(-x)·u(x)?
So the answer to the first problem is infinity, and the other two answers are zero, as you showed.
This means that this definition of average is not very useful for these signals. All three of these signals are always positive and never negative, yet the answer for the average value is not a useful finite value.
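A quick check of these three cases, using the closed-form integrals over [-T/2, T/2] divided by T (the integrals are worked out by hand, so treat this as a sketch): the first average blows up as T grows, while the other two shrink to zero.

```python
import math

# Average of y over [-T/2, T/2], divided by T, via closed-form integrals.
def avg_exp(T):        # y = exp(-x): integral = exp(T/2) - exp(-T/2)
    return (math.exp(T / 2) - math.exp(-T / 2)) / T

def avg_exp_abs(T):    # y = exp(-|x|): integral = 2*(1 - exp(-T/2))
    return 2 * (1 - math.exp(-T / 2)) / T

def avg_exp_step(T):   # y = exp(-x)*u(x): integral = 1 - exp(-T/2)
    return (1 - math.exp(-T / 2)) / T

for T in [10.0, 100.0, 1000.0]:
    print(T, avg_exp(T), avg_exp_abs(T), avg_exp_step(T))
```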
I just wanted you to see that this definition is a bit strange. What it does is pick up any constant shift or DC offset. This is because the integral of a constant value K from -T/2 to +T/2 is KT. If you divide this by T, you always get K, even in the limit as T goes to infinity.
Any other functional shape either averages out to infinity or down to zero. For those signals that average to zero, you can always add a constant to get a nonzero average value.
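A short sketch of the DC-offset point (the offset K = 3 and the signal K + sin(t) are my own illustrative choices): the sine part is odd and averages out over the symmetric interval, so the average settles on K for any T.

```python
import math

# Averaging K + sin(t) over [-T/2, T/2] picks out the DC offset K,
# since the odd sin(t) part integrates to zero on a symmetric interval.
def running_average(x, T, n=200000):
    dt = T / n
    return sum(x(-T / 2 + (k + 0.5) * dt) for k in range(n)) * dt / T

K = 3.0
for T in [10.0, 100.0, 1000.0]:
    print(T, running_average(lambda t: K + math.sin(t), T))
```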
This definition can be used for periodic signals too, but the other one I gave you is simpler for those cases.