power and energy of discrete signal

Attachments

  • discrete.jpg (298.7 KB)
  • signals_3.jpg (170.4 KB)
What you are saying does make sense, but it makes sense mostly from an engineering/physics viewpoint. In discrete mathematics, definitions 1.3 and 1.4 are more useful. Definitions are things you can't really argue with: if someone decides to define something and give it a name, then it is a usable thing.

Often Δt is constant and does not vary from point to point as a function of n. In those cases, you can just factor the Δt part out, or factor it back in. Many people do not realize that the notation [n] is shorthand for (nT), where T is the sample time. The parentheses ( ) indicate a functional dependence, for example f(t). The sampled signal is f(nT), and the notation f[n] is shorthand for that.

The key thing here is to understand the definition and its physical meaning. Then you can translate back and forth to the "real world" power and energy as needed.
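As a small illustration of the f[n] = f(nT) shorthand and of factoring the constant Δt in and out of an energy sum, here is a minimal Python sketch (the decaying test signal and the value of T are arbitrary choices for the example):

[code]
import numpy as np

Ts = 0.01                        # sample time T (assumed value for the example)
n = np.arange(0, 1000)           # integer sample indices
f = np.exp(-n * Ts)              # f[n] is shorthand for f(nT)

# Dimensionless discrete energy: a plain sum over the samples
E_discrete = np.sum(f**2)

# "Real world" energy approximation: factor the constant Δt = Ts back in
E_physical = Ts * np.sum(f**2)   # approximates the integral of f(t)^2 dt

print(E_discrete, E_physical)    # E_physical ≈ 0.5 = ∫ e^(-2t) dt over t ≥ 0
[/code]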
 
Thank you, Steve.

I was one of 'those people'. But now I can understand it.

Regards
PG
 
Hi

I have another related **broken link removed** about **broken link removed**. I just need your confirmation. Thank you for the help.

Regards
PG
 

Attachments

  • sigma_discrete.jpg (152.4 KB)
  • sigma_discrete1.jpg (40.7 KB)
It's not clear why you would propose this definition, but it appears to be different in form and function from the correct one. The limit seems unnecessary, since it just adds zero to nTs. And the equation is not valid: every term in the summation is zero, so the total sum is zero and can't be one.

The Kronecker delta function δ[n] is very straightforward in discrete math. There is no infinity to deal with (as there is with the Dirac delta δ(t)), and it is zero for every integer n except n=0, where it equals one. Based on this definition, it's clear that the summation of δ[n] over all integers must be one. Why complicate that simplicity?
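The Kronecker delta and its unit sum are easy to verify directly; a minimal Python sketch:

[code]
import numpy as np

def kronecker_delta(n):
    """Kronecker delta: 1 where n == 0, 0 for every other integer."""
    return np.where(n == 0, 1, 0)

n = np.arange(-1000, 1001)             # stand-in for "all integers"
print(kronecker_delta(0))              # 1
print(kronecker_delta(5))              # 0
print(np.sum(kronecker_delta(n)))      # 1 -- the sum over all n is one
[/code]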
 
Thank you, Steve.

I didn't know that there was a separate delta function called the Kronecker delta. So, I was simply trying to confirm the given **broken link removed** from my previous post against my own personal 'informal' understanding; I was treating δ[n] as a Dirac delta function for a discrete signal. If you ask me, I still think I was doing it right, although you can argue that I'm complicating the matter.

For the sake of this argument, suppose that δ[n] is a Dirac delta function which is zero everywhere except at n=0, where it could be said to have a very, very large value. **broken link removed** in my previous post, we are simply trying to sum the areas of all the rectangles from n=-∞ to n=∞. The width of each rectangle is ΔT=nTs+ε, and all the rectangles have height=0 except the one at n=0, whose height is almost infinite. Therefore, as ε->0, the area of that rectangle becomes unity, and that's all. Obviously, the precise definition gets rid of all this verbosity, but in my view definitions are not that intuitive unless we reconcile them with our own framework of understanding.

Regards
PG
 
OK, I understand what you were trying to do now. The notation δ[n] is certainly confusing, since it looks a lot like δ(t), the continuous-time object that does similar things. It's probably easier to go from the Kronecker delta to the Dirac delta, rather than the reverse, but either way, this is treacherous mathematical ground.

I'll stress again that what you wrote does not appear to me to be mathematically correct. The limit as ε goes to zero does not appear to do anything useful; it amounts to substituting ε=0, which has no effect on anything in your expression. Also, if you consider that the Dirac delta is infinite at n=0, and you are multiplying it by n, which equals zero, then you have infinity times zero, which is an indeterminate case.

I'll caution you not to try to generalize from continuous time to discrete time, but rather to go the other way. It's easier to consider the discrete-time versions of all objects and operations, and then generalize to the continuous-time case. I understand why you are trying to do the reverse: we are taught the more difficult math of continuous variables and calculus (with slopes and integrals) before we learn the discrete math (with sums and differences). Maybe my suggestion does not hold up when you get to Laplace transforms versus Z-transforms, but for most other theory, it should make things easier. Try to approach the discrete math with a fresh mind and temporarily forget about continuous-variable math.
 
Thank you for the advice. I appreciate your concern.

Actually, forget the "ε". Say you have a rectangle with height "1/a" and width "a"; the width is symmetric around the origin, that is, width = |-a/2| + (a/2). All the rectangles have zero height except the one at n=0. Now, as a->0, {a · (1/a)} = 1. Thanks.

Regards
PG
 
So here you are just describing a Dirac delta function using a rectangular pulse.

Trying to truly make the link between the Kronecker and Dirac delta functions is mathematically challenging, as I mentioned. If you want to just get a basic approach down for intuitive purposes, and you don't mind upsetting a few mathematicians (actually I enjoy doing that), you can do the following.

In discrete time we consider time to be represented by all integers n. We imagine sampling a function using a zero-order hold, so that the function now looks like a series of points. Now, instead of using discrete time, we think of continuous time where the signal is constant over every time step from (n-0.5)Ts to (n+0.5)Ts. This function now looks like a staircase function in continuous time. The Kronecker delta is now converted into a pulse of width Ts and height 1/Ts, so that the area is one. Now, instead of using summations, true integration can be done; the integrals are easy, though, since the function and the Kronecker delta function are constant over a sample period Ts (really, this becomes a summation again). Now you can create the Dirac delta function by taking the limit as Ts goes to zero, and the delta and all of the math convert over.

This is very similar to what you described above, but you did not correctly link the "a" and "1/a" with the sampling time Ts. This is a way to think about the limiting process that takes you from discrete time to continuous time.
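If it helps to make this concrete, here is a minimal Python sketch of the construction (the test function cos(t) and the grids are arbitrary choices): the pulse of width Ts and height 1/Ts behaves more and more like the Dirac delta, in the sense that integrating it against f(t) picks out f(0), as Ts shrinks.

[code]
import numpy as np

def delta_pulse(t, Ts):
    """Rectangular stand-in for the Dirac delta: width Ts, height 1/Ts, unit area."""
    return np.where(np.abs(t) <= Ts / 2, 1.0 / Ts, 0.0)

f = lambda t: np.cos(t)                 # any smooth test function

for Ts in [1.0, 0.1, 0.01]:
    dt = Ts / 100                       # integration grid much finer than Ts
    t = np.arange(-5.0, 5.0, dt)
    sifted = np.sum(delta_pulse(t, Ts) * f(t)) * dt
    print(Ts, sifted)                   # -> f(0) = 1 as Ts -> 0
[/code]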

Now we should both duck before the mathematicians start throwing spears at us.
 
signal energy and power of continuous signal

Hi

Could you please help me with **broken link removed** queries? Thank you.

Regards
PG
 

Attachments

  • signal_power_7.jpg (939.9 KB)
For Q1,

Personally, I agree with the author. I also agree with you. And I can agree with my own opinion, which is that we are free to define anything we want, but no one else will ever care about or use our definition unless it is applicable to something and conceived within a logical framework.

For example, here is my proposed definition of energy.

[latex]E=\begin{cases}\int_{-47}^{101.345}x^2\; dx, &\text{on Tuesday}\\ 0, &\text{otherwise} \end{cases}[/latex]

You can't say my definition is wrong or invalid. However, there is no logic to it, there is no context in which it would be useful, and it is not in any way related to the real world. If I then go on to claim that my definition relates to real energy in physics, I will be wrong. So, definitions are only useful when used appropriately within a logical theoretical framework.

In the case you are referring to, the author is defining something and then explaining why this definition is useful in the real world, based on our laws of physics. He is also careful to point out that "signal energy" is not exactly the same as real "energy" in physics.
 
For Q2,

Did you miss the fact that the given definition is for a periodic signal? The definition includes the parameter T, which is the period. If the signal is not periodic, T has no meaning.

The idea of an average value can be extended to non-periodic signals simply by defining limits for the average. One can take any bounded signal and ask, "what is the average value of this signal from t1 to t2?". Once this is done, the limits of integration are clear, and the integral is divided by (t2-t1) instead of T. This looks as follows.

[latex]A=\frac{\int_{t_1}^{t_2} x(t)\cdot dt}{t_2-t_1}[/latex]

This is just the extension of the common everyday idea of an average of a finite collection of numbers, as follows.

[latex]A=\frac{\sum_{n=1}^{N} x(n)}{N}[/latex]

where x(1), x(2), x(3), ... x(N) are finite numbers.

In your specific example, the idea is to average the signal power which is averaging the square of the signal, not the signal itself. But, don't let this fact throw you off the basic idea of what an average is.
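To make the parallel concrete, here is a minimal Python sketch (the signal, interval, and step size are arbitrary choices for the example):

[code]
import numpy as np

# Average of x(t) over [t1, t2], approximated by a Riemann sum
x = lambda t: 0.5 + np.sin(t)          # example signal (arbitrary choice)
t1, t2 = 0.0, 2 * np.pi
dt = 1e-4
t = np.arange(t1, t2, dt)
A_continuous = np.sum(x(t)) * dt / (t2 - t1)   # -> 0.5; the sine averages out

# The everyday average of N finite numbers -- the same idea
samples = np.array([1.0, 2.0, 3.0, 4.0])
A_discrete = np.sum(samples) / len(samples)    # -> 2.5

print(A_continuous, A_discrete)
[/code]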
 
Thank you.

I have added my comments about Q2 **broken link removed**. Thanks.

Regards
PG
 

Attachments

  • signal_power_7a.jpg (724.1 KB)
OK, I understand what you are saying. Good thinking on your part. Their definition appears to be just a little more general than the one for a purely periodic signal.

I can say a few things which might help you.

First, any bounded signal has an instantaneous power given by [latex] P(t)=|x(t)|^2[/latex]

Second, any bounded signal has an average power over the range of t1 to t2 given by [latex] A=\frac{\int_{t_1}^{t_2} |x(t)|^2\cdot dt}{t_2-t_1}[/latex]

Third, if the bounded signal is periodic, with period T, one can define an average power over the period given by [latex] A=\frac{\int_{t_1}^{t_1+T} |x(t)|^2\cdot dt}{T} [/latex]

Fourth, if the limit converges to a finite nonzero value, it can be meaningful to define the average as they said, with a limit. [latex] A=\lim_{T\rightarrow \infty}\frac{\int_{\frac{-T}{2}}^{\frac{T}{2}} |x(t)|^2\cdot dt}{T} [/latex]

This surely works for purely periodic signals. It also surely fails for many signals, and you showed one example. Some signals grow such that the integral goes up faster than linearly with T, so you get infinity. Other signals might decay to zero, or just increase at a rate slower than linear, which results in a zero average, and that is usually not very useful.

But, getting to your point, there will be signals that are not strictly periodic but do have meaningful average values by this definition.

So, let me challenge you. Can you identify such a function?
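For reference, here is a minimal numerical check (a sketch; the grid and T values are arbitrary choices) that the fourth, limit-based definition behaves as expected for a purely periodic signal like sin(t). Finding a non-periodic example is the challenge above.

[code]
import numpy as np

x = lambda t: np.sin(t)        # bounded, purely periodic test signal

dt = 0.001
for T in [10.0, 100.0, 1000.0]:
    t = np.arange(-T / 2, T / 2, dt)
    avg_power = np.sum(np.abs(x(t))**2) * dt / T
    print(T, avg_power)        # converges to 0.5 as T grows
[/code]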
 
Hi

Right now I can't come up with any example. Part of the reason is that I'm very sleepy, and besides, I'm not a math whiz! :)

But I can say that if a function has a definite area over an unbounded integration interval, then it could have a finite average power. For example, you can see **broken link removed** that the function has a definite area over an unbounded integration interval. Thanks.

Best wishes
PG
 

Attachments

  • improper.jpg (124.7 KB)
I would recommend that you actually work similar examples out and see what you get.

What happens for y=exp(-x), y=exp(-|x|), and y=exp(-x)·u(x)?
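If you want to check numerically before working the integrals by hand, here is a minimal Python sketch (u(x) is the unit step; the grid and T values are arbitrary choices):

[code]
import numpy as np

u = lambda x: (x >= 0).astype(float)          # unit step u(x)

signals = {
    "exp(-x)":      lambda x: np.exp(-x),
    "exp(-|x|)":    lambda x: np.exp(-np.abs(x)),
    "exp(-x)u(x)":  lambda x: np.exp(-x) * u(x),
}

dx = 0.001
for name, y in signals.items():
    for T in [10.0, 50.0, 100.0]:
        x = np.arange(-T / 2, T / 2, dx)
        print(name, T, np.sum(y(x)) * dx / T)
# exp(-x): the integral grows like exp(T/2), so the average blows up.
# The other two have finite area, so dividing by T drives the average to 0.
[/code]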
 
Hi

I'm not really sure if what I did **broken link removed** is entirely correct. But you can always correct me. So, what should I conclude from this? Thank you.

Regards
PG
 

Attachments

  • power and signal.jpg (458.8 KB)
OK, good work. What you did is correct. For the first one, you can continue and actually determine the limit of the 0*∞ form, as follows.

[latex] \lim_{T\rightarrow \infty}\left\{\frac{1}{T}\exp\left(\frac{T}{2}\right)\right\} [/latex]

Here, the exponential function can be written as a Maclaurin series, as follows.

[latex] \lim_{T\rightarrow \infty}\left\{\frac{1}{T}\left(1+\frac{T}{2} +\frac{1}{2!}\left(\frac{T}{2}\right)^2 +\frac{1}{3!}\left(\frac{T}{2}\right)^3+ \cdots\right)\right\} [/latex]

This can be simplified to

[latex] \lim_{T\rightarrow \infty}\left\{\frac{1}{T}+\frac{1}{2} +\frac{1}{2!}\left(\frac{T}{2^2}\right) +\frac{1}{3!}\left(\frac{T^2}{2^3}\right)+ \cdots\right\} [/latex]

At this point it should be clear that, beyond the first two terms, you have an infinite sum of terms that each go to infinity, so the answer is infinity.

[latex] \lim_{T\rightarrow \infty}\left\{\frac{1}{T}+\frac{1}{2} +\frac{1}{2!}\left(\frac{T}{2^2}\right) +\frac{1}{3!}\left(\frac{T^2}{2^3}\right)+ \cdots\right\}=\infty [/latex]
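For what it's worth, the same conclusion can be cross-checked symbolically; a quick sketch using sympy (assuming it is available):

[code]
import sympy as sp

T = sp.symbols('T', positive=True)

# The 0*infinity form from above: (1/T) * exp(T/2)
print(sp.limit(sp.exp(T / 2) / T, T, sp.oo))   # oo

# And the Maclaurin series used in the expansion
print(sp.series(sp.exp(T / 2), T, 0, 4))       # 1 + T/2 + T**2/8 + T**3/48 + O(T**4)
[/code]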


So the answer to the first problem is infinity, and the other two answers are zero, as you showed.

This means that this definition of average is not very useful for these signals. All three of these signals have values that are always positive and never negative, yet the answer for average value is not a useful finite value.

I just wanted you to see that this definition is a bit strange. What it does is pick up any constant shift or DC offset. This is because the integral of a constant value K from -T/2 to +T/2 is KT; divide that by T and you always get K, even when the limit of T goes to infinity.

Any other functional shape either averages out to infinity or down to zero. For those signals that average to zero, you can always add a constant to get a nonzero average value.

This definition can be used for periodic signals too, but the other one I gave you is simpler for those cases.
 
Thanks a lot, Steve.

Please have a look at these queries, **broken link removed** and **broken link removed**. Thank you.

Regards
PG

PS: **broken link removed** is for my own personal reference.
 

Attachments

  • power and signal.jpg (458.8 KB)
  • power and signal2.jpg (249.7 KB)
  • power and signal4a.jpg (181.4 KB)
So Q1 appears to be comments more than a question, but I agree with your comments. An average always implies a range. Still, it is not quite correct that the first formula finds an average without dependence on particular points. In fact, you chose the two points to be negative infinity and positive infinity. Although infinities are not strictly "points", you are defining a domain range.

For Q2, this is exactly the point I was trying to make clear when I recommended doing these examples. When you average a signal over an infinite domain, you only get something useful when the integral itself is infinite, and even then only when the infinite integral is the result of a constant, or DC, value.

Consider the function y=0. It has an average value of zero.

Now consider the function y=K. It has an average value of K. The integral of y=K over the infinite range x=-inf to +inf is infinite, but it grows (as T goes to infinity) as KT. Hence, when you average, you divide by T and get K.

Next consider the function y=abs(x). The integral grows as T^2, and averaging gives something proportional to T, which goes to infinity with T. So, the average is infinity.

Next consider y=sin(x), which is periodic. It averages to zero.

Next consider y=1+sin(x). It averages to a value of 1.

Do you see the pattern here? Only a constant offset gives an average that is useful information. When you do the integral, you need something that grows proportionally to T, so that when you divide by T you get a finite number. Anything that integrates out slower than T (for example, sqrt(T)) will give an average of zero. Anything that integrates out faster than T (for example, T^2 or T^1.1) will have an average of infinity.

There is also something weird about this definition. Consider the two functions y=x and y=x+1. The function y=x has an average value of zero, and the function y=x+1 has an average of one, which matches its constant offset of 1. But you can also view y=x+1 as y=x shifted in "x" instead. So why not do the average with limits of -T/2-1 to T/2-1 and then take the limit? Then you will get zero for an answer instead of one. So, practically speaking, there need to be some additional constraints placed on the function you are averaging. Also, consider that in over 30 years of doing math, science and engineering, I've never used this definition of an average. The other definitions are more practical, in my opinion.
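A quick numerical pass over these cases (a sketch; the step size and T values are arbitrary choices) makes the pattern, and the shift quirk, easy to see:

[code]
import numpy as np

dx = 0.001

def avg(y, lo, hi):
    """Average of y over [lo, hi] via a simple Riemann sum."""
    x = np.arange(lo, hi, dx)
    return np.sum(y(x)) * dx / (hi - lo)

cases = {
    "y=0":        lambda x: 0.0 * x,
    "y=K (K=2)":  lambda x: 2.0 + 0.0 * x,
    "y=abs(x)":   np.abs,
    "y=sin(x)":   np.sin,
    "y=1+sin(x)": lambda x: 1.0 + np.sin(x),
}

for T in [100.0, 1000.0]:
    for name, y in cases.items():
        print(T, name, round(avg(y, -T / 2, T / 2), 4))
# Averages head toward 0, 2, infinity, 0, 1 -- only the DC part survives.

# The shift quirk: y = x + 1 averaged over symmetric vs shifted limits
y = lambda x: x + 1.0
T = 1000.0
print(avg(y, -T / 2, T / 2))          # -> 1 (picks up the offset)
print(avg(y, -T / 2 - 1, T / 2 - 1))  # -> 0 (same function, shifted window)
[/code]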
 