z-transform problems


PG1995

Hi

Could you please help me with this query? Thanks a lot.

Regards
PG
 

Attachments

  • z_transform1.jpg (278.2 KB)
Remember that z=|z|exp(jθ), and z^n=|z|^n exp(jnθ).

exp(jnθ) is just the phase term and always has magnitude equal to one. Hence, the phase part does not create an issue for convergence. However, the |z|^n part will diverge as n goes to infinity, unless |z| < 1.
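If it helps to see that numerically, here is a minimal sketch (Python/NumPy; the angle θ=0.7 is just an arbitrary choice for illustration): the phase factor exp(jnθ) always has magnitude 1, while |z|^n decays for |z|<1 and blows up for |z|>1.

[CODE]
# Minimal numeric check: the phase factor exp(j*n*theta) always has
# magnitude 1, while |z|**n shrinks for |z| < 1 and grows without bound
# for |z| > 1.  The angle theta is arbitrary.
import numpy as np

theta = 0.7
for mag in (0.9, 1.0, 1.1):          # |z| below, on, and above the unit circle
    z = mag * np.exp(1j * theta)
    for n in (10, 100, 1000):
        print(f"|z|={mag}, n={n}: |z^n| = {abs(z**n):.4g}, "
              f"|exp(jn*theta)| = {abs(np.exp(1j * n * theta)):.4g}")
[/CODE]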
 
Hi,

Just to add a little...

Sometimes we can't look at things in an entirely algebraic way, which is purely abstract, and see all the basic properties and how they relate to each other. Sometimes we have to go geometric, that is, use geometry. This should not be a surprise, since we always see 2D or 3D drawings accompanying much of the math in these kinds of situations. This, I believe, is no different.

Looking at it geometrically, we see the convergence as a spatial animal rather than a purely numerical thing. When I throw single numbers out there like 3 or 4, we get the sense of something happening in one direction only. But when we move to complex numbers, we have to deal with two numbers at the same time, which means more than one direction is being referenced at once, and that makes a multidimensional, geometrical view a good idea. In fact, this reveals the whole issue here.

Case in point: the modulus. It's a single number, but it's not like most single numbers in that it is the result of a calculation along two or more dimensions; it is also known as the magnitude.
But geometrically it is really very simple: it's the radius of a circle, and that circle encloses a certain area. When the solution lies within that circle the system converges, when it lies outside that circle the system diverges, and when it lies on the circle the convergence is borderline, although the system may still be marginally stable.

So what we are really doing here is calculating the radius of the circle and seeing if it is less than the radius of a 'test' circle. When it passes the test, the time solution is stable; when it fails the test, it is unstable. Because we are using the area that lies inside a circle, we don't have to test every single coordinate, which would require constantly paying attention to two numbers instead of just one.

Another way to think of it is to assign length units to the 'convergence' region itself. We'd have to use square meters, which is an area, not a single dimension. So when we calculate the test 'convergence' we'd really be describing an area, implied by a radius that can be specified in meters. The area isn't part of the direct calculation, because for a circle we can specify everything with one number along a single dimension, but it is implied: a whole circular region is really what is being specified, rather than a one-dimensional length measurement.
 
Hi

Q1: The mathematical definition of causality states that a system is causal if all output values, y[n0], depend only on input values x[n] for n<=n0. Another way of saying this is that the present output depends only on past and present input values. Then, it says that y[n]=x[n]+x[n-1]+y[n+1] is a non-causal system and also notes that y[n+1] can be written in terms of x[n]. How can we write y[n+1] in terms of x[n]? I know it's a very basic question but... sorry! Thanks.

Q2: I don't see how they say the right-sided sequence is causal and the left-sided is non-causal. In other words, I find the terminology 'causal' and 'non-causal' confusing in this context, and as far as I can see most texts simply refer to sequences as right-sided, left-sided, and two-sided. How can you say that a right-sided sequence is a causal one? Thanks.

Regards
PG
 

Attachments

  • dsp_z_transform12.jpg (57.5 KB)
Q1 EDIT: I'll make it simpler than what I said before. You can use y[n+1]=x[n+1]+x[n]+y[n+2]. Then you can see that y[n+2] can be expressed with other samples of x[n] and y[n]. This leads to an infinite summation,

hence, [LATEX]y[n+1]=\sum^{\infty}_{k=n} \left( x[k]+x[k+1] \right)\ \ ,[/LATEX]

Q2 It's just terminology, so don't let it confuse you. If someone called them "Jim" and "Sally", would you worry that they are human? No, they are just labels. However, I believe that the origin of the labels comes from the fact that impulse responses of a causal system look like one type and impulse responses of a noncausal system look like the other type. So, the naming has a logical basis. So, your question should really be, "what is the origin of the terminology?". Hopefully, my answer about that is correct, or "correct enough".
 
Q1: The mathematical definition of causality states that a system is causal if all output values, y[n0], depend only on input values x[n] for n<=n0. Another way of saying this is that the present output depends only on past and present input values. Then, it says that y[n]=x[n]+x[n-1]+y[n+1] is a non-causal system and also notes that y[n+1] can be written in terms of x[n]. How can we write y[n+1] in terms of x[n]? I know it's a very basic question but... sorry!

y[n]=x[n]+x[n-1]+y[n+1]

can be re-written as

y[n+1]= -x[n]-x[n-1]+y[n]

or, if you wish

y[n]= -x[n-1]-x[n-2]+y[n-1]

Looks causal to me.
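To see that this rearranged form really can be run forward in time, here is a small sketch (Python/NumPy; the impulse input and the zero initial conditions are my assumptions, since none were specified): it steps y[n] = -x[n-1] - x[n-2] + y[n-1] using only past inputs and outputs.

[CODE]
# Step the rearranged recursion y[n] = -x[n-1] - x[n-2] + y[n-1] forward,
# using only past inputs and outputs (i.e. a causal implementation).
# Input: a unit impulse; initial conditions assumed zero.
import numpy as np

x = np.zeros(8)
x[0] = 1.0                                # unit impulse input
y = np.zeros(8)
for n in range(len(x)):
    xm1 = x[n - 1] if n >= 1 else 0.0     # x[n-1], taken as 0 before the start
    xm2 = x[n - 2] if n >= 2 else 0.0     # x[n-2]
    ym1 = y[n - 1] if n >= 1 else 0.0     # y[n-1]
    y[n] = -xm1 - xm2 + ym1
print(y)                                  # [ 0. -1. -2. -2. -2. -2. -2. -2.]
[/CODE]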
 
If you write it that way, then it is causal. Causality can depend on other details that have not been specified. I edited my post above to hide this issue. Initially I showed two ways to write this, with one way being similar to what you wrote, but then I realized that the non-causal condition would then be questionable, and this would lead to more confusion.

One issue with the original specification is that no initial conditions for y[n] have been given. If you look at this system, you can add any constant to the y[n] response and the equation is still valid. Depending on the offset value, you can make the system appear causal or anti-causal, but in general it is non-causal because the response can depend on future values. But since we live in the real world, if we try to implement something that works, we have to do it the causal way you wrote and assume initial values for y[n] in the past (usually zero); hence it appears to be causal, since you can implement it without requiring future values.

Tricky!
 
To puzzle it even more ... :)

Let's look at the stock market. It closed today at some value y[n]. This happened because people traded today and moved it by some value x[n], so we write:

y[n] = y[n-1] + x[n]

Causal, right?

Tomorrow, the market will close at y[n+1]. This is because people will move it by x[n+1] tomorrow. So we write:

y[n] = y[n+1] - x[n+1]

Anticausal?
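Just to make the point concrete, here is a tiny sketch (Python/NumPy, with made-up "daily moves"): the very same data satisfies both the causal form and the rearranged form, so the algebra alone doesn't decide causality.

[CODE]
# The same "market" data satisfies both y[n] = y[n-1] + x[n] and the
# rearranged y[n] = y[n+1] - x[n+1]; the equations alone don't decide
# which way the causality runs.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(20)          # daily moves (made up)
y = np.cumsum(x)                     # closes built causally: y[n] = y[n-1] + x[n]

n = 10
print(np.isclose(y[n], y[n - 1] + x[n]))        # causal form holds: True
print(np.isclose(y[n], y[n + 1] - x[n + 1]))    # rearranged form holds too: True
[/CODE]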
 
Hi,

Usually we end up having to use this in the real world for something real, which doesn't involve predicting the future, except in systems that are completely predictable. Once we examine the system, even though we have the response in hand, we may still have to look for a starting value, which may turn up another solution that is used in lieu of the original response for that one point (or a few points) of the process only. In other words, this is the case when the solution is not self-starting.

The y[n+1] on the right implies an effect that we have no cause for, but since n is not really real time we can sometimes move ahead by shifting everything by one sample. Does this make it causal? Well, if we can generate the first sample (as above) then it does, but if we can't generate the first sample then it doesn't.
 
Thank you, steveB, NorthGuy, MrAl, for the help.

It was good to learn how the same sequence can be considered either causal or non-causal.

Q1 EDIT: I'll make it simpler than what I said before. You can use y[n+1]=x[n+1]+x[n]+y[n+2]. Then you can see that y[n+2] can be expressed with other samples of x[n] and y[n]. This leads to an infinite summation,

hence, [LATEX]y[n+1]=\sum^{\infty}_{k=n} \left( x[k]+x[k+1] \right)\ \ ,[/LATEX]

I don't get the summation expression. You had y[n+1]=x[n+1]+x[n]+y[n+2] but "y[n+2]" is nowhere in the summation. I'm sorry if I'm sounding just plain dumb. Thanks.

Q2 It's just terminology, so don't let it confuse you. If someone called them "Jim" and "Sally", would you worry that they are human? No, they are just labels. However, I believe that the origin of the labels comes from the fact that impulse responses of a causal system look like one type and impulse responses of a noncausal system look like the other type. So, the naming has a logical basis. So, your question should really be, "what is the origin of the terminology?". Hopefully, my answer about that is correct, or "correct enough".

But the terms "causal" and "non-causal" have specific meanings in the given context. So, there should be reason for calling them so. Yes, my question should have been about the origin of terminology. I was confused because I didn't see any reason for calling a right-sided sequence 'causal' and the left-sided sequence 'non-causal'. I don't see any role of the following definition in their description or labeling: The mathematical definition of causality states that a system is causal if all output values, y[n0] , depend only on input values x[n] for n<=n0. Another way of saying this is the present output depends only on past and present input values. I'm still confused. Thanks.

Regards
PG

PS: Edited after the post of NorthGuy below.
 
The mathematical definition of causality states that a system is causal if all output values, y[n0] , depend only on input values x[n] for n<=n0. Another way of saying this is the present output depends only on past and present input values. I'm still confused.

Note that it talks about inputs. When you throw outputs into the equation, it no longer applies. Hence the confusion in my examples. I thought you would catch me.
 
Hi NG

Note that it talks about inputs. When you throw outputs into the equation, it no longer applies. Hence the confusion in my examples. I thought you would catch me.

Sorry, if I'm missing the point. I was talking about Q2 from this post. You helped me with Q1. Still, it's good that you highlighted that subtle point in the definition which, to be honest, I didn't notice. Thanks.

Regards
PG
 
I don't get the summation expression. You had y[n+1]=x[n+1]+x[n]+y[n+2] but "y[n+2]" is nowhere in the summation. I'm sorry if I'm sounding just plain dumb.

No, not dumb at all. I really didn't explain it in detail. Basically, the summation is a result of the fact that you need to keep defining y[n+2], y[n+3] ... etc. The end result is a summation over the x[n] values only, once you substitute in the expressions for y[n+m] for all m.

You have, y[n+1]=x[n+1]+x[n]+y[n+2].

Next substitute in, y[n+2]=x[n+2]+x[n+1]+y[n+3].

Next substitute in, y[n+3]=x[n+3]+x[n+2]+y[n+4].

... and if you keep doing this, you end up with the infinite sum.

Another way to see it is to use y[n+1]-y[n+2]=x[n+1]+x[n] and then do a summation from k=n to m on each side.

[LATEX]\sum^{m}_{k=n}\left( y[k+1]-y[k+2] \right)=\sum^{m}_{k=n} \left( x[k]+x[k+1] \right)\ \ ,[/LATEX]

Look at the left hand side of this expression. It is analogous to taking the integral of a derivative. Specifically, we are taking a summation of differences, and all the middle terms cancel out (they telescope). This leaves the following ...

[LATEX]y[n+1]=y[m+2]+\sum^{m}_{k=n} \left( x[k]+x[k+1] \right)\ \ ,[/LATEX]

Then you can take the limit as m goes to infinity ...

[LATEX]y[n+1]=\lim_{m\to \infty}y[m+2]+\sum^{\infty}_{k=n} \left( x[k]+x[k+1] \right)\ \ ,[/LATEX]

You now get into some tricky math issues with limits, but for an anticausal system, y[m+2] will go to zero as m goes to infinity. If you do the causal formulation, then you can flip the whole thing around and do the summation from negative infinity, and then the output signal starts at zero for a causal system.
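If you want to see the telescoping step numerically, here is a quick sketch (Python/NumPy; the random input, the arbitrary "future" value, and the particular indices n=5, m=40 are just my illustrative choices): build x at random, fill y backwards using y[n]=x[n]+x[n-1]+y[n+1], and check that y[n+1] equals y[m+2] plus the finite sum.

[CODE]
# Numeric check of the telescoping identity
#   y[n+1] = y[m+2] + sum_{k=n..m} ( x[k] + x[k+1] )
# for a sequence satisfying y[n] = x[n] + x[n-1] + y[n+1].
import numpy as np

rng = np.random.default_rng(0)
N = 50
x = rng.standard_normal(N)
y = np.zeros(N)
y[-1] = 0.3                            # arbitrary "future" value to anchor the recursion
for n in range(N - 2, 0, -1):          # fill y backwards from the recursion
    y[n] = x[n] + x[n - 1] + y[n + 1]

n, m = 5, 40                           # an interior window (illustrative choice)
lhs = y[n + 1]
rhs = y[m + 2] + sum(x[k] + x[k + 1] for k in range(n, m + 1))
print(lhs, rhs)                        # the two values agree
[/CODE]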

I guess strictly you can argue that the original question is not rigorously posed.

But the terms "causal" and "non-causal" have specific meanings in the given context. So, there should be reason for calling them so. Yes, my question should have been about the origin of terminology. I was confused because I didn't see any reason for calling a right-sided sequence 'causal' and the left-sided sequence 'non-causal'. I don't see any role of the following definition in their description or labeling: The mathematical definition of causality states that a system is causal if all output values, y[n0] , depend only on input values x[n] for n<=n0. Another way of saying this is the present output depends only on past and present input values. I'm still confused. Thanks.

I'm confused on whether you are satisfied with my answer above. Is my explanation that the terminology comes from the signals being compared to impulse responses for causal and anti-causal systems sufficient?
 
Thanks a lot, Steve.

I need to go through your reply to Q1 several times before asking any follow-on query.

I'm confused on whether you are satisfied with my answer above. Is my explanation that the terminology comes from the signals being compared to impulse responses for causal and anti-causal systems sufficient?

It's just that I don't know about the impulse responses of causal and non-causal systems and how they look. I believe I should just understand that a right-sided sequence is called 'causal' and the left-sided sequence 'non-causal'. Thanks.

Regards
PG
 
I believe I should just understand that a right-sided sequence is called 'causal' and the left-sided sequence 'non-causal'.
OK, let me clarify what I was trying to say. It turns out that a causal system has an impulse response that is a right-sided signal. It also happens that an anti-causal system has an impulse response that is a left-sided signal. So, the systems are the things that can be causal, anti-causal or non-causal. The signals are not really best described that way, but somehow the terminology has been carried over.

Also, note that there is a difference between "anti-causal" and non-causal. An anti-causal system is analogous to the causal system in that it does not depend on past values. However, a non-causal system is just one that is not causal, which is not necessarily the same thing.
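To connect the labels to something you can compute, here is a small sketch (Python/NumPy; the first-order system is my own example, not from your text): the impulse response of a simple causal recursion is zero for all n<0, i.e. right-sided, while its anti-causal mirror would be left-sided.

[CODE]
# Impulse response of a simple causal system, y[n] = 0.5*y[n-1] + x[n]:
# it is zero for n < 0, i.e. a right-sided signal.
import numpy as np

n = np.arange(-5, 10)                  # include negative times to show the "left" side
h = np.zeros(len(n))
for i, k in enumerate(n):
    if k >= 0:
        h[i] = 0.5 ** k                # h[n] = 0.5**n for n >= 0, and 0 for n < 0
print(dict(zip(n.tolist(), h.round(4).tolist())))
# The anti-causal mirror, y[n] = 0.5*y[n+1] + x[n], would instead have an
# impulse response that is nonzero only for n <= 0 (left-sided).
[/CODE]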
 
Thank you.

OK, let me clarify what I was trying to say. It turns out that a causal system has an impulse response that is a right-sided signal. It also happens that an anti-causal system has an impulse response that is a left-sided signal. So, the systems are the things that can be causal, anti-causal or non-causal. The signals are not really best described that way, but somehow the terminology has been carried over.

Also, note that there is a difference between "anti-causal" and non-causal. An anti-causal system is analogous to the causal system in that it does not depend on past values. However, a non-causal system is just one that is not causal, which is not necessarily the same thing.

It's funny that my mind was simply reading it as "non-causal" when it actually was anti-causal. A causal system produces its output based on present and past value(s), and an anti-causal system produces its output based on present and future value(s). So, when an impulse is applied to an anti-causal system at t=0, its response extends back into the past, for t<0. On the other hand, a causal system's response extends forward into the future, for t>=0. Thanks.

Regards
PG
 
Hi

One of the properties of the region of convergence for the z-transform is that it contains the unit circle. What does that really mean? I understand that at |z|=1 the z-transform is equal to the DTFT. Could you please comment on this?

Is it really true that the z-transform deals with IIR systems and the DTFT deals with FIR systems? I understand that the z-transform is a generalized form of the DTFT. Thanks.

Regards
PG
 

Attachments

  • dsp_roc1.jpg (55 KB)
First, you can use the z-transform for either IIR or FIR systems. The z-transform is analogous to the Laplace transform in that it is a more general transform than the FT, and includes the FT as part of it.

For a continuous-time system, the s=jw axis in the complex plane is the domain of the FT. Also, convergence depends on the function. System stability requires that poles not be in the right half plane (strictly, all poles must lie in the open left half plane).

Mapping from continuous to discrete is determined by z=exp(sT) where T is the sample time. The s=jw axis in the s-plane then gets mapped to the unit circle in the z-plane. Also, the right half of the s-plane gets mapped to the outside of the circle and the left half of the s-plane gets mapped to the inside of the circle. So, poles must be inside the circle for stable systems.
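Here is a quick sketch of that mapping (Python/NumPy; the sample time T and the three s-plane test points are arbitrary choices of mine): a point on the jw axis lands on the unit circle, a left-half-plane point lands inside it, and a right-half-plane point lands outside it.

[CODE]
# Map a few s-plane points into the z-plane with z = exp(s*T).
import numpy as np

T = 0.1                                                 # sample period (arbitrary)
for s in (1j * 5.0, -2.0 + 1j * 5.0, 2.0 + 1j * 5.0):   # on, left of, right of the jw axis
    z = np.exp(s * T)
    print(f"s = {s}:  |z| = {abs(z):.4f}")
# |z| comes out as 1.0000 (unit circle), 0.8187 (inside), 1.2214 (outside).
[/CODE]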
 
Hi,

To add just a little...

Anything in the 'z' plane consists of a real part and an imaginary part, just like in the 's' plane, and can be written as a+bj like any other complex number. Also, the entire left half of the 's' plane maps into the 'z' plane by 'warping' it into a circle whose radius is 1. Thus any complex number that 'fits' inside the unit circle in the 'z' plane corresponds to a point in the left half of the 's' plane.

Since the radius equals 1, and converting from rectangular to polar form means first finding the length, all this really means is that the length must be less than 1. To find the length we use L=sqrt(a^2+b^2), of course, and because we measure outward from the origin (0,0), if this length is less than 1 then the point must lie inside the unit circle.

Another quick way of looking at this is that the unit circle can be generated by plotting all points that are a distance of exactly 1 unit from the origin, or by rotating the line that extends from (0,0) to (1,0) about the origin, keeping the point (0,0) fixed while the other endpoint moves around and keeping the length of the line equal to 1 at all times. Sweeping that line around then covers every point that lies inside (or on) the unit circle.

A couple of quick examples:
0.5+0.5j, the length is sqrt(2*0.25)=sqrt(0.5)=0.7071, and this is obviously less than 1 so it's inside the unit circle.
1.0+1.0j, the length is sqrt(1+1)=sqrt(2)=1.4142, this is greater than 1 so it is outside the unit circle.
sqrt(0.5)+sqrt(0.5)j, the length is sqrt(0.5+0.5)=sqrt(1)=1, this lies right on the edge so it is a perfect oscillator (although there are other interpretations as well for this condition).
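And checking those three examples by machine (a quick Python/NumPy sketch, nothing more):

[CODE]
# Recompute the modulus of the three example points and classify them
# relative to the unit circle.
import numpy as np

for z in (0.5 + 0.5j, 1.0 + 1.0j, np.sqrt(0.5) + np.sqrt(0.5) * 1j):
    r = abs(z)                                       # modulus, sqrt(a**2 + b**2)
    where = "inside" if r < 1 else ("on" if np.isclose(r, 1) else "outside")
    print(f"z = {z}: |z| = {r:.4f} -> {where} the unit circle")
[/CODE]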
 
Thank you, Steve, MrAl.

I'm still confused, but I believe this confusion can only be resolved by working through some problems.

Q1:
The full question along with its solution is here.

Let's start with the first part.

(a): It says that the DTFT for the given sequence exists. We understand that at |z|=1=|e^jw| the z-transform is equal to the DTFT, so the unit circle must be part of the solution. By the way, what would it imply if it weren't given that the DTFT exists? I think it would imply that an ROC of |z| < 1/3 is also possible, and that would correspond to an anti-causal sequence.

I don't understand anything in the solution where it says, "Since this ROC lies outside 1/3, this pole contributes a right-sided sequence. Since the ROC lies inside 2 and 3, these poles contribute left-sided sequences. The overall x[n] is therefore two-sided". If it were up to me, I would simply say that since the ROC lies between the poles at 1/3 and 2, which gives an ROC in the form of a washer (annulus), the sequence x[n] is two-sided.

I think these properties of the z-transform are relevant here.
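I also tried to check the textbook transform pairs numerically (a rough Python/NumPy sketch; the test point and the truncation length are just my own choices, while the poles at 1/3 and 2 come from the quoted problem): with a pole at a, the right-sided sequence a^n u[n] gives 1/(1 - a/z), converging only for |z| > |a|, while the left-sided sequence -a^n u[-n-1] gives the same closed form but converges only for |z| < |a|. If I understand the pairs correctly, that is why the pole at 1/3 contributes a right-sided piece and the poles at 2 and 3 contribute left-sided pieces when the ROC is the washer between them.

[CODE]
# Check the standard z-transform pairs at a test point inside the ROC
# 1/3 < |z| < 2 (the washer-shaped ROC from the quoted problem).
import numpy as np

z = 0.8 * np.exp(1j * 0.4)                 # test point with 1/3 < |z| < 2

a = 1.0 / 3.0                              # pole inside the ROC's inner edge
closed_form = 1.0 / (1.0 - a / z)
right_sided = sum((a / z) ** n for n in range(200))       # sum of a**n * z**-n, n >= 0
print(abs(right_sided - closed_form))      # ~0: the right-sided sum converges here

a = 2.0                                    # pole outside the ROC's outer edge
closed_form = 1.0 / (1.0 - a / z)
left_sided = -sum((z / a) ** m for m in range(1, 200))    # sum of -a**n * z**-n, n <= -1
print(abs(left_sided - closed_form))       # ~0: this pole contributes a left-sided term
[/CODE]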

Q2:
In another thread, we discussed FIR and IIR responses of a system. An IIR system could be stable or unstable. This text could be relevant here.

One can use the z-transform for either IIR or FIR systems.

The DTFT can be used for FIR systems and also for stable IIR systems. Do I have this correct?

But the DTFT cannot handle unstable IIR systems, and for such systems we need the z-transform (I think even the z-transform can't handle every unstable IIR system). Do I have this correct?

Thank you.

Regards
PG
 

Attachments

  • dsp_assig3.jpg (131.8 KB)
  • dsp_two_sides.jpg (55.2 KB)