impulse response of a system

Hi

I do realize that I'm kind of jumping back and forth over many different but related topics in this thread, and I understand that it makes it very difficult for you to handle my queries. I'm sorry about this. I can't start a new thread for every topic being discussed here at this stage. So, please be patient. Thank you.

Please help me with these queries, Q1 and Q2. These queries are about post #11 by misterT.

Q3: I'm still struggling to understand this part from post #10:
An impulse is a kick. It is a very brief excitation that sets the system into a response, but then immediately goes away. It is an injection of energy that happens before the system can begin to respond.

Your last reply to this topic was the following:

How do we get such a truly good approximation in real life? Even when a steel ball hits a steel wall, the ball compresses a little bit and the wall applies force on the ball as long as it is in contact with the wall. In an electrical circuit, we can say electrons are pushed (or, hammered) by the -ve terminal of the battery. The movement of the electrons is a response to the push of the battery's -ve terminal. What you are saying is equivalent to saying that electrons are pushed, but as long as the push doesn't decrease to zero, there is no movement of electrons. Perhaps it has something to do with inductance, because in an inductor voltage leads current since current flow is resisted at the start. So, please help me to come out of this confusion.

I was just wondering how it would affect the analysis if the system starts responding while the impulse is still being applied.

Q4: This query is about posts #10 and #14.


H(s) is a transfer function and "s" stands for frequency?

h(t) is an impulse response. I don't understand where you say "h(t) would be an impulse multiplied by the constant value".

Thank you very much for your patience, help and especially your time.

Regards
PG
 
Please help me with these queries, Q1 and Q2. These queries are about post #11 by misterT.

Q1. I'm not sure what you are asking. The x-axis in B is omega, which is the angular frequency 2πf. The constant T is simply the width of the pulse, in time.

Q2. The C and D diagrams are correct. This is simply showing that a constant value in time has no frequency components and is represented as a scaled impulse function at zero frequency. Because of duality between the time domain and the frequency domain, the reverse is also true. An impulse in the time domain has all frequencies equally, hence it is a constant function in the frequency domain.
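If it helps, here is a quick numerical check of both points (a minimal Python/NumPy sketch; the pulse width T and the sample spacing are arbitrary values chosen only for illustration):

```python
import numpy as np

# Sampled time axis and a rectangular pulse of width T centered at t = 0
dt = 0.001          # sample spacing in seconds (arbitrary)
t = np.arange(-1.0, 1.0, dt)
T = 0.1             # pulse width in seconds (arbitrary)
pulse = np.where(np.abs(t) < T / 2, 1.0, 0.0)

# A discrete approximation of an impulse: one sample of area 1 (height 1/dt)
impulse = np.zeros_like(t)
impulse[np.argmin(np.abs(t))] = 1.0 / dt

# Magnitude spectra, scaled by dt to approximate the continuous-time transform
P = np.abs(np.fft.fft(pulse)) * dt      # sinc-shaped, first null at f = 1/T
I = np.abs(np.fft.fft(impulse)) * dt    # essentially flat over all frequencies

print(P.max(), P.min())   # varies strongly with frequency
print(I.max(), I.min())   # ~1 everywhere: the impulse contains all frequencies equally
```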


I think this is not a good line of discussion at this stage. You are getting into very tricky subjects related to making linearized models of a system, in which it would be meaningful to talk about an impulse response. And you are addressing issues of experiments and measurements that may not be all that easy to do in practice. At this stage, you just want to understand the basic ideas of what an impulse is and what an impulse response is. I think you will want a very simple system model, such as the first order low pass filter (RC circuit) you discussed before.

For any system, you need to make an accurate system model. Then you have to ask whether the model is linear. If it is, you can clearly identify inputs and outputs that you might want to study. Only then can you ask where and how you might apply an approximate impulse.

There are two key things to note here, which we mentioned before. First, the system must behave as a linear system for whatever signal you use to approximate the impulse. This means that the impulse response will not change if you change the amplitude. Second, the impulse response must be the same as you make the pulse narrower and narrower. If you claim to make an approximate impulse for measurements, but that response changes based on amplitude or time duration of the pulse, then you have failed to make the amplitude and duration small enough to approximate an impulse function. It's as simple as that. The rest is just details.
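As a concrete illustration of the first check, here is a sketch using an RC low-pass as the system model (Python/SciPy; the component values are made up). Scaling the input amplitude simply scales the output, which is exactly what linearity requires:

```python
import numpy as np
from scipy import signal

# First-order RC low-pass as the system model: H(s) = 1 / (RC*s + 1)
R, C = 1e3, 1e-6                 # 1 kOhm, 1 uF -> 1 ms time constant (made-up values)
sys = signal.lti([1.0], [R * C, 1.0])

t = np.linspace(0, 10e-3, 2001)  # 10 ms observation window
width = 10e-6                    # pulse much narrower than the time constant

def pulse_response(amplitude):
    u = np.where(t < width, amplitude, 0.0)   # narrow rectangular pulse input
    _, y, _ = signal.lsim(sys, u, t)
    return y

y1 = pulse_response(1.0)
y10 = pulse_response(10.0)

# For a linear model, scaling the input by 10 scales the output by 10
print(np.allclose(y10, 10 * y1))   # True
```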

I was just wondering how it would affect the analysis if the system starts responding while the impulse is still being applied.

Well, there you have all the interesting real responses we might get from a system. Step response, sinusoidal response etc. are all examples of useful responses. And, you can have much more complicated input signals too. The key thing here is that the impulse response is the characteristic feature that allows you to determine how the system will respond to any arbitrary signal. I really think you need to start using Matlab to study responses of simple systems, and you can then answer these questions by experimentation, which is much more useful than us just telling you the answers.
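For example, something along these lines is enough to start experimenting with an RC low-pass (shown here as a Python/SciPy sketch; Matlab's impulse and step commands do the same job, and the RC value is arbitrary):

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# RC low-pass filter: H(s) = 1 / (RC*s + 1), with an arbitrary 1 ms time constant
RC = 1e-3
sys = signal.lti([1.0], [RC, 1.0])

t = np.linspace(0, 8e-3, 1000)
t_imp, h = signal.impulse(sys, T=t)   # impulse response: (1/RC) * exp(-t/RC)
t_step, s = signal.step(sys, T=t)     # step response: 1 - exp(-t/RC)

plt.plot(t_imp, h * RC, label="impulse response (scaled by RC)")
plt.plot(t_step, s, label="step response")
plt.xlabel("time (s)")
plt.legend()
plt.show()
```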

H(s) is a transfer function and "s" stands for frequency?

h(t) is an impulse response. I don't understand where you say "h(t) would be an impulse multiplied by the constant value".

What is a system with h(t)=δ(t) and with H(s)=1? Is this not simply an instantaneous transmission from input to output without any modification? And, doesn't this represent an infinite bandwidth system, since any input signal, no matter how fast it is, gets through unmodified? I think you will answer yes to all of this.

Now modify the above and allow either amplification or attenuation from input to output. Now the impulse response is h(t)=Aδ(t) and the frequency response is H(s)=A. The value A is the constant value I was talking about. If A>1, you have amplification. If A<1 you have attenuation. If A=1, you have a unity transfer function. Such systems do not exist in engineering, nor in physics; however, we often make this useful approximation when suitable. The closest thing in the real world might be light passing through a perfect vacuum. We know photons of very low frequency (well below short waves) up past the very highest frequencies ever measured (gamma rays) travel through perfect vacuum without attenuation. However, since the speed of light is a finite constant, there will be linear phase delay, which represents signal time delay. Hence, even in this case, H(s) is more complicated than H(s)=1, although in some sense, vacuum seems to come closest to an infinite bandwidth system.
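In discrete time you can see the same thing directly with a convolution (a toy sketch in Python; the value of A and the input samples are arbitrary):

```python
import numpy as np

A = 0.5                                   # A < 1: attenuation; A > 1: amplification
h = np.array([A])                         # discrete impulse response A*delta[n]
x = np.array([1.0, 2.0, -1.0, 0.5, 3.0])  # some arbitrary input signal

y = np.convolve(x, h)                     # the output is just the input scaled by A
print(y)                                  # [ 0.5  1.  -0.5  0.25  1.5 ]
```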

Now, what if A is not unit-less? For example, a transconductance amp, or transimpedance amp has units associated with the constant A. In this case we usually don't call this an attenuator, even if A<1 unit.
 
Thank you very much, Steve. It's really kind of you.

I was thinking that when Dirac invented/discovered his delta function, did he know it could be used to find the response of a system at all frequencies? I mean, what was his motivation? Or did it just happen that the delta function was originally meant for some other purpose but was later found to be extremely useful in signal processing? This reminds me of the invention of the laser. When the laser was invented, many thought it was a useless invention; in other words, an invention waiting or searching for its application. But later it was found to be extremely useful.


I'm learning Matlab, but if you ask me, I don't think it can help me much to understand this stuff. It can probably provide me with some numeric answers or graphs. But I do believe that once I get a grasp of the basics, Matlab will make more sense. For example, just to know the response of an RC circuit, first I need to write down its response equation in Matlab; then Matlab can show me a graph and tell me the response for different input values. By the way, for this specific case my TI-89 would be a better choice. Or it is possible that I'm still unaware of the real power of Matlab.


I have noticed that the transfer function is mostly written using the 's' variable. Is there some special reason for using 's'? In this **broken link removed**, under the section "Fourier Transforms", the transfer function is written as "F(α)". Please ignore the queries in the attachment.

Now, what if A is not unit-less? For example, a transconductance amp, or transimpedance amp has units associated with the constant A. In this case we usually don't call this an attenuator, even if A<1 unit.

Okay. If you won't call it an attenuator when A is not unit-less, then why would you call it an amplifier? A transconductance amplifier converts an input voltage into an output current, and a transimpedance amplifier does the reverse. I think if it can be called an 'amplifier', then why not an 'attenuator'?

Kindly help me with the queries above. Thanks a lot.

Regards
PG
 
There is not much that you and I know that Dirac did not know much better. He was a first order genius and invented the delta function; then used it even before mathematicians could make sense of it. His motivation was to use it in the Quantum Theory, which he helped develop. There are striking similarities in the mathematics of many fields. The theory that electrical engineers use in signal processing, system theory and control theory directly parallels the math used in quantum mechanics.

I think perhaps you do not yet know the power of Matlab. However, it's possible you don't have the basics down enough to know how to use it to answer some of the questions you've asked. Perhaps as time goes on, I'll try to add a footnote to some of the answers I provide and indicate how/when Matlab can be used.

I will say that Matlab is not a substitute for the understanding of fundamentals. Rather, it is a time saver and can provide insight into the answer to complex questions.


The variable "s" is just used traditionally when the signals are in the time domain. The variable "s" then becomes a complex frequency variable for the Laplace transform, with units of radians/second when time is measured in seconds. When you see "s" you know you are dealing with a continuous time (i.e. analog) system. In discrete time, you will see the variable "z" which relates to discrete time and the Z-transform.



I would call it an amplifier because everyone else calls it that. I would not call it an attenuator because nobody else does, as far as I know. It's just terminology and tradition at work here. It's a minor point, but I thought to make it because it would be easy to misinterpret what I was trying to say if I did not make the distinction.
 
Hi

I wanted to clarify a point. I seem to remember that somewhere here, or in some book, it was said that to find the impulse response of a system we don't really need to build the actual system and apply an impulse. Rather, the system is modeled by a differential equation, and then we can find its impulse response using mathematical techniques. Is that true? How would it differ from actually building the system and testing its response in practice? Can we apply this technique of finding the response without building the system to digital systems too? Please help me. Thank you.

Regards
PG
 
If you find the impulse response of a system without building and measuring it, then you do so by modeling and/or mathematical derivation. But yes, of course you can estimate these things in many practical cases.
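For the RC example, the model is the differential equation RC·dv/dt + v = vin, which gives h(t) = (1/RC)·e^(−t/RC) by hand. A sketch comparing that with a numerical solution of the same model (Python/SciPy, arbitrary RC value), with no hardware involved:

```python
import numpy as np
from scipy import signal

# Model of the RC low-pass: RC * dv/dt + v = vin  =>  H(s) = 1 / (RC*s + 1)
RC = 1e-3
sys = signal.lti([1.0], [RC, 1.0])

t = np.linspace(0, 8e-3, 1000)
_, h_numeric = signal.impulse(sys, T=t)       # impulse response computed from the model
h_analytic = (1.0 / RC) * np.exp(-t / RC)     # impulse response derived by hand

print(np.allclose(h_numeric, h_analytic, rtol=1e-3))   # True: no need to build the circuit
```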
 
Thank you.

In reality, which approach is preferred: mathematical modeling, or actually building the system and then testing it in practice? Is this approach also applicable to digital systems? Please help me with these points. Thank you.

Regards
PG
 
Yes, of course what I said applies to digital systems too.

In my view, it is best to do both analysis and experimentation at the same time. The process is typically to first design something conceptually on paper. Then write equations and try to take the analysis as far as possible with mathematical methods. If a full solution is not possible by math alone, then the next step is to implement a numerical solution, or some type of computer modeling tool. Then, you need to build the system and test it.

It is at that point you will know whether or not your model is adequate. If it is adequate, you are in a position to start optimizing the design to meet specs and make proper tradeoffs; or, if you find you can't meet specs with acceptable tradeoffs, you need to improve the design itself.

If your model is found to not predict your actual design performance, then you need to add fidelity to your model. This can be challenging because you may not know why your model does not work. However, discovering why is one of the most educational things you can do in design work.

You will find that not everyone uses this approach, but it is the approach I prefer. Some people build and tweak only. Some people use an advanced simulator and build without mathematical analysis. These approaches often work, but true understanding of the design can sometimes be lacking. However, sometimes these approaches are necessary, if you do not have the ability to do a proper mathematical analysis. Eventually, everyone will reach the limit of what they can do mathematically as circuit complexity increases and parasitic effects become too much, and at that point you need to revert to these approaches.
 
Thanks a lot.


But even when such a system is 'built' and tested theoretically, the impulse function is a preferred input signal, right? It can reveal a lot about the system, one of the reasons being that it contains all frequency components with equal magnitude. Thanks.


The discrete-time Kronecker delta function is a lot simpler than its continuous-time counterpart. But what characteristics should the Kronecker delta possess? It has a magnitude of unity at n=0, but what does this unity mean in this case? For example, the Dirac delta has a magnitude of infinity at t=0, and in practical cases it can be characterized as, say, a pulse with a lot of energy (or enough energy to kick the system) and a very short duration. Please help me with this. Thanks.

Regards
PG
 
You are correct about the impulse function being very useful to characterize a system. However, sometimes people use step functions instead. Both work because both have infinite frequency content (in principle). Of course, you cannot make a real impulse or step function. All you can do is make signals with very fast edges, and the impulse you make is really a very fast pulse with area indicating the energy of the impulse.

For your second concern: for impulse functions, whether the Dirac or Kronecker type, think of the signal energy of the impulse rather than the amplitude. Both have an energy of unity.
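In discrete time the experiment is trivial to run exactly (a sketch using a made-up first-order digital filter; scipy.signal.lfilter is one way to apply the Kronecker delta):

```python
import numpy as np
from scipy import signal

# A simple first-order discrete-time low-pass: y[n] = 0.9*y[n-1] + 0.1*x[n]
b, a = [0.1], [1.0, -0.9]

# Kronecker delta: amplitude exactly 1 at n = 0, zero everywhere else
n = 20
delta = np.zeros(n)
delta[0] = 1.0

h = signal.lfilter(b, a, delta)   # the impulse response, sample by sample
print(h)                          # 0.1, 0.09, 0.081, ... = 0.1 * 0.9**n
```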
 
For your second concern: for impulse functions, whether the Dirac or Kronecker type, think of the signal energy of the impulse rather than the amplitude. Both have an energy of unity.

I understand that the important thing is that it has an energy of unity. But still, I don't think you can ignore the amplitude altogether, because if the amplitude is too low then the width has to be large, which, in my opinion, makes it a 'bad' impulse. Perhaps we can say that the amplitude should be large, but it doesn't need to be very large. What do you say? Please let me know. Thanks.

Regards
PG
 
Yes, you are correct. The amplitude has to be large enough and the pulse width has to be narrow enough to be a good approximation for an impulse function. Basically, imagine gradually increasing the amplitude and decreasing the width, while maintaining the same energy or area of the pulse. Then you will notice a point where you get the same pulse response for higher amplitudes and narrower pulses. At that point, you know your pulse is narrow enough and you have approximated an impulse function well enough.

If you think about it, the pulse width is more critical than the pulse amplitude. If the amplitude is smaller, then you can just scale the output response to the new lower energy, and you'll get a useful answer. But if the pulse is too long, you will not get an impulse response at all.
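That convergence is easy to see numerically. A sketch (again an RC low-pass in Python/SciPy with arbitrary values): keep the pulse area fixed at 1 and shrink the width, and the peak of the response approaches the ideal impulse response.

```python
import numpy as np
from scipy import signal

RC = 1e-3
sys = signal.lti([1.0], [RC, 1.0])          # RC low-pass, 1 ms time constant (arbitrary)
t = np.linspace(0, 8e-3, 4001)

for width in [2e-3, 5e-4, 1e-4, 1e-5]:      # narrower and narrower pulses
    amplitude = 1.0 / width                  # keep the area (amplitude * width) equal to 1
    u = np.where(t < width, amplitude, 0.0)
    _, y, _ = signal.lsim(sys, u, t)
    print(width, y.max())                    # peak approaches 1/RC = 1000 as width -> 0

_, h = signal.impulse(sys, T=t)
print("ideal impulse response peak:", h.max())   # 1000
```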
 