Laplace transform problems


PG1995

Active Member
Hi

Could you please help me with this problem? Thank you.

Regards
PG
 

Attachments

  • laplace1.jpg
I agree it looks complicated, but if you break it down step by step it can be done. I haven't worked it out, but it seems to me that you will need to apply the derivative rule many times. When you do this, you might find that the function transform you are trying to solve for shows up again in the process. If this happens, you can rearrange the equation and solve for the function transform.
 

Hello there,


I'm not sure if this is what you are looking for, but when you use those derivative formulas it is usually because you already know the Laplace transform of a time function that is the integral of the time function whose Laplace transform you are looking for.

For example, the derivative of sin(wt) is w*cos(wt), so if we know the Transform of sin(wt) is F1(s) then the transform of cos(wt) we'll call F2(s) is:
F2(s)=(s*F1(s)-f(0))/w [don't forget the derivatives usually include a 'w' as well as the other trig function]

So we would have known that the Transform of sin(wt) is w/(s^2+w^2) and since the derivative of sin(wt) is w*cos(wt) then the Transform of cos(wt) is simply:
(s*(w/(s^2+w^2))-0)/w=s/(s^2+w^2)

So we got the new transform from knowing the transform of the integrated time function by using one of the derivative formulas.

Next, try finding the transform of sin(wt) knowing the transform of cos(wt), which we found above.
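As a sketch of how that exercise goes (my own working, not part of the original post): the derivative of cos(wt) is -w*sin(wt), so applying the same derivative formula with f(0)=cos(0)=1 gives

[latex]L\{-w\sin(wt)\}=s\,L\{\cos(wt)\}-\cos(0)=\frac{s^2}{s^2+w^2}-1=\frac{-w^2}{s^2+w^2}[/latex]

and dividing both sides by -w gives the expected result,

[latex]L\{\sin(wt)\}=\frac{w}{s^2+w^2}[/latex]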
 
Mr Al and PG,

What needs to be realized here is that there is a higher purpose to this exercise. It's not just a straightforward method for finding one transform, if you already have another. The method they are asking us to discover can generate the solution without need for having another solution in hand and without needing to do a complicated integral transform directly.

Notice how the simple example is given first. Solve that one, and the method will be clear. This solution is not very interesting because the answer to sin and cos are well known, in tables, or can be calculated directly from the integral transform. But, the second problem would normally be difficult. However, with this method, it is merely slightly tedious.

When solving for the transform of sin(wt), express it in terms of the transform for cos(wt), then find the transform for cos(wt) using the same method and it will be written in terms of the transform for sin(wt). Combine the equations and solve, and you will have an equation for the transform of sin(wt) directly, without need for the transform of either cosine or sine.
 
Wouldn't the proof just be a double application of equation 10.5?

By the way, I recommend that you not use a star or asterisk to indicate multiplication because often this is a symbol for convolution. Instead use a small dot. If a dot is not available, then a space to separate the symbols will make it clear that multiplication is the operation between two different symbols.
 

Hello there Steve,

Yes, and I think you are talking about frequency differentiation, right? That simplifies the calculation of a time function multiplied by t one or more times. That's another useful Laplace operation.

But I'm not sure I understand what you said in your last paragraph. You seem to be saying that you need the transform of sin (or cos), but then in the last line you state that we don't need the transform of sin (or cos, depending on the particular problem). Did you mean we calculate that one with the integral transform directly? Note that when I talked about 'knowing' a related transform before we start, that doesn't exclude those which we can calculate easily.

So just to summarize, when a time function is multiplied by time t, we can use the Laplace transform operation of frequency differentiation to get the more complex transform more easily. I think that's what Steve was trying to say here.
 

Hi MrAl,

No, I wasn't even thinking about any other Laplace properties, although they are very useful in general, of course.

My description was not very clear, so I can try to explain again.

There is a rather beautiful calculation method being introduced by this problem. Let's take a simpler example that requires only one application of the derivative rule. Consider the transform of f(t)=exp(-at)u(t) as follows.

[latex]L\{\exp(-at)\}=L\{ \frac{d}{dt} \frac{-\exp(-at)}{a} \}=s \; L\{ \frac{-\exp(-at)}{a} \}+\frac{1}{a} =\frac{-s}{a} \; L\{ \exp(-at) \}+\frac{1}{a}[/latex]

or, simply ...

[latex]L\{\exp(-at)\}=\frac{-s}{a} \; L\{ \exp(-at) \}+\frac{1}{a}[/latex]

Notice how the transform we are trying to calculate has shown up again in the process of applying the derivative rule. This is a direct result of the fact that the derivative of an exponential is another scaled exponential. Now we can solve with very little work, by simple algebra, and we get ...

[latex]L\{\exp(-at)\}=\frac{1}{s+a} [/latex]

The cases of sine and cosine are similar, but the derivative rule must be used twice because the second derivative returns these functions again. Hence, one can easily calculate the transform of sin(wt) as asked in the problem by double application of the derivative rule. No other rules are needed and no integration is required. This method is elegant and beautiful for calculating the transforms of sine and cosine.
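As a sketch of that double application (my own working, not from the original post), write sin(wt) as its own second derivative scaled by -1/w^2 and use the derivative rule twice, i.e. L{f''} = s^2 F(s) - s f(0) - f'(0), with f(0)=0 and f'(0)=w:

[latex]L\{\sin(wt)\}=L\{\frac{-1}{w^2}\frac{d^2}{dt^2}\sin(wt)\}=\frac{-1}{w^2}\big(s^2\,L\{\sin(wt)\}-w\big)=\frac{-s^2}{w^2}\,L\{\sin(wt)\}+\frac{1}{w}[/latex]

and solving algebraically, exactly as in the exponential example, gives

[latex]L\{\sin(wt)\}=\frac{w}{s^2+w^2}[/latex]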

Then one can expand to functions like t sin(wt), t cos(wt), t^2 sin(wt) ... etc.

One problem with this technique is that the region of convergence is not made as clear as if you do out the full integration manually. When integrating, you will more easily see the conditions for which convergence is valid.
 
Hello again Steve,


OK, that illustrates the concept very well, I think; very nicely done. So basically, what we do more generally is equate a particular operation in the time domain to its counterpart in the frequency domain, making the conversion as necessary. So this should work with the integral too then.
Have you tried it yet using the time integral operation rather than the derivative operation? It might introduce some complexities, but it should be interesting anyway :)

For t^2*sin(wt) I think the other operation (frequency differentiation) works nicely too, and it's not hard to remember.

Your post also illustrates the fact that nothing beats a good example :)
 
So this should work with the integral too then.
Have you tried it yet using the time integral operation rather than the derivative operation? It might introduce some complexities, but it should be interesting anyway :)

I agree, it should be interesting. I haven't tried it yet, but I think I will later on. I agree that it will introduce some complexities.

In particular, the issue relates to the fact that we typically deal with the single sided Laplace transform, or equivalently, we multiply any signal by the step function u(t). The derivative rule includes the initial condition of the function f(0), which is an artifact of the single sided transform. Hence, when we use the derivative rule, the important information is retained. The integral rule is the same for the full transform and the single sided transform, so, I'm guessing we need to modify the integral rule, or add a step into the process.
 
So this should work with the integral too then.
Have you tried it yet using the time integral operation rather than the derivative operation?

OK, so I just tried this problem as a way to relax before sleeping. It seems to work fine, but a little care is needed to maintain the single-sided signal assumption that we typically use in practice.

Since, as you say, an example is indeed very illustrative, let's use the same single sided exponential function that I used above.

[latex] L\bigg\{\exp(-at) \cdot u(t) \bigg\}=L\bigg\{-a \cdot \int_{0}^t \exp(-a\tau )\cdot u(\tau)\cdot d\tau +u(t) \bigg\}[/latex]

Before proceeding with the solution, it should be noted that care is needed to make sure that there is a correct equality established above. Because of the single sided signal, the integration is started at zero, and the appropriate unit step function is added to force the equality. I expect there are other correct ways to start, but it is important that the starting point be a correct equality.

Proceeding with the solution, we get ...

[latex] L\bigg\{\exp(-at) \cdot u(t) \bigg\}=-a\cdot L\bigg\{\int_{0}^t \exp(-a\tau )\cdot u(\tau)\cdot d\tau \bigg\}+\frac{1}{s}[/latex]

Then the integral rule can be applied, resulting in ...

[latex] L\bigg\{\exp(-at) \cdot u(t) \bigg\}=\frac{-a}{s}\cdot L\bigg\{ \exp(-a\tau )\cdot u(\tau) \bigg\}+\frac{1}{s}[/latex]

Then, algebraic solving gives the solution as ...

[latex] L\bigg\{\exp(-at) \cdot u(t) \bigg\}=\frac{1}{s+a}[/latex]
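One quick way to double-check the final result (my own sketch, not from the thread; it assumes Python with SymPy installed) is to let a symbolic package evaluate the transform directly:

[code]
# Sketch: verify L{exp(-a*t)} = 1/(s + a) symbolically (assumes SymPy is available)
from sympy import symbols, exp, laplace_transform

t, s = symbols('t s')
a = symbols('a', positive=True)

# The single-sided transform implicitly treats f(t) as f(t)*u(t)
F = laplace_transform(exp(-a*t), t, s, noconds=True)
print(F)   # expected: 1/(a + s)
[/code]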
 
Hello again Steve,


Yes that looks good :)

The way I did it was slightly different: I used the operational definition more directly, and I used e^at rather than e^-at so I could integrate from minus infinity to zero (which accounts for the activity before t=0). That gave me 1/s, as you got on the far right.

Interesting, isn't it? :)

Of course the derivative definition is going to be easier to apply.
 
Wouldn't the proof just be a double application of equation 10.5?

By the way, I recommend that you not use a star or asterisk to indicate multiplication because often this is a symbol for convolution. Instead use a small dot. If a dot is not available, then a space to separate the symbols will make it clear that multiplication is the operation between two different symbols.

Thank you.

In my previous attempt I made several errors. Now I have corrected them, and here is the proof. Thanks.

Regards
PG
 

Attachments

  • laplace_2nd_derivative.jpg
Hi

Q1:
The following is the reply to this question from post #1.

I agree it looks complicated, but if you break it down step by step it can be done. I haven't worked it out, but it seems to me that you will need to apply the derivative rule many times. When you do this, you might find that the function transform you are trying to solve for shows up again in the process. If this happens, you can rearrange the equation and solve for the function transform.

It's really very complicated, and I think it would take quite a lot of effort and labor to calculate f(t), let alone apply the derivative rule multiple times. Notice that to calculate f(t) I need to use integration by parts many times. If my fear is justified, then I shouldn't proceed with this problem, because I'm only doing it for my own learning. Kindly let me know. Thanks.

Q2:
I don't get the following formula. I mean, what does the "1" stand for? Is it the unit step function?

[attached image: laplace_formula_issue.jpg]


Thank you.

Regards
PG
 

Attachments

  • laplace_formula_issue.jpg
  • laplace_stuck.jpg
Q1:
The following is the reply to this question from post #1.


It's really very complicated, and I think it would take quite a lot of effort and labor to calculate f(t), let alone apply the derivative rule multiple times. Notice that to calculate f(t) I need to use integration by parts many times. If my fear is justified, then I shouldn't proceed with this problem, because I'm only doing it for my own learning. Kindly let me know. Thanks.

I would estimate there is about an hour of careful work to be done there. It's up to you if you want to spend that time. The important thing is that you understand the method being used and understand the class of functions that can be transformed using this method.

It's clear that exp, sin and cos are functions that are easily transformed by this method. What is less clear is that the t cos(wt), t sin(wt), t^2 cos(wt), t^2 sin(wt) and in general the t^n sin(wt) and t^n cos(wt) can also be transformed, when n is a positive integer.

If I were to do this problem, I would proceed in a systematic way and build up useful transform formulas along the way.

Here are my recommended steps.

1. Find transform of sin(wt) and cos(wt) by applying the derivative rule twice
2. Find transform of t sin(wt) and t cos(wt) by applying the derivative rule as needed and using the transforms of cosine and sine
3. Find transform of t^2 sin(wt) by applying the derivative rule as needed and using the transforms of cosine, sine, t cos and t sin.


I believe that this is easier than trying to directly do step 3. I think what you will find when you are done is that you can progress to the transform of any positive integer power n for functions of the type t^n sin(wt), simply by building up a table of the transforms for t^m sin(wt) and t^m cos(wt) for m<n, and using these transforms as needed.
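As a sketch of how step 2 above goes (my own working, not the attached solution): differentiate t sin(wt) and t cos(wt) with the product rule, apply the derivative rule to each (both functions have initial value 0), and use the sine and cosine transforms from step 1:

[latex]s\,L\{t\sin(wt)\}=L\{\sin(wt)+wt\cos(wt)\}=\frac{w}{s^2+w^2}+w\,L\{t\cos(wt)\}[/latex]

[latex]s\,L\{t\cos(wt)\}=L\{\cos(wt)-wt\sin(wt)\}=\frac{s}{s^2+w^2}-w\,L\{t\sin(wt)\}[/latex]

Solving this pair of equations simultaneously gives

[latex]L\{t\sin(wt)\}=\frac{2ws}{(s^2+w^2)^2}, \qquad L\{t\cos(wt)\}=\frac{s^2-w^2}{(s^2+w^2)^2}[/latex]

Step 3 then follows the same pattern for t^2 sin(wt).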

Q2:
I don't get the following formula. I mean, what does the "1" stand for? Is it the unit step function?

[attached image: laplace_formula_issue.jpg]


Thank you.

Regards
PG

I'm not sure why they explicitly show the 1. It may be to stress the fact that there was a u(t) originally.
 
Here are my recommended steps.

1. Find transform of sin(wt) and cos(wt) by applying the derivative rule twice
2. Find transform of t sin(wt) and t cos(wt) by applying the derivative rule as needed and using the transforms of cosine and sine
3. Find transform of t^2 sin(wt) by applying the derivative rule as needed and using the transforms of cosine, sine, t cos and t sin.

I thought I would verify my recommendation rather than risk sending you on a wild goose chase. I was able to do steps 1 and 2 in 27 minutes from scratch, working carefully. Then I worked more hurriedly to do step 3. This took 14 minutes, but by rushing, it seems I made a mistake. I say that because the transform came out negative, which I don't think is correct. I expect there is a slip of the pen that is easily corrected.

However, it is clear to me that about 1/2 hour of work on this step 3 will give the answer without mistakes. Hence, my estimate of an hour of work seems good to me and the method works perfectly.
 
example 2-4 Ogata

Thank you, Steve.

Could you please also help me with this query? Thanks.

Regards
PG

Edit: Updated the query after making some progress.
 

Attachments

  • laplace_examp_2-3_new.jpg
Yes, basically that's it. Use the initial value at t=0- which is what is usually implied from the context. Just about all functions we deal with are single-sided functions that have initial value of zero. So, δ(t), u(t), δ'(t), which are the impulse, step and doublet (derivative of δ) functions, all have initial value of 0 at t=0-.

Here is an example to help you see this. Let's say we want to take the single sided Laplace transform of a simple sine function f(t)=sin(t).

We can try this by direct integration as follows ...

[latex]L\{\sin(t)\}=\int_{0-}^\infty \sin(t)\cdot \exp(-s\cdot t)\cdot dt =\big(\frac{-\cos(t)-s\cdot \sin(t)}{(s^2+1)\cdot\exp(st)}\big)\bigg|_{0-}^\infty=\frac{1}{s^2+1} [/latex]

Next, we can apply the derivative rule, but there are different ways to do this. Let's try two different ways that highlight how u(0-)=0 is relevant. Note that the single sided transform of sin(t) and sin(t) u(t) should be the same.

[latex]L\{\sin(t)\}=L\{\frac{d}{dt}(-\cos(t))\}=\frac{-s^2}{s^2+1}+1=\frac{1}{s^2+1} [/latex]

[latex]L\{\sin(t)\cdot {\mathrm u}(t)\}=L\{\frac{d}{dt}(-\cos(t)\cdot {\mathrm u}(t) )+\cos(t)\cdot\delta (t)\}=\frac{-s^2}{s^2+1}+1=\frac{1}{s^2+1} [/latex]

The key difference between these two cases is that cos(t) has an initial value of 1, but cos(t) u(t) has an initial value of 0. Yet the derivative rule, which relies on the initial values, has no problem if we are careful to note that u(t) is a function of time and the derivative of cos(t) u(t) must be found with the product rule for derivatives. In the end it all checks out, if we are careful.
 
If I were to do this problem, I would proceed in a systematic way and build up useful transform formulas along the way.

Here are my recommended steps.

1. Find transform of sin(wt) and cos(wt) by applying the derivative rule twice
2. Find transform of t sin(wt) and t cos(wt) by applying the derivative rule as needed and using the transforms of cosine and sine
3. Find transform of t^2 sin(wt) by applying the derivative rule as needed and using the transforms of cosine, sine, t cos and t sin.

PG,

Since you indicated that you might not spend the time to solve this, I thought I would post the solution for you and any others that might run across this thread in the future. I corrected my previous mistake and then checked the answers with the Matlab symbolic processor.
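For anyone without Matlab, a similar check can be done with Python's SymPy (my own sketch, not the Matlab session referred to above; it assumes SymPy is installed):

[code]
# Sketch: symbolically check the transforms built up by repeated use of the derivative rule
# (assumes Python with SymPy; not the Matlab code used in the post)
from sympy import symbols, sin, cos, simplify, laplace_transform

t, s = symbols('t s')
w = symbols('w', positive=True)

for f in (sin(w*t), cos(w*t), t*sin(w*t), t*cos(w*t), t**2*sin(w*t)):
    F = laplace_transform(f, t, s, noconds=True)
    print(f, '->', simplify(F))

# Expected results:
#   sin(w*t)      -> w/(s^2 + w^2)
#   cos(w*t)      -> s/(s^2 + w^2)
#   t*sin(w*t)    -> 2*w*s/(s^2 + w^2)^2
#   t*cos(w*t)    -> (s^2 - w^2)/(s^2 + w^2)^2
#   t^2*sin(w*t)  -> 2*w*(3*s^2 - w^2)/(s^2 + w^2)^3
[/code]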
 

Attachments

  • derivative1.JPG
  • derivative2.JPG
Thanks a lot, Steve. It's really kind of you.

Q1:

[latex]L\{\sin(t)\cdot {\mathrm u}(t)\}=L\{\frac{d}{dt}(-\cos(t)\cdot {\mathrm u}(t) )+\cos(t)\cdot\delta (t)\}=\frac{-s^2}{s^2+1}+1=\frac{1}{s^2+1} [/latex]

Could you please check if I have it correct conceptually?

Q2:
Could you please help me with this query too?

Q3:
Kindly help me with this query? Thank you very much.

Best wishes
PG
 

Attachments

  • laplace_steve.jpg
  • laplace_delta.jpg
  • laplace_examp_2-5.jpg