
Derivative of X^2

Status
Not open for further replies.

neptune

Member
Hello everyone,

The derivative of the function x^2 is 2x. What I understood is: if the function x^2 represents a graph of displacement versus time, its derivative at a point on the graph tells me the instantaneous velocity.

Question 1: since we find the derivative by making dx as small as we can (lim dx -> 0), but we never actually make it a point, why do we call it the derivative at a point, or the instantaneous velocity at a point? It would be better called the velocity as x approaches that particular point.

If x^2 represents the area of a square, its derivative 2x represents the change in area when the side is changed by a small amount. But when I put numbers into 2x, it just tells me the area at that particular point.

Question 2: if it tells me the area of the square at a particular point, where is the notion of change coming in? And it is not accurate, as it ignores the dx^2 part; I could have found the area of the square much more accurately by simply putting the value into the function.
 

alec_t

Well-Known Member
Most Helpful Member
Perhaps this will help explain things.
Consider a square of side 'x' (hence area x^2) being extended 'dx' each side :
[attached figure dA.PNG: a square of side x with two strips of width dx added along two sides]
The increase in area, dA, is the area of the two added strips, i.e. is 2.x.dx (ignoring the tiny overlap of the two strips, which is valid when dx tends to zero).
This can be re-written as dA/dx=2x.
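A quick numeric check of alec_t's picture (my own sketch in Python; the names are mine):

```python
# Increase the side of a square from x to x + dx and compare the exact
# change in area with the two-strip estimate dA = 2*x*dx.
def area_change(x, dx):
    exact = (x + dx) ** 2 - x ** 2   # expands to 2*x*dx + dx**2
    strips = 2 * x * dx              # the two strips, overlap ignored
    return exact, strips

for dx in (0.1, 0.01, 0.001):
    exact, strips = area_change(3.0, dx)
    print(dx, exact, strips, exact - strips)  # the gap is dx**2, the tiny corner
```

As dx shrinks, the ignored corner dx^2 vanishes much faster than the strips themselves.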
 

MrAl

Well-Known Member
Most Helpful Member
Hello everyone,

The derivative of the function x^2 is 2x. What I understood is: if the function x^2 represents a graph of displacement versus time, its derivative at a point on the graph tells me the instantaneous velocity.

Question 1: since we find the derivative by making dx as small as we can (lim dx -> 0), but we never actually make it a point, why do we call it the derivative at a point, or the instantaneous velocity at a point? It would be better called the velocity as x approaches that particular point.

If x^2 represents the area of a square, its derivative 2x represents the change in area when the side is changed by a small amount. But when I put numbers into 2x, it just tells me the area at that particular point.

Question 2: if it tells me the area of the square at a particular point, where is the notion of change coming in? And it is not accurate, as it ignores the dx^2 part; I could have found the area of the square much more accurately by simply putting the value into the function.
Hi,

What we call it depends on the application. Not all applications are about velocity or distance. The derivative is a more general concept that can be applied to many applications not just one particular one.

If the area is A=x^2, then with x=3 we get A=9. But the rate at which the area changes as x is varied is, in general, 2*x.
Now going back to the area A=x^2: if x=3 then the area A=9, but the CHANGE in area when x is varied at the point x=3 by an increment dx is 6*dx, because the area changes 6 times faster than x at that particular location.
If x=5 then A=25, and the change in area at x=5 is 10*dx, because IF WE CHANGE x by a small amount the area changes 10 times faster than the increment dx.
Note the area is not 2*x; that's the derivative, which is entirely different from the actual area.
The derivative 2*x also gives the slope of the tangent line, the straight line that just touches the original graph of x^2 at the point (x, y).

Going back to x=3, we get A=9 and the area changes at the rate 2*x at that point, so if we change x by 0.1 unit we get 2*3*dx = 6*0.1 = 0.6 square units. Let's see how close this is:
At x=3.1 we get area A=9.61, so the estimate 9 + 0.6 = 9.6 is close to the exact value 9.61.
The reason it is not exact here is because the increment is still finite. As we make the increment dx smaller, we get a better and better approximation, until at dx=0 we get the exact value; but we must do that analytically, we cannot do it numerically without some tricks.
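That estimate is easy to play with in a few lines of Python (my sketch, not MrAl's code):

```python
# Linear estimate of the area near x = 3: A(3 + dx) ≈ 9 + 6*dx, where 6 is
# the derivative 2*x evaluated at x = 3.  The error shrinks like dx**2.
def estimate_vs_exact(dx):
    estimate = 9 + 6 * dx
    exact = (3 + dx) ** 2
    return estimate, exact

for dx in (0.1, 0.01, 0.001):
    est, ex = estimate_vs_exact(dx)
    print(dx, est, ex, ex - est)
```

Each tenfold reduction in dx cuts the error by a factor of about one hundred.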

The whole idea with this is to figure out what happens to the area, how it changes, as x changes by a small amount dx. It's not about actually calculating the area in all cases, it's about knowing how that area changes.
For example, if you are going 100mph in a 60mph zone you know your distance is changing too fast for that location.

The simplest examples usually come from distance, velocity and acceleration problems. Learning about them and how the derivative fits in is the usual way to learn about derivatives and why they are so important.

The applications that use the above concepts are extremely varied and very important in many areas of mathematics and science. Without the derivative, a great many problems could probably not be solved at all.
 

Ratchit

Well-Known Member
Hello everyone,

The derivative of the function x^2 is 2x. What I understood is: if the function x^2 represents a graph of displacement versus time, its derivative at a point on the graph tells me the instantaneous velocity.
True

Question 1: since we find the derivative by making dx as small as we can (lim dx -> 0), but we never make it a point,
The above sentence does not make sense. What does lim->0 mean? What variable is approaching zero? With respect to what variable are you calculating the function?

so why do we call it the derivative at a point? or the instantaneous velocity at a point? It would be better called the velocity as x approaches that particular point.
They are both names for the same thing.

If x^2 represents the area of a square, its derivative 2x represents the change in area when the side is changed by a small amount.
No, you just gave the definition of a differential, not a derivative.

But when I put numbers into 2x, it just tells me the area at that particular point.
No it doesn't. 2x is the rate of area change as the side of the square changes.

Question 2: if it tells me the area of the square at a particular point, where is the notion of change coming in?
x^2 is the area at a particular point. The rate of change at that point is 2x.

and it is not accurate, as it ignores the dx^2 part,
What dx^2 part? You are not doing a second order derivative, are you?

I could have found the area of the square much more accurately by simply putting the value into the function.
You need to get straight about rates and amounts.

Ratch
 

Kerim

Member
As you said, neptune:
(x+dx)^2 = x^2 + 2*x*dx + dx^2
or
(1+dx/x)^2 = 1 + 2*dx/x + (dx/x)^2
So if dx/x = 1/1,000 (which is not too small), (dx/x)^2 becomes 1/1,000,000

I mean, in theory you are totally right, neptune; no matter how small dx/x is, there is always a (dx/x)^2 term which is not present in the derivative. But when someday you apply math to real projects, you will discover for yourself when a (relatively speaking) small amount can be ignored and when it cannot.
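Kerim's point about relative sizes can be seen directly (a Python sketch of mine):

```python
# Relative size of the two terms in (1 + dx/x)**2 = 1 + 2*(dx/x) + (dx/x)**2.
# For r = dx/x, the quadratic term is smaller than the linear one by r/2.
for r in (1e-1, 1e-3, 1e-6):
    linear = 2 * r
    quadratic = r ** 2
    print(r, linear, quadratic, quadratic / linear)
```

At r = 1/1,000 the ignored term is already 2,000 times smaller than the one kept, far below a +/-5% component tolerance.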

For instance, when I solve equations now to find the values of resistances on the controller boards I design, I have to remember that the tolerance of the resistors I can get is, at best, +/-5% ;)

In brief, it is up to you, in real life, to choose the optimum accuracy that fulfils your goal.

Kerim
 

neptune

Member
Going back to x=3, we get A=9 and the area changes at the rate 2*x at that point, so if we change x by 0.1 unit we get 2*3*dx = 6*0.1 = 0.6 square units. Let's see how close this is:
At x=3.1 we get area A=9.61, so the estimate 9 + 0.6 = 9.6 is close to the exact value 9.61.
The reason it is not exact here is because the increment is still finite. As we make the increment dx smaller, we get a better and better approximation, until at dx=0 we get the exact value; but we must do that analytically, we cannot do it numerically without some tricks.
But why calculate the derivative in the first place, when I can tell the exact change brought to the area when I change x by 0.1?
For example at x=3, A=3*3=9, but A=3.1*3.1=9.61; just subtract and you get the change as 0.61. Clearly we can deduce that when x is changed by 0.1 at x=3, A changes by 0.61 square units.
 

neptune

Member
The above sentence does not make sense. What does lim->0 mean? What variable is approaching zero? With respect to what variable are you calculating the function?
Correction: lim as dx -> 0. It means we never actually reach 0, since 0/0 is indeterminate; we are just approximating the curved graph by a straight line at every point on the curve. Perhaps we should never call that straight line the tangent at a point, because we never reach that point.
No, you just gave the definition of a differential, not a derivative.
What is the difference between a differential and a derivative?
What dx^2 part? You are not doing a second order derivative, are you?
My mistake, it was supposed to be dx^2. The whole expression is dA = 2*x*dx + dx^2; the derivative ignores the dx^2 part as it is very small, but keeping it would have made the result more accurate.
You need to get straight about rates and amounts.
I was also thinking that I am confusing rates and amounts. If A = x^2, then A gives the area (an amount) and its derivative 2x gives the instantaneous change (a rate), which is just the slope of the tangent at that point, or an approximation of that slope. It was easy to understand, in terms of position and time, that the slope of the tangent is the instantaneous velocity (position/time); but here the slope of the tangent is (area/side), and I don't know what to call it.
 

MrAl

Well-Known Member
Most Helpful Member
But why calculate the derivative in the first place, when I can tell the exact change brought to the area when I change x by 0.1?
For example at x=3, A=3*3=9, but A=3.1*3.1=9.61; just subtract and you get the change as 0.61. Clearly we can deduce that when x is changed by 0.1 at x=3, A changes by 0.61 square units.
Hi,

Because the analytical derivative is exact, while the numerical derivative is not, and attempts to make it exact while still working numerically lead to a whole slew of problems. The problems in calculating numerical derivatives have been addressed in various ways over the years, and what we end up with is a multi-point approximation when we need better than the two-point approximation you used there. One of the problems is that we have limited numerical precision in all numerical calculations, especially when we rely on the usual PC floating-point precision of 16 or 17 digits. Even that many digits leads to problems. Let's try a better approximation to see how this works.

3.1*3.1-3*3=0.61 as you noted, and 0.61/0.1=6.1, but that's off by more than 1 percent, so let's go with a smaller increment:
3.01*3.01-3*3=0.0601, and 0.0601/0.01=6.01, closer, but exact is 6 with no fractional part.
3.001*3.001-3*3=0.006001, and 0.006001/0.001=6.001, closer but still not exact.
3.0001*3.0001-3*3=0.00060001, and that divided by 0.0001 is 6.0001, still not exact.
Continuing in this manner we come to:
3.0000001^2-3^2=0.000000600000009, and what happened? We started to lose precision when the increment got small.
It's not too bad yet, because the digits we lose here barely matter, but then we get to:
3.0000000000000001^2-3^2=0.00000000000000000, and here we lose all information, because the limited-precision numerical calculation dropped all the digits! So when we tried to get very high precision, we got zero as the derivative, when we should eventually have gotten exactly 6. But we also don't know that ahead of time like we do with this very simple example, so we might be inclined to keep trying for better and better precision, and end up with zero :)

This example was not typical either, it was a very well behaved example. More common real life examples really start to get botched up sometimes after an increment as large as 0.001, and in fact many programs do not go below 0.001 unless they use an adaptive step size algorithm.
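The precision loss described above is easy to reproduce in ordinary double precision (a Python sketch, not from the thread):

```python
# Forward-difference estimate of d(x**2)/dx at x = 3 (exact answer: 6).
# The estimate improves as h shrinks, then degrades as the subtraction
# cancels digits, and finally collapses to 0 when 3 + h rounds back to 3.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

square = lambda x: x * x
for h in (1e-1, 1e-4, 1e-8, 1e-12, 1e-16):
    print(h, forward_diff(square, 3.0, h))

print(3.0 + 1e-16 == 3.0)   # True: the increment is smaller than one ulp at 3
```
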

So you see we are very lucky to be able to derive the analytical form of the derivative with a set of fairly simple rules.

Also, some problems need to be solved analytically so that they can be more general than a single numerical solution could be, and that's when we really need the analytical derivative the most. Many theories in science depend on that.

Add to that many problems can be solved much easier and faster if we know how to deal with derivatives. Without them we would have to do the problem numerically over and over again.
Consider the physical situation where there is a ladder leaning up against a wall, and it begins to slip down the wall. If the initial height of the ladder is given and the rate it slides down the wall is given, what is the rate that the bottom of the ladder slides across the ground? That's just one question that we might like to answer using derivatives.
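The ladder question is a classic related-rates problem; here is a sketch of the derivative-based answer, with example numbers of my own choosing. Differentiating x^2 + y^2 = L^2 with respect to time gives x*dx/dt + y*dy/dt = 0, so dx/dt = -(y/x)*dy/dt.

```python
import math

# Ladder of length L leans on a wall; the top is at height y and slides
# down at rate dydt (negative for downward).  From x**2 + y**2 == L**2,
# differentiating in time:  x*dx/dt + y*dy/dt = 0  =>  dx/dt = -(y/x)*dy/dt
def bottom_rate(L, y, dydt):
    x = math.sqrt(L * L - y * y)   # distance of the foot from the wall
    return -(y / x) * dydt

# Example (my numbers): a 5 m ladder with its top at 4 m, sliding down at
# 1 m/s, pushes the foot outward at 4/3 m/s.
print(bottom_rate(5.0, 4.0, -1.0))
```
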
 

neptune

Member
3.0000000000000001^2-3^2=0.00000000000000000
according to my calculation 3.0000000000000001^2-3^2=0.0000000000000006

but you gave me a new topic to look into (analytical and numerical derivatives), which I never knew about earlier; it seems to me that what I was analyzing was correct.

analytical derivative is exact while calculating the numerical derivative is not exact
Why do you call analytical as exact when it ignores
part in
and gives dA =0.6 as opposed to 0.61 given by numeric method.

The question is still unanswered: why do we need to create a formula for the slope of the tangent in the first place?
Is it because the derivative of a function gives us the slope everywhere along the function more readily,
instead of the slow process of finding the slope at every point numerically and then fitting an equation to it?
 

MrAl

Well-Known Member
Most Helpful Member
according to my calculation 3.0000000000000001^2-3^2=0.0000000000000006

but you gave me a new topic to look into (analytical and numerical derivatives), which I never knew about earlier; it seems to me that what I was analyzing was correct.


Why do you call the analytical one exact when it ignores the dx^2 part in 2*x*dx + dx^2 and gives dA = 0.6, as opposed to the 0.61 given by the numeric method?

The question is still unanswered: why do we need to create a formula for the slope of the tangent in the first place?
Is it because the derivative of a function gives us the slope everywhere along the function more readily,
instead of the slow process of finding the slope at every point numerically and then fitting an equation to it?
Hi,

The number you get when you square that value will vary depending on which calculator or number-cruncher algorithm you use. If you use the normal CPU floating-point precision, I believe you get the number I quoted, but if you use a program that computes extra digits of precision you will get another number; make the increment smaller still, though, and you should see the same thing happen again. The problem with using anything other than the CPU's floating-point unit alone is that the calculations take a lot longer. Using the CPU hardware itself is the fastest calculation possible.

Analytical is called exact because when we allow the increment to become infinitesimally small, we get the exact slope at the point x, and we can ONLY make the increment infinitesimally small when we do it analytically, because in a numerical sense an "infinitesimally small" number is just zero, not really an infinitesimally small number.
There are two things to look at:
1. The precision of the calculation gets better and better with a smaller and smaller increment.
2. #1 above breaks down when we try to exceed the numerical precision of the number algorithm we are using.

#1 happens for a good reason: when we make the increment smaller, the second sample is closer to the target x, which in our examples was 3. #2 kicks in at some point because we eventually lose decimal digits, and so the subtraction y2-y1 actually becomes zero rather than just tending toward zero.

The analytical result tells us the actual derivative right at the target x value, which was 3 in our examples, while any attempt to get this with numerical calculations eventually breaks down unless the order of the method is equal to the order of the expression. For example, a first order method to calculate the derivative of x^2 might fail at some point, but a second order method might provide an exact value. That same method will not work as well on x^3 however because that would require a higher order method.

Analytical derivatives are not that hard to calculate, so you should try to learn the methods for doing so. As you probably know, the rule for exponents when y=x^n is, for example:
dy/dx=n*x^(n-1)

Not too hard to do right?
There are rules for more complicated expressions too.

Numerical methods are great and should be studied too, but they do have limits of application.

The one sided derivative is:
(f(x+h)-f(x))/h

The two-sided (central difference) derivative is:
(f(x+h)-f(x-h))/(2*h)
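The two formulas are easy to compare numerically (my own sketch):

```python
# One-sided and central-difference estimates of d(x**3)/dx at x = 2
# (exact answer: 12).  The one-sided error is O(h); the central error
# is O(h**2), so it is far more accurate for the same h.
def one_sided(f, x, h):
    return (f(x + h) - f(x)) / h

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

cube = lambda x: x ** 3
h = 1e-3
print(one_sided(cube, 2.0, h))   # error about 3*x*h = 6e-3
print(central(cube, 2.0, h))     # error about h**2 = 1e-6
```
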
 

neptune

Member
The analytical result tells us the actual derivative right at the target x value, which was 3 in our examples
The one sided derivative is:
(f(x+h)-f(x))/h
It is also an approximation, as we can never put h=0 or the notion of change would go haywire; better to call it an infinitesimal averaging of the slope of the curve.
 

MrAl

Well-Known Member
Most Helpful Member
It is also an approximation, as we can never put h=0 or the notion of change would go haywire; better to call it an infinitesimal averaging of the slope of the curve.

Well, we don't really have to invent anything new here to understand (f(x+h)-f(x))/h.
That's an approximation, but in the limit it's the exact derivative:
limit h-->0 (f(x+h)-f(x))/h = dy/dx
where y=f(x).
So in the limit the one-sided derivative becomes exact.

If you look up how limits work you can take that limit and see what happens.
For example, for x^2 we have:
y=x^2
((x+h)^2-x^2)/h = (2*h*x+h^2)/h = 2*x+h
limit h-->0 (2*x+h) = 2*x
and we are left with the exact derivative.

Notice that we could not put h equal to zero in the original NUMERICAL expression, but we could in the limit after using the actual function.

For x^3 we get a similar situation: the difference quotient is 3*x^2+3*h*x+h^2, and taking the limit as h goes toward zero we get 3*x^2, which is again exact.
Sometimes the limit is harder to find, but that's the way it works.
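If SymPy is available, the same limit can be taken symbolically (a sketch; `sympy.limit` is the standard call for this):

```python
import sympy as sp

# Take the limit symbolically: the difference quotient of x**2 simplifies
# to 2*x + h, and letting h -> 0 leaves exactly 2*x.
x, h = sp.symbols('x h')
quotient = ((x + h) ** 2 - x ** 2) / h
print(sp.expand(quotient))        # equals 2*x + h (printed order may differ)
print(sp.limit(quotient, h, 0))   # 2*x

# Same for x**3: the limit is 3*x**2.
print(sp.limit(((x + h) ** 3 - x ** 3) / h, h, 0))
```
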
 

Kerim

Member
I guess that someone who doesn't like the notion of the derivative of a function at a point probably won't like the notion of the derivative of the whole function either. But perhaps I am wrong.

For instance, what about the dx (also close to zero but not zero) in finding the integral of a function?
 

MrAl

Well-Known Member
Most Helpful Member
I guess that someone who doesn't like the notion of the derivative of a function at a point probably won't like the notion of the derivative of the whole function either. But perhaps I am wrong.

For instance, what about the dx (also close to zero but not zero) in finding the integral of a function?
Hi,

Yes, that's not a bad point. One might as well ask why we use 'x' in y=2*x+1 instead of just a number like 1, 2, 3, 4.5, etc. for x, since a plain number would be so much easier :)
Mathematical theory is more general than a single numerical expression, so it is more widely applicable.
 

Spuriosity

New Member
The formal definition of a derivative, as you've pointed out, is

f'(a) = lim_{h->0} (f(a+h) - f(a))/h

However, the subtle point here is what the limit sign actually means.
Formally (this is 1st-year uni level Real Analysis, and is not easy to get your head around), the limit as x tends to a point a is defined as follows:

Let f(x) be an arbitrary function. We want to prove that this function has limit L.

Let an arbitrary number ε > 0 be given.
If we can find a δ such that for any x in the interval
a - δ < x < a + δ (with x ≠ a)
we have |f(x) - L| < ε,

then we say that

lim_{x->a} f(x) = L.

Any good Real Analysis textbook will explain this a lot better than I have, but in terms of a general description:

We're choosing a 'target precision' ε, and we figure out how close x has to be to a in order to get f(x) within ε of L.

For a 'simple' example, consider proving that the limit of f(x) = -x as x -> 1 is -1.
Let ε > 0 be given; we want to prove that f(x) = -x tends to -1 as x tends to +1.

Then |f(x) - (-1)| = |f(x) + 1| = |1 - x|.
If |x - 1| < δ, then |1 - x| = |x - 1| < ε,
so it suffices to choose any δ ≤ ε.

Don't worry, it took me a whole semester to wrap my head around it.

Anyway, the point of all this arcane pure maths is that the limit has to be the same regardless of which direction you approach the function from - it has to hold for every x in a - δ < x < a + δ.

If we had a less nice function, like the Heaviside step function H(x) = 0 if x < 0, 1 if x > 0, 1/2 at x = 0, and tried to take the limit as x -> 0, we run into a problem. Approaching x = 0 from the left (negative) side, H(x) tends to 0, but approaching from the right says that H(x) tends to 1. No matter what δ > 0 we choose, H(0 - δ) = 0 but H(0 + δ) = 1. We require, as we decrease δ, that H(x) get closer to a single limiting value, but no convergence is seen.

In plainer terms, the limit as x-> 0 of H(x) does not exist.
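Spuriosity's Heaviside example, checked numerically (a small sketch of mine, with H defined as in the post):

```python
# Heaviside step: the left and right limits at 0 disagree (0 vs 1),
# so lim x->0 H(x) does not exist.
def H(x):
    if x < 0:
        return 0.0
    if x > 0:
        return 1.0
    return 0.5

for delta in (1e-1, 1e-3, 1e-6):
    print(delta, H(-delta), H(+delta))   # left stays 0.0, right stays 1.0
```
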

What does this have to do with derivatives?

Your question seems to stem from worrying that the derivative at a point a, f'(a), is the gradient 'just to the right of' x = a; but because the derivative is a limit, it is genuinely the value at the point. When you differentiate, you shrink h toward zero from both sides: i.e. we require that

lim_{h->0+} (f(a+h) - f(a))/h = lim_{h->0-} (f(a+h) - f(a))/h

is satisfied. If the above equation is not satisfied, the derivative is not defined.

Isn't analysis fun?
 

Ratchit

Well-Known Member
The formal definition of a derivative, as you've pointed out, is

f'(a) = lim_{h->0} (f(a+h) - f(a))/h

However, the subtle point here is what the limit sign actually means.
Formally (this is 1st-year uni level Real Analysis, and is not easy to get your head around), the limit as x tends to a point a is defined as follows:

Let f(x) be an arbitrary function. We want to prove that this function has limit L.

Let an arbitrary number ε > 0 be given.
If we can find a δ such that for any x in the interval
a - δ < x < a + δ (with x ≠ a)
we have |f(x) - L| < ε,

then we say that

lim_{x->a} f(x) = L.

Any good Real Analysis textbook will explain this a lot better than I have, but in terms of a general description:

We're choosing a 'target precision' ε, and we figure out how close x has to be to a in order to get f(x) within ε of L.

For a 'simple' example, consider proving that the limit of f(x) = -x as x -> 1 is -1.
Let ε > 0 be given; we want to prove that f(x) = -x tends to -1 as x tends to +1.

Then |f(x) - (-1)| = |f(x) + 1| = |1 - x|.
If |x - 1| < δ, then |1 - x| = |x - 1| < ε,
so it suffices to choose any δ ≤ ε.

Don't worry, it took me a whole semester to wrap my head around it.

Anyway, the point of all this arcane pure maths is that the limit has to be the same regardless of which direction you approach the function from - it has to hold for every x in a - δ < x < a + δ.

If we had a less nice function, like the Heaviside step function H(x) = 0 if x < 0, 1 if x > 0, 1/2 at x = 0, and tried to take the limit as x -> 0, we run into a problem. Approaching x = 0 from the left (negative) side, H(x) tends to 0, but approaching from the right says that H(x) tends to 1. No matter what δ > 0 we choose, H(0 - δ) = 0 but H(0 + δ) = 1. We require, as we decrease δ, that H(x) get closer to a single limiting value, but no convergence is seen.

In plainer terms, the limit as x -> 0 of H(x) does not exist.

What does this have to do with derivatives?

Your question seems to stem from worrying that the derivative at a point a, f'(a), is the gradient 'just to the right of' x = a; but because the derivative is a limit, it is genuinely the value at the point. When you differentiate, you shrink h toward zero from both sides: i.e. we require that

lim_{h->0+} (f(a+h) - f(a))/h = lim_{h->0-} (f(a+h) - f(a))/h

is satisfied. If the above equation is not satisfied, the derivative is not defined.

Isn't analysis fun?
Fun? No. Instructive? Yes. As long as there is an interval around a point where f(x) is always defined, and |h| is always > 0, the above definition of a derivative will be valid, because all three terms, f(x+h), f(x), and h, will have valid and definite values. The value of the derivative is the limit as h approaches zero; the limit is the value an expression approaches as a variable within the expression approaches a particular value.

Ratch
 

MrAl

Well-Known Member
Most Helpful Member
Hi,

Yes, minor typo: it should be (f(x+h)-f(x))/h. However, that is not my reason for the reply.

It's a little interesting that if we instead use an alternate definition:
d f(x)/dx = limit (f(x+h)-f(x-h))/(2*h) as h goes toward zero

then we don't have as much of a problem, because we must have plus and minus h to begin with, which implies there are already values at those two points. If we didn't have values at those two points it couldn't be continuous :)
 

Spuriosity

New Member
All of the above definitions can be shown to be exactly equivalent with some careful arguments. The most 'correct' definition of the derivative is the slope of the tangent line at a point a - i.e. in any neighborhood of the point a,

f'(a) is the number that satisfies the following equation:

f(x) = f(a) + f'(a)(x - a) + ξ(x)

(remember, f'(a) is the derivative evaluated at a, i.e. a constant with respect to x)
where ξ(x), the error term, tends to zero faster than (x - a) as x tends to a.

We need to keep the ξ(x) in there to account for the fact that this tangent line differs from f(x) at all values except for a.

I should stress that this isn't an application of differentiation, but a definition of it. (f'(a) is defined to be the number that satisfies this equation.)

Evaluating at x = h + a, we find

f(a + h) = f(a) + f'(a)h + ξ(a + h)

Rearranging,

(f(a + h) - f(a))/h = f'(a) + ξ(a + h)/h

Taking the limit of both sides (remembering f'(a) is just a constant),

lim_{h->0} (f(a + h) - f(a))/h = f'(a) + lim_{h->0} ξ(a + h)/h

So we see that, as long as

lim_{h->0} ξ(a + h)/h = 0     (1)

we have the familiar formula for the derivative, but with a here replacing x.

Similarly, we are perfectly entitled to evaluate at x = a - h:

f'(a) = lim_{h->0} (f(a) - f(a - h))/h

The slightly weirder (symmetric) formula follows by averaging the two one-sided forms:

f'(a) = lim_{h->0} (f(a + h) - f(a - h))/(2h)

However, you can't always rely on the limit (1) being true. For example, for f(x) = |x|:
Assume

|x| = f(0) + f'(0)x + ξ(x), i.e. ξ(x) = |x| - f'(0)x

then ξ(x) is a piecewise-defined function - it's (-f'(0) - 1)x for negative x and (-f'(0) + 1)x for positive x.

(Sorry for the clumsy notation, but I can't massage the LaTeX here to do anything nice.)

Like the Heaviside up above, the quotient ξ(x)/x is discontinuous at x = 0 regardless of the value of f'(0), so the limit

lim_{x->0} ξ(x)/x

does not exist.

For this reason, |x| is not differentiable at x=0.
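The |x| example can be seen numerically too (my sketch): the one-sided difference quotients at 0 sit at +1 and -1, so the limit defining the derivative does not exist. Note the central-difference formula returns 0 there, quietly hiding the kink.

```python
# Difference quotients of f(x) = |x| at x = 0.  The right-hand quotient is
# always +1 and the left-hand one is always -1, so the two-sided limit
# (and hence the derivative) does not exist.
f = abs
for h in (1e-1, 1e-3, 1e-6):
    right = (f(0 + h) - f(0)) / h
    left = (f(0 - h) - f(0)) / (-h)
    central = (f(0 + h) - f(0 - h)) / (2 * h)  # always 0: hides the kink
    print(h, right, left, central)
```
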
 

Ratchit

Well-Known Member
All of the above definitions can be shown to be exactly equivalent with some careful arguments. The most 'correct' definition of the derivative is the slope of the tangent line at a point a - i.e. in any neighborhood of the point a,

f(x) = f(a) + f'(a)(x - a) + ξ(x)

(remember, f'(a) is the derivative evaluated at a, i.e. a constant with respect to x) where ξ(x), the error term, tends to zero as x tends to a.
How can the above equation be a definition of f'(x) when it includes f'(a)? What is the definition of f'(a)? Isn't that a circular definition?

I should stress that this isn't an application of differentiation, but a definition of it.
Sure had me fooled.

Substituting x = x + a, we find
How can the sum of x plus another quantity be equal to itself?

What is the definition of ξ(x + a)?

Rearranging,



Taking the limit of both sides (remembering f'(a) is just a constant)




So we see that, as long as
(*)
we have the familiar formula for the derivative, but with x here replacing h and a here replacing x
i.e. let x= h to get

f' =



Similarly, we are perfectly entitled to let x = -h
f' =


the slightly weirder (symmetric) formula is derived as follows:





This is sometimes tricky to prove! For example, for f(x) = |x|:
Assume

then,


i.e. ξ(x) is a piecewise-defined function - it's (-f'(0) - 1)x for negative x and (-f'(0) + 1)x for positive x. Like the Heaviside up above, the quotient ξ(x)/x is discontinuous at x=0 regardless of the value of f'(0) - the limit (*) does not exist.

For this reason, |x| is not differentiable at x=0.

This also gives some insight into where Taylor polynomials come from, but this post is already too long.
A very long and confusing presentation.

Ratch
 
