Greenhorn question


legacy_programmer

New Member

I finally decided to take the plunge into hobby electronics. After reading a couple texts and some web surfing, I'm still confused on more than a few things.

I understand that negative flows to positive, but sometimes, as with capacitors and PNP transistors (a whole other beast), it seems to contradict this. For example, why create a power supply that outputs a positive voltage? Can you 'output' positive voltage, and if so, what about the negative-to-positive thing? Along those lines, how can you use a capacitor in an AC circuit if it's polarized? Doesn't the alternating current mean it's never consistently positive?

Another thing, does it matter where a resistor is in a series circuit? Does it effectively reduce current or voltage around the whole circuit, or only between the resistor and the positive end of the power supply? Does the load need to be 'after' the resistor?

Obviously I haven't purchased a DVM to do any 'hands on' learning so I'm approaching this from theory alone at this point. Any help is greatly appreciated!

Thanks
 
Hi there, it seems you're under a common misunderstanding about - and +, just as I was. Let me try to explain this as simply as possible. Everything is relative. A 9 volt battery has +9V with respect to the - terminal of the battery. The - terminal of the battery is considered ground. If the + terminal of the battery were considered ground, then the - terminal would be considered -9 volts.
 
Ok, I can see how that makes sense. So in that respect, can you read a circuit diagram following either + to - or - to + and have it make sense? Or should you always try to follow - to +?

I guess the concept of 'ground' has me confused then too. If ground is the most negative part of a circuit, then does current flow from ground to positive? I'm sure that's a ridiculous statement since ground is not a power source.

Maybe I should go back to being a rocket scientist...
 
Here is a pretty good discussion of **broken link removed**, which is the cause of your confusion.

Ground isn't necessarily the most negative point in a circuit. It is just the reference node. It usually is the common node for the input voltage, the output, and the power supply(s). The power supply, if there is only one, can be negative relative to ground.
 
Benjamin Franklin assumed that current flows from positive to negative. That convention is followed today even though we know that electrons flow from negative to positive. However, it doesn't matter which convention you use as long as you are consistent. You will find it a lot less confusing if you follow the usual convention and let current flow from positive to negative.
 
I remember my electronics teacher banging it into our heads that "current flows from negative to positive." Blah blah blah, he's right, but now that I've done the whole college thing I know that conceptually, yet I still solve circuits assuming current comes out of the positive terminal. Conventional current flow is the way it is taught; however, as they've said above, it doesn't matter as long as you're consistent.
 
I don't know why people use the electron current flow (neg to pos). Not only does it have unnecessary sign changes, but it's not even "right" mathematically. The "current" value you use in calculations really isn't defined as the flow of electrons, but as the flow of "charge".

Since the charges are negative, when you multiply the charge by the average electron velocity and integrate over the cross-sectional area of your conductor, you get a current vector that points in the opposite direction from the avg. electron velocity. Thus, the current DOES flow from positive to negative, and treating it the other way is actually wrong.
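In the usual textbook notation (the symbols n, e, and v_d below are the standard ones, not taken from the post above), the conduction current density is

$$\mathbf{J} = n(-e)\,\mathbf{v}_d, \qquad I = \int_S \mathbf{J}\cdot d\mathbf{A},$$

so the current density, and hence the current, points opposite to the average electron drift velocity v_d.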

The thing that seems to be confusing you right now, though, is the definition of voltage. First, voltage can only exist between two points--when you see a point in a circuit diagram that has a certain voltage (like a power supply connection that's labeled "+12V"), that's actually the voltage between that point and "ground" (note that you can call ANY point ground, and the node calculations will still come out the same--its only purpose is to make the calculations easier).

Voltage itself is actually the amount of ENERGY (in watt-seconds) gained or lost for every coulomb of charge that moves from one point to the other (it's actually more complicated than that, but this gives the basic idea).
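Written as a formula (this is just the standard definition, restating the description above):

$$V_{AB} = \frac{\Delta W}{\Delta Q}, \qquad 1~\text{volt} = 1~\frac{\text{joule}}{\text{coulomb}},$$

where ΔW is the energy gained or lost by a charge ΔQ moving from point A to point B.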

This is easy to think of if you compare it to vertical distance. If you take an object weighing 1 newton, and drop it, by the time it falls 1 meter it will have gained 1 joule of energy (1J = 1 watt-second). If it falls 1m and hits the ground, 1J of energy is released on impact (either bouncing the object back up, turning to heat, denting the ground, etc). If you lift it 1m above ground and drop it into a hole 1m deep, then it gained 2J of energy. If you throw it from the ground onto a 3m-high roof, then it lost 3J of energy during its flight. If you throw it hard enough that it reaches a height of 4m before falling back to the roof, then it absorbed 4J of energy at launch, lost 3J in transit and released 1J at impact. In this case, the energy is actually being "stored" in the combined gravitational field of the earth and the object.

In circuit theory, if a 12-volt battery pulls 1 coulomb of charge into its negative terminal and releases it from the positive terminal, that charge gained 12J of energy (this energy is "stored" in the electric field generated between the electrons that were moved and whatever atoms they were pulled away from). If it "falls" back down to the negative terminal through a conductive path, it has to release 12J of energy into that path (be it heating a resistor, charging a capacitor, energizing an inductor, turning a motor, etc).

This can be applied to a real circuit as follows:

Say you have a 12V battery with a 3-ohm resistor connected across the terminals. Ohm's law states that a resistor will generate 1 volt of reverse voltage for every amp of current and every ohm of resistance. Since that battery always has 12V across it (i.e., it will freely give 12J of energy to every coulomb of charge that manages to pass through it), the resistor will have to generate exactly 12V of back emf (releasing 12J of energy for every coulomb of charge being forced through it). Otherwise, there would be energy magically appearing / disappearing, and a pair of points with two different voltages between them.

The only current that will satisfy this and, thus, the only current that can possibly exist in this circuit (equivalently, the current that HAS to exist in this circuit) is 4 amps (4A * 3ohm = 12V). This equates to 4 coulombs of charge, per second, being lifted up 12V and then filtering down through the resistor, losing its energy on the way. Thus, the resistor is releasing 4 * 12 J of energy per second as heat (48J / second), which is equal to 48 watts of power.
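As a quick numeric check of the example above (a minimal sketch in Python; the variable names are just for illustration):

Code:
# 12 V battery across a 3-ohm resistor, as in the example above
V = 12.0     # battery voltage, volts (joules per coulomb)
R = 3.0      # resistance, ohms
I = V / R    # Ohm's law: current, amps (coulombs per second)
P = V * I    # power dissipated in the resistor, watts (joules per second)
print(f"I = {I} A, P = {P} W")   # prints: I = 4.0 A, P = 48.0 W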

Hopefully that was somewhat easy to follow, and gave some idea of what voltage / current really are. In terms of what you'll see in books, the stuff above is basically a simplified explanation / partial derivation / qualitative proof of Kirchhoff's voltage law.
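For reference, Kirchhoff's voltage law in its standard form says the voltages around any closed loop sum to zero:

$$\sum_k V_k = 0.$$

In the battery-and-resistor example, the +12 V gained through the battery and the 12 V dropped across the resistor add to zero around the loop.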
 
Is it just me, or is this mostly pointless from a practical point of view?

I don't see as it makes any difference what is flowing in which direction, essentially it's circular anyway.

If I'm working on an NPN circuit (+ve to top) or a PNP circuit (-ve to top), I consider it EXACTLY the same - it makes no difference which way any particles too small to see might be moving - the maths works the same regardless.
 
It seemed to me that the OP was having trouble with the concepts of voltage and current--something that a lot of people struggle with. Trying to move forward in electronics without a firm grasp on those fundamental ideas is like trying to drive a car without starting it first...
 
Nigel Goodwin said:
Is it just me, or is this mostly pointless from a practical point of view?

As a general rule I agree, but the OP is trying to get the basics right in his head. Once he has done that, he can use electron flow or conventional current, whichever suits his thought processes best.

40+ years ago I was trying to understand the operation of thermionic valves using conventional current, it did not make sense. Once I had got the idea of electron flow, I started to understand.

OK, I know that valves are a bit passé, but we still use cathode ray tubes and magnetrons, which use the same principles!

JimB
 
JimB said:
40+ years ago I was trying to understand the operation of thermionic valves using conventional current, it did not make sense. Once I had got the idea of electron flow, I started to understand.

It makes sense with valves, because their operation is so simple; all you really need to know is that like charges repel, opposite charges attract, and that electrons have a negative charge.
 
legacy_programmer:

If you're in the U.S. I'd suggest the "Basic Electronics" and "Basic Digital Electronics" found at Radio Shack, written by Alvis J. Evans. They're easy to understand for basic stuff, and even get into a bit more complex subjects. Good luck!
 
Nigel Goodwin said:
Is it just me, or is this mostly pointless from a practical point of view?

I don't see as it makes any difference what is flowing in which direction, essentially it's circular anyway.

Phew, thought I was losing my mind. Glad to see someone else never thinks about the way in which current is flowing :lol:
 
Thank you all, I appreciate the explanations and encouragement. Maybe I'm just overthinking it to a degree. I understand Ohm's law, current vs voltage vs resistance, and the basic AC cycle and frequency (hertz).

I guess what threw me was the arbitrary designation of where 'common' could be placed in a circuit. I like one previous responder's comments regarding 'relative potential', which makes sense (i.e., one node being more 'negative' than another).

The fact that you can read a wiring diagram from + to - or the other way around is intriguing and an eye opener :!:

:?: So, how does that work with polarized capacitors, especially with an AC circuit? Wouldn't the negative half-cycle prohibit the use of a polarized capacitor, or do you use diodes to manage the polarity?

I may be speaking way over my head as I'm just getting familiar with the basic components. Bear with me if my questions are frustrating.

Thanks,
LP
 
In an AC circuit, 2 polarized capacitors are connected + to + so one of them is always properly biased, thus limiting the current. Commercial non-polarized caps are built that way, I believe.
 
Russlk said:
In an AC circuit, 2 polarized capacitors are connected + to + so one of them is always properly biased, thus limiting the current. Commercial non-polarized caps are built that way, I believe.

Interesting, though I assume you must still need a diode to block the positive half-cycle from entering the negatively polarized end of the capacitor?
 
A diode is not needed; a polarized cap is like a diode anyway (a diode with a lot of capacitance). A diode is a capacitor also, although a very small one, and the capacitance varies with the applied reverse voltage.
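For what it's worth, two equal capacitors in series combine to half the value of each, so a back-to-back non-polar pair made from two electrolytics ends up with half the capacitance of one of them (the 100 uF figure below is purely illustrative):

Code:
# Series combination of two capacitors: C = C1*C2 / (C1 + C2)
C1 = 100e-6                    # first 100 uF electrolytic (illustrative value)
C2 = 100e-6                    # second 100 uF electrolytic, connected back-to-back
C_total = C1 * C2 / (C1 + C2)  # effective non-polar capacitance
print(f"{C_total * 1e6:.1f} uF")   # prints: 50.0 uF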
 
legacy_programmer said:
So if that's the case, then why do they polarize the capacitor?

They don't deliberately polarise the capacitor - it's the method of manufacture that makes them polarised. The value of a capacitor is directly proportional to the size of its plates, and inversely proportional to the distance between them. So to make the capacitor a larger value you can either make the plates larger, or make the distance between them smaller.

In order to get large value capacitors (in a reasonably small space) you have to make the distance between them smaller. With this you run into problems with how thin the insulator has to be, and problems with it breaking down - electrolytic capacitors use an electrolyte between the plates (a liquid chemical) which forms an extremely thin insulating oxide layer when you apply a DC voltage of the correct polarity across it, which is why they only work one way round.

Rather a crude description, but hopefully it gives you an idea of why.
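The relationship described above is the standard parallel-plate formula (standard symbols, not from the post itself):

$$C = \frac{\varepsilon_0 \varepsilon_r A}{d},$$

where A is the plate area, d the plate separation, and ε_r the relative permittivity of the insulator: double the area and the capacitance doubles; halve the spacing and it doubles again.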
 