
NiCad Battery Testing

Nowhere in this thread does it say how low the battery discharge current is. 10A? 20A? The BUZ101 MOSFET selected is fully turned on with a Vgs of 10V; then it can conduct 21A.
The size and mAh rating of the battery cells are also not mentioned.
 
No, but per the original circuit posted in #1, the circuit is set up to maintain a constant voltage of 1V across the 4Ω source resistor. That implies that the drain current of the NFET is 0.25A and the gate of the NFET only needs to be at Vth plus a small ΔV to make that happen.

As long as the Vth of the NFET is 2V or less, then even an LM358-type opamp has enough positive drive to turn on the NFET sufficiently. Here is a simulation using a power NFET with a Vth of 2V. The LM358 compares the voltage at the source, V(s), to a 1V reference voltage. The output of the opamp makes V(g) whatever it needs to be in order to force 0.25A through R4, so that the voltage at the non-inverting input just matches the one at the inverting input.

I plot V(s) [green], V(g) [red], -I(V2) [the battery current, blue] and the dissipation in M1 [violet] as a function of V(Vb), which shows that the circuit can be used with any battery voltage from 1V to ~20V, provided you pay attention to heatsinking M1. This means that the circuit is usable with a single NiCd cell, although it will only discharge it down to 1V.
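If it helps, the dissipation claim is easy to sanity-check outside the simulator. Here is a minimal Python sketch (my own illustration, assuming the loop regulates perfectly; it ignores the opamp and FET details entirely):

```python
# Dissipation in M1, assuming the loop holds V(s) at exactly 1V so the
# battery current is a constant 0.25A regardless of battery voltage.
VS = 1.0       # volts across the 4-ohm source resistor R4
R4 = 4.0       # ohms
I = VS / R4    # constant discharge current, 0.25A

for vb in (1.2, 2.4, 7.2, 12.0, 20.0):    # example battery voltages
    p_m1 = (vb - VS) * I                   # drain-to-source drop times current
    print(f"Vb = {vb:5.1f} V  ->  P(M1) = {p_m1:5.2f} W")
```

At 1.2V the NFET dissipates only 50mW; at 20V it is nearly 5W, hence the heatsinking caveat.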

When discharging a multiple-cell NiCd pack, I would stop the test when the battery voltage drops to less than 1.05*n volts, where n is the number of cells. As a multiple-cell pack ages, it is no better than its weakest cell: as the first cell craters, the load current causes the voltage across that bad cell to reverse, rapidly reducing the pack's total voltage.
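For concreteness, the stop threshold per pack size is just 1.05V times the cell count; a trivial sketch (the helper name is mine):

```python
# Stop-test threshold for an n-cell pack: 1.05V per cell.
def cutoff_voltage(n_cells: int) -> float:
    return 1.05 * n_cells

for n in (1, 2, 6, 10):
    print(f"{n:2d} cells: stop the test below {cutoff_voltage(n):.2f} V")
```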

[Attachment: 34.png (simulation schematic and plots)]
 
Only 250mA? From little AAA Ni-Cad cells? Most of the AAA Ni-MH cells in my solar garden lights can produce 8A.
 
Only 250mA? From little AAA Ni-Cad cells? Most of the AAA Ni-MH cells in my solar garden lights can produce 8A.
But for how long? The TS stated he wanted to discharge his cells at a 0.1C rate, which would mean that he is discharging 2500mAh cells. Read his post!
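To spell out that arithmetic (a trivial sketch; it just inverts the C-rate definition):

```python
# A 250mA discharge taken as a 0.1C rate implies the cell capacity:
i_discharge_ma = 250.0
c_rate = 0.1
print(f"{i_discharge_ma / c_rate:.0f} mAh")   # 2500 mAh
```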
 
I haven't used Ni-Cad cells for 30 years but I think D size were 2500mAh. My new AA size Ni-MH cells are 2500mAh.
 
No, but per the original circuit posted in #1, the circuit is set up to maintain a constant voltage of 1V across the 4Ω source resistor. That implies that the drain current of the NFET is 0.25A and the gate of the NFET only needs to be at Vth plus a small ΔV to make that happen.

As long as the Vth of the NFET is 2V or less, then even an LM358-type opamp has enough positive drive to turn on the NFET sufficiently. Here is a simulation using a power NFET with a Vth of 2V. The LM358 compares the voltage at the source, V(s), to a 1V reference voltage. The output of the opamp makes V(g) whatever it needs to be in order to force 0.25A through R4, so that the voltage at the non-inverting input just matches the one at the inverting input.

I plot V(s) [green], V(g) [red], -I(V2) [the battery current, blue] and the dissipation in M1 [violet] as a function of V(Vb), which shows that the circuit can be used with any battery voltage from 1V to ~20V, provided you pay attention to heatsinking M1. This means that the circuit is usable with a single NiCd cell, although it will only discharge it down to 1V.

When discharging a multiple-cell NiCd pack, I would stop the test when the battery voltage drops to less than 1.05*n volts, where n is the number of cells. As a multiple-cell pack ages, it is no better than its weakest cell: as the first cell craters, the load current causes the voltage across that bad cell to reverse, rapidly reducing the pack's total voltage.

[Attachment 104864]

I was working from a breadboard circuit. I have returned to simulation to understand what is going on. Unfortunately, Mike, I am not following your post, probably because I can't reconcile your simulation method with mine.

For 1V, here is my simulation.
[Attachment: BatTest1v.jpg]

The opamp output is 2.55V for a single cell (threshold = 1V).

For two cells, the output jumps to 3.55V.
[Attachment: BatTest2v.jpg]

For 6 cells, the opamp output jumps to 7.6V.
[Attachment: BatTest6v.jpg]


Each of these data points just maintains the threshold of 1.0V*n cells (n = 1, 2, or 6 as shown).

This is where I restarted the post: trying to understand why the opamp output was increasing as the number of cells increases. Based on some of the points prior to the simulation, as the number of cells increases, the setpoint increases and thus Vsource increases, thereby increasing Vgate.

I think these simulations show what I mean, which does not seem to me to correlate with:

which shows that the circuit can be used with any battery voltage from 1V to ~20V, provided you pay attention to heatsinking M1.

In the event of trying to test my 9.6V, 10.8V, and 12.0V tool battery packs, I can't get the opamp output to go high enough.


P.S. I increased Vcc to 10V to get the 6-cell simulation.
 
In your simulation, the current from the battery is about 250mA for a single cell, but the current (and the voltage from the opamp to the gate) are wrongly increased to a wrong battery current of about 1500mA for more battery cells. The battery current has increased because you wrongly increased the input voltage, which was supposed to stay at 1V, to about 7.6V. The battery cells are supposed to be in series, but do you have them in parallel instead? Do you understand the difference between series and parallel?
 
The independent variable in my simulation is the simulated battery voltage Vb (the horizontal X-axis of the plot). The dependent variables I plotted are the gate voltage V(g), the source voltage V(s), the battery current -I(Vb) (which is flat regardless of Vb), and the power dissipated in the NFET (the big expression), which increases linearly as the battery voltage increases.

I did a .DC analysis with a swept battery voltage Vb for all voltages from zero to 20V in steps of 0.1V. Yours looks like a .TRAN transient simulation, which is not relevant.
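To make the difference concrete, here is the same sweep done by hand in Python (a sketch under an ideal-regulation assumption; the real .DC analysis solves the opamp/FET loop at every point):

```python
# Hand-rolled version of the .DC analysis: sweep Vb from 0 to 20V in 0.1V steps.
# Assumes ideal regulation: once Vb exceeds the 1V across R4, the loop holds
# the current at 0.25A; below that, this crude model falls out of regulation.
VS, R4 = 1.0, 4.0
for k in range(201):
    vb = 0.1 * k
    i = min(vb, VS) / R4               # flat at 0.25A once Vb > 1V
    p_m1 = max(vb - VS, 0.0) * i       # rises linearly with Vb
    if k % 50 == 0:
        print(f"Vb={vb:5.1f}V  I={i:.3f}A  P(M1)={p_m1:.2f}W")
```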
 
...
In the event of trying to test my 9.6V, 10.8V, and 12.0V tool battery packs, I can't get the opamp output to go high enough.


P.S. I increased Vcc to 10V to get the 6-cell simulation.
With the NFET I used in my simulation, the output pin of the opamp never has to go above about 3.5V, regardless of how many cells are in the pack. The output pin of the opamp only has to get to V(s) + Vth + Δ, where Δ is a few hundred mV. V(s) is 1V, because 0.25A * 4Ω = 1V.
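A sketch of that headroom arithmetic (the 0.3V overdrive value is assumed purely for illustration):

```python
# Worst-case opamp output: V(s) + Vth + a few hundred mV of overdrive.
VS, VTH, DELTA = 1.0, 2.0, 0.3   # volts; DELTA is an assumed overdrive value
print(f"Opamp output needed: {VS + VTH + DELTA:.1f} V")   # about 3.3V
```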

What is the Vth of the NFET you are using? If it is >3V, then you cannot power the opamp with only 5V; it should be powered from an 8V or 10V supply. If the Vth of the NFET is about 2V, like the one I used in my sim, then a 5V opamp supply is sufficient. If you change the opamp supply voltage, you will have to recalculate the voltage divider that provides the 1V reference voltage to the opamp.
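As a worked example of recalculating the divider, here is one way to do the arithmetic (a sketch; R1, R2, and the 1k value are placeholders, not necessarily the designators in the posted schematic):

```python
# Recompute the reference divider for a new opamp supply.
# Vref = Vcc * R2 / (R1 + R2); pick R2, then solve for R1.
def r1_for_reference(vcc: float, vref: float, r2: float) -> float:
    return r2 * (vcc - vref) / vref

for vcc in (5.0, 8.0, 10.0):
    r1 = r1_for_reference(vcc, vref=1.0, r2=1000.0)   # R2 = 1k, chosen arbitrarily
    print(f"Vcc = {vcc:4.1f} V: R1 = {r1:6.0f} ohms for a 1V reference")
```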
 
In your simulation, the current from the battery is about 250mA for a single cell, but the current (and the voltage from the opamp to the gate) are wrongly increased to a wrong battery current of about 1500mA for more battery cells. The battery current has increased because you wrongly increased the input voltage, which was supposed to stay at 1V, to about 7.6V. The battery cells are supposed to be in series, but do you have them in parallel instead? Do you understand the difference between series and parallel?

Yes, thank you very much. I just made a mistake: I forgot to increase the load resistor. But updating the load resistor is not going to lessen the FET source voltage.

Updated simulation for 2V and an 8 ohm load.
[Attachment: BatTest2v v2.jpg]


6V and a 24 ohm load.
[Attachment: BatTest6v v2.jpg]


As you can see, there was only the smallest change in the opamp output.
 
The independent variable in my simulation is the simulated battery voltage Vb (the horizontal X-axis of the plot). The dependent variables I plotted are the gate voltage V(g), the source voltage V(s), the battery current -I(Vb) (which is flat regardless of Vb), and the power dissipated in the NFET (the big expression), which increases linearly as the battery voltage increases.

I did a .DC analysis with a swept battery voltage Vb for all voltages from zero to 20V in steps of 0.1V. Yours looks like a .TRAN transient simulation, which is not relevant.

It has been too long since I used Protel and a DC analysis, so I took the short way out and did several transient analyses. I don't see how a transient analysis is not relevant; I just have to do a lot more simulations. I can see the EXACT same information in your DC-analysis simulation of a single cell as I can in my single-cell transient. When I look at your graph, I just read it right to left, starting at Vb=1.2V.

I am simulating an MTP75N03HDL FET, which has a Vgs(th) of 1-2V. I have also increased the opamp power from 5V to 10V (which makes calculating the opamp setpoint resistors easier and gives the headroom for simulating a 6-cell discharge).
 
I plot V(s) [green], V(g) [red], -I(V2) [the battery current, blue] and the dissipation in M1 [violet] as a function of V(Vb), which shows that the circuit can be used with any battery voltage from 1V to ~20V, provided you pay attention to heatsinking M1. This means that the circuit is usable with a single NiCd cell, although it will only discharge it down to 1V.

When discharging a multiple-cell NiCd pack, I would stop the test when the battery voltage drops to less than 1.05*n volts, where n is the number of cells. As a multiple-cell pack ages, it is no better than its weakest cell: as the first cell craters, the load current causes the voltage across that bad cell to reverse, rapidly reducing the pack's total voltage.

Ah ha! I see what is missing from the DC analysis, and Audioguru's comment on my bad simulations was the key. As Audioguru pointed out, I neglected to increase the load as I increased the setpoint and cell count. The DC analysis increases the cell count (Vb), but neither increases the setpoint (Vs=n) nor increases the load resistance (which compensates for the increase in setpoint voltage to maintain the 250mA current).
 
Ah ha! I see what is missing from the DC analysis, and Audioguru's comment on my bad simulations was the key. As Audioguru pointed out, I neglected to increase the load as I increased the setpoint and cell count. The DC analysis increases the cell count (Vb), but neither increases the setpoint (Vs=n) nor increases the load resistance (which compensates for the increase in setpoint voltage to maintain the 250mA current).

Huh?
The circuit you posted in #1 of this thread and the modified circuit I simulated in post #22 don't care how many cells are in the battery pack. No load resistance has to be changed to test a pack with a different number of cells. It always discharges any pack, from a single NiCd or NiMH cell up to ~15 cells, without changing anything in the circuit, at a discharge rate of 0.25A, provided you pay attention to the power dissipated in the NFET.

The discharge rate is determined by the voltage across the NFET's source resistance (my R4) and the value of R4. The opamp and NFET make a high-power source-follower circuit that makes the voltage across R4 the same as the reference voltage.

With V(s) = 1V and R4 = 4Ω, the discharge current is 1V/4Ω = 0.25A. If you want to increase the discharge rate to, say, 0.5A, either increase the reference voltage from 1V to 2V or decrease the resistor from 4Ω to 2Ω. Be aware that if you raise the reference voltage, you will no longer be able to discharge a single cell, because the drain voltage of the NFET cannot go lower than its source voltage, so you want to keep the source voltage at 1V or less.
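In code form the rate is just Ohm's law across R4; a quick sketch of the trade-off:

```python
# Discharge current is the reference voltage across R4 divided by R4.
def discharge_current(vref: float, r4: float) -> float:
    return vref / r4

print(discharge_current(1.0, 4.0))   # 0.25A, the rate used above
print(discharge_current(2.0, 4.0))   # 0.5A by raising Vref (costs drain headroom)
print(discharge_current(1.0, 2.0))   # 0.5A by halving R4 (keeps Vs at 1V)
```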

Please post the actual circuit you are simulating; without seeing your circuit, I cannot see what you are varying to get the results you posted.

 
The layout of the circuit I'm simulating is exactly the same as post #1, only I'm updating the input to the opamp (Vs) to be 1.0V*n cells, the load resistor (RL) to be 4Ω*n cells, and Vbatt to be 1.2V*n cells (to start, and then decaying to simulate battery drain).
1 cell Vs=1.0V, RL=4 ohms, Vbatt=1.2V
2 cells Vs=2.0V, RL=8 ohms, Vbatt=2.4V
6 cells Vs=6.0V, RL=24 ohms, Vbatt=7.2V

All of these numbers are shown in, or can be derived from, the transient analysis trends.
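To spell out the scaling rule I'm using (a sketch of my own bookkeeping; note that the current works out to 0.25A in every case):

```python
# My per-cell scaling: setpoint, load resistor, and starting pack voltage.
def scale(n_cells: int):
    vs = 1.0 * n_cells       # opamp setpoint (V)
    rl = 4.0 * n_cells       # load resistor (ohms)
    vbatt = 1.2 * n_cells    # fresh-pack starting voltage (V)
    return vs, rl, vbatt

for n in (1, 2, 6):
    vs, rl, vbatt = scale(n)
    print(f"{n} cells: Vs={vs:.1f}V  RL={rl:.0f} ohms  "
          f"Vbatt={vbatt:.1f}V  I={vs / rl:.3f}A")
```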

I derived the circuit from somewhere years ago; I only have my notes to work from now. From that, it appears the idea was to have Vs=1.0V*n cells so the batteries cannot be over-discharged. If Vs is not increased, then it is possible to drive an n-cell battery pack down to a total output voltage of 1V and possibly destroy the cells.

Do you agree that with this method (RL=4*n, Vs=1.0*n), Vs is going to rise to the point where the opamp will not be able to provide enough Vgs to turn on the FET?
 
1V/4 ohms = 250mA.
2V/8 ohms = 250mA.
8V/32 ohms = 250mA.
Since the current is always 250mA, why change the resistor value and change Vs? The circuit provides "constant current" if the battery voltage is higher than the voltage across the resistor.
Maybe use a 1 ohm resistor with a Vs of 0.25V?
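One more consideration, sketched below with my own numbers: every (Vs, R) pair here gives 250mA, but the power burned in the sense resistor scales with Vs.

```python
# Every (Vs, R) pair here sets 250mA, but the sense-resistor dissipation
# P = Vs * I grows with Vs.
for vs, r in [(0.25, 1.0), (1.0, 4.0), (2.0, 8.0), (6.0, 24.0)]:
    i = vs / r
    print(f"Vs={vs:4.2f}V  R={r:4.1f} ohms -> I={i:.3f}A, P(R)={vs * i:.4f} W")
```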
 
From that, it appears the idea was to have Vs=1.0V*n cells so the batteries cannot be over-discharged. If Vs is not increased, then it is possible to drive an n-cell battery pack down to a total output voltage of 1V and possibly destroy the cells.

Makes sense to me. To you?
 
From that, it appears the idea was to have Vs=1.0V*n cells so the batteries cannot be over-discharged. If Vs is not increased, then it is possible to drive an n-cell battery pack down to a total output voltage of 1V and possibly destroy the cells.

Makes sense to me. To you?
Correct. Then lots of Vs voltage is needed, since there is no battery-voltage sensing and disconnect circuit.
 
From that, it appears the idea was to have Vs=1.0V*n cells so the batteries cannot be over-discharged. If Vs is not increased, then it is possible to drive an n-cell battery pack down to a total output voltage of 1V and possibly destroy the cells.

Makes sense to me. To you?

The biggest problem with your idea is that the power dissipation in R4 (the NFET source resistance) goes up linearly with the number of cells. The second problem is that you have to simultaneously vary the voltage divider that provides the reference voltage and the source resistor R4.

I would decouple the two issues: make a constant-current discharger at 0.25A (which you have with the circuit in post #1), and then use a second, independent circuit to shut it off when the battery voltage decreases to n volts (where n is the number of cells).

To do the combined circuit, you will have to power the opamp from a supply that is at least (n+2)V.
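A rough sketch of why the combined scheme scales badly (using the Vs=1.0*n, R4=4*n rule from the earlier posts; the (n+2)V supply rule is the one stated above):

```python
# With Vs = 1.0*n and R4 = 4*n ohms, the power in R4 grows linearly with
# the cell count, and the opamp supply must reach at least (n + 2) volts.
for n in (1, 2, 6, 10):
    vs, r4 = 1.0 * n, 4.0 * n
    p_r4 = vs ** 2 / r4                      # equals 0.25 * n watts
    print(f"{n:2d} cells: P(R4) = {p_r4:4.2f} W, opamp supply >= {n + 2} V")
```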
 
Now that I think we are all on the same page and have the first item I didn't understand under control, I have something else that I don't understand in the same circuit. It also appears in both Mike's DC analysis and my transient analysis.

Using Mike's DC analysis, at any point where Vb > ~1.2V, Vg = 3.0V. The FET should be off, but -Ib still shows 250mA of current flow. (The same thing is seen in the transient analysis while Vb > 1.0*n.) What am I missing, and why?

P.S. I'll get back to the constant-current source in a moment.
 