I'm just trying to get my head around whether a circuit I've built is going to work correctly. I'll try and explain it.
I have a variable-voltage supply at a fairly fixed current rating (0.5 A). It has a low output impedance of around 3 ohm and a low saturation point; the net result is that it will supply 6 V at 0.5 A and can potentially reach 36 V with no load, but under load it won't offer 36 V at 0.5 A. With a load, the voltage drops dramatically. Realistically, 10 W is its limit.
So, I need to charge a large super-cap rated 50 F / 5.5 V. Due to other power requirements, it is important to limit the current going into the super-cap, which will happily sink 7 A from my power supply when it's nearly empty. I've implemented current and voltage monitoring to do this.
I have two ways to charge the cap: either a buck converter or raw PWM. Currently I've gone for PWM, which may be a mistake. Why? Because PWM applies the full supply voltage in pulses: effectively I'm switching up to 36 V into a 5.5 V rated capacitor. Although the supply voltage would then drop significantly under load, there is still the possibility of overcharge.
This got me thinking. If the capacitor is rated 5.5 V and my bench supply can deliver effectively unlimited current, then I can record how many amps are sunk at each capacitor voltage level. I can then replicate this with PWM, using the voltage/current monitoring values when the PWM output is at 1 and at 0. The PIC I'd use would then stop supplying current once the cap reached 5.5 V. The cap would see higher-voltage pulses, but its terminal voltage wouldn't go above 5.5 V.
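For reference, here's a minimal sketch of the control loop I have in mind. The helper functions (read_cap_voltage_mv, read_charge_current_ma, set_pwm_duty, delay_ms) are hypothetical wrappers around the PIC's ADC, PWM, and timer peripherals, and the limit values are placeholders; this is a sanity check on the idea, not a drop-in implementation.

```
/* Charge-control sketch. All extern helpers are assumed wrappers
 * around the PIC's peripherals; implement for your part. */

#define VCAP_MAX_MV   5500   /* super-cap rating: 5.5 V              */
#define ICHG_MAX_MA    500   /* supply's sustainable current: 0.5 A  */

extern unsigned int read_cap_voltage_mv(void);    /* ADC, cap terminal   */
extern unsigned int read_charge_current_ma(void); /* ADC, sense resistor */
extern void set_pwm_duty(unsigned char percent);  /* PWM module wrapper  */
extern void delay_ms(unsigned int ms);            /* timer wrapper       */

void charge_supercap(void)
{
    unsigned char duty = 0;

    while (read_cap_voltage_mv() < VCAP_MAX_MV) {
        /* Crude bang-bang regulation: nudge the duty cycle so the
         * average charge current stays at or below the limit. */
        if (read_charge_current_ma() > ICHG_MAX_MA) {
            if (duty > 0) duty--;
        } else if (duty < 100) {
            duty++;
        }
        set_pwm_duty(duty);
        delay_ms(1);   /* pace the loop so readings settle */
    }

    set_pwm_duty(0);   /* cap at 5.5 V: stop charging */
}
```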
1. Would this work?
If not, I can turn it into a buck converter, though that introduces its own efficiency penalties.
The second issue I have is that a follow-on boost circuit runs from the super-cap supply. Its maximum input is 5.5 V, so higher voltages will quite possibly destroy it. The PWM output is exactly that, raw supply-voltage pulses, but maybe it will still be safe because the super-cap sinks so much current that those higher pulses will never appear across it. I could put a zener or TVS across it to clamp those pulses until the load pulls down the power supply voltage (which will happen very quickly).
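As a rough sanity check (assuming a super-cap ESR of around 50 milliohm; that figure is a guess, so check the datasheet): during an on-pulse, the rail across the boost input is roughly Vcap + I × ESR, so at 7 A that's only about 0.35 V above the cap voltage. And the cap voltage itself barely moves during a pulse: ΔV = I × t / C = 7 A × 1 ms / 50 F ≈ 0.14 mV. So as long as the boost input connects right at the cap terminals over short, low-inductance traces, it should never see anything like 36 V, and the zener/TVS would just be belt-and-braces against switching spikes.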
2. Does this sound feasible?
Cheers,
Andrew