What is it?
The internet (and places like the EEVBlog forum) is full of implementations of some variant of this basic MOSFET-based constant current dummy load.
It's pretty simple - the positive input terminal of the op amp is connected to a voltage reference and the negative input is connected to a sense resistor. The op amp turns on the MOSFET until enough current flows in the resistor that the voltage across it matches the reference voltage.
In the example above I am putting in a 1kHz, 200mV square wave as the reference, which should generate 200mA of current at the peak (200mV across the 1 ohm sense resistor).
The current should be constant regardless of the load voltage. MOSFETs can handle large amounts of power and have a very low resistance when fully on (less than an ohm). The circuit looks pretty simple overall. What could go wrong? We'll see below!
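As a quick sanity check on the numbers: the loop regulates the current to Vref/Rsense, and whatever voltage is left gets burned in the MOSFET. The 1 ohm sense resistor comes from the circuit; the 12V supply under test below is an assumption purely for illustration.

```python
# Back-of-the-envelope numbers for the dummy load. The 1 ohm sense
# resistor comes from the circuit; the 12V supply under test is an
# assumed value for illustration only.
V_REF = 0.200     # reference voltage (V)
R_SENSE = 1.0     # sense resistor (ohms)
V_SUPPLY = 12.0   # assumed supply-under-test voltage (V)

i_load = V_REF / R_SENSE                            # current the loop regulates to
p_mosfet = (V_SUPPLY - i_load * R_SENSE) * i_load   # power the MOSFET must dissipate

print(f"load current: {i_load * 1000:.0f} mA")      # 200 mA
print(f"MOSFET dissipation: {p_mosfet:.2f} W")      # 2.36 W
```

Even at this modest current, a couple of watts end up in the MOSFET, which is why it needs a decent heatsink at higher loads.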
Why Do it?
People use these types of dummy load circuit to test the performance of power supplies. For example the 1kHz signal could be used to check how long a power supply takes to recover when the load is varied sharply.
I wonder about the value of this, since this is essentially a closed-loop control circuit trying to keep the current constant while connected to another closed-loop control circuit trying to keep the voltage constant. Will you get any meaningful results from this, or just some weird interactions between two control loops? I don't know and I'm not sure I would risk it. Certainly it would be a handy way of checking the voltage doesn't drift under load, or of checking the thermal design etc.
I do want to build a linear bench power supply at some point though, and I thought this was an interesting stepping stone towards that goal. Not so much as a test tool, but because ultimately regulating the output of the supply is going to be done with a closed-loop control circuit something like this.
Real vs Simulated
I think the first one of these I saw was put together by Dave Jones on EEVBlog
here. Lots of people tried to replicate his design (which is identical to the circuit above) and found it didn't work. The circuit would oscillate at even relatively low current loads.
When I simulated the circuit above, the output was exactly what I expected, which is that the voltage on the resistor would switch from zero to 200mV and back to zero.
So then I went ahead and built this up on a breadboard, connected a 1kHz 200mV signal to the non-inverting input and guess what? It oscillates! A lot!
So why is it so different to what I simulated? One explanation is that the leads between the power supply and the circuit have some inductance. Let's try adding 1.5uH of inductance and, say, 0.1 ohms of resistance and see what happens to the simulation. Here is the circuit:
And here is the waveform at R1. The real signal on the scope is a tiny bit worse (the oscillations have a slightly higher magnitude) but the simulation is now close.
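A quick back-of-the-envelope check makes the lead inductance explanation plausible. The 1.5uH is the value used in the simulation; the effective MOSFET capacitance below is a round-number guess rather than a measurement, but the resulting LC resonance lands right in the 1MHz - 10MHz region where the trouble shows up.

```python
import math

# The 1.5uH comes from the lead-inductance estimate used in the
# simulation; the effective MOSFET capacitance is a round-number
# guess, not a measurement.
L_LEADS = 1.5e-6   # lead inductance (H)
C_FET = 1.0e-9     # assumed effective MOSFET capacitance (F)

# Standard LC resonance: f = 1 / (2 * pi * sqrt(L * C))
f_res = 1.0 / (2 * math.pi * math.sqrt(L_LEADS * C_FET))
print(f"LC resonance ~ {f_res / 1e6:.1f} MHz")   # ~4.1 MHz
```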
Modeling the Problem
To make this work better we first have to understand what is going on. The problem is the control loop is causing the output to oscillate so we have to fix the control loop. But how do we model the control loop?
One way of figuring out how to stabilise the control loop is to model the open-loop gain of the circuit and then ensure it has a decent amount of phase margin. See
here. What this means is that at the frequency where the gain crosses 0dB (which corresponds to a gain of 1), the loop's total phase shift must not have reached -180 degrees. With the plotting convention used here (the op amp's inversion is already folded in), that means you don't want the plotted phase to be negative at that crossing. If it is, the signal fed back around the loop arrives in phase with the disturbance it is supposed to cancel, so it reinforces it and the circuit oscillates.
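To make the phase-margin idea concrete, here is a minimal numeric sketch. The op amp gain and pole frequencies are invented round numbers, not the real parts in this circuit; the point is just the mechanics of finding the 0dB crossover and reading off the margin there.

```python
import numpy as np

# Toy open-loop model: an op amp with 100dB of DC gain and a single
# dominant pole, plus a second pole standing in for the capacitive
# MOSFET gate. All values are invented for illustration only.
A0 = 10 ** (100 / 20)   # 100dB DC gain
f1 = 10.0               # dominant pole (Hz)
f2 = 500e3              # assumed second pole (Hz)

f = np.logspace(0, 8, 200_000)   # sweep 1 Hz .. 100 MHz
s = 2j * np.pi * f
loop = A0 / ((1 + s / (2 * np.pi * f1)) * (1 + s / (2 * np.pi * f2)))

# Find the 0dB crossover (|loop| == 1) and read the phase there.
idx = np.argmin(np.abs(np.abs(loop) - 1.0))
pm = 180.0 + np.degrees(np.angle(loop[idx]))
print(f"crossover ~ {f[idx] / 1e6:.2f} MHz, phase margin ~ {pm:.0f} degrees")
# with these numbers: crossover ~0.62 MHz, margin ~39 degrees
```

With two poles well below the crossover the margin would be gone entirely; here the second pole sits near the crossover, so some margin survives. The real circuit adds the LC resonance on top, which is what eats the remaining margin.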
The tricky part is that the open-loop gain needs to be modeled, which means you have to break the control loop and measure the transfer function around it. The problem is that when you break the control loop you also break the DC operating conditions, and then the results aren't valid.
The way you do this in LTspice is you:
- Zero out the inputs - we don't want the inputs to perturb our result.
- Cut the control loop somewhere and place a voltage source in the loop. Set the voltage source to have an AC value of 1.
- Label the nodes on either side of the AC source 'fb' and 'input'.
- Run an AC analysis and plot the value of v(fb)/v(input).
In my case I changed the voltage source going to the positive terminal to a fixed 100mV and added the AC source into the loop. If I change the 100mV, it effectively shows me the stability at different output loads.
Here is the AC analysis of v(fb)/v(input). As you can see the phase margin is -15 degrees so it makes sense that the circuit oscillates.
Fixing it
So we can see the weirdness at around 1MHz - 10MHz caused by the MOSFET capacitance and the inductance on the output. If we roll off the gain a bit faster, maybe the 0dB point will occur while we still have some positive phase margin. So I added a 10n capacitor between the output terminal of the op amp and the negative input, and now it looks like this.
That certainly brought the gain down, but we still only have 5 degrees of phase margin. If I switch back to my transient model, the output when a 1kHz square wave is applied looks like this:
Some pretty serious overshoot but otherwise it looks a lot better. If I make the same change to the real circuit it looks like this:
Still a fair bit of ringing, but it looks better.
Now, because the MOSFET is effectively a capacitive load for the op amp, I tried adding a resistor between the op amp output and the MOSFET gate. I fiddled with different values but no matter what I couldn't increase the phase margin. Then I tried also adding a feedback resistor between the negative terminal and the 1 ohm sense resistor, and this cleaned it up a lot. The final circuit looks like this:
The AC analysis looks like this - much better!
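For what it's worth, the corner frequency of the RC feedback network gives a rough marker for where the compensation starts to dominate the loop. The post doesn't state the final feedback resistor value, so the 1k below is purely an assumed value for illustration.

```python
import math

# The 10n capacitor is from the circuit; the feedback resistor
# value was not given, so 1k here is an assumed value.
R_FB = 1e3     # assumed feedback resistor (ohms)
C_FB = 10e-9   # feedback capacitor (F)

# RC corner frequency: f = 1 / (2 * pi * R * C)
f_corner = 1.0 / (2 * math.pi * R_FB * C_FB)
print(f"compensation corner ~ {f_corner / 1e3:.1f} kHz")   # ~15.9 kHz
```

That sits a couple of decades below the troublesome 1MHz - 10MHz region, which is consistent with the loop gain being well rolled off before the resonance bites.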
And finally, here is how it looks on the scope when I breadboarded it:
Ok, so the corners are not as square as in my simulation, but that doesn't look bad!