Thursday, 8 October 2015

Lab power supply project - Pre-regulator re-design

The power supply design is now pretty close, but I still have a few problems. I noted that at more than about 10V I can't get more than 4A of current before the output oscillates. I think the problem is that the maximum voltage the SCR pre-regulator can achieve on the bulk capacitor is significantly less than the peak output of the transformer.

Also, no matter how hard I try to stomp on the ground issues I can't get rid of the last bit of noise on the output. The noise is a 1mV bump that occurs when the SCR fires to charge the bulk capacitor. The current flow causes the ground point to jump up a tiny bit and the output voltage to effectively drop by the same amount.

A while ago there was a discussion about low noise pre-regulators for linear supplies on the EEVBlog forum. A contributor called Blackdog described a pre-regulator that uses a P channel MOSFET as the pass element here. This design was picked up by another contributor, Prasimix, who is developing a 0-50V 3A supply here.

The Blackdog pre-regulator has a few advantages over what I am doing and may resolve some of my issues.

Blackdog Pre-regulator

The Blackdog pre-regulator circuit is shown below:
The way it works is quite simple:
  • Two P channel MOSFETs (Q2, Q4) are used to turn on or off current to the main bulk capacitor.
  • The two AC inputs are combined and passed through an SCR (T1) and a resistor in series. The voltage on the SCR controls the voltage at the base of an NPN transistor (Q1) that drives the MOSFET gate. When the SCR fires the base of Q1 goes low.
  • A PNP transistor (Q3) with a voltage divider on its base turns on when the bulk capacitor voltage reaches a certain proportion above the voltage regulator output.
  • When Q3 turns on it fires the SCR which lowers the voltage at the base of Q1 and switches the MOSFETs (Q2, Q4) off.
  • The capacitor between the base and collector of Q1 causes the gate voltage to ramp down instead of suddenly dropping. This helps to reduce noise.
  • When the voltage on the SCR falls (as we move to the next AC cycle), the SCR resets (commutates), which turns the MOSFETs on again and the cycle repeats.
So basically the circuit turns on the power to the capacitor at the start of each AC cycle and then turns it off when the capacitor voltage reaches the desired level. This has a lot of advantages over a traditional SCR pre-regulator:
  • The voltage/current ramps up with the AC waveform. The peak current flows when the bulk capacitor voltage is at a minimum and the transformer voltage is ramping up and charging the capacitor. The traditional SCR pre-regulator turns on when the transformer voltage is high and the bulk capacitor is low and creates a significant current spike. I think it is this spike that is creating the unwanted output noise in my current design.
  • At the point where the MOSFET turns off, the capacitor has partially charged which means the current is lower than it would have been at SCR switch-on in the SCR design. The drop in current is changed to a ramp by the capacitor on Q1 which further reduces noise.
  • It is much easier to figure out when to turn power to the bulk capacitor off than it is to figure out how long to delay turning it on. This makes the circuit much simpler.
  • The Rds of the MOSFET is very low and results in a much lower voltage drop than the drop across the SCR.

Managing the Pre-regulator Voltage

The Blackdog circuit essentially sets the pre-regulator to a voltage that is some proportion above the regulator voltage. The thing is, though, that the pre-regulator needs to be a constant voltage above the output regardless of the output voltage. The amount it is above the output is related to the maximum current demand and the bulk capacitor size, as these determine how much the voltage ramps down within a cycle.

In my version of this circuit I used one of the LT1716 comparators to compare the pre-regulator voltage with the output. I used a zener to set the pre-regulator voltage a few volts above the output.

It occurred to me that I needed this to be 5-6V above the output to ensure I could deliver 5A. If however the supply is not delivering this amount of current then the excess voltage just ends up as heat on the voltage regulator MOSFET. The bulk capacitor needs to be 1.06V above the output per 1A of current delivered. If instead we set the voltage to be some nominal amount above the output (say 1-2V) plus 1.5V per 1A (for safety) then I could massively reduce the dissipation.

The solution is to add a summing node to set the desired pre-regulator voltage and then use a comparator to fire the SCR.


The summing node adds 1.5 times the current to the required output voltage and will keep charging the bulk capacitor until the pre-regulator is another 1.4V (two diode drops) above that. Even going from 0 to 5A on the output this provides enough headroom to keep the output smooth.
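To make the arithmetic concrete, here is the target calculation as a little function (the 1.5V/A coefficient and 1.4V diode-drop figure are the values from my circuit above - change them to suit):

```cpp
#include <cassert>
#include <cmath>

// Sketch of the summing-node target: keep charging the bulk capacitor
// until it sits two diode drops (~1.4V) above the required output voltage
// plus 1.5V of headroom per amp of load current.
double preRegTarget(double vOut, double loadCurrent)
{
    const double voltsPerAmp = 1.5;   // headroom per amp (with safety margin)
    const double diodeDrops  = 1.4;   // two diode drops in the summing node
    return vOut + voltsPerAmp * loadCurrent + diodeDrops;
}
```

At 14V out and 5A of load this gives 14 + 7.5 + 1.4 = 22.9V on the bulk capacitor, versus a fixed 5-6V of headroom at every load in the old scheme.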


The green trace above shows the current flowing (0-5A), the red is the output voltage (set to 14V) and the blue is the pre-regulator voltage. You can see the pre-regulator voltage increase as the load increases, and you can see the sawtooth waveform get taller as the current discharges the capacitor more per cycle.

One case where this didn't work as well was when the supply went into current limit mode.

In this case the voltage took one cycle to get back to the set point as the pre-regulator would discharge too low. I think this is acceptable however as current limiting is not a normal mode of operation but more a catch-all to prevent damage in case of operator error.

The power dissipation is relatively modest. With the output set to 14V and the load drawing 5A, the dissipation through each MOSFET is a series of short 12W spikes. The dissipation across the main pass transistor peaks at around 22W (averaging about half that).


Conclusion

This pre-regulator looks like a winner. The next step is to build it up on my bread-board arrangement and test it out. 

Till next time!

Wednesday, 30 September 2015

Lab Power Supply Project - AD7705 Followup

After all the problems I had with the AD7705 described in my previous post the cause of the second channel problem turned out to not be what I expected. In desperation I tried out the Arduino software for the AD7705 provided by Kerry Wong on his blog. I reasoned that this should work if the configuration is identical (which didn't take much to achieve - I just moved the SS line to a different pin).

Interestingly it behaved in exactly the same way as it did with my software - i.e. channel 0 worked but channel 1 didn't.

Then I started to think the part may have been damaged - I did muck up the placement of the reset line at some point. Maybe I damaged it?

I ordered a replacement from Element14 - I found a through-hole version for a fair bit less than what I paid for the surface mount one and tried it out.

Sure enough - it worked!

That's all for now - I just wanted to provide this quick update.

Sunday, 27 September 2015

Lab Power Supply Project - AD7705 - Not having fun...

I've been stuck trying to get this AD7705 ADC working for some time. The part proved to be surprisingly tricky to get going and in fact I am still stuck trying to get the second channel to work. A big hat-tip to Kerry Wong who published a couple of articles describing his experience with this part, along with some software to interface with it. While I didn't use his software at all it did provide some valuable tips about what was going wrong. The comments on his articles are telling in that there are a lot of people having problems getting this part to work. The Analog Devices forum also has a lot of questions as a result of people having problems.

I think the documentation, while very long, isn't always that clear. It appears comprehensive when you first start working with it, but the more I learned the more holes I found.

AD7705 Hardware

The AD7705 is a 16-bit analog to digital converter of the delta-sigma type. It's excellent for low rate, high precision conversions. It has two differential inputs and an SPI interface (which is important as the DAC I am using is also SPI). The specs look very impressive with very good linearity and no missing codes.

As I said before - the part has no internal reference so I am using it with an AD780 that I was testing before and which I found impressively stable.

It turns out that to get the full range I need, I have to configure the part to run in bipolar mode. It isn't really bipolar in the sense of swinging below ground - both analog inputs for a given channel have to stay above the ground line - but AIN(+) can be at a lower voltage than AIN(-). In unipolar mode I can only get a range of zero to the voltage reference (2.5V). To get 0..5V I have to run the part in bipolar mode and connect the AIN(-) line to the reference. Then AIN(+) can swing from zero (ref-2.5V) to 5V (ref+2.5V). I could run in unipolar mode and scale the input to 0..2.5V, but this means the voltage per step is then just 38uV, which is starting to look a lot like the noise level.

The part contains an input buffer but if you use this, the input range is restricted to 50mV to VDD-1.5V. This pretty much rules out the buffer for my application since even if I ran it in unipolar mode I need to go lower than 50mV.

The plan is to use one channel to read the output voltage and the second channel to read the current. A voltage divider followed by a unity-gain buffer will scale and buffer the output voltage and there is already a buffer for the current (which doesn't need to be scaled).

The part has a data ready line that goes low when there is data to read. It is also possible to check the status of this line in software. I went back and forth between using the hardware line and using software and am currently using the hardware line. This was mostly as I was having problems though rather than for any real reason.

Programming

SPI Setup

As for the DAC, I used my SPIDevice class as the basis for the ADC code. The first thing is that this device uses a different SPI mode than the DAC. The setup looks like this:

    setClockPolarity(SPIDevice::CLOCK_POLARITY_FALLING_LEADS);
    setClockPhase(SPIDevice::CLOCK_PHASE_SAMPLE_ON_TRAILING);
    setDoubleSpeedModeEnabled(true);
    setClockRate(SPIDevice::RATE_DIV_DBL_2);
    setBitOrdering(SPIDevice::MSB_FIRST);

Registers, Initialisation

The ADC takes more setup than the DACs I used before. The ADC has a bunch of internal registers you can read or write that change its behaviour or return the result of a conversion. The first byte you send to the part is written to the communications register, and this allows you to select a register, indicate if you want to read or write, select the channel you want to work with or put the part in standby mode. You basically write a byte to the communications register to say what you want to do next and then either write some data or read some data in the following steps.

The converter has a set of modes it can be in although these are all about calibration. The device is either in normal mode or in one of the calibration modes.

You have to setup a couple of registers to get the part to run. To initialise the ADC you have to:

  • Configure the clock register which sets the clock divider, tells the part what clock frequency you are running with and sets up the filter based on your desired conversion rate.
  • Configure the setup register which configures the gain, buffer mode and the mode of the converter. In the example code I found it will initially set the mode to be an internal calibration mode. The part then reverts to normal mode once the calibration has been completed.

Reading Data

Reading the result of a conversion involves waiting for the data to be ready and then reading the data register to get the value.

Data values are 16 bits so you have to read two bytes (i.e. write two zeros via SPI and read back what comes in on MISO).

As I am using the part in bipolar mode my expectation was that the result would be a signed integer. As it turns out this isn't the case and differential voltages of -VREF to 0 come out as 0..0x7fff and 0 to VREF come out as 0x8000 - 0xffff. This is actually easier for me as I really wanted a unipolar reading from 0 to 5V (2* VREF). With AIN(-) tied to VREF this equates to -VREF .. +VREF. 

This isn't the first quirk, confusing or downright un-obvious thing I found in the docs for this part however.

Handling CS


This wasn't apparent from the datasheet (well apart from the sample code at the end) but they seem to expect that each write or read is a separate SPI transaction. So for example to write to the clock register:

  • Lower CS
  • Write the comms register value
  • Raise CS
  • Lower CS
  • Write the clock register value
  • Raise CS
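For reference, the comms byte itself packs everything together. This is my reading of the register map (bit 7 must be zero, bits 6-4 select the register, bit 3 is the read/write flag, bits 1-0 the channel), so treat it as a sketch and check it against the datasheet:

```cpp
#include <cstdint>

// AD7705 comms register byte, per my reading of the register map:
// bit 7 must be 0, bits 6-4 select the register (0 = comms, 1 = setup,
// 2 = clock, 3 = data), bit 3 is 1 for a read, bits 1-0 pick the channel.
uint8_t commsByte(uint8_t reg, bool read, uint8_t channel)
{
    return uint8_t(((reg & 0x07) << 4) | (read ? 0x08 : 0x00)
                   | (channel & 0x03));
}
```

So writing the clock register on channel 0 means sending commsByte(2, false, 0) = 0x20 in one CS transaction and the clock value in a second.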

Handling Data Ready

I had a number of problems with this. First of all I used the hardware data ready line but this seemed to always return the same value. I eventually realized (feeling rather stupid) that you have to use PIND in the code to read from port D, not PORTD (which you use when writing). In between I figured out that the communications register contains the value of the data ready line, so if you read this you can poll until data ready goes low.

Initially whenever I tried to read the data register I would get zero or some constant value. After some digging I figured out that you have to wait for data ready to go low again after setting the setup register before you do anything else. I had written the setup registers for both channels one after the other, so the second write was clobbering the first.

A useful tip is that the data ready line will pulse high every so often (every 20ms in my setup) and then go low again. For a couple of days I thought I had damaged the part: I moved the data ready line from port B to port D and then data ready never went low. In fact I got nothing back from the part at all. Eventually I figured out I had accidentally knocked the reset line wire out of the breadboard and when I put this back it worked again. Dammit!

The other tricky thing that is not apparent in the data sheet is the order of when to check data ready. When I had problems with the second channel (see below) I found a FAQ published by Analog Devices that spells out the order when using the data ready line. In the case where you are polling the comms register I believe (but haven't found documentation confirming this) the order is:
  1. Write to the comms register: select the comms register itself, set the read/write bit to read, and set the channel you want to get data from.
  2. Read the next byte (which will be the contents of the comms register) and check the data ready bit. If it is not low go back to 1. Otherwise continue.
  3. Write to the comms register again: this time select the data register, set the read/write bit to read and set the channel you want to read from.
  4. Read two bytes of data - this is the last conversion value.
If you are using the hardware data ready line it's a bit different, as you have to wait AFTER writing to the comms register to ask for a read of the data register. So you:
  1. Write to the comms register: select the data register, set the read/write bit to read and set the channel you want.
  2. Wait until the data ready line is low (poll it).
  3. Read two bytes of data - this is the last conversion value.
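The comms-register polling variant can be sketched as a function over a raw byte transfer (xfer here is a stand-in for whatever your SPI layer provides; the 0x80 mask assumes DRDY is bit 7 of the comms register, which matches my reading of the datasheet):

```cpp
#include <cstdint>
#include <functional>

// Sketch of reading a conversion by polling the comms register.
// xfer writes one byte on MOSI and returns the byte clocked back on MISO.
uint16_t readConversion(std::function<uint8_t(uint8_t)> xfer, uint8_t channel)
{
    for (;;) {
        xfer(0x08 | channel);             // select comms register, read mode
        if ((xfer(0x00) & 0x80) == 0)     // DRDY (bit 7) low => data is ready
            break;
    }
    xfer(0x38 | channel);                 // select data register, read mode
    uint16_t hi = xfer(0x00);             // MSB first
    uint16_t lo = xfer(0x00);
    return uint16_t((hi << 8) | lo);
}
```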

Reset

I figured out something useful while I was trying to get things working: you can reset the state of the comms. The problem is the SPI transactions are very stateful in that you write the comms register and then you must read/write the correct number of bits for the register you are then reading/writing. If this gets out of sync for any reason the part will behave oddly.

It turns out the first bit of the comms register must always be zero, and if you write a value where the leading bit is a one the part ignores it. So to reset the SPI communications (and nothing else - all the registers etc stay as they are) you can write four bytes of 0xFF and it clears.
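As a sketch (xfer again standing in for the raw SPI byte transfer):

```cpp
#include <cstdint>
#include <functional>

// Clocking in at least 32 consecutive 1s resets the AD7705's serial
// interface (register contents are untouched) - four bytes of 0xFF does it.
void resetSerialInterface(std::function<uint8_t(uint8_t)> xfer)
{
    for (int i = 0; i < 4; ++i)
        xfer(0xFF);
}
```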

Calibration

The device has three calibration modes:
  • Self-calibration. This is where it internally disconnects the input and calibrates against the reference.
  • Zero calibration. This is a system calibration mode where you present a zero voltage to the part and then initiate calibration.
  • Full scale calibration. As above, but you present a full scale value.
For a while there I had a problem where the lowest voltage it would read was around 50mV. It turned out to be because I wasn't waiting for the self-calibration to settle before I did something else. In the meantime I looked at the calibration options to see if I could get rid of the offset.

The first not-so-fun thing is that 'zero' when in bipolar mode means mid-scale. This means I couldn't use this to get rid of the zero offset anyway. I wasn't quite sure what full scale would be in bipolar mode but I assume it would be 2 * VREF (depending on gain settings of course).

You can both read and write the calibration registers after calibration so you could do a system calibration once and then store these in EEPROM or something. I haven't tried this but the hardware seems to support it.

Sort-of Dual Channel

At first glance it appears the unit requires everything duplicated for both channels. The comms register specifies what channel each operation applies to, for example. But then you have registers like the clock register which seem unlikely to be per channel and more likely to be global. Certainly everything I have read seems to indicate it is global, although the datasheet doesn't specify this.

Then I also found this forum post that seems to indicate that the front end is in fact common. For example, it appears the gain, buffering and bipolar settings are shared between the channels. The thing that is odd about this is that there are separate calibration registers for each channel - you have to wonder why if it is common. Also, if the front end is common (i.e. if you set gain for one channel it affects the other) do you actually have to set the setup register twice? In my tests I think I do, as otherwise I get weird offsets due to the missing self-cal, but I'm not sure.

And then I got Stuck...

So I got the voltage readings going well and I even did a calibration run using my Python script and generated a table for my lineariser code. This was working reasonably well, although the data appeared a bit noisy.

Then I tested the current measurement (on the other channel) and this didn't work at all. The readings are constantly around 32000. I checked the voltages are correct, I checked the wiring to make sure AIN1(+) and AIN1(-) are wired to the right places and they are, and I tried switching to software polling of the data ready line, but nothing worked.

Occasionally it looked like it was working but I figured out that if the voltage input is quite low (so the power supply output voltage is set for 2V and therefore the ADC input on channel 0 is less than 30mV or so) then I get a reading that is not pegged at 32000 on channel 1. It doesn't seem to change with the voltage on channel 1 however. Also when switching back to channel 1 the first reading is often messed up.

I put something on the Analog Devices forum but at time of writing I had no response. In conclusion - I don't really like this part much...

Saturday, 29 August 2015

Lab PSU - Resistor Precision

In the last post I was using my 34461A to calibrate the output of my lab supply using a little Python. The things that really stood out were:

  • It took a *long* time for the output to settle. Sometimes as long as 40s or more. The voltage would get within a couple of mV of the target but the last digit would take a long time.
  • The drift between different runs was huge - as much as 30mV.
  • The settling time was much longer when switching between the low and high range or vice versa.
I did some investigation and learned a few things including a truly face-palm worthy realization.

Op Amp Settling Time

I found an interesting application note from Analog Devices (AN-395) that describes what affects the settling time of op amps, approximations for measuring the settling time, and aspects of the amplifier's design that determine it. For example, mismatch of the poles and zeros in the amplifier's open-loop transfer function will affect settling time. The table in that application note below was most useful:
The key things are that settling time is dependent on:
  • Amplifier bandwidth
  • The gain of the amplifier stage
  • The level of precision required
The amplifiers I am using (LT1639) have a gain bandwidth product of 1MHz and the amplifier is taking the 0-5V from the DAC and multiplying it by 6 to generate the 30V output. If we use the table above for gain = 10 and accuracy = 0.01% (where we actually want 0.003% for 1mV out of 30V) it comes out to 1.5us. This is far less than what I was seeing.
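As a sanity check, a crude single-pole model (much rougher than the app note's method, but fine for order of magnitude) agrees that settling should be microseconds, not seconds:

```cpp
#include <cmath>

// Single-pole settling estimate: the closed-loop time constant is
// noiseGain / (2*pi*GBW), and settling to within a fraction 'accuracy'
// of the step takes ln(1/accuracy) time constants.
double settlingTimeSeconds(double gbwHz, double noiseGain, double accuracy)
{
    const double pi  = 3.141592653589793;
    const double tau = noiseGain / (2.0 * pi * gbwHz);
    return tau * std::log(1.0 / accuracy);
}
```

For a 1MHz GBW, a gain of 6 and 0.003% accuracy (1mV out of 30V) this comes out around 10us - still nowhere near the 40s I was seeing.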

Resistor Effects

When I chose the resistors for the x6 amplifier, I chose a set of values to give me slightly more than x6 gain so I could get the full 0-30V range. I noticed the settling time issues and thought it could be that I chose values which were too low and was drawing enough current that heat was affecting the settling time. I tried choosing higher values but this didn't change much.


I found this chapter from a book called "Op Amp Applications" via Google Books that proved to be useful. Essentially the author calculates the power dissipated by each resistor and notes that the feedback resistor carries much more current than the resistor on the negative terminal. He works out, based on the (shocking!) 1500ppm tempco for carbon resistors and the case-to-air thermal resistance, that the error induced by voltage changes would be quite significant even at 14 bit resolution.

This is where I did a face-palm and realized that of course these crappy carbon resistors wouldn't match the 2ppm/K accuracy of my DAC. Not only that but the thermal effects probably cause some of the settling issues as the devices heat up and cool down.

It became apparent I need to find more precise resistors. For now I am switching to carbon film resistors which have a tempco of more like 150ppm. I also started investigating resistors I could use in the final design including precision resistor divider networks such as this one, which is a 25ppm 10K/2K resistor network. The good thing about this is that even though 25ppm is much higher than the 2ppm of my reference, both resistors are in the same package so the ratio between them should stay nearly constant over temperature. There are other packages worth considering but the cost tends to be pretty high.

Another option is to use surface mount resistors with low tempco, such as these TE Connectivity 10ppm resistors, and to place them physically close together so they sit at close to the same temperature. This won't help with self-heating, but when the tempco is so low the effect of self-heating is pretty small.

Loss of Resolution

Furthermore, in my zeal to cover the full range I missed the fact that by using a gain of 6.1 I was facing a significant loss in resolution. This, I think, partly accounts for why I would get quite different results between calibration runs.

I decided to use a 10k/2k ratio (so exactly 6) and did another calibration run. The results are pleasing! Here is the output when set for 1V, 5V, 10V, 20V and 30V:

I'm pretty sure that with the right resistors I should be able to nail this.





Monday, 24 August 2015

LabPSU - ADC Linearization

Putting it Together

It's been a while since I updated the blog but I've actually made lots of progress with this project. I have most of the schematic in CircuitMaker and have been creating footprints for components as I go. I prototyped selecting the transformer windings with a relay and coded a lot of the control software including the DAC control. I have a new toy and I used this to gain some insights into the linearity of the DAC.

Switching between 15V and 30V Mode

The plan has been to configure the transformer windings so when greater than 15V is required the windings are configured in series but when less than 15V is required they are in parallel. The plan is to use a relay to do the switching and have the micro-controller on board each power supply module control the relay.


In this schematic you can see the three transformer windings coming in off the connector on the left and you can see the relay for switching the two windings either in series or parallel. There is a transistor switching the relay current based on a signal from the micro-controller.

The V+ output is the bias voltage for the voltage control opamps and MOSFET gate. When we are in 15-30V mode this is configured for 40V but in 0-15V mode this is set to 24V. The transistor connected to the adjust pin of the LM317 makes this adjustment.

The other two LM317s generate the 5V required by the digital logic and some 5V analog circuits plus the 6V required for the relay.

The relay and V+ control lines are separate, which allows the micro-controller to switch the relay before it switches the voltage regulator. Only then will it update the DAC to set the desired output voltage. This ensures the voltages remain stable through the transition.

During testing I had a problem with this setup: the 6V regulator gets quite hot. The relay draws around 100mA when activated and in this mode the V+ rail is at 40V. This means the regulator is dissipating (40-6)*0.1 = 3.4W of power, which doesn't sound like much but is enough to get it pretty hot - way too much for the surface mount versions of the LM317 I planned to use. I found another relay with an 8A rating that requires 12V to activate. This one requires 60mA to hold, which means the regulator dissipates (40-12)*0.06 = 1.7W, which is much better. For now I am using the 6V relay in my prototype with a chunk of metal attached to it as a heatsink.

Voltage/Current Control DAC

I decided to go with the AD5689 digital to analog converter (DAC) from Analog Devices. This part has an awesome 2ppm internal reference, an SPI interface and dual channels, so I can drive both the voltage and current set points from the same device.


The image above is where I am at currently with the digital control circuitry. There is an MCP2200 USB to UART converter used to receive commands from the USB bus. This goes via an ADUM1201 isolator so the micro-controller is galvanically isolated from the USB bus. Using the isolator means I can have multiple power supply channels all connected to the same USB bus without a common ground point. The microcontroller has a 6 wire ICSP port so the firmware can be re-flashed.

The AD5689 chip is connected to the SPI bus lines of the microcontroller. I don't really need to synchronise updating the two channels so the DAC's LDAC line is permanently pulled low. This has the effect of sending updates straight out to the analog output. The gain pin is pulled high to allow the output to go from 0-5V. The RESET line is also pulled high as we won't need to reset the part.

At some point I will split the analog and digital 5V lines with a small (10 ohm) resistor to limit transmission of noise. For now I haven't done this.

The output of the voltage DAC goes into an op amp configured with a gain of (1 + 47/8.2) = 6.7. Ideally we need a gain of 6 (5 × 6 = 30), but in reality the DAC can't go right to the rails and this only means a very marginal loss in resolution. It's a trade-off between the available resistor values and wanting minimal current flow in the op amp resistors.

Driving the DAC

I decided to build an SPIDevice base class for configuring the ATMEL SPI bus registers and for doing common things like setting up the SS pin etc. The class allows you to create a sub-class for a particular device and in the sub-class you call a setup routine and then write/read each byte on the bus and then shutdown. The class provides enumerated types for all the different options and functions to set this up.

class SPIDevice
{
public:

    enum BitOrder
    {
        MSB_FIRST,
        LSB_FIRST
    };
    
    enum ClockPolarity
    {    
        CLOCK_POLARITY_RISING_LEADS,
        CLOCK_POLARITY_FALLING_LEADS 
    };
          
    enum ClockPhase
    {
        CLOCK_PHASE_SAMPLE_ON_LEADING,
        CLOCK_PHASE_SAMPLE_ON_TRAILING   
    };
    
    enum ClockRate
    {
        RATE_DIV_4=0,
        RATE_DIV_16=1,
        RATE_DIV_64=2,
        RATE_DIV_128=3,
        RATE_DIV_DBL_2=4,
        RATE_DIV_DBL_8=5,
        RATE_DIV_DBL_32=6,
        RATE_DIV_DBL_64=7
    };
    
    /*
    Constructs the SPI Device with default setup.
    By default we are MSB first, SPI Master and interrupts
    are disabled.
    */
    SPIDevice(bool master, int selectPin);

    /*
    Sets the SPI Mode (0-3) which sets the clock phase and polarity
    Mode 0 = CLOCK_POLARITY_RISING_LEADS | CLOCK_PHASE_SAMPLE_ON_LEADING
    Mode 1 = CLOCK_POLARITY_RISING_LEADS | CLOCK_PHASE_SAMPLE_ON_TRAILING
    Mode 2 = CLOCK_POLARITY_FALLING_LEADS | CLOCK_PHASE_SAMPLE_ON_LEADING
    Mode 3 = CLOCK_POLARITY_FALLING_LEADS | CLOCK_PHASE_SAMPLE_ON_TRAILING
    */
    void setSPIMode(const int mode);
    
    /*
    Enables or disables interrupt-driven operation for the next operation.
    */
    void enableInterrupt(bool enable=true);
    
    /*
    Returns true if interrupts are enabled for the next operation
    */
    bool isInterruptsEnabled() const;
    
    /*
    Sets the bit order that will be used for the next operation.
    */
    void setBitOrdering( const BitOrder order );
    
    /*
    Returns the bit order that will be used for the next operation.
    */
    const BitOrder getBitOrdering() const;
    
    /*
    Sets the mode of the AVR. Set to true if the AVR is the master and 
    false otherwise
    */
    void setMaster( bool master = true);
    
    /*
    Returns the mode of the AVR for the next communication. 
    Set to true if the AVR is the master
    */
    bool isMaster() const;
    
    /*
    Sets the clock rate divider that will set the rate used to
    communicate on the SPI bus for the next communication.
    */
    void setClockRate( const ClockRate rate );
    
    /*
    Returns the clock rate divider that will set the rate used
    to communicate on the SPI bus for the next communication.
    */
    const ClockRate getClockRate() const;
    
    /*
    Returns the clock polarity that will be used for the next
    transaction.
    */
    const ClockPolarity getClockPolarity() const;
    
    /*
    Sets the clock polarity that will be used for the next transaction
    */
    void setClockPolarity(const ClockPolarity polarity);

    /*
    Returns the clock phase used for the next transaction
    */
    const ClockPhase getClockPhase() const;
    
    /*
    Sets the clock phase that will be used for the next
    transaction
    */
    void setClockPhase(const ClockPhase phase);
    
    /*
    Enables double speed mode on the SPI bus for the next
    operation.
    */
    void setDoubleSpeedModeEnabled( bool enable=true);

    /*
    Returns true if double speed is enabled for the next
    operation.
    */
    bool isDoubleSpeedModeEnabled() const;
    
protected:

    /*
    Returns the status of the interrupt flag from the SPI Status
    register. This indicates if the operation has completed.
    */
    bool getInterruptStatus() const;
    
    /*
    Returns true if a collision occurred on the SPI bus during
    the last operation. This is derived from the collision flag
    in the SPI status register
    */
    bool getWriteCollisionFlag() const;
    
    /*
    Sets up the SPI hardware ready for an operation.
    */
    void setup();
    
    /*
    Asserts the select line so the device knows we are talking to it
    */
    void setupSelectLine() const;
    
    /*
    Clears the select line because the transaction is finished
    */
    void clearSelectLine() const;
    
    /*
    Initiates a write operation with the byte specified
    Returns the value returned by the device
    */
    uint8_t writeByte( uint8_t byte );
    
    /*
    Reads a byte of data from the device
    */
    uint8_t read();
    
private:

    int             m_selectPin;
    BitOrder        m_bitOrder;
    ClockPolarity   m_clockPolarity;    
    ClockPhase      m_clockPhase;
    ClockRate       m_clockRate;
    bool            m_interruptEnabled;
    bool            m_master;
    bool            m_doubleClockSpeed;
};

Then implementing the DAC driver involves:

  • Calling the right methods to configure the clock phase, clock divider, master mode, bit ordering and so on 
  • When performing an operation the sub-class calls setup(), setupSelectLine(), then either writeByte() or read(), and finally clearSelectLine()
I had a few problems at first where nothing would appear on the bus (looking at the lines with the scope). It turned out that you have to make sure the SS pin is configured as an output, because otherwise the SPI hardware can interpret any stray signal as the SS line going low and will switch from master to slave mode.

The AD5689 has lots of different features but I don't really need any of them. The way it works is you write a one-byte control byte that specifies the command and which of the channels the command applies to, followed by two more bytes of data. As I just want the value to go straight out I used the 'Write to input register' command (command 1) and set the channels to update. I don't have MISO hooked up so no response is received.
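As a sketch of what goes over the bus: from my reading of the AD5686/AD5689 family format the 24-bit frame is a 4-bit command, a 4-bit channel address, then 16 bits of data MSB-first. The address value for "both channels" below is an assumption, so verify it against the datasheet.

```python
def ad5689_frame(command, address, value):
    """Pack an AD5689-style 24-bit frame: 4-bit command in the top
    nibble, 4-bit channel address in the bottom nibble, then the
    16-bit data MSB-first. Command/address encodings are assumptions
    taken from the AD5686-family datasheet - check before use."""
    return bytes([
        ((command & 0x0F) << 4) | (address & 0x0F),
        (value >> 8) & 0xFF,
        value & 0xFF,
    ])

# Command 1 ('write to input register'), both channels (address
# assumed to be 0b1001), mid-scale output
frame = ad5689_frame(1, 0b1001, 0x8000)
```

Each of the three bytes would then be pushed out with writeByte(); with MISO unconnected the returned bytes are ignored.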

New Toy

I've been eyeing one of these off for a while and I think for the money they are pretty much unbeatable: I recently bought a Keysight 34461A 6 1/2 digit multimeter. It has a big, clear LCD display and can display trends, histograms and stats. It has Ethernet connectivity! I contemplated buying a GPIB to USB converter for my other DMM but now I pretty much don't need to. The accuracy and resolution are excellent for what I want.

Amusingly it isn't total over-kill for my needs (I kind of expected it would be). Here is a photo of the meter watching the output of an AD780 voltage reference (I plan to use this with the ADC - the DAC has a built-in reference). As you can see the device drifted by less than 10uV over the 25 minutes I had it powered up! Impressive!



Accuracy and Linearisation

So the output of the DAC must be multiplied up by roughly 6 so that 0-5V controls 0-30V. The DAC doesn't quite make it all the way to 5V so the multiplication needs to be slightly more than 6.
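To put a number on "slightly more than 6" (assuming a 16-bit DAC with a nominal 5V span, so the top code 65535 lands one LSB short of 5V):

```python
# Nominal span and top-code output of a 16-bit DAC (5 V span assumed)
span = 5.0
v_full_scale = span * 65535 / 65536   # just under 5 V at code 0xFFFF

# Gain required so the top code reaches 30 V at the output
gain = 30.0 / v_full_scale
print(gain)  # a touch over 6
```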

As a simple first pass I set the DAC close to full scale and calculated a volts-per-step number. I then modified my software so I could set the output to a voltage and it would calculate the code using this hard-coded volts-per-step number. I found this was pretty inaccurate at various points on the range.
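That first pass amounts to a single-point calibration, along these lines (the measured voltage here is made up for illustration):

```python
# One measurement near full scale: set a known code, read the DMM.
# The 27.72 V figure is illustrative, not a real measurement.
cal_code = 65000
cal_voltage = 27.72

volts_per_step = cal_voltage / cal_code

def voltage_to_code(v):
    """First-pass conversion using a single hard-coded scale factor."""
    return round(v / volts_per_step)
```

The catch is that this assumes the transfer function is a perfect straight line through zero, which turned out not to be the case.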

The power supply is easy to control using the serial interface and the DMM can be driven from essentially telnet by sending SCPI commands. I knocked together this python script to sweep the power supply from 0.5V to 30V in 0.1V steps. At each step it pauses to let the voltage settle (more on this later) and then takes a measurement on the DMM. The PSU spits out the DAC code as debug and the python script gathers this up with the measurement from the DMM and prints it out.

import socket
import time

# SCPI connection to the DMM (port 5025 is the raw SCPI socket)
s = socket.socket()
s.connect(("192.168.1.37", 5025))
s.send(b"*IDN?\n")
print(s.recv(300).decode("UTF-8"))

# Serial connection to the PSU
tty = open("/dev/cu.usbmodem1411", "r+")

tty.write("ISet=2.0\n")
tty.readline()
tty.readline()

tty.write("VSet=5.0\n")
tty.readline()
tty.readline()

time.sleep(1.0)
voltage = 0.5

while voltage < 30.0:
    tty.write("VSet=%f\n" % voltage)
    tty.readline()
    count = tty.readline().split()[4]

    # Let the output settle before measuring
    time.sleep(10.0)

    s.send(b"MEASURE:VOLTAGE:DC?\n")
    print(count + " " + s.recv(300).decode("UTF-8"))
    voltage += 0.1

s.close()
tty.close()


I took the output of this and saved it as a CSV file and loaded it into Excel. Here is the graph of code vs voltage. It looks *very* linear with a very small offset.
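The same slope and offset can be pulled out without Excel using an ordinary least-squares fit, e.g. (synthetic code/voltage pairs standing in for the real sweep data):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - m * mean_x
    return m, b

# Synthetic data standing in for the measured sweep
codes = [1000, 10000, 30000, 50000, 65000]
volts = [0.43, 4.27, 12.81, 21.35, 27.75]
slope, offset = linear_fit(codes, volts)
```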



This doesn't explain the inaccuracies I was seeing however so I thought I would calculate the gradient between successive points and see how that looks.
Now the variation in the graph is pretty small, however it's apparent that:
  • There is a big change at the point where the PSU switches from the mode where the windings are in parallel to series.
  • There are some variations at low voltages. This could be a settling issue.
  • The gradient is *not* uniform.
My plan is to do a piece-wise continuous approximation. I created a class called Linearizer that looks like this:
class Linearizer
{
public:
 struct Point
 {  
  uint16_t code;
  float  value;  
 };
 
 static const struct Point ZERO_POINT;
 
 Linearizer(const Point *points, int numPoints);
 
 /*
 Calculates the code for the value provided by using
 the table of points provided in the constructor.
 */
 const uint16_t valueToCode(const float value) const;
 
 /*
 Calculates the value given the code provided by using
 the table of points provided in the constructor
 */
 const float codeToValue(const uint16_t code) const;
 
protected:

 /*
 Calculates the code by interpolating using the points provided.
 Uses point1 and point2 to calculate the gradient and then extrapolates
 using this gradient from the basePoint provided.
 */
 uint16_t interpolate( 
  const Point& point1, 
  const Point& point2, 
  const Point& basePoint,
  const float  value  ) const;
  
 const Point *m_points;
 int   m_numPoints;
};


The way it works is you construct the Linearizer with a table of code/value points that were measured from the device output. When you want to set the output to a specific value you call valueToCode and it searches the table for two points that bracket the value you want. It then linearly interpolates between these points to approximate the code you need to get that value.
If the value is less than the first point in the table it will interpolate between (0,0) and the first point. If the value is after the last point it will use the gradient between the last two points and will extrapolate from the last point to calculate the code.
Here is that algorithm in code:
const uint16_t Linearizer::valueToCode(const float value) const
{
 //
 // If the value is between zero and the first point
 // we interpolate between zero and this point and use Zero as the
 // base point
 if ( value < m_points[0].value )
 {
  return interpolate(ZERO_POINT,m_points[0],ZERO_POINT,value);
 }
 
 for(int i=1;i<m_numPoints;i++)
 {
  if ( value < m_points[i].value )
  {
   return interpolate(m_points[i-1],m_points[i],m_points[i-1],value);
  }
 }
 
 //
 // So the value is greater than the biggest point. In this case we extrapolate
 // from the last point using the gradient between the last two points
 //
 if ( m_numPoints < 2 )
 {
  return interpolate(
  ZERO_POINT,    // Only one point, so use zero as the other point
  m_points[m_numPoints-1], // Last point
  m_points[m_numPoints-1], // Last point
  value);
 }
 else
 {
  return interpolate(
   m_points[m_numPoints-2], // Second last point
   m_points[m_numPoints-1], // Last Point
   m_points[m_numPoints-1], // Last Point
   value);
 }
}
Here is the code to calculate the gradient between two points and then work out the final code by extrapolating from a base-point. Usually the base-point is point1, but when we are going beyond the last point in the table it is point2:
uint16_t Linearizer::interpolate(
 const Point& point1,
 const Point& point2,
 const Point& basePoint,
 const float  value  ) const
{
 float gradient = (point2.value - point1.value)/(float)(point2.code - point1.code);
 
 return round((double)((value - basePoint.value)/gradient + (float)basePoint.code));
}
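A quick Python port of the same lookup-and-interpolate logic is handy for sanity-checking a calibration table off-target (same algorithm, not the firmware itself; the table values below are illustrative):

```python
def value_to_code(points, value):
    """points: list of (code, value) tuples sorted by value.
    Interpolates from (0, 0) below the first point, between the
    bracketing points in the middle, and extrapolates past the
    last point using the final gradient."""
    def interp(p1, p2, base, v):
        # Gradient in value-per-code, then extrapolate from base
        gradient = (p2[1] - p1[1]) / (p2[0] - p1[0])
        return round((v - base[1]) / gradient + base[0])

    zero = (0, 0.0)
    if value < points[0][1]:
        return interp(zero, points[0], zero, value)
    for prev, cur in zip(points, points[1:]):
        if value < cur[1]:
            return interp(prev, cur, prev, value)
    p1 = points[-2] if len(points) >= 2 else zero
    return interp(p1, points[-1], points[-1], value)

# Illustrative table, not real calibration data
table = [(2185, 1.0), (21850, 10.0), (43700, 20.0), (65535, 30.0)]
```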

So the next step was to re-write my Python code to measure the output again, but this time in one-volt steps, and I got the script to print the results essentially as a C constant definition. I ran this, cut-and-pasted the output into my code, re-flashed the micro-controller and tried again.

I was disappointed by the result. It was always around 2mV off at lower ranges and as much as 5mV off at higher ranges, although at a few spots it was closer. I tried increasing the wait period to allow 60s for the voltage to settle before taking a measurement, ran the scan again and re-flashed the device, but this didn't make a difference.

Also on the settling time issue - I noticed that it can take as much as 20s for the voltage to settle after a change, and the settling time is longer when the voltage change is bigger. I suspect it is also longer when I cross the threshold where the bias supply switches between 24V and 40V, but I'm not certain.

So still lots more to do but progress nonetheless.


Wednesday, 22 July 2015

Printing/Reading floats on an AVR in Atmel Studio

As part of my lab power supply project I have been coding the control software that runs on the power supply module and drives the DACs/ADCs as well as other things. The plan is to provide a text-based control interface over USB using a USB to UART chip (MCP2200).

I need to print floating point values as strings and parse strings containing floating point values. Turns out this isn't as easy as I hoped!

I am using Atmel Studio 6 and (sensibly) by default it doesn't include code for formatting or parsing floats inside printf or scanf. The code to do this is large and would bloat applications that don't need it.

The way this works is you have to link additional libraries that contain versions of printf and scanf with the extra formatting code. So first of all you have to include the libprintf_flt.a and libscanf_flt.a libraries in the linker options: right-click your project, choose Properties, then select the Toolchain tab. Find the AVR/GNU Linker options and select Libraries. Click the green add icon in the top panel and add printf_flt (the linker adds the lib prefix and .a suffix). Do the same for scanf_flt. If you only need printf and not scanf support you can leave out whichever one you don't need.


When I first did this I found I got link errors like this:


e:/program files (x86)/atmel/atmel toolchain/avr8 gcc/native/3.4.1061/avr8-gnu-toolchain/bin/../lib/gcc/avr/4.8.1/../../../../avr/lib/avr5\libm.a(mulsf3.o): In function `__mulsf3':
(.text.avr-libc.fplib+0x2): relocation truncated to fit: R_AVR_13_PCREL against symbol `__fp_round' defined in .text.avr-libc.fplib section in e:/program files (x86)/atmel/atmel toolchain/avr8 gcc/native/3.4.1061/avr8-gnu-toolchain/bin/../lib/gcc/avr/4.8.1/../../../../avr/lib/avr5\libm.a(fp_round.o)
  
I searched for quite a while before I found that the problem is you have to have libm at the BOTTOM of the list. Then it links fine! Thanks for that, Atmel!

Then you have to tell the linker to use the float version of the function. For printf you go to the General group under AVR/GNU Linker and select the last option, 'Use vprintf library'.


For some reason they didn't include a checkbox for scanf. To add scanf you go to Miscellaneous group under the linker options and add the option -Wl,-u,vfscanf manually.
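For reference, the equivalent avr-gcc command-line flags (as documented by avr-libc) are:

```
-Wl,-u,vfprintf -lprintf_flt -lm    (full printf with float support)
-Wl,-u,vfscanf  -lscanf_flt  -lm    (full scanf with float support)
```

The 'Use vprintf library' checkbox is just the GUI version of the first -Wl,-u option; the -lm has to come after the float libraries, which matches the link-order problem above.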



Then you should be able to use printf and scanf with the %f format specifier and it will work correctly.

Why do they make it so hard?

Wednesday, 1 July 2015

Atmel Studio 6.2 and atmega328p Serial Comms

As part of my ongoing Lab Power Supply project I thought it was time I learned to use Atmel Studio. While Arduino is pretty quick to get things going, I really started to hit its limits in the dummy load project. Just managing lots of windows gets hard, and I ran into problems when I tried using microprocessors not supported by the IDE.

I also want to experiment with JTAG ICE debugging and I think Atmel Studio will make this much easier than other approaches (such as using WinAVR and the GDB command line).

My plan was to experiment with using a USB to UART chip in conjunction with an isolator chip to create a galvanically isolated USB port for the power supply.

Atmel Studio 6.2

I downloaded Atmel Studio from the Atmel web site. It is a huge download at just over 500MB. Then it needed to download and install the Visual Studio IDE. Finally the thing installs and the first thing it does is pop up and tell me there is an update to the ASF (Atmel Software Framework), which then downloads another 140MB or so.

I'm quite familiar with Visual Studio as I use it for C++ development during my day job. Atmel Studio felt pretty similar. I was able to quickly create a C++ project for an ATMEGA328P. Weirdly it pretty much immediately warned me that some of the ASF won't work in a C++ project and I should use C. Undeterred I pushed on. 

I tried the ASF thing but found it pretty confusing. Before you can begin you have to choose a board. I am basically using the micro in a breadboard so I wasn't sure what to choose. There was a generic 'user board template megaAVR' board option so I went with that. The list of libraries in the ASF wizard is pretty confusing too - things that would have been there out of the box in Arduino appear to be libraries. For example there is an ADC driver, a Delay routines service, a GPIO service and an IOPORT service (no, I don't know what the difference is either). Each of these opens a little folder, has a link to documentation and shows you the headers for that option.

Amusingly there is a unit test framework which appears to be CUnit. I later found you have to use this in a separate project from your target project. Unit testing would be very nice.

A kind way of describing the user documentation would be 'sparse'. I haven't yet discovered the sample code (if there is any). Basic tasks often require direct access to registers etc.

When I have gone looking I found that things which are basic in Arduino aren't so basic in Atmel Studio. Also you lack lots of library support for things and people tend to code their own. It will be much more work using this but I am hoping it is worth it. Also there is an option to use the Arduino libraries from within Atmel Studio but I haven't explored this yet.

I compiled the empty project, set a breakpoint and hit F5. It popped up a dialog asking me to choose the target but the only option was a simulator. Running in a simulator is nice for unit-testing I suppose. This seemed to work as expected and the debugger seemed like it would be useful if the project actually did something.

Programming the Chip

Unfortunately Atmel Studio only comes as a Windows tool so I am running it on a Windows 7 VM under Parallels on my Mac. Initially I plan to use a USBasp programmer but later will try out a JTAG programmer.

Annoyingly Atmel Studio doesn't support this programmer directly, so you have to configure avrdude as an external tool and use that to program the chip. I installed WinAVR, which adds avrdude to the path. The command line options are described quite well here

I found this would not detect my USBasp device. I had to go to this site, download the drivers and then right-click the device in Device Manager and update the drivers from the downloaded ones.

To add avrdude to the Atmel Studio project you go to tools -> external tools and add something like this (see image). First browse to the avrdude exe, then for the parameters add:

 -c usbtiny -p atmega328p -F -U flash:w:"$(ProjectDir)Debug\$(ItemFileName).hex":i

De-select the 'close on exit' check box as otherwise if something goes wrong you won't see the error. Here is what it looks like:


Serial

My plan is to eventually control each Lab PSU channel using text commands sent over a USB->UART bridge. For now I just want to be able to print something. On the Arduino this is easy but pretty basic.

I tried searching the documentation and while there were links to USART stuff it all seemed to be related to specific board configurations I wasn't running. Apparently there is a ReMORSE example app somewhere for the ATMEGA328P but I am buggered if I can find it.

I Googled for examples and found that to make serial work you essentially have to provide functions for getting or putting a single character on the serial interface and then wire these into a FILE structure. From there you can replace the stdin or stdout globals with your own, so when you call printf() or scanf() etc. it uses the serial port.

The example code I found would basically do this:

FILE uart_io = FDEV_SETUP_STREAM(uart_putchar, uart_getchar, _FDEV_SETUP_RW);

This uses a macro to set the fields of the FILE structure during initialization. When I did this I got this error:

C:\Users\fred\Documents\Atmel Studio\6.2\LabPSU\LabPSU\LabPSU.cpp(33,18): error: sorry, unimplemented: non-trivial designated initializers not supported

It turns out that this macro uses a language feature that never made it into the C++ spec and hence you get this error. I found there was a function that did something similar however so I could do this instead:

    FILE uart_io;
    memset(&uart_io,0,sizeof(uart_io));
    fdev_setup_stream(&uart_io,serialWrite,serialRead,_FDEV_SETUP_RW);


Now I needed functions to emit or consume characters from the serial interface. Most of the code I found on the net did this with busy-wait loops, which seemed positively medieval. I found a library by Peter Fleury here that implements putting/getting characters on the serial interface using interrupts and a ring buffer. This seemed much better!

I had to create bindings to allow this to interface with the functions expected by the FILE structure, so I created a pair of functions like this. I'm not worrying about errors as I don't think there is anything that can be done anyway:

int serialWrite(char c, FILE *fp)
{
    uart_putc(c);
    return 1;
}

int serialRead(FILE *fp)
{
    return uart_getc();
}



Then I registered these using the snippet above. Once that is done I can do things like this and the output should turn up on the serial port.


    int i=0;
    
    while(1)
    {
        printf("Testing... %d\r\n",i++);   
    }

When I first compiled this and programmed it onto the chip nothing happened. No output and no signs of life. I ended up creating a blinking LED program just to make sure I was programming the chip correctly and this worked.

Eventually I found another copy of this library that included an example program. In it the author included avr/interrupt.h and called sei() before using the UART functions. I added this code and it worked!

Note in the code example above I am printing \r\n, as for whatever reason without this the output moves to the next line but doesn't return to the start of the line. Given the output is going to a Mac I would have thought just \n would be fine. Anyway, no matter. This looks like a starting point.

Isolated Interface

The bit I didn't mention is my hardware setup. I plan to use an MCP2200 USB bridge chip from Microchip. I bought a breakout board for one of these from RS (for like $20! Cheap!). Initially I just connected it to a computer (my Mac) and typed characters at it while watching what happened on the RX port. My Mac already had a driver for it so I just plugged it in and used screen to connect to /dev/tty.usbmodem1411. That was pretty easy.

There is some software from Microchip for configuring the device but this is Windows only. I installed the drivers onto my Windows 7 VM and ran the tool. I configured some of the GPIOs to blink LEDs and this seemed to work. Overall this looked really good.

Next I connected my microcontroller to it and used the VDD and GND pins to power the micro controller. This worked and I could see the test serial output from the micro controller via the screen tool.

Now for this to work in my lab power supply it needs to be electrically isolated from the USB power. I bought one of the Analog Devices ADuM1201 chips from RS. They are relatively cheap and very simple: you have power and ground pins on each side, and an in and an out pin on each side. The two sides are isolated from each other but the data signals go through.

I powered one side of the chip from the MCP2200 breakout and the other side from a bench PSU (my Jaycar PSU configured for 5V). When I power up the bench PSU I can see the comms on the screen tool. If I put my scope on the pins I can see the traffic. Interestingly if I connect the ground to the opposite side from where I am connecting the signal the voltage floats around as you would expect.

This looks super easy and I think is the way to go.