Wednesday, 26 April 2017

Two repairs

So I recently did two more repairs although neither was terribly interesting and I didn't take enough photos. Still worth a quick entry though.

Sharp JH1600E Solar Inverter

I am a member of a small sailing club and a few years ago the government were offering what was probably a way too generous incentive for people to install solar panels. They would pay 66c per kWh for the electricity you produce, whereas here in Australia we pay around 25c when buying the same power off the grid. I signed up the sailing club and the unit paid itself off in around 18 months, then continued to pay healthy dividends for the next few years.

The scheme ended last December and now they only pay 6c per kWh for electricity produced, but we now have the option of changing to a net metering system so we can consume the power we produce and save 25c per kWh consumed.

The inverter also went out of warranty at about the same time and then within a few weeks of this it failed. We got quotes of between $350 and $450 for repairs but when we earn so little from the system this becomes a major outlay. Interestingly these units seem to fail often and there are a lot of broken ones on ebay. The company that repairs them seems to be doing a good trade.

I decided to have a shot at it. The concern was that if I got it wrong the unit could catch fire and damage the sailing club (which is empty most of the week, so nobody would notice until it was too late). Trusting that the circuit protection would save the club, I had a go anyway.

There is a service manual for the unit here but it only contains a block diagram and no schematics. As you'd expect, the block diagram shows a DC boost converter (to step up the DC voltage coming in off the solar array) feeding a full-bridge converter. There is loads of EMI protection (both DC and AC) plus transient protection. The system can monitor the DC voltage and AC voltage as well as the power being pushed back onto the grid. There are loads of temperature sensors so it can presumably shut itself off in case of trouble.
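
As a rough sanity check on why the boost stage is there (this is my own back-of-envelope, not something from the manual): the DC bus feeding the full bridge has to sit above the peak of the mains before the bridge can push current into the grid.

import math

# Back-of-envelope only: why a boost stage is needed ahead of the full bridge.
# Assumes 230V nominal Australian mains; the actual bus voltage the JH1600E
# uses isn't given in the block diagram.
grid_rms = 230.0
grid_peak = grid_rms * math.sqrt(2)   # ~325V
print(f"Grid peak is about {grid_peak:.0f}V, so the boost stage has to lift")
print("the panel string voltage above this before the bridge can do its job.")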

The unit failed after a thunderstorm. The AC breaker had opened and the system reported an F01 error indicating it didn't have a grid connection. Restoring the breaker didn't fix it so I dug deeper and found there is a 20A fast blow HRC fuse between the breaker and the AC output of the unit and this had opened. I ordered some of these, replaced the fuse and the unit then detected the grid and began its sequence to connect to the grid. As it got towards the end it made a loud pop, opened the AC breaker and destroyed the fuse again.

My theory was that a MOSFET in the full-bridge converter had failed short, either due to heat (it was *really* hot here last summer, with days as high as 47C) or a line transient.

Taking the unit apart wasn't much fun. You have to unscrew the end caps, undo the bolts holding the lower section on, undo the screws holding the outer extrusion on and lift it off. Then you are looking at the bottom of the board so you have to unscrew the heatsinks from the back panel and lift the board out (which is insanely heavy).

There is a digital board on top that interfaces with the buttons, generates the warning lights and drives the LED numerical display. The main board has a small daughter board that contains the controller for the main inverter, but this is hard to access and soldered in. The main board is also conformally coated, which made even getting continuity readings difficult.

After some poking around with a DMM I found one MOSFET that was a dead-short. Even though it was screwed to a heatsink I had to desolder the four MOSFETs on that heatsink and unscrew the heatsink from the board to get it out as there was no clearance to get a screwdriver onto the transistor.

Here you can see the heatsink desoldered and unscrewed. In the foreground are the heatsinks for the AC EMI filtering, the DC boost converter and... actually I'm not sure what the other one is. There are lots of fist-sized transformers for filtering, some massive capacitors for the DC coming in (left side) and loads of MOVs inside little sleeves complete with thermal fuses.

The transistor was a SPW47N65C3 from Infineon. Unfortunately it wasn't carried by RS or Element14 so I had to get one from Xon (via ebay) for an eye-watering $28.

There wasn't really a safe way to test this and in fact it was going to be difficult troubleshooting the unit unless I partially re-assembled it. I couldn't find anything else that looked suspicious, so I took a chance and put it back together with the new transistor. Sure enough it worked fine (with a new fuse), so the repair was a success.

The only annoying thing was I somehow lost a spacer for the selection button and haven't been able to get that working again. Given how much effort it takes to disassemble and reassemble the device I decided I would fix that another day.

DENON DCM-270 Five Disk CD Player

This was my sister's CD player from the 90s that started to skip and have problems playing CDs. It progressively got worse until it wouldn't play CDs at all. When I tested it, the unit would spin up the CD, click the focus a few times and then stop and move on. It couldn't even read the table of contents.

I found a service manual for the unit complete with schematics and exploded diagrams. The schematic was truly awful - lines everywhere, laid out so illogically that the whole thing was really hard to follow. The device uses a bunch of special-purpose ICs for controlling the motors, processing the signals from the CD mechanism and decoding the audio. Thankfully the manual shows internal block diagrams for these so you have some clue what is going on.

First off I checked the power supply and inspected the board. There was a big hot spot around the -26V supply and signs of corrosion near a capacitor. The capacitor measured OK in circuit and the voltages looked OK, so I moved on (more on this later).



From some googling I discovered that the usual failure mode of these devices is the laser sled. They use a pretty common Sony KSS213C mechanism and these are available all over ebay. I ordered a replacement and parked the troubleshooting.

When the replacement mechanism arrived I found that it worked even worse than the original! The replacement mechanism didn't even spin the disk, whereas the original at least tried to read. I checked the motors and they were fine (I could make them go with my bench supply).

I started looking at the Sony chipset used to drive the CD mechanism and decode the signals. Meanwhile I was trying to understand how the mechanism worked and what was going on.

A Quick CD Player Primer

So the CD mechanism had connections for:
  • The disc motor
  • The sled motor (to move the laser)
  • Laser focus
  • Laser power
Then it had a bunch of other connections labeled A-E. It turns out these are opto-diodes and the way they work is quite clever. The opto-diodes are arranged as follows (see below - this was stolen from here).
                     |<------ photodiode array ------>|
                               +---+---+
   ---------_________ +---+  +-| A | B |-+  +---+
   Track--->          | E |- | +---+---+ | -| F | ________
                      +---+  | | C | D | |  +---+         ---------
                        |    | +---+---+ |    |           Track--->
                  /|    |    |   |   |   |    |
    Focus       / +|----|----+---|---+   |    |    
    Error o---<    |    |    |   |   *   |    |    |\
    (A+D)-(B+C) \ -|----|----|---+-------+    +----|+ \       Tracking
                  \|    |    |   *       |         |    >---o Error
              FE Amp    +--------------------------|- /       (E-F)
                             |           |         |/ TE Amp
    * Since the photodiodes  |           |           
      are current sources,   |           |         |\
      the simple junctions   |           +---------|+ \       Data Out
      implement a sum.       |                     |    >---o RF Test Point
                             +---------------------|- /       (A+B+C+D)
    All Amps: current mode inputs.                 |/ DO Amp

So there are 6 opto-diodes in total (A-F). The four centre diodes are used both to read the disk and to keep the laser focused on it. The laser optics cause the laser spot to become elliptical along one or other diagonal depending on whether the focus is too close or too far. By comparing the power at A and D with the power at B and C you can tell if one diagonal has more brightness than the other, which tells you whether you need to focus in or out.

The focus mechanism has to be accurate to within 1um (!!), and this is while the disk is spinning at 500RPM and could even be a little warped (spin unevenly).

The E and F sensors are placed such that one is inward of the current track and one outward. By comparing the power detected by E and F the mechanism keeps the laser centred on the track. The track is a continuous spiral over the whole CD. The data on the disk is read by summing the four centre detector outputs (A-D).
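
Putting that together, here is a minimal sketch of how those photodiode sums become the three signals shown in the diagram (illustrative only - in the real player these are analog current sums done by the amps above):

# Illustrative only: forming the servo/data signals from the photodiode
# currents, exactly as per the diagram above. In the real mechanism these
# are analog current sums, not software.
def cd_signals(A, B, C, D, E, F):
    focus_error = (A + D) - (B + C)   # diagonal imbalance -> focus in or out
    tracking_error = E - F            # side sensor imbalance -> sled/tracking
    rf_data = A + B + C + D           # total light -> the data (RF) signal
    return focus_error, tracking_error, rf_data

# Example: spot stretched along the A/D diagonal (focus slightly off)
print(cd_signals(A=1.2, B=0.8, C=0.8, D=1.2, E=1.0, F=1.0))   # (0.8, 0.0, 4.0)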

The encoding of the disk data is such that by tracking the signal edges you can determine the speed of rotation. The system uses this to manage the rotation of the CD and keep the linear velocity constant (using a PLL).
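
Incidentally, the 500RPM figure above falls straight out of the constant-linear-velocity requirement. A rough calculation using standard CD figures (roughly 1.2-1.4 m/s linear velocity over a program area from about 25mm to 58mm radius - these are generic numbers, not from the service manual):

import math

# Rough numbers only: standard CD linear velocity and program area radii.
def rpm(linear_velocity_m_s, radius_m):
    return linear_velocity_m_s / (2 * math.pi * radius_m) * 60

print(f"Inner track: ~{rpm(1.3, 0.025):.0f} RPM")   # ~500 RPM
print(f"Outer track: ~{rpm(1.3, 0.058):.0f} RPM")   # ~215 RPM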

CD Chipset and Debugging

The CD player uses a Sony CXA1782BQ for the RF signal processing and CD mechanism servo control and a Sony CXD2500BQ to handle the signal processing of the CD data.

I figured out that the CXA has a FOK (Focus OK) output. With the original mechanism in the CD player this would go high while it tried to read the CD, but with the new mechanism it didn't.

I assumed the new mechanism was broken and went back to working with the original. I was trying to understand what was going wrong, as the motors were clearly working and I could hear the laser focus operating.

I found there is an RF_O signal that is effectively the processed data signal coming off the CD. This is supposed to be very closely synchronized, and you should be able to measure the jitter in the CD speed control by triggering the oscilloscope on either the positive or negative slope and viewing an 'eye' diagram of the signal. In the case of this CD player the signal was a total mess. You could see the waveform get shorter as it sped up the disk to try and lock onto the signal, but it was as though the waveform was moving up and down (i.e. ripples at the top then the bottom).

I looked at the FE (Focus Error) output but this too was pretty messy and it wasn't clear if it was failing to focus, failing to track or what. I wasn't sure if the problem was the mechanism or something else.

Mechanism

So I thought maybe the laser power was too low and this was why it couldn't get a good signal. I read that you can't really adjust the power without a power meter, since if you even slightly overdrive the laser diode it will fail in a matter of hours. I took a risk and turned up the power on the new mechanism, but it responded exactly as before (no focus).

Doing some more reading I found out that new mechanisms are sold with a solder bridge across a critical spot on the board. This is to prevent damage from electrostatic discharge through the board during transit. I found this guide explaining where the bridge is located and how to remove it. With some excitement I removed the bridge on the new unit and installed it in the CD player. Unfortunately, while it now spun the disk, it still didn't read the CD. Basically it behaved the same as the original mechanism.

There is a FE (Focus Error) bias adjustment pot on the board and I tried changing this. I noticed that if I adjusted this I could make the problem worse but not better. When it was worse the player wouldn't spin the disk or would give up reading the TOC sooner.

Back to the Power Supply

Removing and re-fitting the laser unit is quite difficult and requires the removal of a number of connectors from the main PCB and removal of the CD loading mechanism from the chassis (as you can only access the laser unit from the bottom). I had done it a few times now and at some point I must have left the power connector for the front panel off and when I re-connected it the VFD didn't come on.

I decided I needed to tackle this. The VFD supply is powered by a separate transformer tap but is referenced to a -28V supply generated from another transformer tap. After a bit of poking around I realized this was missing. The -28V supply is generated with a pretty dumb linear supply consisting of a PNP transistor and a 27V zener diode. It turned out the zener was a dead short in both directions so I replaced it. This brought the supply back and now the VFD worked again.

The CD reading still didn't work however. While looking at the RF_O signal I noticed that even when it wasn't reading a CD there was a bit of noise. The noise looked like a 2MHz signal but only at about 200mV. At some point it dawned on me that this is actually a pretty significant proportion of the RF signal amplitude and is probably why the signal is so messed up.

I then proceeded to go hunting for the source of this noise. I think in part I had ignored it before because I assumed it was just bad grounding, but even with the probe ground point close to the power supply section the noise was loud and clear.

The CD player has +14V and -14V rails, an 8V rail generated by a monolithic regulator, a 5V rail generated from the 8V by the motor driver chip, and the -28V supply I fixed above.

The corroded cap in the image above was part of the -28V circuit so I decided I would replace it anyway. I pulled it out of circuit and the ESR measurement was through the roof at over 10K, plus the capacitance was way off what it should be. I replaced it but the noise was still there (maybe a little less).

I found the 8V rail was the most noisy and the 5V rail was also pretty noisy. There is a transistor, controlled by the motor driver, that generates the 5V line and this has an output capacitor. It tested OK in circuit but I replaced it anyway. When I pulled it out of circuit the capacitor measured way off spec, again with a very high ESR.

Replacing this capacitor fixed it! The RF_O signal now looked as it should and the CD player played CDs! I stepped through tracks and left it running for about an hour and all was well. I replaced a few more capacitors that looked suspicious but overall this was the extent of the repair.

So at least I learned a few things about CD players. I now have a spare Sony CD mechanism too.



Monday, 10 April 2017

R&S Give Away

Count me in for the Rohde and Schwarz RTB2000 Oscilloscope giveaway! And here is why!

MJLorton is giving away an RTB2000 oscilloscope on his channel, so in part this post is a response to that, but it is also a bit of a whinge about Keysight.

MSOX2000 Gripes

I have a beautiful MSOX2024 that I've had for two years or so. I bought it as a refurb unit and added a couple of options (expanded memory, I2C/SPI decoding). It has a super fast waveform update rate, it's really nicely built and pretty easy to use.

There are a few things I really wish the scope did:


  • Segmented memory (history R&S call it) so I can capture multiple serial or analog events over time and see each one.
  • General serial decoding. When I was working on my power supply project I could have really used this to decode serial problems caused by dodgy soldering of fine pitch chips!
  • Decoding of serial signals on the digital ports. I don't get why this limitation exists and this is pretty annoying.
  • LAN interface so I can automate some measurements.
So the first two things are additional software options I don't currently own. The list price for segmented memory is $AUD450 and for RS232 decoding it's $AUD737. That's nearly half what I paid for the scope!

There is an 'Application bundle' that includes everything for $AUD1820 but there is no discount for the options I already purchased.

Only having serial decoding on the analog channels is a real PITA. The problem is that if you are trying to decode a bidirectional SPI signal you have used up all your analog channels and can't look at anything else. At this point you may as well be using a cheap USB logic analyzer, as your correlation with the analog signals has gone out the window. You can cheat to some extent and use a clock timeout instead of a dedicated CS line, but in my case there were multiple SPI devices on the bus so that wasn't going to work either.

The digital probes also make much more sense for this type of work as the leads are fine and better suited to attaching to a wedge or the legs of chips etc.

The only way to get serial decoding on digital channels is to upgrade to a MSOX3000. The cheapest one I would consider is a MSOX3014 and these start at $AUD10K. I found a re-furb with no probes for the bargain price of $AUD6500. They are a very nice piece of kit but this is well out of hobby territory. The application bundle is an eye-watering $AUD4500 (but it does include a lot of stuff).

The cost of the LAN card for these scopes is insane - $AUD400. Ok it has a VGA adaptor too but I didn't actually want that. I managed to work around this as many people have reverse engineered the LAN interface and provided PCBs. I got a PCB, assembled it and got around the problem.

While on this topic there is a rich thread over at the EEVBlog forum on how to hack this scope (via the LAN interface) to enable all the software options. Based on the prices above you can see why!

R&S RTB2000

I've been watching a number of reviews of this scope by MJLorton, Dave Jones at EEVBlog and Mike's Electric Stuff.

What most impressed me was:
  • The big, clean hi-res interface. I love the numbers on the graticule!
  • 10 bit converter and therefore the nuts low signal performance. I've been using my old analog scope for this as the lowest I can go with the MSOX2000 (and a x10 probe) is 10mV per division.
  • 16 digital channels, multiple serial decodes on digital or analog channels.
  • 'History' (segmented memory)
  • LAN!
The cost is still steep but element14 are offering the fully-loaded model for $AUD5K, which is comparable with the MSOX2024 with no software options.

An active probe interface would have been nice - not for high-speed differential probes, but for current probes or high-voltage differential probes etc. It would also be nice if the scope makers got together and standardized this! FFS!

Conclusion

Maybe I will get lucky and MJLorton will post me a scope but in all likelihood I'll be waiting for another second hand unit to come by.

I get that a lot of engineering goes into these units and they need to earn a crust but I still find their pricing structure steep.

Some of these options aren't really options though - are you going to buy a scope with 100K points of memory per channel? Or no LAN interface? I wouldn't (not again anyway), so really listing the memory upgrade etc. separately is just a way of hiding the real scope price. Serial decoding is another example.

Surely there is some value in putting their kit in the hands of home-gamers like me (as AvE calls us) as one day we might buy a lab full of their gear (most hobby electronics people are students but some might be one-man shops that eventually go-big with a product). They don't though - they assume we are going to spend as much as a university and so instead we go buy Rigol or Siglent and make do.

Maybe the 2000 wasn't the right purchase for me.

Tuesday, 7 March 2017

GW INSTEK GSP-827 Spectrum Analyzer Repair

I was lucky to find a GW INSTEK GSP-827 for sale in Australia. The advert said it had a screen issue and didn't pick up any RF signals. I managed to buy it quite cheaply (or cheaply enough that I felt I could get it past SWMBO).

It's not a particularly high-end unit. For starters it has a monochrome screen, and its RF specs aren't stellar (phase noise, sweep rate etc). It does however go from kHz up to 2.7GHz and, importantly, it has the tracking generator option fitted.

How it came

The unit was packed really well and the whole thing was really clean and tidy. There was a sticker on the lid listing a capacitor and a couple of resistors that had been replaced. These turned out to be on the motherboard.

The seller was a nice chap and explained that he bought it at auction from a College of Technical and Further Education (TAFE as they are known here). He hadn't gotten around to repairing it and had other (better) instruments so decided to sell it as-is.

The unit was as he described: it powered on, the bottom half of the screen was messed up and, when I adjusted the trace so I could see it at the top of the screen, I couldn't get it to respond to an external signal. All the functions I could see seemed to respond, and when I enabled the tracking generator I could see a signal on my scope at the tracking generator output. There was a 10MHz reference out at the back and when I brought up the system config screen (in the bit I could see) it said that all three LOs were locked.

My guess at this stage was (a) the screen issue was a due to a hot-bar attachment on the LCD and (b) the unit was deaf due to having been overloaded by a TAFE student.

Screen Repair

I started taking the unit apart and was pretty impressed by the construction. It is a motherboard with a series of plug-in boards and the whole thing is pretty tidy. Every screw goes into a threaded hole and everything is nicely tied down.


I decided that repairing the screen would make repairing the rest easier so I started there. The screen looked like this when I ran the unit (at this point I had taken the screen out).


I took the diffuser and the backlight off the back of the LCD and looked for bad joints etc. I tried heating (with hot air) and pressing the screen down, but none of this seemed to make a significant difference. At a couple of points the screen did change as I was pressing on the board at the edges, which made me think it must be the screen.

I couldn't understand how it could be anything but the screen since only half was not working. I felt that if it was a cable issue then all of the screen wouldn't work. 

One confusing thing was that the date/time at the top of the screen was messed up - the year was a big number and the last digit of the seconds would count up to 9 then turn into a '>' symbol.

Do we have a signal?

The LCD signal is generated on the DSP board of the analyzer. The EPSON chip at the top left is a S1D13705. There is an SRAM right next to it so I suspected the problem might be with that. There isn't an easy way of determining this however. There is an LVTHR16245 between the EPSON chip and the LCD connector, which I think is just a level translator/driver; the board has loads of these at each interface. The nearest datasheet I could find for the EPSON chip was for the S1D13700, which I thought might cover the whole family, and this described the signalling to the LCD.



While we are here I thought I would mention the red/black wires at the top of that photo - these go to a coin-cell battery in a holder screwed to the case above the power input. There is a Dallas Semiconductor DS1689 that keeps the current time and holds a small amount of NVRAM, which is no doubt where the options configuration is stored! I really didn't want to lose my tracking generator option so I was careful not to short this thing. No idea why an instrument this modern would use this ancient crap instead of EEPROM.

I ordered a replacement EPSON chip as I knew these take weeks to come from China but I decided to try and analyze the signals going to the LCD to see if there was data on the bottom half.

The pin pitch was too fine to directly attach logic probes and there was a series of resistor packs between the driver and the LCD so instead I soldered wire-wrap wires to these.


It's a bit of a mess and hard to connect up when the DSP is in the box as I can't use the microscope. It looked like this connected up.

There are 12 lines but I can only connect 8 as that is how many logic channels I have on my scope.

The EPSON datasheet described the signalling as:
  • FPFRAME that is pulsed once per frame start
  • FPLINE pulses once per line
  • YSCL which clocks once per pixel
  • FPDATA[0-3] which is a 4 bit data bus containing the grey level for each dot.
There is other stuff like a method of disabling the display etc.

I wasn't sure what line was what but after connecting some up I found the lines described above. Here is a snapshot showing the timing of the FPLINE pulse. As you can see the frame-rate is around 77 frames per second.


If we zoom in, here is the timing of the line pulse. 


And finally zooming further we can see the timing of the pixel clock pulse.

After I was finished being chuffed by how my fancy scope can hold all of this in memory at once, I started looking at some of the timing information. The clock pulse period is 320ns and the line pulse period is 53.4us - that works out to 166.8 dots. This doesn't make sense, but reading the EPSON datasheet a bit further it turns out YSCL only clocks once every 4 pixels, which gives us about 667 - near enough to 640 dots.

Then the frame period is 12.9ms and the line period is 53.4us, which works out to 241 lines. This isn't nearly enough - in fact it is roughly half of what I expect.
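
Just to reproduce that arithmetic from the measured periods:

# Reproducing the arithmetic from the periods measured above.
pixel_clock_period = 320e-9   # YSCL period
line_period = 53.4e-6         # FPLINE period
frame_period = 12.9e-3        # FPFRAME period

clocks_per_line = line_period / pixel_clock_period    # ~166.9
dots_per_line = clocks_per_line * 4                   # YSCL covers 4 dots -> ~667
lines_per_frame = frame_period / line_period          # ~242

print(f"{clocks_per_line:.1f} clocks/line, ~{dots_per_line:.0f} dots/line")
print(f"{lines_per_frame:.0f} lines/frame at {1/frame_period:.1f} frames/s")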

Next I wanted to know whether, if I watched the data at the end of the frame (i.e. just before the frame pulse), I would see changes. The data doesn't change much unless you adjust the amplitude, so there is something going on in that bit of the screen (which is tricky as you can't see it), but I managed to do it and sure enough the data did change (see video below). So that means it has to be the LCD panel, right? We clearly have data for the bottom half of the screen.




video

Replacement LCD


I decided I would order a replacement screen. The screen has a bunch of numbers on the back, including LM64K112, but the datasheet for these is pretty brief and mostly just mechanical specs. Some places that sell these list them as 320x200, which makes no sense as there is no way the screen resolution is that low.

It also had a part number of LTBGCHB91J1K and I was able to find a screen with this part number on Taobao for a reasonable price. All the matching screens on ebay were $120-$200AUD which is a significant amount of the price I paid for the unit!

I ordered the screen and waited. Taobao was a bit of an adventure actually, since it is all in Chinese. I had to use Google Translate at every step to figure out what was going on. Amusingly, Taobao roughly translates as 'treasure network' and they refer to items as treasure (baobei). The funny thing about that is Google translates this as 'baby', so every screen is filled with stuff like 'baby arrival' and 'baby attributes' etc.

The screen came and with some excitement I plugged it in and... it still didn't work! In fact when I first plugged it in it didn't work at all (totally blank) which turned out to be significant later.

So what is it?

At this point I was really stumped. The last number on the back of the LCD was HDM6448-1. I had searched on this before, but this time I found a datasheet! The interesting thing is this LCD has the same control lines I described above but has two sets of bus lines for the pixel data - one for the upper half and one for the lower half. Also, now that I knew which pins were which, it was easier to trace. This also explains how half the screen could fail.

But the S1D13700 datasheet didn't mention any of this. I searched a bit harder and found there is a datasheet for the S1D13705 itself. It turns out this chip can support either 8-bit colour displays or monochrome 'dual' displays like the one in the analyzer.

So I checked the signal on one of the lower-screen data lines at the EPSON chip and sure enough I could see a signal. Then I repeated this at the LCD connector on the DSP board and I could see a signal there too. Finally I checked the signal on the LCD panel itself and there was no signal! One line sometimes worked but three didn't.

I pulled the flat-flex cable off at both ends and checked its continuity - sure enough some of the lines were dodgy. Face-palm! All this work and it was a bad cable! Under the microscope you could see the break right at the fold in the cable, near where it is stripped back to go into the connector.

I spent a long time looking for a replacement cable and the nearest I could find was expensive and in the UK. I thought I would have to buy it, but before I did I tried soldering the broken wires back together. This was a disaster as it melted the cable and caused the broken bits to float off and stick to my iron etc.

My next idea was to trim off the end (at the break) and *very* carefully scrape the insulation off to reveal new conductors. This worked pretty well and looked ok. I plugged it into the unit and guess what!


RF Problem

So given that the local oscillators were all fine and the tracking generator was fine, I thought the problem was likely to be at the front-end of the instrument. This part of a spectrum analyzer is highly susceptible to damage from overloading, and the unit did come from a teaching lab.

I pulled the RF module out - it has a billion screws holding a cover onto a diecast frame, which is held in with another billion screws from the PCB side.


I removed the covers to see the components.


I was greeted with the usual RF voodoo in the form of distributed element filters, RF absorbing pads and lots of MMIC amplifiers and hybrids.

It's pretty easy to follow as each little box is a different circuit stage. The front-end consists of a capacitor to couple the input, followed by some resistors and some very fast diodes. The diodes conduct if the input power is too high and the resistors limit the current.

The diodes are a likely source of failure but they are hard to check in circuit. They are also under the edge of the first box, which makes it doubly hard to test them and impossible to remove them. To make matters worse, when I removed the bottom-side screws to remove the die-cast frame I realized I couldn't, because the N-type input connector is threaded through the frame and soldered onto the board!

So I broke out the soldering iron and braid and had to de-solder the damn thing to get started. I decided I wasn't doing this again so I cut a slot in the die-cast frame so I could get it on and off in the future (Grrr!). Admittedly this is the only part of this unit's physical construction that annoyed me so far.



So here is the front-end circuitry out of the can - first the input cap, resistor pad and two diodes (right where the can edge was). I removed the diodes from the board and tested them. They seemed fine. I'm afraid I lost the details of these but they were very fast PIN diodes in a SOT-23 package.


The next stage of the circuit goes through a series of attenuation stages that are switched using these Skyworks AS169 switches. They are a pretty cool bit of gear as they are Gallium Arsenide ICs that are good for 2.5GHz or more.


Finally we end up at a MMIC amplifier just before the first mixer (which is under the RF absorber).


The MMIC is a Sirenza SBA-4086, which is a 2GHz, 15dBm amplifier.

My reasoning was that if the problem was the first mixer then I wouldn't be able to tell, as it operates way outside the frequency range of my scope. If I injected a low frequency signal into the front-end, however, I should be able to see it at the output of the MMIC. I soldered a small wire onto the board so I could tap the signal at this point and, using my spring clip ground, attached a probe. I carefully stuffed the PCB back into the motherboard and powered it up.

I put a 0dBmV signal into the front-end (pretty loud in other words) and sure enough I got a strong 400mV signal at the amplifier. So the amplifier is fine and so is the front-end up to that point.

Mixer

The mixer is a Mini-Circuits SKY-60 part. Unfortunately you can't buy these via RS or element14 etc. I found some other SKY-60 parts (SKY-60MH etc) on ebay but the specs looked different enough that I didn't want to risk it.

You can buy them directly from Mini-Circuits but the minimum order quantity is 10 and they are $12USD each. The Australian distributor is Clarke and Severn but they didn't have this part on their site. I contacted them and they added it for me so I could order single quantities. The catch is they have to back-order it from the States so it would take a couple of weeks. Pretty cool!

GW INSTEK Support

When I started all this I contacted GW Instek support to see if I could get a service manual and/or parts. They took a little while to respond and initially asked for the serial number of my unit so they could forward it to the correct regional support. I expected they would come back and tell me where I could get it serviced but not provide anything else.

After I ordered the mixer I was pleasantly surprised that they came back and sent me a service manual. They said they couldn't give me schematics as I wasn't an authorized repairer but if I agreed to sign an NDA they could give me some info. This is way more than I expected (way more than many other big brands would offer). Massive thank you and kudos to GW INSTEK. While it's not as good as the old Agilent stuff where they publish full schematics in their service manuals it is understandable given the rampant copyright violations that happen in China.

Another look

Given I was stuck waiting for a part I thought I would try a few experiments. The service manual included block diagrams showing the path through the mixers and indicated the IF coming out of the RF board should be 452MHz. It also showed a 100MHz reference signal that got divided down by 10 to generate the reference signal on the back of the unit and got multiplied by 32 to generate the second LO.

I decided I would start by probing the IF signal since even though it is out of the range of my scope it still might be viewable. Weirdly the signal I saw was a 100MHz slightly distorted sine wave. When I fiddled with the frequency range and settings it didn't change.

The semi-rigid cables in the unit are slightly confusing. They are labelled but for example two of the cables going to the RF board had the same label. Others showed the J number of the connector but often the boards didn't have this silk-screened onto them. There are coax sockets with no connectors in them also.

Someone on the EEVBlog forum had problems with their unit having a 6dB offset. They published photos of the internals, including this one (note their unit has no tracking generator, unlike mine).


I must have looked at this 10 times before I noticed that the cables on mine looked to be out of place. The markings on the cables indicated one should go to the IF board and the other to the LO board, but in my case one went to the TG board and the other to the IF board (so they were one board out).

Here is how they were connected in my unit


Here is the shot from the EEVBlog showing the correct placement again.

Could it be that simple? I re-connected the cables, rebooted the unit and it worked! So now I just have to put all the screws back into the RF board :(


Screen Dim

So while it all works now, the screen is very dark. There is a brightness control but it doesn't help much and seems to do nothing until you get to a certain range, where it suddenly changes a bit.

I looked at how this works: there is a 13-24V bias voltage that goes to the LCD to set the brightness. I traced this on the DSP board and it went over to the other side of the board near the input. I found an 8-bit Analog Devices AD5300 DAC (U13). This was connected to an OPA237UA opamp (U12) and a transistor (Q12). The resistor nearby (R88) connects directly to the LCD bias line, and I noted (under the microscope) that it had a hole in it!


Thankfully the service manual had the component values, as the marking codes made no sense. It was a 27.4 ohm 0603 resistor; I had a 27 ohm one on hand and fitted it. This still didn't fix it, so I watched the voltage first at the output of the DAC and then at the resistor to see what happened when I adjusted the control. The DAC output changed smoothly, but the voltage to the LCD did the same weird jump I could see in the screen's behaviour. The output of the opamp seemed to be going from one rail to the other across this jump.

The transistor was an MMBT3906LT1 according to the service manual, a general purpose PNP that can handle around 200mA. The only SMD PNP I had was a BC857 and that only goes to 100mA; given this one died of over-current damage I figured I shouldn't risk it. Jaycar (bless their souls) recently started carrying some surface mount parts, and the one surface mount PNP they carry is the BC807. As it turns out this is a fine replacement for the On-Semi part, so I bought some and replaced it. This worked just fine!

Conclusion



So here is the SA re-assembled and happily integrated into my bench. I can see the internal cal signal just fine and the power level is within a couple of dB of the specified -30. If I use the tracking generator the trace is flat right across the spectrum to within a few dB (impressive, with my shit cable!). I found some N-type to BNC adaptors and am using my BNC cables for now. I've ordered some N-type to SMA adaptors and SMA cables. I'll need loads more bits and pieces to use this thing though.

So overall I am very happy indeed. Got very lucky with this purchase.


Friday, 3 March 2017

Using Spring Validation in Angular

I was recently working on a web project where I needed support for substantial modal dialogs inside web pages. My first approach was to use what was familiar to me, which was JSP with Spring MVC controllers. The problem is this isn't very convenient with modal components, as you have to manually bundle up the values in the form with JavaScript code and post them yourself.

So I started exploring Angular JS as a way of doing this and basically it took over my life (in a good way I think).

One area I struggled with was validation, but I found a solution I was happy with and hence this post.

Problem

Since I started with Spring I had my models annotated with validation constraints like the example below :

 class User  
 {  
   // ...  
   @Size(min=1, message="First Name must be provided.")  
   private String firstName;  

   @Size(min=1, message="Last Name must be provided.")  
   private String lastName;  

    private String organisation;  

    @Size(min=1, message="Email must be provided.")  
    private String email;  

    @Size(min=1, message="Password must be provided.")  
    private String password;  

    private boolean isAdmin;  
 }  

Then the controller can validate objects of this type using the SpringMVC magic as follows:

 
   @RequestMapping(value="/user",method=POST)  
   public @ResponseBody ModelAndView editUser(  
       @Valid @RequestBody User user,  
       BindingResult    result)  
   {  
     if ( result.hasErrors() )  
     {  
       return ...  
     }  
    ...  

The BindingResult is then available in the JSP so you can write code in your form like this to display the errors (using some tag library stuff).

  
    <form:input path="firstName"   
       cssClass="field input medium"  
       cssErrorClass="field input medium error" />  
    <form:errors path="firstName" cssClass="error" element="p" />  

But if we aren't using JSP then how do we get these errors out? If we are using Angular then we post the form data as JSON asynchronously and the page isn't reloaded.

Outline

When I searched for a solution to this problem, the first things that came up were techniques for implementing validation in Angular itself. While it's good to validate the content before leaving the page, you still have to validate it at the server, as otherwise a rogue user could mess you up.

So the goal is to avoid implementing this validation in two places, and instead report the validation errors from the server in the client.

Server

So on the server we define a new type that will carry the validation results.

  
 public class ValidationResponse  
 {  
   public String getStatus()  
   {  
     return status;  
   }  
   public void setStatus(String status)  
   {  
     this.status = status;  
   }  
   public List<ObjectError> getErrorMessageList()  
   {  
     return this.errorMessageList;  
   }  
   public void setErrorMessageList(List<ObjectError> errorMessageList)  
   {  
     this.errorMessageList = errorMessageList;  
   }  
   /**  
    * A general validation error not specific to a field  
    * @return The error text  
    */  
   public String getGeneralErrorText()  
   {  
     return generalErrorText;  
   }  
   /**  
    * A general validation error not specific to a field  
    * @param generalErrorText The error text  
    */  
   public void setGeneralErrorText(String generalErrorText)  
   {  
     this.generalErrorText = generalErrorText;  
   }  
   private String status;  
   private String generalErrorText;  
   private List<ObjectError> errorMessageList;  
 }  


Then the methods that accept REST POST calls and that will validate objects do something like this:

    
   @RequestMapping(value="/user.json",method=POST)  
   public @ResponseBody ValidationResponse editUser(  
       @Valid @RequestBody User user,  
       BindingResult    result)  
   {  
     ValidationResponse response = new ValidationResponse();  
     if ( result.hasErrors() )  
     {  
       response.setErrorMessageList(result.getAllErrors());  
       response.setStatus("FAIL");  
     }  
     else  
     {  
       ....  
     }  
     return response;  

In addition, if you want an error that applies to the whole form rather than a specific field, you can use the general error text field in the ValidationResponse above.

Web

In the form we define error spans for each field as follows. The error spans use ng-show to toggle whether they are displayed, based on a hasError() method in the controller, and display the content returned by a getError() method.

 
    <div ng-controller='registerController'>  
    <div class="form-group">  
      <label for="userFirstName">First name<span class="required">*</span></label>  
      <input class="form-control" ng-model="object.firstName" name="firstName" />  
      <span class="help-inline" ng-show="hasError('firstName')">{{getError("firstName")}}</span>  
    </div>  
    ...  
 </div>  

Angular Controller

As this pattern would be applied to every form, I created a base controller that the form controllers could extend to provide the validation methods.

The controller takes the resource and context (to build the URL) from its parameters when it is instantiated. The controller provides the hasError() and getError() methods, as well as a method for posting the updated object and checking whether the result was an error.

 
 angular.module("MyApp").controller("formController",function($scope, $http, $q, object,context,resource)  
 {  
   $scope.formErrors = {};  
   $scope.context = context;  
   $scope.resource = resource;  
   $scope.object = object;  
   $scope.hasError = function(fieldName)  
   {  
     if (typeof ($scope.formErrors) != 'undefined')  
     {  
       return fieldName in $scope.formErrors;  
     }   
     else  
     {  
       return false;  
     }  
   }  
   $scope.getError = function(fieldName)  
   {  
     if (typeof ($scope.formErrors) != "undefined" && fieldName in $scope.formErrors)  
     {  
       return $scope.formErrors[fieldName];  
     }   
     else  
     {  
       return "";  
     }  
   }  
   $scope.postUpdate = function()  
   {  
    var deferred = $q.defer();  
      $scope.formErrors = {};  
     $http.post($scope.context + $scope.resource,$scope.object).then(  
       function(response)  
       {  
         if (response.data.status == "SUCCESS")  
         {  
           return deferred.resolve();  
         }   
         else  
         {  
            for (var i = 0; i < response.data.errorMessageList.length; i++)  
           {  
             $scope.formErrors[response.data.errorMessageList[i].field] = response.data.errorMessageList[i].defaultMessage;  
           }  
         }  
       });  
     return deferred.promise;  
   };  
 });  

Then every controller that needs this will extend the form controller like this:

  
 angular.module("MyApp").controller("registerController",function($scope,$controller,$http,$rootScope,$location)  
 {  
   $scope.object = { }  
   angular.extend(this,$controller('formController', {$scope: $scope, object : $scope.object, context: '/MyApp', resource: '/register' }));  
   $scope.submit = function()   
   {  
     if ( $scope.object.password != $scope.passwordConfirm )   
     {   
        $scope.formErrors["passwordConfirm"] = "Passwords do not match";   
     }   
     else   
     {   
        $scope.postUpdate().then(  
           function() { $location.path('/registersuccessful'); } );  
     }   
   }  
 });  

In this case the submit function also does some validation before sending the form content (using the postUpdate() method) to the server. When the postUpdate() completes it invokes the function for moving the URL to the success page (via the promise returned by the post method).

Conclusion

The benefits provided by this approach are worth the small additional overhead. While I still find the syntax of JavaScript a bit of a puzzle at times, Angular is growing on me.

Monday, 23 January 2017

Working with Docker

Docker is an interesting framework for running applications within their own configuration bubble. Docker lets you create what are effectively micro-VM images that you can configure with just the minimum amount of software needed to run a single application. The images can be built from pre-configured templates, which makes setting up an image more like installing a single package than building an entire machine.

Getting Started with Docker

The best place to get started with Docker is the documentation.

There are a few interesting points to note however:
  • Docker runs well on Mac and some flavours of Linux, but on Windows you are restricted to certain versions of Windows 10. I found that using CentOS 6.8 required an alternative installation process (see here).
  • You can run a broad range of Linux distros inside Docker, but as far as I can tell it isn't possible to run Windows in a Docker container, although it appears there are some custom container extensions happening to make this work. https://www.simple-talk.com/cloud/platform-as-a-service/windows-containers-and-docker/
  • Docker containers are really command-line VMs. Forget trying to run a GUI from a container, although clearly you could have an application inside the container connect out to an X Windows system (but why would you?)

Docker for Development

My interest right now is in using Docker for development. The applications I work on in my day job (and even some I play with out of hours) require significant configuration, and being able to set this up by simply downloading a Docker image is attractive.

There are a couple of interesting Docker features that make this attractive:
  • You can expose your host filesystem to the container. This means you can have a system where the artifacts of your development build can be run pretty much directly in the container. Without this you end up having to run the product installer/RPM etc. to install the system, configure it and then copy over the executables with the ones you just built. Docker makes this easier as you can create a container where the executable components are links to files on the parent filesystem.
  • Often multiple VMs/machines are involved in a system - the DB, back-end server, web front-end etc. These can all be run as Docker containers and can even be deployed onto the same machine. Docker provides a local bridge network, so no matter what machine you deploy your system on it will all interconnect.

Creating a Docker

You create a Docker image by creating a text file called Dockerfile and defining the actions that will occur when the image is built and when it is run.

In my case I was creating a Docker for a Java/Tomcat/Spring application running on Centos (as I mostly work with Centos in my day job).

So to start with we specify centos as the base image in the Dockerfile

FROM centos

Then I will install JDK 1.8 using wget so I need wget

RUN yum -y install wget

Then the following snippet gets the JDK installer from oracle and untars it and sets up the environment to run from the JDK

RUN cd /opt/;\
    wget --no-cookies --no-check-certificate \
         --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie"\
         "http://download.oracle.com/otn-pub/java/jdk/8u101-b13/jdk-8u101-linux-x64.tar.gz";\
    tar xzf jdk-8u101-linux-x64.tar.gz

ENV JAVA_HOME=/opt/jdk1.8.0_101/

ENV PATH=$PATH:$JAVA_HOME/bin

The next thing is to install tomcat. We wget the tar file, create a directory for it, untar it and ditch the .bat files. Finally we ditch the tar file.

ENV TOMCAT_MIRROR http://mirror.ventraip.net.au/apache
ENV TOMCAT_MAJOR 8
ENV TOMCAT_VERSION 8.0.37
ENV TOMCAT_TGZ_URL $TOMCAT_MIRROR/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz

ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH

RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME

RUN wget "$TOMCAT_TGZ_URL" && \
    tar -xvf apache-tomcat-$TOMCAT_VERSION.tar.gz --strip-components=1 && \
    rm bin/*.bat && \
    rm apache-tomcat-$TOMCAT_VERSION.tar.gz

Now all that's left to do is to specify that port 8080 should be exposed to the host so we can access tomcat, and that the container should run tomcat using 'catalina.sh run' when it starts.

EXPOSE 8080

CMD ["catalina.sh", "run"]

Building the Docker

I have been using gradle to build my application and I found there is a cool plugin for building docker images from your build. It means that you can update the docker with your built artifact during the build.

To use the plugin I load it like this in the plugins part of my gradle build file

plugins
{
    id 'com.palantir.docker' version "0.9.0"
}

Then I create a docker task like this:

docker
{
    name 'tombi/abbot'
    dockerfile 'src/main/docker/abbot/Dockerfile'
    dependsOn tasks.war
    files 'build/libs/Abbot3.war'
}

Then I can build the docker just by saying 'gradle docker'


Running the Docker

In my case I want to access the tomcat server running inside the container, so I have to map the port tomcat listens on (8080) to a port on the host, which is done with the -p option to docker run.

Cleaning up Dockers

After a few debug runs I found I had a load of <none> docker images. The quick way to nuke all of these is:

docker rmi -f `docker images | grep "^<none>" | awk '{ print $3 }' | xargs`



Monday, 10 October 2016

Broken 53131A Frequency Counter

A while ago I bought a broken 53131A frequency counter from ebay. The repair turned out to be more of a jigsaw puzzle than I expected, but I got there in the end.

Buying the Unit

For me to buy a broken unit from overseas it has to be pretty cheap. In this case it was cheap but not super cheap. I asked the seller what options were included and he said it included the ovenised time-base. I figured the parts value alone was high enough to be worth it. He also said the unit hadn't been opened.

I placed the order and waited for the ebay shipping program to get it here. The shipping cost wasn't too bad.

When it arrived, the first thing I noticed was that the box was atrocious - I am amazed it made it here at all. The cardboard was soft and while it contained packing peanuts there was nothing to stop the unit moving around. Total rubbish!

The next problem was that when I picked up the unit it rattled. The screws were loose and it clearly had been opened before. The power supply board wasn't even screwed in and was rattling loose inside the case.


So I started looking at the power supply, and not only had it been removed but a repair had been attempted. What I found was that:
  • The main switching transistor (on the big heatsink) was missing.
  • A Zener diode (ZD1) near the switcher was missing
  • A diode was missing
  • The main filter caps had been replaced (and with 250V rated ones although this turned out to be ok - see below).
  • The fan was missing
I approached the seller about this. I didn't really want to send it back as then I'd lose the shipping costs twice. We negotiated a reduced sale price and I chalked this up to experience.

He is still selling stuff though and he has another advert with the same misleading text and a similarly blurry photo. The seller is mannd1deborah. Here is one of his dodgy ads. You've been warned!

The Damage

The power supply seems to have suffered a pretty catastrophic failure of the main switching transistor. There is a significant scorch mark on the PCB around it and a few associated components have been removed by whoever attempted the repair.


A few of the tracks lifted in the area around where it got hot. Also on the secondary side there is a diode with some adjacent heat damage to the board. The fact that the fan was missing made me think it must have failed and caused the power supply to overheat.


Missing Parts

I downloaded the component level service manual from Keysight for the unit but unfortunately it doesn't include the schematics of the power supply. The power supply is labeled 'SMP-43DP' in the silkscreen and 'SMP-43DL' on the bottom of the metal plate that holds it and is made by Delta.

I tried contacting Delta but they said they had no records on it and told me to contact Keysight. I contacted Keysight and got nothing. In fact you can buy the power supply as a spare part but when I asked about it my email was never responded to.

In the past when I repaired a switch mode supply like this the first thing would be to look up the datasheet for the main switching controller IC as often the circuit is very similar to the typical one listed in the datasheet. In this case however the unit is old enough that it doesn't have an IC controlling it.

My first assumption was that the main transistor would be some kind of switching MOSFET.  I asked on the HP/Agilent Test equipment yahoo group and someone pointed out that the zener diode going to one of the transistor legs is connected to the positive rail of the main filter caps (i.e. 340V) so that wouldn't be much good for clamping a gate. The conclusion was that it must be a bipolar transistor.

The guy on the yahoo group also explained why the capacitors are only rated for 250V - there are four of them and they are not all in parallel; the arrangement is more complicated. It turns out the supply contains a circuit for detecting the mains input voltage, and if the voltage is around 100-110V it uses the bridge rectifier in a configuration where it doubles the voltage (there is a good description of how this works here). The upshot is that if the supply is running somewhere like the US, where the mains voltage is 110V, then the point between the pairs of capacitors is tied to the input, but in Europe or Australia the caps in series share the full rectified 220/240V mains across them (in total).
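
A quick back-of-envelope (my numbers, not from any schematic) shows why 250V caps are fine in either configuration, assuming the series pair shares the voltage evenly:

import math

# Back-of-envelope only: voltage seen by the 250V-rated bulk caps in the two
# input configurations. Assumes the series pair shares the bus voltage evenly.
def peak(v_rms):
    return v_rms * math.sqrt(2)

# Doubler mode (100-120V mains): each cap charges to roughly one input peak.
for v in (110, 120):
    print(f"{v}V doubler: ~{peak(v):.0f}V per cap, bus ~{2 * peak(v):.0f}V")

# Straight bridge (220-240V mains): the series pair shares the rectified peak.
for v in (220, 240):
    print(f"{v}V bridge:  bus ~{peak(v):.0f}V, ~{peak(v) / 2:.0f}V per cap")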

There are a bunch of youtube videos where people tear down or repair these counters (such as this one and this one). The problem is that all of these have a different version of the supply, so even if I could see the parts in question they would be different. Someone called JF2014 even posted a schematic of a 53131A supply on the EEVBlog, but it too was for the other version.

Subterfuge and Guile

So then, what are the missing parts? I started looking at transistors rated for 1000V VCE that could handle a couple of amps. What worried me is that guessing might result in a charred mess.

I found a guy selling a replacement power supply for a 53131A on Amazon and US ebay (doesn't ship internationally) which was the same version as mine. He wanted more for the supply than I paid for my unit (and shipping) so it wasn't an option.

I contacted him and asked very nicely if he would read the numbers off the big transistor and the adjacent zener diode. After a couple of emails back and forward (including some pictures and arrows) he said the transistor was a BUV48A. Winner! I didn't have the heart to ask him to go back and get the numbers off the zener however.

Luckily this transistor is available from RS - not cheap, but available - so the most critical part was covered.

Other Parts

After much googling I figured out the fan is a Fonsan Delta DFB0412M. Believe it or not, I found a parliamentary expenditure report that listed a 53131A and the fan part number. Since then I have found a fair bit of discussion about using quieter fans, and Gerry Sweeney modified his counter to switch off the power when the front-panel switch is clicked. Anyway, I found a seller on ebay that sells the original fan and ordered one (again - not cheap).

The missing diode was likely to be exactly the same as the one next to it, which was an RFP203 - a chunky 4A fast-recovery diode intended for switch-mode supplies. I found a suitable (i.e. fast, with high current-handling capability) replacement and ordered one. The diode on the secondary side was much smaller and in a tight spot, so the chunky diode wouldn't work there; I ended up just ordering an EGP20D.

Zener Diode


So now I only had one part to work out and that was the zener diode. The only thing I could think of was to draw out the circuit and try to work out what voltage rating it needed to have.

Initially I started drawing the entire circuit but this was pretty time consuming and it occurred to me that I only needed to work out the part near the switching transistor.

I sketched out the following:

While probing around to draw the circuit I noticed a short in places where I didn't expect to see one. It looked like the short was across Q1, so I removed it from the board and sure enough it was dead. Unfortunately it is a discontinued Sanyo part (2SD1247), so I ordered some NOS ones from Greece on eBay.

There are a few interesting things here:
  • The switching frequency is determined by the resonant frequency of the transformer primary and the surrounding capacitors
  • The feedback is provided not by an opto-coupler but by a transformer! Talk about old-school.

The problem was that I couldn't see what ZD1's voltage drop was actually needed for. It looked to me like, even with ZD1 removed, the 47K resistor (R6) would limit the current enough for it all to work. I put the circuit into LTSpice and, after measuring a few inductances and capacitances with my LCR meter, I could get the circuit to run under simulation (even with the wrong transistor models). The problem is it didn't seem to make much difference whether I made the zener 10V or 200V.

I put up a question on the EEVBlog but after a couple of days nobody had replied. I then put the same question to the HP forum and got an answer: the zener prevents the supply from starting if the voltage on the capacitors is too low. So if the input voltage was 80V instead of 110V, or 180V instead of 220V, the supply would refuse to run. Basically the zener needs to be rated for around 180V so the circuit won't start until the bus voltage is high enough.

Putting it all Together

The 180V zeners arrived from RS, the transistors arrived from Greece and the fan was here. I had everything I needed, so I put it all back together.

I attached the supply to my isolation transformer, stood back and switched it on - I was elated to see the fan spin, as that must mean the primary is switching! I did some more testing and found the 12V rail wanders a bit if the 5V rail is under no load (I used my dummy load to load up the 5V rail as it is the highest-current output). Otherwise all the output voltages were very good.

Crossing fingers and toes I connected it up to the counter and it worked!


I haven't checked it against the performance specification but it seems very close. The output of my signal generator measures to within 1Hz all the way up to 20MHz - they can't both be out by the same amount!

The annoying thing is that the fan runs constantly, even when the device is switched off. The display is just a little dim but quite usable. Overall this turned out to be a good score and another valuable learning exercise.

Lab Power Supply Pi Control and Auto-Cal


The Raspberry Pi power input control boards arrived, so I assembled one and tested it. I built a wiring harness for the fan and the temperature sensors, and attached the sensors to the heatsink. I also built the final channel and put it in the case.

The control interface also got re-oriented and a much-needed speed-up, I figured out how to start the software when the operating system boots, and I wrote some code to automatically calibrate each channel and store the calibration values in EEPROM.

Raspberry Pi Power Control Board


The power control board came back and looked pretty good overall. One problem was that the holes for some of the connectors were a bit tight. I tried slightly reaming out the holes but stuffed up one board in the process. Then it occurred to me that I could just file down the pins of the connector, and this worked really well.

Oddly, the female header connector for interfacing with the Raspberry Pi isn't stocked at my local electronics store, so I had to order one from RS. This one connector cost AU$5, which is totally silly.

I built a small board for the LED and power switch. The idea is that the board mounts on stand-offs just tall enough for the switch to protrude through the front panel. The problem was that there wasn't enough space for the connector, as it has to mount on the component side (the same side as the switch and LED) because it is a single-sided PCB. To get around this I just soldered wires directly to the board and hot-glued them for mechanical strength. I screwed stand-offs onto the board and hot-glued the assembly to the back of the front panel. This works OK overall.



Otherwise everything fitted nicely and worked very much as the prototype did. I'm pretty happy with it; the only thing is the LEDs are a bit bright.

Temperature Sensors

In hindsight I really wish that I had integrated temperature sensing into each PSU channel and read the temperature via the channel interface.

Instead I built a wiring harness for the temperature sensors that basically daisy-chains them all onto the same 1-wire bus. The termination resistor for the bus is on the Pi power board, so this is pretty simple.

The not-so-nice part is mounting them on the heatsink, which involved hot glue and a lot of cursing. What is worse is that they don't make very good thermal contact, so the readings aren't the best (they come out much lower than what I get with a probe or a non-contact thermometer). Still, it will do for fan control purposes.

The reading rate of the sensors is quite low, so I integrated them into the software by only reading the temperature every few update cycles. I added a simple fan control algorithm that ramps the fan up once the temperature goes over 30C, hitting full speed at 40C.
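The ramp is basically the following (a rough Python sketch; read_temperature() and set_fan_speed() are hypothetical stand-ins for the real sensor and fan code):

import time

FAN_MIN_TEMP = 30.0   # degrees C - fan starts ramping above this
FAN_MAX_TEMP = 40.0   # degrees C - fan is at full speed from here up

def fan_speed_for(temp_c):
    """Map a heatsink temperature onto a fan speed between 0.0 and 1.0."""
    if temp_c <= FAN_MIN_TEMP:
        return 0.0
    if temp_c >= FAN_MAX_TEMP:
        return 1.0
    return (temp_c - FAN_MIN_TEMP) / (FAN_MAX_TEMP - FAN_MIN_TEMP)

def read_temperature():
    """Hypothetical stand-in - the real code reads the 1-wire sensors."""
    return 34.0

def set_fan_speed(fraction):
    """Hypothetical stand-in - the real code drives the fan from the Pi power board."""
    print("fan speed: {:.0%}".format(fraction))

TEMP_READ_INTERVAL = 5    # the sensors are slow, so only read every few GUI updates
for update in range(20):
    if update % TEMP_READ_INTERVAL == 0:
        set_fan_speed(fan_speed_for(read_temperature()))
    time.sleep(0.1)       # stand-in for the normal GUI update work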

Starting the GUI

I didn't realize that Kivy doesn't actually need X-Windows on the Pi - it basically accesses the frame-buffer directly. As a result I can use raspi-config to change the boot options so X-Windows isn't started at all. Now when I run the Kivy application from an ssh session it starts as before and works fine.

I created another daemon script (see previous post) for starting the power GUI. One change I had to make: for some reason, if I run the app as root it runs but doesn't respond to user input. A small change to the script runs it as the pi user instead and all is good.

Now the Raspberry Pi boots straight into the power supply application.

Here is the daemon script (which again was derived from a template).

#!/bin/bash

### BEGIN INIT INFO
# Provides:          labpower-gui
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Starts the Lab power supply GUI
# Description:       Starts the Lab power supply GUI
### END INIT INFO

# Change the next 3 lines to suit where you install your script and what you want to call it
DIR=/home/pi/LabPSU/LabPowerSupplyCtrl
DAEMON=$DIR/MainWindow.py
DAEMON_NAME=labpower-gui

# Add any command line options for your daemon here
DAEMON_OPTS=""

# This next line determines what user the script runs as.
# Running the Kivy app as root stops it responding to input, so run it as the pi user.
DAEMON_USER=pi

# The process ID of the script when it runs is stored here:
PIDFILE=/var/run/$DAEMON_NAME.pid

. /lib/lsb/init-functions

do_start () {
    log_daemon_msg "Starting system $DAEMON_NAME daemon"
    start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
    log_end_msg $?
}
do_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME daemon"
    start-stop-daemon --stop --pidfile $PIDFILE --retry 10
    log_end_msg $?
}

case "$1" in
    start|stop)
        do_${1}
        ;;

    restart|reload|force-reload)
        do_stop
        do_start
        ;;

    status)
        status_of_proc -p "$PIDFILE" "$DAEMON" "$DAEMON_NAME" && exit 0 || exit $?
        ;;

    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
        exit 1
        ;;

esac
exit 0



Ninety Degree Shift

The original GUI design was based around having the outputs for each channel below the screen. With the big 7" display there isn't enough space below to do this, so instead the outputs sit beside the screen. To make this more usable I re-oriented the display into rows instead of columns, so the details of each channel roughly line up with its connectors. The GUI is still a work in progress; later I plan to add graphing and maybe a virtual knob to adjust the voltage/current.



Speedup

Now that I have three channels running I found the GUI painful to use. The problem is that when you hit the voltage/current set buttons it can take quite a while for the dialog to appear. This is because the code cycles through the channels sequentially, reading back the voltage/current/status from each one, and as the interface to the channels is relatively slow (115200 baud) this takes time. If you hit a button while the code is part-way through these updates, you have to wait for them to complete before it responds to the button press.

I thought about how to speed this up: perhaps I could increase the baud rate, or even create a new 'uber' command that fetched all the relevant details back in a single exchange. The problem is that even then the ADC is relatively slow, so the command would still take some time to complete. I don't need it to be super fast; I just want the GUI to be responsive.

It turns out Python has quite good support for threading out of the box. I modified the code that manages each power supply channel so that it kicks off a thread that constantly fetches the status in the background. The thread reads the status into a set of class members which the GUI can then read whenever it updates. This way the updates to the three channels happen concurrently and the GUI is never tied up.

The problem then is that if the GUI requests a change (say to set the voltage) while the thread is mid-update, the commands generated by the thread and by the GUI will clash. So instead, when the GUI requests a change the power supply channel code puts the request into a queue. Each time the thread goes around to do an update it checks whether there is a command to process and sends that to the channel first, before fetching the status.
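The pattern looks roughly like this (a simplified sketch - the channel methods and the command string are stand-ins for my actual serial protocol, and the thread-safe wrapper described below is left out for brevity):

import queue, threading, time

class ChannelPoller:
    """Polls one PSU channel in the background and serialises GUI commands
    so they never clash with the status reads (simplified sketch)."""

    def __init__(self, channel):
        self.channel = channel           # stand-in for the real serial interface
        self.commands = queue.Queue()    # requests from the GUI land here
        self.voltage = 0.0               # last readings - the GUI displays these
        self.current = 0.0
        threading.Thread(target=self._run, daemon=True).start()

    def request(self, command):
        """Called from the GUI thread; the command goes out on the next update."""
        self.commands.put(command)

    def _run(self):
        while True:
            # Send any pending GUI command first so the two never interleave
            while not self.commands.empty():
                self.channel.send(self.commands.get())
            # Then refresh the status for the GUI to pick up
            self.voltage = self.channel.read_voltage()
            self.current = self.channel.read_current()
            time.sleep(0.1)

class FakeChannel:
    """Stand-in so the sketch runs on its own."""
    def send(self, cmd): print("sent:", cmd)
    def read_voltage(self): return 5.0
    def read_current(self): return 0.25

poller = ChannelPoller(FakeChannel())
poller.request("SET_VOLTS 12.0")    # made-up command string
time.sleep(0.5)
print(poller.voltage, poller.current)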

To make accessing the status held by the power supply channels thread-safe I created a small class that holds a copy of a variable behind a mutex. It has a setter and a getter method; it copies the value passed in and returns a copy on the way out, locking the mutex before touching the data in both directions to avoid problems with concurrent reads and writes.
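The wrapper is roughly this (a minimal sketch rather than the exact class):

import copy, threading

class ThreadSafeValue:
    """Holds a copy of a value behind a mutex so the polling thread can write
    it while the GUI thread reads it."""

    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._value = copy.deepcopy(initial)

    def set(self, value):
        with self._lock:                       # lock before writing
            self._value = copy.deepcopy(value) # store a copy of what was passed in

    def get(self):
        with self._lock:                       # lock before reading
            return copy.deepcopy(self._value)  # hand back a copy, not the original

# The polling thread calls voltage.set(reading); the GUI calls voltage.get().
voltage = ThreadSafeValue(0.0)
voltage.set(5.002)
print(voltage.get())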

I also changed the channel code so that when a dialog comes up it stops all the updates (not just the updates for the channel being commanded).

The dialogs still come up a little slowly, but once they appear the buttons are responsive. The update speed of the GUI is fine and overall it now works quite well.

Auto-Calibration

The tables of points used to calibrate the ADC/DAC readings were stored as constants within the code. I generated these constants using a small Python script that commanded the supply through its full voltage/current output range while reading back values from a LAN-attached multimeter.
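That script was along these lines (a simplified sketch: the channel call is a stand-in for my serial interface, and the raw-socket SCPI query on port 5025 is an assumption that will vary between meters):

import socket, time

def read_dmm_volts(host, port=5025):
    """Ask a LAN-attached DMM for a DC voltage reading over a raw SCPI socket."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"MEAS:VOLT:DC?\n")
        return float(s.recv(1024).decode())

def sweep_channel(channel, set_points, dmm_host):
    """Step the supply through a list of set-points and record what the DMM
    actually measures at each one; the pairs become the calibration table."""
    table = []
    for requested in set_points:
        channel.set_voltage(requested)     # stand-in for the real channel command
        time.sleep(0.5)                    # let the output settle before measuring
        table.append((requested, read_dmm_volts(dmm_host)))
    return table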

The problem with this is that each channel needs a different table of values and I don't really like having different firmware images per board.

The logical thing is to update the firmware so the table of points is stored in EEPROM and to provide commands to read and write these values. The Python code for stepping through the output range can then be integrated into the Kivy GUI.

This turned out to be a bigger job than I hoped. I had to:

  • Create a class for reading the linearizer data from EEPROM
  • Update the Linearizer code so it can be set up after construction
  • Add commands to set the number of points and to set each point in the linearizer tables for the voltage ADC/DAC and current ADC/DAC
  • Modify the Kivy GUI to pause everything, command the channel directly (not via the thread) through each point, and communicate with the multimeter to read back the voltage. The GUI also needed a progress bar and a way for the user to enter the hostname/IP of the DMM.

Running out of RAM on the Microcontroller


The first thing that happened when I loaded the code into a channel was that it was basically non-responsive. After some poking around and the addition of some debug text I figured out that the firmware had run out of RAM (the ATmega328P only has 2K).

I was able to figure this out using this bit of code, which works out how much free RAM is left from the gap between the heap and the stack:

// Classic AVR free-RAM check: returns the number of bytes between the top
// of the heap and the current stack.
int freeRam ()
{
    extern int __heap_start, *__brkval;
    int v;
    // &v lives on the stack; __brkval is the top of the heap (or __heap_start
    // if nothing has been allocated yet).
    return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
}

printf("Free ram is %d\r\n", freeRam());

This way I could pepper the code with these calls to see where the RAM was being consumed and how much. Because of the way the interpreter works, most of the RAM is allocated when the main object is created.

My biggest problem was that, in addition to the tables for each of the ADC/DAC linearizers, I had static constants containing default tables (in case the EEPROM is blank). On the AVR, static constants end up using both flash *and* RAM: the initialized data is stored in flash and the startup code copies it into RAM. I changed the defaults to be dumb one-point tables and this brought the usage down considerably.

The other thing was that I had included a pointer to the LabPSU object in every command. Pointers on the AVR are 2 bytes, so with 26 commands that adds up to a reasonable chunk of a 2K RAM budget. I changed the code so the pointer is passed in during parsing instead and recovered that RAM.

The final change was I switched from 32 points to 16 points for the tables and this was enough for the code to run reliably.

Next Steps

There are a few things I'd like to add to this project but none of them are urgent. Furthermore I've been working on this for nearly two years and I feel like it is time to do something new for a while. So for now I am calling this complete although I might add some new features in the future.

The future list includes:
  • Virtual onscreen knob for adjusting voltage/current
  • A graph mode where, in addition to the current/voltage/power readout, you can see a graph of the output current/power
  • A curve tracer mode for plotting IV characteristics of LEDs etc
  • Maybe a battery discharge simulator.
Here it is on my bench in action.