Monday, 19 November 2012

SSL using a RestEasy client

I had the misfortune today of having to figure out how to enable SSL for a REST client that uses the JBoss RestEasy client framework. This isn't really documented anywhere so I thought I'd make a note of it here.

Usually you would call your REST interface by doing something like this:

ServiceInterface myServiceClass = (ServiceInterface) ProxyFactory.create(ServiceInterface.class, url);

What I found was that if I tried to use an https URL, this would fail with a 'PeerUnverified' exception, as it (of course) doesn't trust the dodgy CA I used to issue my server certificate. In addition, I want to provide client credentials so I can prove my identity to the server. It turns out the way you do this is to provide a ClientExecutor object as the third parameter to the ProxyFactory.create() method; this object is used to communicate with the server. It's a bit more involved than that, though: you have to create an HttpClient, get its SchemeRegistry and add a mapping for https that uses a custom SSLSocketFactory. The SSLSocketFactory is created knowing about the KeyStore with your client credentials and the KeyStore in which your trusted CA cert lives. So anyway, the code looks like this:
SSLSocketFactory sslSocketFactory = new SSLSocketFactory(clientCreds, credentialsPass, trustStore);

DefaultHttpClient httpClient = new DefaultHttpClient();
ClientConnectionManager conManager = httpClient.getConnectionManager();
SchemeRegistry schemeRegistry = conManager.getSchemeRegistry();
schemeRegistry.register(new Scheme("https", 8443, sslSocketFactory));
ClientExecutor executor = new ApacheHttpClient4Executor(httpClient);

ServiceInterface myServiceClass = (ServiceInterface) ProxyFactory.create(ServiceInterface.class, url, executor);
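
In case it's useful, here's roughly how the clientCreds and trustStore KeyStore objects above can be loaded - a minimal sketch, where the store file names, types and passwords are placeholders for whatever you actually use:

import java.io.FileInputStream;
import java.security.KeyStore;

// Hypothetical store files and passwords - substitute your own.
String credentialsPass = "changeit";

KeyStore clientCreds = KeyStore.getInstance("PKCS12");
try (FileInputStream in = new FileInputStream("client-credentials.p12"))
{
    clientCreds.load(in, credentialsPass.toCharArray());
}

KeyStore trustStore = KeyStore.getInstance("JKS");
try (FileInputStream in = new FileInputStream("truststore.jks"))
{
    trustStore.load(in, "trustpass".toCharArray());
}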

Thursday, 25 October 2012

Mockito - modifying parameters

My previous post was about how to get Mockito to generate a new instance of a class each time it is called.

Another difficult problem when testing using mocks is how to mock function calls that modify the parameters passed in. It turns out you can do this using nearly the same technique: the InvocationOnMock argument can be interrogated to find the parameters of the call, and those parameters can then be altered.

My case was even worse, as I needed the parameter to a void function to be modified, but it turns out the same technique works.

To do this you use the doAnswer() function, but with an Answer<Void>:

doAnswer(new Answer<Void>() { ... })

What I was trying to do was to test a function which called a JpaRepository<>.save() method and then returned the auto-generated ID from within the modified entity.

So the example looks like this:


doAnswer(new Answer<Void>() 
{
    @Override
    public Void answer(InvocationOnMock invocation) 
      throws Throwable 
    {
        Object[] arguments = invocation.getArguments();

        if (arguments != null
                && arguments.length > 0
                && arguments[0] != null)
        {
            SomeEntity entity = (SomeEntity) arguments[0];
            entity.setId(testInternalId);
        }
        return null;
    }     
}).when(mockRepository).save(any(SomeEntity.class));
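
With that stub in place, the test itself is straightforward. A sketch of the shape it takes - the createSomeEntity() method and fixture names are made up for illustration:

SomeEntity newEntity = new SomeEntity();

// The function under test calls mockRepository.save(), our Answer fills in
// the ID on the entity, and the function returns the 'generated' ID.
long id = service.createSomeEntity(newEntity);

assertEquals(testInternalId, id);
verify(mockRepository).save(any(SomeEntity.class));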


Friday, 5 October 2012

Mockito stubs

So I was testing this bit of code that is time-sensitive.

To test all of the conditions I needed to be able to simulate time passing in my test. The obvious thing to do was to create a facade for fetching the calendar (wrapping Calendar.getInstance()) and then mocking this in my test. Then my mock could return whatever time I wanted.

The problem was that the mock was called more than once and always returned the *same* calendar instance. The code would then manipulate this calendar to do time calculations but because each call returned the same instance this had undesirable effects.

The solution was to use the Mockito stubbing API with the mock. You can create a callback that is invoked each time the mocked method is called and have it do whatever you want. I simply set it up to return a new instance each time:

    @Mock CalendarFacade        mockCalendarFacade;

    private void setupCalendarWithTimeInPastMock(final int minutesAgo)
    {
        stub(mockCalendarFacade.getCalendar()).toAnswer(new Answer()
        {
            public Calendar answer(InvocationOnMock invocation)
            {
                return calendarMinutesAgo(minutesAgo);
            }
        });
    }

    private Calendar calendarMinutesAgo(Integer minutes)
    {
        Calendar calendar = Calendar.getInstance();
        calendar.add(Calendar.MINUTE,-minutes);
        return calendar;
    }
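
As an aside, stub()/toAnswer() is the older stubbing API; the same thing can be written with when()/thenAnswer(), which reads a little more naturally:

        when(mockCalendarFacade.getCalendar()).thenAnswer(new Answer<Calendar>()
        {
            public Calendar answer(InvocationOnMock invocation)
            {
                return calendarMinutesAgo(minutesAgo);
            }
        });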

Monday, 24 September 2012

Embedding Certificates into Java unit-tests

I was trying to find a more elegant way of embedding test certificates into my java tests. Spying and mocking only get me so far. Occasionally it is easier to use real certificates and then in integration tests I want to use real data anyway.
I don't want to embed certificate files as files into the test folders so I have been encoding the data as constants.
The trick for doing this:
  1. Turn it into Base64 or PEM. For certificates I use PEM; to do this I open the certificate file in Windows, choose the Details tab, choose "Copy to File", choose base64 as the format and save it as a .cer. Then I open the file in Notepad and copy out the PEM.
    For other types of file (PKCS#7 etc.) I just base64 encode it - see the sketch after this list.
  2. Embed the data as a constant. I use Dev Studio to do this. Open the text PEM or Base64 file in DevStudio and use the match expression of ^{.*}$ and replacement of \t\t\"\1\\n\"\+. This will turn something like:
    -----BEGIN CERTIFICATE-----
    MIICHTCCAYagAwIBAgIIFW/6AIuFtIwwDQYJKoZIhvcNAQENBQAwRzELMAkGA1UE
    
    Into
        "-----BEGIN CERTIFICATE-----\n"+
        "MIICHTCCAYagAwIBAgIIFW/6AIuFtIwwDQYJKoZIhvcNAQENBQAwRzELMAkGA1UE\n"+
    
    Then you paste that into the Java as
        private static final String testCertPEM =
        "-----BEGIN CERTIFICATE-----\n"+
        "MIICHTCCAYagAwIBAgIIFW/6AIuFtIwwDQYJKoZIhvcNAQENBQAwRzELMAkGA1UE\n"+
        ...
        "...";
    
  3. Read it back in. To turn this into a certificate you need this bit of code (using BouncyCastle):
    private X509Certificate parsePEMCert(String pemCert)
    {
        final Reader reader = new StringReader(pemCert);
        final PEMReader pemReader = new PEMReader(reader, null);
        try
        {
            return (X509Certificate) pemReader.readObject();
        }
        catch (IOException e)
        {
            e.printStackTrace();
            return null;
        }
    }
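
And for step 1 with non-certificate files, this is the sort of thing I mean by base64 encoding - a sketch using Apache Commons Codec (the input file name is just an example):

import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.codec.binary.Base64;

// Read the raw DER/PKCS#7 bytes and emit base64 in 76-character lines,
// ready for the find-and-replace trick in step 2.
byte[] raw = Files.readAllBytes(Paths.get("bundle.p7b"));
System.out.print(new String(Base64.encodeBase64Chunked(raw)));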
    

Thursday, 13 September 2012

Welcome Back to Spring

Well the weather in Australia certainly is warming up but this isn't what I mean.

I'm currently doing some work in Java but it has been a few years (maybe 3?) since the last time I worked with this language. As part of the project I am working on we are developing an implementation of a standard protocol that runs over HTTP (ahem... Trying not to be too specific) and we are building the back-end services in Java as REST APIs. The whole thing is tied together with Spring.

Welcome back to Mocking


First of all, the guys before me configured the projects to use Mockito instead of EasyMock (which I am used to). Rather than drag in EasyMock I thought I'd give it a go, and it does seem pretty good and makes for readable tests. The thing I really like is the annotations, which make mocking in your tests much easier. Instead of saying

 private GarbageService fixture = mock(GarbageService.class);  

You can instead say

@Mock GarbageService fixture;

And Mockito gets the hint. Well, it gets the hint if you tell it to go and fill in all the mocks, which you can do either by including a call to:

 MockitoAnnotations.initMocks(this);

Or by running the unit-test with the MockitoJUnitRunner:

@RunWith(MockitoJUnitRunner.class) 
public class TestGarbageService
{
...


Spring Autowiring



Now we are using Spring, but previously the team just used the XML technique for tying together dependencies. This is nice as it minimizes the amount of Spring in your code, but on the other hand maintaining the XML file is a pain. Also, I hate how people put data initialization in the XML file, as this means you have actual business logic embedded in the XML and not in the Java.

I discovered a Spring annotation called @Autowired with which you can tag a setter method, a constructor or even a data member so that it will be automatically initialized by Spring. As I said, my project is a servlet, so this took a few goes to get working. First of all I created a Spring context and enabled searching for implementations by type. You have to specify a base package, so I specified the root of my project:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:util="http://www.springframework.org/schema/util" 
       xsi:schemaLocation="
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd 
http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-3.0.xsd" >

    <context:component-scan base-package="com.mycompany.myproject"/>

</beans>



This was the first Spring I had put in the servlet so I had to edit the servlet web.xml to enable Spring:


<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">

    <display-name>My Project</display-name>
    <description>My Project Servlet</description>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/spring-config.xml</param-value>
    </context-param>
...

This wasn't sufficient, so after some googling I discovered I probably needed listeners too, which I added to the bottom of the web.xml:

    <listener>
        <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
    </listener>
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

Now I didn't want to use DispatcherServlet as I wanted more control over the HTTP. The protocol I am implementing is specific about formatting, content types etc. and it seemed better to do this using a regular servlet. So I created a ServiceLocator interface that in turn provides all the interfaces the servlet itself needs. The ServiceLocator is instantiated via Spring, and so all of the services get injected into it as dependencies. In the init() method of the servlet I do this:

    WebApplicationContext ctx = WebApplicationContextUtils.getWebApplicationContext(getServletContext());

    this.serviceLocator = ctx.getBean("ServiceLocator", ServiceLocator.class);

Ok so this didn't work the first time (or the 10th). Eventually I figured out that in order for the implementations of the interfaces to be found, they have to be tagged. There are a few annotations you can use but the simplest worked for me:

@Component
public class ServiceLocatorImpl implements ServiceLocator
{
...

And then I can instantiate the ServiceLocator! I had to do the same to all the dependent interfaces, but when I did I was able to get this running.
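
So the overall shape ends up something like this - a minimal sketch, with GarbageService standing in for one of the real dependencies:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class ServiceLocatorImpl implements ServiceLocator
{
    // Spring finds the @Component implementation of each interface
    // during component scanning and injects it here.
    @Autowired
    private GarbageService garbageService;

    public GarbageService getGarbageService()
    {
        return garbageService;
    }
}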

Testing Autowired Code

The next trick was getting my unit-test to work when the dependencies had been autowired. Usually you would create mocks of the dependent interfaces and pass them into the class under test either via the constructor or a setter, but in this case there is no method to do that. I could go back to creating setter methods and tag these as @Autowired, but thankfully the newer versions of Mockito (1.8.4 and up) have a better solution for this called @InjectMocks.

The way this seems to work is that it will inject mocks into the class under test for any members tagged as requiring auto-wiring. The mocks are drawn from the mocks declared in your test class. Let's pretend my service depends on an interface called GarbageDAO which is autowired into the ServiceLocator. In the test code I could do this:

@Mock GarbageDAO mockDao;
@InjectMocks ServiceLocator fixture = new ServiceLocatorImpl();

The effect is that the ServiceLocator implementation will be injected with the mock GarbageDAO. You can write expectations on the mock, and when you call the fixture these should get triggered:

when(mockDao.fetchGarbage()).thenReturn(mockGarbage);
fixture.takeOutGarbage();
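
Putting the pieces together, a complete test ends up looking something like this - a sketch, with the Garbage type and test name invented for the example:

import static org.mockito.Mockito.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class ServiceLocatorTest
{
    @Mock GarbageDAO mockDao;
    @Mock Garbage mockGarbage;

    @InjectMocks ServiceLocator fixture = new ServiceLocatorImpl();

    @Test
    public void takeOutGarbageFetchesFromTheDAO()
    {
        when(mockDao.fetchGarbage()).thenReturn(mockGarbage);

        fixture.takeOutGarbage();

        verify(mockDao).fetchGarbage();
    }
}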

Factory

I have this class that acts as a factory for a handler depending on the type of protocol request received. I was trying to figure out how to make this work with Spring. Using XML it is easy as I can just create beans for each handler and pass them via a series of add calls to the factory.

Using the auto-wired method is not so easy. The problem is that as they all implement the same interface there is no telling them apart.

I figured out you can specify a name when you declare them as components and then later you can use this name as a qualifier when auto-wiring. There are more complex forms of qualifier but this did the job for me.

So in my factory I have:

@Autowired
@Qualifier("RetrieveCollectionSchedule")
GarbageHTTPHandler collectionScheduleHandler;

@Autowired
@Qualifier("CollectGarbage")
GarbageHTTPHandler collectionHandler;

Then when I declare the implementations I do this:

@Component("CollectGarbage")
public class CollectionHandler implements GarbageHTTPHandler

The factory then uses the request type to determine which handler is required and returns it from its list of members.
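
To make that concrete, here is a sketch of what the lookup might look like - the RequestType enum and the handlerFor() method are hypothetical names:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

@Component
public class GarbageHandlerFactoryImpl implements GarbageHandlerFactory
{
    @Autowired
    @Qualifier("RetrieveCollectionSchedule")
    GarbageHTTPHandler collectionScheduleHandler;

    @Autowired
    @Qualifier("CollectGarbage")
    GarbageHTTPHandler collectionHandler;

    // RequestType is a hypothetical enum parsed from the incoming request.
    public GarbageHTTPHandler handlerFor(RequestType type)
    {
        switch (type)
        {
            case RETRIEVE_COLLECTION_SCHEDULE:
                return collectionScheduleHandler;
            case COLLECT_GARBAGE:
                return collectionHandler;
            default:
                throw new IllegalArgumentException("No handler for " + type);
        }
    }
}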

Testing this was a bit harder. The first time I tried this, @InjectMocks didn't work. After reading a bit about it, it turns out the mock injector first tries to match by type and THEN by name. So to make this work, in my test class I had to make the names of the mocks match the names of the members in the factory, and then it worked:

@Mock GarbageHTTPHandler collectionScheduleHandler;
@Mock GarbageHTTPHandler collectionHandler;
@InjectMocks GarbageHandlerFactory fixture = new GarbageHandlerFactoryImpl();

Thursday, 2 August 2012

Working on a Mac

I previously wrote about being in a state of expectation while waiting for my Mac to arrive. There is a bit more of a story to this plus it has been a bit of a journey settling into the Mac environment.

Nearly two months ago I decided I wanted a laptop for development so I could use it away from my desk. I have a pretty good setup in my office so this is a machine to use when I have to be somewhere waiting for the kids to do whatever they are doing (sailing, fencing, orchestra etc).

I considered my options and thought if I went Mac I could use it for iPhone/iPad etc development, I could run windows stuff on it and the Mac seems to be an amazing hardware platform. Plus it is Unix under the covers so I can tinker with stuff I otherwise would have run on Linux.

I did some homework and decided I wanted a 17 inch Mac so I would get roughly the same screen real-estate as at my desk. It would have as much RAM as I could pack into it (8GB) and as big a drive as I could sensibly afford. The SSDs were out of the question both in terms of cost and capacity.


I started setting up the 17 by this point and had installed Parallels (as it is better than VMware Fusion), set up my account, connected it to my Gmail etc. accounts, installed Perforce and Eclipse, and dabbled with Xcode. Overall it was all going quite well: the hardware is a thing of beauty, although the system is a bit quirky coming from a PC. I had a Windows 7 dev VM running, an XP test VM, a 2008R2 VM with a DB running, and all was well. It gets a bit creaky with all the VMs running, as OSX needs some RAM too and 8GB is pushing it for all that, but it was Ok.


What I didn't check before purchasing were the rumours regarding new models, as less than a few days later WWDC happened and Apple announced the new Retina Macs. They only come in 15 inch even though they have the crazy high res. At first I thought I would stay with what I had. Then I played with a Retina Mac at a Mac store a couple of days later and discovered the SSDs they ship with are *seriously* quick (500MB/s). Also they can be expanded beyond 8GB, and the cost of a 16GB unit with a faster processor was about the same as what I spent on the 17. Ok, I give up - I backed everything up, took the 17 back (within my 14 days) and ordered a Retina model.

When I received the new Mac (after a couple of false starts) it was relatively easy to bring everything back. You can't set it up from the Time Machine backup, as it is a different machine and just doesn't recognize the backup. What you have to do is run the migration tool to bring it all back, but this will bring back the old user account. By this point you have probably gone through the basic setup and created another account with the same name as your old account, so of course it gets confused and wants to rename your old account. Instead I ditched the new me, created an 'install' account, ran the migration as a restore, and when it was done logged onto my old account and ditched the install account. It brought pretty much everything back including installed apps, VMs, data in my home directory and system options. Pretty slick job overall.

The Retina display is strange in that it seems to lie to applications about the resolution, and if they don't know any better they render at half the resolution. Any text rendered by the system is rendered at full resolution, and any apps that know about the Retina display use it to render images etc. at double res.

This means that Chrome looks shit, as it renders the fonts and images at half res (fonts in particular look decidedly fuzzy). There is a Chrome Canary that you can run which uses the new font rendering code and looks much better, but it ran badly enough that I stopped using it. It didn't seem to remember my tabs when I closed it, and it frequently hung and lost its mind.

Also, for all intents and purposes you have a 1440-wide display, except the fonts and some images look exceptionally smooth. The effect of this is that you can't fit more onto the display, so when I remote into home it all looks squashed, and DevStudio is decidedly squashed.

To pack more in you can get the system to fake an alternative res, but it isn't ideal. What it does is render into a buffer at double the fake res and then scale (using the graphics card) to the final Retina res. The problem is that 1440 is the only integer divisor of the base res offered, so you unfortunately have to kiss goodbye to the super-smooth text you get in the native res. It isn't too bad however, and the extra real-estate is worth the speed penalty.

Settling in to working on a Mac took some time. The biggest headache is the keyboard. Mac laptops have no delete key and no home/end or page up/down keys. You can get the effect of home/end using fn-right/left. I still occasionally hit ctrl-left to go to the previous word, only to find it jumps me into the dashboard - alt-left is what does this on the Mac.

Lack of delete is particularly vexing under Windows, as to delete a file or hit ctrl-alt-delete you have to go fn-backspace (or fn-ctrl-alt-delete).

A lack of page up/down is not so bad, as you can use fn-up/down, and the two-finger scrolling is quite efficient anyway.

It took me a while to discover that you can enable right-click in the system preferences. Before that I was doing odd things to bring up context menus in Windows. Now I am using right-click in Finder as well, since there actually are useful context menus on lots of things!

The function keys are odd, as by default they are the brightness, volume etc. buttons. If you actually want the function key you have to hit fn-F1 etc. Again, I swapped this in system preferences, so now to change the brightness you have to use the fn key. Also I found that you can disable Mac hotkeys when inside Parallels, which seemed to help with some keyboard weirdness.

The dock is weird. You open something and then you hit the red X to close it, but it doesn't go away! It hangs around unless you tell it to *really* go away using cmd-Q. Also launching apps is odd - I never know where to look. It is either in the dock, or you can go to Finder, go to Applications and find the program, OR you run this Launchpad thing and find it there, although not everything goes into Launchpad.

Windows get lost frequently, as you generally can't just click the icon in the dock since that is likely to spawn another one. They have this Mission Control thing that shows you all your windows grouped by app, but it is too annoying to use in anger (like the 3D ctrl-tab thing in Windows - not worth it).

Accessing Windows shares is generally a breeze. I had some issues with wifi dropping out and having to reconnect all the time, but since I installed an update this seems to have gone away.

I had to install a codec pack to get my video collection to work but that wasn't too bad.

I found you can download a remote desktop application from Microsoft for accessing machines via RDP. This works pretty well except it seems to crash from time to time when you close a connection. It looks pretty good though and is fast enough. Again the keyboard map is a bit weird.

I got my Retina Mac about 3 days before Mountain Lion came out. I of course immediately downloaded it, and generally it is running Ok. The notifications bar thing is useful, although it isn't clear to me why I can tweet from there but can't see other people's tweets.

I have a VPN remote access server running now on one of my server VMs and I can PPTP from the Mac back into my home network. This works very well - I nearly forget I am not directly connected sometimes.

So far so good - compile times under Windows 7 are pretty good but not blistering as I had quietly hoped.

Overall it is going well!

Tuesday, 31 July 2012

Active Directory Argh!!!

Working on a dynamic problem with this DCOM service component. The service can get hit pretty hard, so as part of our testing we run 500 concurrent clients against it for a couple of days. It isn't clear what the problem is, and more annoying is that it only crashes after 24+ hours of runtime. I've tried running it with the debugger attached, just to have numerous sessions trashed by comms timeouts causing the debugger to disconnect.

In the meantime I am looking at a problem with the lookup of user information in Active Directory. The service is a DCOM server that looks up a user based on an NT4 ID passed in (domain\username). The service needs to look up the user's Distinguished Name (DN), User Principal Name (UPN) or Service Principal Name (SPN) for service accounts, the email address, and also, for service accounts, the GUID and DNS name.

We use a mixture of ADSI and an implementation of IADsNameTranslate to do this. Name translation is used to get the DN and UPN/SPN; ADSI is used to get the user's email address, service GUID and service DNS name.

The problem is that with 500 threads all banging away it barfs. Sometimes it just says it can't contact the domain (0x8007054B), at other times I get handle invalid (0x80070006), as well as a few others like RPC failed (0x800706BF).

With a single thread repeating the process thousands of times all is well. The more threads I have doing this the worse it gets.

I wrote a test program that just does the lookup many times in a number of threads. This fails more than the actual service. It's not consistent either: it will fail a lot when it starts, then settle down and just work for the rest of the test run. I contemplated adding a retry mechanism, but I am concerned this will inundate the domain controller if it goes wrong.

The service is free-threaded, so I added this to the code:
#define _ATL_FREE_THREADED


When I change this to apartment threading it seems to fail less, but with enough threads it still fails:
#define _ATL_APARTMENT_THREADED


I don't really understand this, as I would have thought that if the NameTranslate object was not thread-safe it would have declared itself as apartment-threaded, and when my free-threaded client invoked it, it would have been invoked in an apartment.

The logical thing to do is to put a big fat lock around the code so only one thread does the lookup at a time, but this is a serious performance hit! This guy seems to do just that: http://pyyou.wordpress.com/tag/userprincipalname/

I tried just locking the calls to NameTranslate (and not the ADSI calls), but this gives the same result.

I contemplated using DsCrackNames but haven't tried this yet.

I don't understand why this doesn't just work...

Wednesday, 18 July 2012

TNT Tracking is weird

I am still waiting (with breath held) for my retina MacBook Pro. They weren't kidding about 3-4 weeks and are about to take it to 4 weeks to the day.

Anyway, it went to 'shipped' on Tuesday evening, but the tracking number came up with no information until late yesterday. By then it showed the package had left the airport in Shanghai nearly a day earlier.

Today the status is the same, which seems odd. I noticed you can choose your country at the top of the page (the default when you click through from the Apple store is the UK for some reason), so I changed it to Australia, copied and pasted the tracking number in again, and now I see it has arrived in Australia! (But with none of the details from China.)

Then I thought I would try making China my country (you can choose English as the language), and this time I got a bunch more scan entries from China.

I have to ask - what is the point of a tracking system where you have to search multiple countries for the status of your package? I reckon the package will be delivered before the initial UK tracking site updates the status.

I'm sure TNT is under some stress if Apple are using them to ship all their new shiny MacBooks but still... Useless...

Sunday, 15 July 2012

Deadlocks

Ok this seems pretty obvious but for whatever reason I never noticed this before.

I was hunting a deadlock today that happens when a service loses its database connection. We use Windows critical sections to implement our mutexes.

There are upwards of 30 threads performing a variety of actions in response to request messages. Some have noticed the DB connection loss and are trying to handle it, while others are blocked waiting for things that the threads dealing with the connection loss still have locked. Urgh... And so a deadlock was born...

Anyway, while trying to untangle who locked what, I figured out that if you inspect a critical section in the debugger (shift-F9 brings up QuickWatch), it has an OwningThread member, which is the ID of the thread that locked the critical section!

Told you it was obvious and I should have noticed it earlier.

Well it helped me today so I thought maybe someone else may not have noticed this.

Wednesday, 11 July 2012

Bloody DHCP

Ok, so in a recent post I talked about how damn Optus has this DNS that redirects unknown addresses to looksmart or some such, and that you can get around it by using an alternative Optus DNS.

Well, it took quite a few attempts to change the DNS setting! I thought that as my DC had a second (hidden) LAN interface left over from the VM having been moved from VMware Server to ESX, it might be handing out the DNS configured on the hidden interface. My primary connection was coming up as 'Local Area Connection 2', and whenever I tried to change anything it would warn me that there was another interface with a duplicate IP address, but I couldn't see the duplicate device.

Apparently what you do to make hidden devices visible is open a cmd prompt and run:

set devmgr_show_nonpresent_devices=1

Then you run the Device Manager snap-in (devmgmt.msc), go to View and select 'Show hidden devices'.

Now I can see the disabled LAN interface and I just uninstall it.

Back to DHCP - I tried deleting the lease for my machine from the DHCP management console and doing an ipconfig /renew, but I still got the same (wrong) DNS. I tried doing a /release before the /renew, but still the same.

I broke out Wireshark and found, down in the details, that the DHCP server was handing out the wrong DNS!

Hmm, back to the DHCP console. I right-clicked the server options and chose 'Configure Options'. I selected option 6 (DNS Servers) in the General tab and added the correct DNS servers. I restarted the DHCP server and tried the /release /renew cycle, but no luck. Wireshark still saw the wrong address being handed out.

I rebooted the server - still no luck. I checked the logs to see if they said anything about the problem, but nothing.

I noticed that the reservation for one of my server machines had the wrong DNS server too. I removed and re-added the reservation - again the DNS was wrong!

I then gave up, deleted the scope and created a new one. I selected the DNS and WINS servers during the setup options and enabled the scope. /release /renew and voila! It works! I created the reservation again and it too gets the right DNS.

God it's hard!


Monday, 9 July 2012

ESXi 5.0 Update 1 Auto start VMs

As I said before, there is a bug preventing autostart of the VMs after the box boots. Fixing this turned out to be easy once I figured out the commands.

You can log onto the hypervisor by enabling SSH - go to the Configuration tab in vSphere, select Security Profile (in the Software box) and hit Properties in the Services area. Enable and start SSH.

Using putty, log on by going ssh -l root <machine> and entering your password.

Ok now you can run this command to figure out the IDs of the VMs you want to start (and the order). The ID is the first column:

vim-cmd vmsvc/getallvms


Then edit the rc.local file (vim /etc/rc.local) and at the end add a series of commands to start each VM, sleeping for a few seconds between commands. The command to start a VM is:

vim-cmd vmsvc/power.on <id>


So mine looks like this now:


#!/bin/sh


export PATH=/sbin:/bin

log() {
   echo "${1}"
   /bin/busybox logger init "${1}"
}


# execute all services registered in ${rcdir} ($1 or /etc/rc.local.d)
if [ -d "${1:-/etc/rc.local.d}" ] ; then
   for filename in $(find "${1:-/etc/rc.local.d}" | /bin/busybox sort) ; do
      if [ -f "${filename}" ] && [ -x "${filename}" ]; then
         log "running ${filename}"
         "${filename}"
      fi
   done
fi


vim-cmd vmsvc/power.on 3
sleep 10
vim-cmd vmsvc/power.on 4
sleep 10
vim-cmd vmsvc/power.on 6


Now when it boots it should start the VMs in the order you specified!

Sunday, 8 July 2012

Intel processor - Choose Carefully!

Sigh... Unfortunately the store (MWAVE in Lidcombe) did NOT allow me to return the i7-3770K and exchange it for an i7-3770. You may recall I discovered (the hard way) that the faster 3770K model DOES NOT support VT-D even though the cheaper one does.

I can sort of understand this, and would have been happy to pay a re-stocking fee, but what bugs me about this situation is that the descriptions of these units on the vendor's site said nothing about VT-d. There was no way to tell the difference; in fact the specs on their site were identical.

Exhibit A
http://www.mwave.com.au/sku-19010245-Intel_Core_i7_3770_Quad_Core_3rd_Gen_Processor_-_Socket_LGA1155_-_3_4GHz_(Turbo_

And exhibit B
http://www.mwave.com.au/sku-19010244-Intel_Core_i7_3770K_Unlocked_Quad_Core_3rd_Gen_Processor_-_Socket_LGA1155_-_3_5G

Ok I should have checked on the Intel site but this seems harsh.

Oh well I was just starting to like this place. Time for a new computer store.

Tom

Bloody Optus DNS

To make a domain work, the DNS server has to be under the control of the domain controller. Windows domains use all sorts of magic host names and records to find stuff.

The initial router I was supplied by Optus (a Cisco unit) allowed me to specify the DNS IP addresses handed out by the built-in DHCP server so all was good - the primary DNS was the domain controller and the secondary was Optus' own.

The router was fast enough but it would occasionally reset at random times (less random when it was hot). One day it gave up entirely. Optus were good in that they sent somebody out pretty much straight away, replaced the router and I was back up and running.

They replaced it with a Netgear unit which also seems pretty fast, but it doesn't allow me to specify the DNS! So after some head scratching I decided to try running DHCP on my DC. I've had problems with this before as, for whatever reason, the switch will not pass on the broadcasts, and I found this problem to be worse on wireless.

Anyway I've configured the DHCP server and this has been going Ok.

In the past I found the routers would act as DNS proxies, so you would configure the DC as the primary DNS and the router as the secondary. This router doesn't do this; it just passes the IP of the DNS it has been given out to the DHCP clients on the network. I didn't realize this and configured the router as the secondary, so the effect was that lookups worked (the DC's DNS would forward up to the network), but if the DC was down (say because I shut it down overnight) DNS wouldn't work. I figured this out recently and configured it correctly, so even if the DC is down, as long as the computer has a cached IP from a previous DHCP lease, DNS works.

The problem then is that if you try to access an address that isn't in the DNS, the stupid Optus DNS redirects you to this True Local search provider. This meant that lookups for my local servers by name often resolved to True Local! This is pretty frustrating and not helpful.

Turns out there is an Optus resolver that doesn't do this. This guy thankfully posted details of the IP addresses, and this seems to work:

http://justlocal.blogspot.com.au/2009/11/annoying-optus-dns-assist-feature.html

So life is good. If only I could get my VMs to auto-boot with the box...

Thursday, 5 July 2012

My Diffs now have Oil

It's been two years since Robin and I bought the nitro car (a Thunder Tiger EB4 S2), and while we don't use it so much in summer, we've put a few litres of fuel through the thing since we've had it.

It needs a bit of attention at the moment. Last weekend at the bashers track (Lansvale) we broke the wing mount, the aerial and cover. We still have problems with the exhaust coming off the rubber joiner every time we bump something.

So I spent some time this week stripping it down, including repairing the wing mount and just generally cleaning it. I massively tightened up the screw holding the exhaust and spent some time with some wire wool and some polish cleaning the crap off the tuned pipe.

Tonight I got the diff oil and spent some time pulling down the front and centre diffs, wiping all the grease off and filling them with diff oil. I've never stripped the diffs before and was very much looking forward to doing it. I find the internals of the diffs just amazing. The front one (pictured below) was in pretty good condition - it was tight and relatively smooth, although it was a little noisy. The centre diff has developed some play, and you can see from the colour of the gears it has gotten pretty hot. I suspect the centre diff will need replacing.



It took a bit of effort to get to the front diff. In the end I decided the easiest way was to undo the screws on the servo saver posts, undo the strut attaching the plate at the front to the centre diff and then undo the four screws holding the whole front unit in. Then I unscrewed the two really long screws going through the front toe plate, the two screws going through the shock tower and the two long silver screws holding the front cover on the diff housing. This got me into the diff housing so I could remove the diff unit.



I think if I repeat the procedure when pulling apart the back it will work. I'll have to undo the strut going from the wing mount down to the chassis as well.

I went with the advice I was given and am running 5K oil in the front, 5K in the centre and 1K in the back. It is quite viscous however, and I wonder if I will end up changing the front to 3K later. It may settle down after we drive the car though.

Anomaly

Finished reading Anomaly yesterday, and it was quite good overall. Anomaly is a budget (self-published) sci-fi book off Amazon.

Anomaly is a story about a strange physical anomaly that turns up in the middle of New York and which turns out to be an alien artifact. The story centres on a science teacher and a reporter who inadvertently get dragged into the analysis of the artifact. The anomaly captures a large sphere of road and buildings and rotates it relative to the land around it.

The characters struggle to think through what the anomaly might be and why it manifests itself as it does. They think through how to communicate with it and what its goals for being there might be. The book also imagines what the global consequences of the anomaly might be.

I really like the way the book portrays an interesting scientific anomaly without resorting to magic, but also without getting bogged down in scientific detail. I like how the main character thinks through it all just using simple logic. (Spoiler) The rainbow texta idea, for instance, of showing the alien the range of the human visible spectrum is great.

What bothers me is that I think there are other possibilities that the characters could have come up with based on the evidence, but magically the one they choose always seems to become the next step (well, mostly).

The other aspect that really bothers me is the global impact of the anomaly - the world turns to crap (riots and deaths) just because some weird alien artifact turns up? Seriously?

Yes, I know my last book rant was about religion, but again this author chose to portray most of the clergy as morons. He effectively sets up a few characters as the 'good cops' with a sensible world view, but in order to make his point he has to create a few moronic clergymen for them to argue with. This is at best clumsy and at worst just insulting.

In the book the author deals with the issues arising from the anomaly having turned up in the US and the US controlling its exploration. That part seemed very realistic to me, as I can imagine the scientists of the world going nuts when they were excluded. The only part I think the book gets wrong is the belief that this situation would be acceptable or necessary.

(Spoiler) And then the aliens turn out to be some sort of galactic police that ensure no new up-and-coming race does damage within the galaxy. Hmm, I wonder if this alien race invades planets to hunt down weapons of mass destruction too?

And I think that's what REALLY tarnishes this book for me - the nauseatingly American-centric viewpoint.

Otherwise the story trundled along and had lots of cool ideas, so it was probably worth the $2.00 or whatever it cost.

Wednesday, 4 July 2012

Multiple Inheritance

Found what is probably a 'classic' C++ error today.

We have

class Message;

class LogonMessage : public Message

Then we have the processor that is invoked to handle this message, which for whatever reason is defined like this:

class ProcessRequest

class ProcessLogonRequest 
    :    public ProcessRequest, 
         public LogonMessage

Now in an exception handler somewhere the code processing a ProcessLogonRequest catches an exception and dies:

void Service::process( ProcessRequest *processRequest )
{
    try
    {
...
    }
    catch( const SomeError& e )
    {
         Message *asMessage = (Message *)processRequest;

         //
         // This crashes
         //
         generateErrorResponse( asMessage->getSomeField() );
    }
}

I'm amazed this hasn't occurred earlier. I think it's because the processor mostly handles all the exceptions itself (it is only a system error, like a DB going away, that gets up this far) that it didn't crash before now.

The problem is of course that Message and ProcessRequest are siblings in the class hierarchy, and you can't cast from one to the other: the C-style cast just reinterprets the pointer without adjusting it to point at the Message subobject, so the call lands on the wrong data. You could first down-cast to a ProcessLogonRequest and then cast to a Message, but this isn't possible as the code doesn't know what it's got.

Thankfully RTTI is enabled so we just changed this to:

Message *asMessage = dynamic_cast<Message *>(processRequest);

But then there is also a risk that processRequest is an instance of a class NOT derived from Message, so we also needed to handle the case where dynamic_cast returns NULL.

The other way to solve it would be for ProcessRequest to define a virtual method that returns a pointer to itself cast as a Message but then I would have to change every sub-class of ProcessRequest (and these are numerous).

Tom

Monday, 2 July 2012

Working at home

Given the name of my blog I thought I had to share this (thanks to Dan). If that's the image of the developer after one year of working at home, what must I look like after nearly 10!

http://theoatmeal.com/comics/working_home

Sunday, 1 July 2012

Server Rebuild

I've been trying to figure out why my VMs don't start when the machine is powered on. Apparently it is a bug :( Have to wait for Update 2 of ESXi 5.0 for a fix. http://communities.vmware.com/message/2014677

The other problem I have been facing is that when I first configured the domain, I used a domain name based on my old (consultancy) company name. Initially I was using the domain for testing auto-enrollment functionality, but over time I became more reliant on it for authentication. Anyway - unfortunately I no longer have that company, and someone else is using the domain name. This causes havoc with name resolution. I decided to take the plunge and rename the domain. It shouldn't be too hard as I only have a single domain controller. Unfortunately it isn't straightforward and is a 10-step process involving a tool provided by Microsoft. The instructions are here: http://technet.microsoft.com/en-us/windowsserver/bb405948.aspx


I started the process and got an error from rendom /list: "The Behavior version of the Forest is 0 it must be 2 or greater to perform a domain rename: The server is unwilling to process the request. :8245"

Turns out I had to raise the domain functional level, which is easy - go to Active Directory Domains and Trusts, right-click the domain and choose 'Raise Domain Functional Level'. It still didn't work. After some googling it turned out I also needed to raise the forest functional level; to do this you right-click the 'Active Directory Domains and Trusts' entry in the tree and choose 'Raise Forest Functional Level'. Then rendom /list worked.

I hit another problem with Framdyn.dll not being found by the gpfixup.exe tool, which turned out to be because there was an error in my PATH variable! It's been there for ages! Just a missing semicolon between the path to the wbem directory and whatever was next in the path.

I also ended up restarting every computer in the domain so the new domain name would take effect, but overall the process was moderately painless.

Now name resolution works again!

Thursday, 28 June 2012

Server Rebuild

So the next step is getting the TV server running again. I created a 2008R2 Standard VM for this and installed Mezzmo.

The trick is getting the 1TB or so of video, music, photos and software installers that were previously shared on the network by the old server onto the new TV server box. I tried moving the HDD physically over to the ESX box, but for whatever reason you can't add a raw disk to an ESX VM the way you can in Workstation. Sure, you can add a SAN as a raw disk, but not an NTFS disk, which I found odd.

Ok... have to do it the hard way. I created a big temporary disk for the TV server on the main HDD of the new server, cranked the old server up again and began copying the files across the network. About 6 hours later this process was complete.

Then I put the HDD into the new server, added it as a datastore (which formats it for VMFS and wipes anything that was on it - god I hope I got everything off!), then added it as a disk to the TV server. I then began copying the files BACK from the temporary space I created onto the HDD from the old server. In a few hours it might be complete... sigh...

Meanwhile I was looking at the new VT-d magic. Now the ASRock Pro3 says it supports it, and in the BIOS there is a setting for it, but the BIOS says it isn't supported by the processor. I found this odd so I did some more digging.

For some bizarre reason the i7-3770K model (which is a tiny bit faster and unlocked) DOES NOT support VT-d, but the cheaper 3770 (without the K) DOES. I am talking to the store now to see if I can swap it. I don't like my chances. Probably not the end of the world, but it would be nice in terms of IO speed (both disk and LAN).

Tom

Wednesday, 27 June 2012

Server Rebuild

So last night I began the process of moving my VMs (test domain controller, two Oracle DBs) from the old HP xw8200 machine onto the new i7-3770 ESXi server.

VMware vCenter Converter does the job - it's simply a matter of pointing it at the VM file and the ESXi server and letting it go. The only problem was that the first VM (the main DB) took a little over 8 hours to complete!

I use the domain controller as an SSH hub as well as for testing, so I thought I'd copy the VM before I began. This also took 2 hours! It was only 15GB or so. Hmm, there is something going on here. A disk speed test showed the old server's drive bandwidth at a little less than 4MB/s - that's pitiful!

So I kill some apps, I pull the server apart and go looking for loose SATA connectors etc but no luck.

I noticed that in the device manager there are two Ultra ATA storage controllers and two primary/secondary channels. The primary/secondary channel devices, in their Advanced Settings tab, said the transfer mode selected was 'DMA if available' but the current transfer mode was PIO. Hmm, that's not right.

I dug a bit further (which involved downloading and re-installing device drivers from HP, lots of rebooting and poking things). Eventually I found in the BIOS that the drives were configured for PIO. I found an article saying that after updating the BIOS from 2.02 to 2.04 these settings can be messed up and the performance suffers. It recommended using the option to reset the BIOS to defaults, so I did this.

Resetting the BIOS to defaults didn't seem to help much, and it enabled the damned SCSI adapter (which I don't use and don't have a driver installed for). I shut down and dove back into the BIOS configuration. I changed both the default settings and the settings for each drive so it reads 16 blocks at a time and uses Max UDMA. I rebooted and this worked! Disk speed was back up to 20MB/s - it copied 40GB of data from an IDE disk to a SATA disk in about 40 minutes. I kicked off the next conversion and it is running much more quickly!

I also started work on the UPS configuration. Turns out CyberPower have a 'business' edition of their software suite that works on Linux and ESXi. It comes as an 'Agent', a 'Client' and a 'Centre'. From what I can tell, you need an Agent to talk to the UPS (mine is USB; some have network cards), and you install a Client on each VM (or computer) that you want to shut down when the power goes.

To get ESXi itself to shut down (and to get it to talk to the USB connection) you have to install the (free) VMware Management Assistant (vMA) image. It is just Linux with extra tools for managing the ESX server.

The next trick was getting the installer in there. You have to copy it into a folder somewhere on the datastore and then use the vifs command inside the vMA to get it into the vMA VM. Then you can install it.

I gave the vMA a fixed IP address, and when I installed the Clients I just pointed them at the IP of the vMA; they can then get information on the state of the UPS.

Ok the rebuild is getting closer to completion!

Tuesday, 26 June 2012

Server Rebuild

So all the hardware for my new server arrived yesterday.

The first hurdle was that the power connectors on my old Antec TruePower supply are different. The PSU has a 20-pin main power connector (instead of 24) and a 4-pin 12V CPU connector (instead of 8). I did some searching through the manual, and it indicated that if I install the connector at the bottom and leave some holes open it will work.

I was curious what risk I might be facing doing this, so I went googling. Turns out the additional pins are there just to provide higher current-carrying capacity - see here: http://www.playtool.com/pages/psuconnectors/connectors.html. This guy's estimates of the max power for each connector are far beyond what I will be drawing. As I am using the built-in graphics hardware, the machine will use much less power than the potential maximum. So I decided to give it a go and upgrade if the wires/connectors got hot.

I pulled all the old Pentium 4 parts out of the case and began sorting out the wires. The case has a window and a cold-cathode light plus an illuminated window fan, which are nice but generate a load of wires. I began tidying these up with cable ties to get them out of the way. I can't do all of them until I get all the drives in there, and this won't happen until after I retire the old server.




Robin Helping

The next challenge was the CD drive. The old PC had an IDE drive, which is fine for the few times I would use it, but the new motherboard has no IDE connector! The solution was unfortunately to rip my desktop open and steal the DVD drive out of it just long enough to install the OS.

I left all the drive wires loose and only put in the new 2TB drive that will be the system drive for the new server. I removed the old floppy drive at the same time, and I found covers (from the old motherboard box) for the hole where the floppy drive went. While at it I found spare covers for the expansion slots where the old video and wireless cards went (don't need these any more).

Cleaned out
Robin helped me plug all the case connectors on (HDD light, power switch, power LED, reset switch etc). It was getting on for bed time for Robin, so I sent him on his way and continued with the installation of the front USB connectors. There's no firewire on the new motherboard, so I just left that one loose. I connected the USB to the USB 2.0 header (not 3.0) as I'm not sure of compatibility - will check that later.

Next step: boot! I connected the server to a screen, pulled the wireless USB keyboard/mouse thing from my desktop and plugged it into the new server, plugged the power in, turned on the switch at the back and hit the power switch on the front panel. I put in the ESXi CD and it booted!

ESX goes through its thing, asks me to sign in blood and then tells me there are no drives it can install on. It basically doesn't recognize the SATA drive. I reboot, go into the BIOS, and sure enough the BIOS sees it. I do some googling (from my work laptop, as my desktop is in pieces) and find an article saying that ESXi doesn't support SATA on the Z77 chipset and you have to use a USB drive! But I checked this - other people said it was compatible!

After a while it occurred to me that the ESXi CD is one I downloaded ages ago to install on the old HP box. I put the DVD drive back into my desktop, boot it, go to the VMware site, and there is an ESXi 5.0 Update 1. I download and burn that, move the DVD drive back to the new server and try to boot again. Phew - it now detects the SATA drive and we are in business.

About 5 minutes later ESX is installed. Step 1 complete.



Monday, 25 June 2012

Wired

Just finished reading Wired by Douglas E. Richards on Kindle (on my phone).

Lately I've been working my way through the cheap Kindle titles and have had some luck. For example the Star Force series by B.V. Larson, while not fantastically written, is pretty entertaining. In other cases I've not been as lucky, and I feel this is one of them.

On a side note - has anyone else noticed how the $0.99 books all became $2.99 books? And how the new releases for $9.00 all moved to $13 and higher? Did you notice this increase in price was timed to be just after the demise of the major paper book retailers? (Especially in Australia).

Anyway, back to the book (some spoilers). The basic premise is that a slightly damaged ex-special-forces dude gets brought back out of retirement to hunt down a scientist who has gone rogue and is threatening mass destruction. He discovers she has actually developed technology to massively enhance her intellect for short periods (among other things). She has been set up, and they (the special forces dude, the scientist, a hacker and another army guy) all go on a meandering journey to find out who set them up and right the wrongs.

The pace of the book is excellent, and certainly at the start some of the situations set up by the scientist to avoid capture were very clever. The action scenes are tense, and occasionally the characters are in some pretty tight situations. As the book progresses it relies more and more on the brain-enhancement tech to get the characters out of trouble, which quickly gets tiresome.

I have a bunch of problems with this book. The main one, I think, is that for whatever reason I found it hard to accept the basic premise of brain enhancement implemented *genetically* via a retrovirus. The capabilities they gained just seemed implausible, and at the end of the day, no matter how big or fast your brain is, you can't know what you can't know. Sure, the capability gives them perfect and total recall, but occasionally I feel the result doesn't follow.

Now let me preface this with the fact that, whilst I was brought up a Catholic, my views on religion are at best quirky and probably undeveloped. The book however argues that the more intelligent people are, the more anti-social they become. Super-enhanced people begin to see the people around them as normal people would apes. Furthermore, it argues that if you are super-enhanced you see the fallacy of the after-life and therefore tend to be less altruistic and more selfish. Then having absolute power (cough) and a sense of limited time compels you to act more selfishly, as there are no risks associated with this behaviour.

First of all, there is just so much research out there on co-operation and how it increases the overall effectiveness of a group that I would have thought a hyper-intelligent being would see this. Secondly, if the author argued religion was invented so people could cope with their own mortality or the loss of loved ones, or said religion was invented to explain the reason for your existence, I could bite; but really, there are enough examples of humans acting selflessly independently of religion. Even arguing religion was constructed to enhance altruism is a stretch if you consider early religions.

Overall I was pretty disappointed and won't be going back for the sequel.

Sunday, 24 June 2012

Server rebuild

I have an old HP xw8200 machine that I use as a server at home. It runs the two Oracle DB VMs I use for work and our cache of TV, which we watch via Mezzmo (a DLNA server).

Unfortunately the stupid thing is starting to get flaky. I bought it before ESX was free, so I ran XP on it with VMware Server (which was free) to run the VMs. This worked Ok, as some software I wanted to run on it was only available for XP (the TV card software), but it still ran some servers to support my work needs.

It really needs more RAM, and a while back I bought 8GB to upgrade it and planned to rebuild it using ESX. Well, unfortunately ESX won't run on it! While the machine is 64-bit, it is only sort-of 64-bit and doesn't support the VT-x instructions. I tried 2008R2 and even 64-bit XP, but nothing worked. Even worse, when I put 8GB of RAM in the machine, XP thinks there is just 2GB, and consequently all the VMs run quite badly.

So what to do? Well while the processors in the machine are not quite what I need, the hardware is very robust. My office is not air-conditioned and can get pretty hot in summer (40C on a bad day). The HP survived all that.

My plan was to get another 'tower' server. First of all I was alarmed to discover *none* of the HP tower servers are listed as compatible with ESXi. Also the cost is nearly double a desktop equivalent. For example, an i3-based server like a ProLiant ML110 G7 is around $AUD900-$1000 with a pitiful amount of RAM. A basic Xeon one is more like $AUD1500, and over $2000 with a decent amount of RAM.

BTW I found this site enormously useful when working out the relative performance of different processors, as well as comparing the server and desktop processor lines:
http://www.cpubenchmark.net/high_end_cpus.html

I looked at Dell servers, as they do appear to be ESX compatible, but again an E3-1220-based unit (which is roughly equivalent to an i3) with 16GB of RAM is around $AUD1400.


Ok, do I really want an off-the-shelf server? I have an old Antec case around the house that has a dead P4 motherboard etc. in it. What if I build a server inside that box? Hmm.


I could buy a server mobo, a Xeon chip and ECC RAM and build one. I looked at the ASUS P8B WS motherboard http://www.asus.com/Motherboards/Intel_Socket_1155/P8B_WS/ and at a few Xeons (like the E3-1280), and while this would work and give me a server with reasonable specs, the cost ends up being well over $AUD1200.

I was talking to a friend about this, and he asked if I really need a server machine. Ok, so non-ECC RAM may not be as reliable when running for hours, but my desktop runs for many hours a day and I shut the server down overnight anyway. Maybe a custom-built desktop would do the job.

So then I dive into the world of desktop chipsets, Ivy Bridge vs Sandy Bridge and so on. Turns out that as Ivy Bridge integrates GPU functionality into the chip, most mobos include graphics output, which is nice for a server as I don't have to waste money/power on a gfx card.

Also, I can get a pretty fast processor (i7-3770) for the cost of the Xeon, and a Z77 motherboard plus fast RAM for 3/4 the cost of the server stuff.

Then I do more hunting... Apparently many Z77 motherboards don't work with ESXi. Sigh... The problem appears to be the LAN card: the ones with a Realtek-based LAN card apparently work, but with the others either you need to hack the install or they simply don't work at all.

Thankfully I found this spreadsheet (https://docs.google.com/spreadsheet/ccc?key=0AjymuQhfM0vYdHZtNThGSllMeU1SMU9ldVltUmp4NWc#gid=0) which lays this out. In the last three builds or so I have tended to go with ASUS motherboards, but it turns out the ASRock unit is the best one here as it has a Realtek card. The ASRock is also quite cheap at a little over $110.

By this point my head was positively spinning with compatibility matrices, motherboard chipsets, processor chip lineups etc.

I don't understand why it has to be so hard. Anyway, I've ordered the desktop stuff based on an ASRock Z77 motherboard and the Intel processor. I bought some fast G.Skill RAM, as when I did this in the past I found the performance boost worthwhile.

Should be here in the next couple of days and I'll post more then.

Work at home Zombie?

Welcome to my inaugural post!

For those who are curious - some years ago I took an overseas posting with an up-and-coming startup. The company had interesting products but a typical dot-bomb business plan, and so its market cap graph looked like the cross-section of the Matterhorn, except it very much made it back down to sea level on the far side.

Consequently, when I returned from my secondment there was actually no company to employ me, as it had been sold off/chewed up/spat out. Instead they kept me on, but I worked from home (10 hours out of sync with the office!).

When I left to return home from my overseas posting I found it a bit like leaving the company - they had a lunch and wished me well etc. I wasn't quite leaving though, as I was still in contact via conference calls and email. I found it very much like the employment version of being un-dead (un-fired).

And hence the WAHZ label was born.

Anyway, I am a software developer specializing in PKI software, security, C++ and general Windows development. I like Java, have written way more Perl than I'll admit, and I like designing stuff before I cobble it together. I have three children that very much fill my schedule outside of work and a busy wife who teaches primary schoolers.

My plan is to use this blog to publish tid-bits of software techniques or design information I come up with. I'll also post about my home projects (both technical and construction), what the kids are up to and what I'm reading.

Hope you enjoy it. I'll try not to get bored with posting to it too soon.