We are where we are...

Posted by Richard Newstead on 10th May 2017

As a teenager back in the 1970s I spent a lot of time building HF antennas with a couple of friends. One, James G4EZN, lived on a farm and that was the centre of our activities. All sorts of antennas were built and tested. Every new antenna yielded unexpected DX opportunities. They all worked! Testing was limited to the time-honoured ham radio method of seeing what DX we could work.

Roll on a few years and antennas were still a passion. Like many people, I have always lived in houses with limited real estate, but contesting gave me the chance to experience bigger and better antennas at other locations. At home I ended up in a cycle of putting up an antenna, using it for a few months and then replacing it. Each replacement antenna was accompanied by a period of increased activity as it was tested, and inevitably those periods included more DX contacts than before; or so it seemed. In reality I was falling prey to an effect well known to psychologists, where people tend to overestimate the benefits of something new. Like many people, when I had two antennas available I did my testing by swapping antennas and listening or transmitting. It’s still a popular method - there are many YouTube videos to prove it.

About a decade ago various developments in the hobby gave rise to new ways of testing antennas. At about the same time my interest in the details of HF propagation increased. I did a great many experiments with Peter Martinez G3PLX (mostly looking at Doppler effects). With a professional background in science and radio systems engineering it was perhaps inevitable that my interest would move towards making measurements of antenna performance.

A first step was to investigate the utility of the Reverse Beacon Network (RBN). Lots of people seemed to be using this for antenna experiments and amazing results were being claimed. I saw posts on the internet from one person who said that he could measure the loss of even a short length of coax using the RBN. Very few people actually provided the data that they used to draw their conclusions, but those that did seemed to be basing them on tiny data sets. I set out to investigate how good such measurements might be with the simplest experiment (but oddly one that no-one seemed to have done). Using a precision 6 dB attenuator and a single antenna I made a few transmissions with and without the attenuator. Sure, there were differences, but they were not 6 dB and neither were they consistent. It seemed to me that people were deluding themselves.
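The post doesn’t include the analysis itself, but as a rough illustration of the kind of sanity check involved, here is a minimal sketch in Python (with made-up spot data and invented reporter names) of comparing RBN signal-to-noise reports taken with and without a known 6 dB attenuator. The point it makes is that with only a handful of spots per reporter, the scatter in the differences is comparable to the 6 dB step being looked for.

```python
from statistics import median, pstdev

# Hypothetical RBN spots: reporter -> SNR reports (dB),
# one run with the 6 dB attenuator in line and one without.
snr_without = {"skimmer_1": [18, 21, 17], "skimmer_2": [9, 12], "skimmer_3": [5]}
snr_with    = {"skimmer_1": [14, 10, 16], "skimmer_2": [8, 2],  "skimmer_3": [3]}

def per_reporter_difference(a, b):
    """Median SNR difference (a - b) for each reporter heard in both runs."""
    common = set(a) & set(b)
    return {r: median(a[r]) - median(b[r]) for r in common}

diffs = per_reporter_difference(snr_without, snr_with)
values = list(diffs.values())
print("Per-reporter differences (dB):", diffs)
print(f"Median step: {median(values):.1f} dB, spread: {pstdev(values):.1f} dB")
# With so few spots the spread rivals the 6 dB being measured, which is why
# tiny RBN data sets give inconsistent answers.
```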

I was in at the start with WSPR, and that too gives a signal-to-noise figure for each report. People were using that for antenna tests but the same problems seemed to be present: small data sets and inconsistent readings, poorly interpreted by users. While I didn’t do many tests with WSPR, it seemed little better than the RBN.

Moving on a few years, I was in a position to look once again at these techniques. People were now suggesting that WSPR gave good results if you did A/B switching in adjacent time slots. Given the fading characteristics of the HF channel and the 2-minute length of the time slots, that appeared quite unlikely; especially as the readings seemed to need to be done with a single receiving station, and you have no idea of how its noise floor might vary. I was also a bit uncomfortable about the potential, with this measurement protocol, to compromise the WSPR system by un-randomising the transmissions.

As we were developing a WSPR product at work, I wondered if there might be another way of analysing the WSPR data to remove some unknowns. As part of our development work we had been ranking reception reporters (not reports) in order of distance over a given time-frame. We then decided to add some additional metrics, including the average distance. Running with that for a few days, it occurred to me that a graph showing the average range might be interesting. As it transpired it was extremely interesting, showing the ebb and flow of the bands in a very clear way. This metric underwent some refinement and ended up as “DX10”, a handy performance metric.
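The post doesn’t spell out the exact DX10 definition, so the following is only a sketch of the general idea - rank the unique reporting stations by distance within a time window and average the furthest of them. The window length, the number of reporters averaged and the field names are assumptions for illustration, not the production implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Spot:
    reporter: str       # callsign of the receiving station
    distance_km: float  # great-circle distance to the reporter
    time: datetime      # time of the reception report

def dx10(spots, window_end, window=timedelta(hours=1), top_n=10):
    """Average distance of the top_n most distant unique reporters
    that heard us within the time window (a DX10-style metric)."""
    recent = [s for s in spots if window_end - window <= s.time <= window_end]
    # Keep only the best (furthest) distance per reporter, not per report.
    best = {}
    for s in recent:
        best[s.reporter] = max(best.get(s.reporter, 0.0), s.distance_km)
    furthest = sorted(best.values(), reverse=True)[:top_n]
    return sum(furthest) / len(furthest) if furthest else 0.0

# Hypothetical data: a handful of WSPR reports over the last hour.
now = datetime(2017, 5, 10, 12, 0)
spots = [
    Spot("reporter_far",  14650, now - timedelta(minutes=10)),
    Spot("reporter_mid",   5900, now - timedelta(minutes=20)),
    Spot("reporter_mid",   5900, now - timedelta(minutes=50)),
    Spot("reporter_near",  1800, now - timedelta(minutes=5)),
]
print(f"DX10-style metric: {dx10(spots, now):.0f} km")
```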

But what about antenna comparisons? We decided to add a facility to allow people to compare their DX10 graph with others. This was another step forward and it was now easy to see who had the best system (DX10 was evaluating both the antenna and the site, of course). Tests with similar antennas on an open site showed similar results and it was easy to see the effect of small (3 dB) power differences on identical antennas. DX10 seemed to have got around the main difficulties by using a metric based on two simple measures: where the receiving station was and whether it actually received you. Using the built-in power control in the WSPRlite it was even possible to work out a difference between antennas in decibels.
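The post doesn’t describe exactly how the power control is used to put a decibel figure on an antenna difference, but one plausible approach, sketched below with made-up numbers, is to step the transmit power on one antenna until its DX10 matches the other antenna’s, and read the difference off the power scale. The power levels, DX10 values and matching rule here are all hypothetical.

```python
# Hypothetical calibration: transmit on antenna A at several power levels using
# the WSPRlite's power control and note the DX10 each level achieves. The level
# at which antenna A matches antenna B's DX10 gives the difference in dB.
power_steps_dbm = [30, 33, 36, 37]            # assumed test powers on antenna A
dx10_antenna_a  = [3100, 4200, 5500, 6000]    # made-up DX10 results (km)
dx10_antenna_b  = 5500                        # antenna B at its normal 37 dBm

# Pick the antenna-A power whose DX10 is closest to antenna B's.
match_dbm, _ = min(zip(power_steps_dbm, dx10_antenna_a),
                   key=lambda pair: abs(pair[1] - dx10_antenna_b))
print(f"Antenna A matches antenna B at about {match_dbm} dBm, "
      f"suggesting antenna B is roughly {37 - match_dbm} dB better.")
```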

But DX10 opens a wider, almost philosophical, discussion as to what exactly is being measured. It’s clearly something to do with system performance and with care it’s possible to attach a single number to it in decibels. However, almost all tests compare antennas with different polar patterns and/or different polarisations. Having a single metric to evaluate DX performance is certainly a useful tool, but it’s only one tool.

Because WSPRlites are relatively cheap and very easy to use, lots of people bought more than one to do comparisons: Frank Donovan W3LPL was one such person. As part of his analysis he looked at the signal-to-noise reports and corresponded with me about them. As my previous experience of using WSPR S/N had not been very positive, I was not actually that interested (sorry Frank!). I get emails weekly from people who read far too much into flaky data. But something was different with Frank’s interpretations: firstly, he had huge amounts of data because his antenna system was so large; secondly, he filtered for wave angle; but most interesting to me was the third factor - he claimed he could see some quite small differences in antennas that he had always suspected but never been able to measure. This seemed worthy of further investigation.

I asked Frank to do some controlled tests where we knew the answer - tests with attenuators and power combiners. Usually I never hear from people again when I suggest verification tests. After all, who wants to waste time measuring something they already know? But to my surprise Frank was up for the ride and he did a number of careful tests. These gave convincing results.

To support Frank’s experiments, we developed some tools to help him gather the data, and as time went on we developed them further to do the basic analysis completely automatically. We also developed some special firmware for Frank’s WSPRlites that allows them to run synchronised - which greatly increased the data rate and thus reduced the time to get results - but without compromising the WSPR system.


We released this to the WSPRlite community a few days ago. I wonder where we will go next.