About Author: Philip

Born and raised in England, moved to France for 10 years, then to the USA (Oregon) in 1992. Various interests including Ham Radio, Shooting and Photography.

Posts by Philip


Evil Bastards

For a variety of reasons, I prefer to run my websites, and those of a few friends, plus email and various other services, on my own server. It is sitting here beside me, occasionally winking its disk access light at me.

Doing this is apparently an open invitation to half of the population of China, 30% of Russia and practically the entire populations of Poland, Vietnam and a host of other countries to attempt to break into the machine, either for purely vandalistic reasons, or to try to use the system to launch further attacks or to deliver SPAM.

Last Christmas I treated myself to the bits to build a more compact, but much faster and more capable server. I, of course, loaded the latest and greatest versions of all the software and frequently load all the latest patches. So far, there have been no direct successful break-ins.

However, there is a continuous stream of people “rattling the doorknob”.
I noticed that some of the log files were getting big. Fast. There were hundreds of attempts per minute to remotely log into the system, trying random user names and long lists of passwords.
One or two of these I would just ignore, because they are not going to get in with just a username and password anyway. But the sheer volume was just annoying.

I have a firewall between the Internet and my internal network, so I set up a set of rules to allow people three attempts to log in successfully (after all, I want to be able to do it myself), and on the third failure, block that IP for 24 hours.
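On Linux, rules along these lines can be sketched with the iptables “recent” module. This is only an illustration of the idea: the port, chain layout and time window are assumptions about my setup, and note that the firewall counts new connections rather than actual login failures, which is a reasonable approximation.

```shell
# Record every new connection to the SSH port from each source IP:
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name SSH --set

# Drop any IP making a 4th new connection (i.e. after three tries)
# within 24 hours (86400 seconds):
iptables -A INPUT -p tcp --dport 22 -m state --state NEW \
  -m recent --name SSH --update --seconds 86400 --hitcount 4 -j DROP
```

Because `--update` refreshes the timer on each attempt, a blocked IP that keeps knocking stays blocked, which is exactly the behaviour you want here.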

After 24 hours, the list was hundreds of entries long. But there were now many fewer people rattling the doorknob.

Next, I woke up one morning to email on my cell phone with several hundred bounced email messages.
I don’t allow people to route email via my system unless they can authenticate, but someone, by continuous trial and error, had somehow guessed my wife’s password and was using it to send SPAM – by the thousand. A quick change of password fixed that, but I wasn’t happy looking at the continuous stream of attempts to guess passwords on the email server.

Back to the firewall, and a new set of rules to do basically the same thing that I did for remote login. As soon as it had blocked a few hundred IP addresses, things quietened down.

Looking at the IP addresses in the block lists, most are from China, followed by Russia, Poland and a variety of smaller countries.

A while later, there was another deluge of bounced email messages. A bit of digging around revealed that these were coming via the web server.

I run a few websites. For simplicity, I have tended to set up WordPress for most of these. It’s relatively easy for people to use (I don’t have to get too involved) and can be made to look not too bad relatively easily. The problem with WordPress is that it is a pile of s**t. Full of holes for hackers to exploit. I had already somewhat locked down user registration, requiring anyone wanting to register on one of these sites to respond to an email to activate the account. Since 99.99% of the “new users” come from bots, they tend not to reply to email. But that doesn’t stop them trying dictionary attacks, guessing usernames and passwords.

So, you say, what is the worst they can do? Post a few spammy messages? Well, no. There is a utility in WordPress intended for things like “trackbacks” (which I consider useless anyway), but which also basically allows execution of arbitrary commands — once you have authenticated. That had been used to add extra code to the website to go fetch SPAM and a list of addresses, and to blast it out to those addresses.

A quick delete of the entire website, and a restore from a backup removed the problem. But I had to stop it happening again. First thing I did was to change the permissions on that evil little tool (xmlrpc) to completely disallow access.
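For an Apache-served WordPress install, one way to disallow access (a sketch; the file sits at the site root as xmlrpc.php, and your layout may differ) is to deny it at the web server rather than rely on file permissions:

```apacheconf
# In the vhost or .htaccess for the WordPress site:
# refuse all requests for xmlrpc.php (Apache 2.4 syntax)
<Files "xmlrpc.php">
    Require all denied
</Files>
```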

My logs now began to fill with error messages caused by attempts to access this file. There were also various other things they were trying to access, including the login process.

Unfortunately, since this was a website, there was no easy way to detect/block these at the external firewall. There is a tool (fail2ban) that can be used to scan log files for certain patterns, and block IPs locally on the machine when it finds those patterns. I took a look, and decided that it was overly complex, so let it drop, and decided to just ignore the garbage in the logs.

Then, someone found a new bug in WordPress, and within a day hacked every WordPress site I had to convert them into sites via which to download all sorts of undesirable content (mostly stolen/hacked software, documentation etc). A new WordPress release was rushed out to fix this hole, but everything was already hacked. So, one by one, delete the website, restore from backup, upgrade WordPress. Of course, there were still thousands of hits on the old download URLs (which no longer worked), as well as attempts to restore the hack.

I gave in. I installed fail2ban.
I set it up such that anyone just touching the xmlrpc file was immediately blocked.
Anyone attempting to access the old hacked URL was immediately blocked.
Anyone trying to authenticate more than N times within M seconds was blocked.
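A fail2ban setup for the first of these rules looks roughly as follows. This is a sketch only: the filter/jail names, log path and regex are assumptions about my layout, and the two-week bantime is in seconds.

```ini
# /etc/fail2ban/filter.d/wp-xmlrpc.conf  (hypothetical name)
[Definition]
failregex = ^<HOST> .* "(GET|POST) /xmlrpc\.php

# /etc/fail2ban/jail.local  (log path is an assumption)
[wp-xmlrpc]
enabled  = true
port     = http,https
filter   = wp-xmlrpc
logpath  = /var/log/httpd/access_log
# one touch is enough
maxretry = 1
# ban for two weeks
bantime  = 1209600
```

The other two rules differ only in the failregex (matching the old hacked URLs, or the login/authentication requests) and in the maxretry/findtime values.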

There was still a lot of traffic trying to authenticate. It was coming from random IPs. These people have vast numbers of hacked machines that they use, so you tend not to see the same IP too regularly.

But, this website is a little different. It’s mine. Only I log on to it. So I set this one up so that any authentication attempt not coming from my computer gets blocked immediately. That locks out that IP from all the other websites too.

The lockout is about two weeks.

Looking at the list of IPs blocked just by fail2ban, there are currently 1,936 of them, and growing steadily.

I do wish these people would go find another job. I find doing this stuff annoying.


Lifetime Retrospective

On my Facebook timeline I have several groups, including some for the area where I live, and also the area where I grew up. The Wednesfield and Wolverhampton groups are mainly populated by people who grew up there, and still live there. They have spent their entire lives probably within 30 miles of where they were born, just as their ancestors did for hundreds of years previously.

I couldn’t help wondering if my life might have been better spent if I had remained in the area rather than moving around, and I also notice that their lives seem to revolve less around work than mine seems to.

One time, when I was laid off (AOL), they paid for some counseling courses to help find new jobs. I already had a job lined up, but decided to go anyway just to see if I could learn anything useful. One of the first group exercises was to sit in a circle and introduce ourselves. “Who are you” was the question. The circle answered, one by one. At the end, the counselor said, “Fine. Now let’s do it again. Tell us who you are, but DO NOT tell us what your job or trade is. I am not interested that you are a manager, a computer programmer etc. I want to know who YOU are.”

It was really very hard to answer. We are our jobs. But we shouldn’t be.

So, remembering this, I tried again to ask myself if my life had been better than if I had remained in the area where I was born, and had taken a more classic job, which may well have paid as much, or even more, and would have kept me with people I knew.

To answer this, I decided to forget about the jobs I have had, but to concentrate more on what have I done in my life. The list I drew up probably answers my question.

  • I have lived in three different countries, taking my family with me, and dropping off kids in each as we moved (England, France, USA).
  • I have learned French. (I learned American too, but most people don’t understand the distinction).
  • I have surveyed Paris from the top of the Eiffel tower.
  • I have stood on a bridge over the Seine, which was vibrating to the loud music played celebrating Bastille Day, having my teeth rattled by the explosions of the best fireworks display I have ever seen.
  • I have stood on the top of a mountain in Switzerland after midnight, seeing the entire mountain range lit by starlight, and the glow of the village thousands of feet below.
  • I have sledged down the above mountain, to the village with only stars to light my way.
  • I have crossed the Atlantic at twice the speed of sound.
  • I have been in a 747 as an engine blew up.
  • I have flown in a hot air balloon.
  • I have fired a machine gun.
  • I have walked on the deck and ridden on the lift of an aircraft carrier.
  • I have eaten BBQed sardines in Portugal.
  • I have visited the catacombs in Paris.
  • I have seen the Colosseum, and walked the same streets as Caesar.
  • I have explored lava tubes.
  • I have learned to dive.
    • I have seen “walking stick” eels 300′ down.
    • I have seen a 15′ white tipped shark cruise by me.
    • I have heard the whales sing.
    • I have had a giant sea turtle swim beside me.
    • I have seen a Moray eel too close for comfort.
  • I have walked on the black sands beach in Hawaii.
  • I have looked down into an erupting volcano from a helicopter.
  • I have walked on still hot lava, and stood as close as heat permits to flowing lava.
  • I have experienced a -40˚C Calgary winter.
  • I have experienced a 2 hour long day, with the Sun just kissing the horizon.
  • I have seen England change until it is no longer the country I grew up in.
  • I have stood at the base of the New York Twin Towers, looking up and wondering how much of a mess it would make if one fell down.
  • I have walked part-way down into the Grand Canyon (past the notice that says “Do not proceed beyond this point unless you have food and at least two gallons of water per person” — who would not, upon encountering such a sign?)
  • I have experienced the heat of the Arizona desert.
    • I have experienced being in a car in the Arizona desert, miles from a main road, with its wheels sunk into soft sand (I was not driving).
  • I have experienced the desolation of the Eastern Oregon high desert lava plains.
  • I have seen the desolation caused by the explosion of Mt. St. Helens.
  • I have been in a large earthquake in San Francisco.
  • I have seen the giant Redwood Trees.
  • I have been to a Beach Boys concert in California.
  • I have watched an auction where several billion dollars changed hands.
  • I have eaten live shellfish in Malta.
  • I have flown more than one million (physical) miles.
  • I have been to New Orleans during Mardi Gras.
  • I have held a (baby) alligator in the Everglades.
  • I have experienced Independence Day celebrations in Boston.

Many of these experiences have been related to, or as a result of my job. Many have not.

Do these sorts of experiences make for a “better” life? I don’t know, but looking back, I can hardly complain that my life has been boring.


Revolver vs Semi-auto

Perhaps even older than the perennial argument about the relative superiority of 9 mm vs .45 ACP is the argument about the viability of the revolver vs the semi-auto as a self-defense weapon.

There are still those that claim that the revolver is a better choice for some classes of people, most notably women. The argument is that a revolver is simpler, and easy to operate. Presumably, the implication here is that women are also simple and mechanically inept. Not only is this untrue (at least in my experience) but it is inviting (at minimum) a poke in the eye with a sharp stick.

So let’s take a look at the relative advantages:

  • Capacity. Even small semi-autos are competitive in terms of capacity. Small revolvers firing decent power ammunition will typically have 5 or maybe 6 rounds. A small semi-auto will typically hold 10 rounds. Even the minute Ruger LC9 9mm holds 7+1.
  • Reloading. Reloading a semi-auto typically involves pressing a button to drop the empty magazine, sliding a full magazine into place, then releasing the slide. Reloading a revolver typically involves pressing a button to release the cylinder, swinging the cylinder out, hitting the ejector to extract the used cases, then loading rounds, typically one by one, into the cylinder and finally closing the cylinder. This can be sped up by the use of speed-loaders or moon-clips, the only disadvantage to these being their bulk.
  • Physical size. One of the problems with a revolver is that there just isn’t much you can do about the diameter of the cylinder. For a given capacity, a semi-auto is always going to be smaller.
  • The slide. The slide on a semi-auto is probably the most intimidating part. Having that chunk of metal whiz back at blinding speed 1/8″ above your hand is disconcerting until you get used to it. Then there is the problem of racking the slide. The spring on some guns makes this a challenging task for the uninitiated. However, I contend that with a bit of coaching, I can get people who claim that they just can’t do it happily racking their slides in a couple of hours. It is mostly technique, although there will undoubtedly be some people with real physical problems for whom this will always be a problem.
  • The cylinder gap. The gap between the front of the cylinder and the barrel leads to a blast of hot gas (flame) and potentially small pieces of metal blasting out with each shot. Unless you fire a revolver in the darkness, this is often unnoticed until someone gets a finger or hand in the way. As with the semi-auto slide, dealing with this is simply a matter of training yourself to keep your hands well away.
  • Mechanical complexity. The revolver is typically touted as being mechanically simpler. In fact, it is arguably considerably more complex. Leaving aside the trigger/hammer/sear, which is reasonably consistent across revolvers and semi-autos, a semi-auto (non 1911) consists of basically a chunk of metal (the slide), the barrel and a spring. You can’t get simpler than that. If you want, you can add in a box and spring for the magazine. A revolver is more complex. As you start pulling the trigger, the cylinder has to unlock so that it can rotate. The cylinder has to rotate the next cartridge in line with the barrel, accurately, to a precision of a thousandth of an inch or so. The hammer is rising at the same time. Before the hammer falls, the cylinder has to be locked in place, then the hammer falls. There is a lot of precision placement and timing going on during that trigger pull. From the outside a revolver may look simple, but internally it is relatively complex.
  • Jamming. Semi-autos seem to find an endless variety of ways in which to jam. In reality, they are all variations on a couple of themes: extracting and ejecting the empty case and feeding the next round from the top of the magazine. Short of the dreaded double-feed, most jams can be fixed by the slap-rack-bang technique. Revolver jams are usually due to a single cause: there is a lot of leverage from cylinder to trigger. Just try holding the cylinder between two fingers and pulling the trigger — you can’t. So the revolver depends upon a very freely moving cylinder (when unlocked). Small amounts of dirt from just about any source in the wrong place will make the cylinder rotation stiff, and the trigger pull impossible.
  • Ammo problems. There are two (rare but important) ammo problems to consider. The first is a squib load – too light a charge of powder. The result is the same for revolver or semi-auto: a bullet lodged in the barrel, and the distinct possibility of losing at least a finger or two if you pull the trigger again. The second is a hang-fire. On the range these are easily and safely dealt with: just keep the gun pointed down range for 30 seconds, and if it doesn’t go bang, it is safe to remove the dud round and continue. In a self defense situation you can’t do this. With a semi-auto, you just rack the slide – taking care to keep fingers and eyes away from the open action in case it does go off. With a revolver, you really can’t pull the trigger again, because the fizzling round will rotate to a position where the bullet has nowhere to go. If it does fire it will probably take the side out of the cylinder, and maybe half your hand with it. All you can do is a full eject/reload. Rare as hang fires are with modern commercial ammo, they do happen, and this, above all, is probably why I would not use a revolver for personal defense.



The One Tree

Everyone in the world must have seen the climate hockey-stick graph. A history of global temperature over a thousand years derived from tree ring data used as proxies for temperature.

It has taken time, but the hockey stick is no more. The credit for the first nail in the coffin goes to Steve McIntyre, who demonstrated that the statistical methods (mis-)used to generate the graph created by Michael Mann and made famous by Al Gore would generate a hockey stick shaped graph if fed with random data. There was also the question of a downturn in temperature in recent years being hidden by replacing the tree ring data with data from thermometers (the famous “hide the decline” trick).

The next version of the hockey stick graph was generated by good statistical methods, but some detective work by Steve McIntyre (again) showed that to create this graph most sample trees had to be excluded from the data, and that the uptick at the end was due to a very small number of trees from a specific area of Russia (Yamal), of a type that are known not to be good to use for temperature proxies, and to one tree in particular. Remove that tree and the hockey stick disappeared.

However, the climate gang at the University of East Anglia hung onto this tree for dear life.

Now, a new paper has been published in which they have finally dropped The One Tree from the data, and show a graph generated from a much better sample of trees.

The graph to the left illustrates the original hockey stick graph, including The One Tree (red) overlaid with the new version.

Surprisingly, the hockey stick is no more.

In addition, if we compare the new graph to the one that Steve McIntyre generated in 2009, and a second analysis from 2011 using a broad sample of tree ring data, we see remarkable agreement.

That one of the scientists involved in generating the first (and second) hockey stick graphs finally comes around to agreeing that the Earth is not flat is a huge step forward in bringing some sense and credibility to climate science.

Kudos to Keith Briffa for having the integrity (and the courage) to do some real science and publish the results, even if it does contradict previous work (as well as the frantic cries of the “climate change” doomsayers).

Of course, none of this would have happened without the work of Steve McIntyre, and his dogged persistence in getting people to see the truth.

See his article on this topic here: http://climateaudit.org/2013/06/28/cru-abandons-yamal-superstick/


Benchmarking LDAP


Benchmarking an LDAP server can be more difficult than it may seem at first sight. Benchmarking several different LDAP server products for comparison purposes can be even more complex.

The basic problem is that unless care is taken, a benchmark test can end up measuring something other than the LDAP server performance characteristics; typically a bottleneck in the supporting infrastructure (server cache, hardware, OS, file-system, TCP/IP stack, network infrastructure, or a combination of these), or the performance of the LDAP client(s) creating the load.

Even when care is taken to avoid or at least minimize these problems there is often a temptation to load the server to the maximum to see what its extreme performance is like. This is usually done by sending nose-to-tail requests over multiple connections.

Unfortunately, this often yields some very unhelpful results.

In a real production environment, care will be taken not to run servers at their limits. In fact, careful system design will try to ensure that any predictable traffic spikes will be somewhat less than the maximum capacity of the system.

In this article we examine the effect that the number of connections to an LDAP server in a benchmark can have for different types of traffic.

The systems used in the following set of tests are 2 CPU, 4 core 2.53GHz machines with 24GB of memory running CentOS 6.2. The LDAP server is configured with a 16GB cache and loaded with one million entries. All the entries and indexes fit into memory. Beyond configuring the cache no tuning was performed, as would typically be the case for initial benchmarking runs. Similar characteristics can be expected with virtually any modern LDAP server.


A typical benchmark will consist of using multiple clients, each running some number of threads, and sending requests as fast as possible over each connection to the LDAP server. The results obtained this way can be deceiving. A typical curve of number of connections vs. request rate (throughput) looks like this:

What stands out is that with nose-to-tail requests on each connection, throughput maximum is reached with ~30 connections. In fact, as the number of connections increases, throughput actually drops slightly. Looking at the request response times is instructive.

Once maximum throughput (around 30 connections) is reached, traffic is being queued somewhere (most likely in a combination of the work queue within the LDAP server, awaiting worker threads, and possibly within the TCP/IP stack(s) of the client and/or server machines).

Without taking care over what was being measured, a simple interpretation of a benchmark run with 600 connections would conclude that this server is capable of around 74,000 searches per second with a response time of around 8.5 ms.

In reality, if the number of connections is kept below the saturation point, it is capable of 75,500 searches per second with a response time of 0.5 ms. Not a big difference in the number of requests handled, but a very big difference in response time (roughly 17x).
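These figures are consistent with Little’s Law, which says that the average number of requests in flight equals throughput multiplied by response time. A quick sanity check (the function name is mine, the numbers are those quoted above):

```python
def in_flight(throughput_per_s: float, response_time_s: float) -> float:
    """Little's Law: average number of requests outstanding in the system."""
    return throughput_per_s * response_time_s

# 600 nose-to-tail connections: ~74,000 searches/s at ~8.5 ms each
print(in_flight(74_000, 0.0085))   # ~629: almost all 600 connections are queued

# ~30 connections: ~75,500 searches/s at ~0.5 ms each
print(in_flight(75_500, 0.0005))   # ~38: little or no queuing
```

If the in-flight number computed this way is far larger than the server’s worker-thread pool, the extra response time is queuing, not useful work.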

The decrease in number of requests handled and increase in response times as connections are added beyond the maximum capacity point is almost entirely due to the overhead of handling the additional connections, which contribute nothing to throughput, but do contribute to overhead and request queuing time.


If we look at timings of a typical authentication sequence consisting of searching for an entry based upon an attribute value (uid) then performing a bind against the DN of the entry located by the search, we see a similar curve (response time is for the entire search/bind sequence).

Again, the “sweet spot” for this particular HW/OS/Server combination is ~30 connections carrying nose-to-tail traffic.

There is a gradual degradation in throughput as the number of connections is increased. This would lead us to suppose that there may well be a fairly dramatic increase in response times, as for the search operation.

As indeed we do see in this graph.

For this sort of benchmark to be meaningful, there need to be several runs to determine the response characteristics as above. Even then, it is still not a really useful test since in production no system would be designed to be carrying maximum supportable traffic on each LDAP server instance.

In reality, there would be multiple instances, probably behind a load balancer to ensure that under normal conditions each received an amount of traffic well within its capabilities.

But what if we can’t have that much control over the number of connections? In that case we may want to look at how the throughput and response time varies if we limit the authentication rate.


It is perfectly feasible to limit traffic rates with decent load balancers and/or proxy servers, so this is not an unrealistic test. Picking some reasonable value, in this case 5,000 authentications per second, we vary the number of connections.
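If the load tool or load balancer has no built-in rate cap, the client side can impose one itself. A minimal token-bucket sketch (class name and the 5,000/s figure are just for illustration):

```python
import time

class TokenBucket:
    """Cap the request rate on the client side of a benchmark."""

    def __init__(self, rate_per_s: float, burst: float = 1.0):
        self.rate = rate_per_s        # tokens replenished per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def acquire(self) -> None:
        """Block until one request 'token' is available."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return
            time.sleep((1.0 - self.tokens) / self.rate)

# e.g. cap the whole client at 5,000 authentications per second:
# bucket = TokenBucket(5_000)
# bucket.acquire()   # call before each search/bind pair
```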

There is no perceptible degradation in throughput, as we would expect, since we know from the previous tests that the server is capable of much higher throughput than this.

Response times remain acceptable, although this curve does clearly illustrate that managing many connections does have a measurable (but probably insignificant) impact.

MOD requests, particularly on a system with relatively slow file-store as on this system (single internal disk), are typically limited more by disk IO bandwidth than anything else. So we would expect to see different response curves.

In fact, they turn out to be quite similar, with maximum throughput being reached with a relatively low number of connections:


MOD operations are inherently slower, so the lower maximum request rate is not a surprise.

Response times are also heavily influenced by the number of concurrent connections to the server.

Other Factors

When pushing servers to their limits, where they (hopefully) will not be operating in a production environment, it is worth noting that there are other factors which can make a noticeable difference to performance.

For example, in the search test above, three attributes were returned (sn, cn, mail).

What happens if we only return one attribute (mail)?

Overall, the effect is marginal, but quite measurable.

Normal logging operations become noticeable at the limits. For example, here is the same authentication test as previously, performed with the access log turned off:


Note that this is for authentications, search and bind operations only, no write activity. The effect would almost certainly be more pronounced if the same (slow) disk was used for both database and logs.

Other factors related to logging which can have a significant impact on performance are the type of logging performed (write to file, vs write to a RDBMS vs write to syslog), the level of logging and the number of logs being maintained.


Benchmarks – How To

The most useful benchmarks are based upon production traffic patterns, with the same mix/rate of all types of requests that will be used in practice.

It is not always possible to determine this, but best estimates are much better than measuring individual request types, or some trivial mixture.

If the test is to determine the suitability of some product to replace an existing system, using the same request/rate mix gives a base to compare the existing system to a proposed replacement.

Once the system is characterized for the expected traffic, rates and number of connections can be increased, but always try to change these independently, determining the best number of connections to achieve the maximum throughput.

Next, determine the expected maximum throughput, which hopefully will be significantly less than the server limit. Some experimentation with numbers of connections will soon determine if there is a maximum that you do not want to exceed, and careful tuning of connection pools can ensure that this is not exceeded in practice.

On load generation

In order to be certain that what is measured is the LDAP server characteristics, and not those of the LDAP client(s), some care needs to be taken in understanding the client. For example, using SLAMD it is tempting to use the “Mixed” client to measure a mix of MOD and SEARCH traffic. This will often produce somewhat disappointing results due not so much to the LDAP server as to limitations in the SLAMD client. Much better results are typically achieved by running two SLAMD jobs in parallel, one performing SEARCH operations and one performing MOD operations.

When testing a large, load-balanced system, several machines should be used to host clients, and care taken to ensure that CPU and/or network bandwidth is not exceeded on the LDAP server, the LDAP clients, and all intermediate network segments and network devices.

To achieve maximum throughput, LDAP client threads should be restricted to a small multiple of the number of CPUs on the machine on which they run.
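As a starting point, that rule of thumb can be written down directly (the multiplier of 4 is my own default, not from any vendor documentation; tune it empirically):

```python
import os

def suggested_client_threads(multiple: int = 4) -> int:
    """Threads per load-generating client: a small multiple of the CPU count."""
    cpus = os.cpu_count() or 1   # os.cpu_count() can return None
    return multiple * cpus
```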


Understanding the BBC 2006 Seminar Issue

Those encountering the story about the infamous BBC “policy seminar” in 2006 and its subsequent revelations for the first time (which you will not have seen on the BBC or almost any other mainstream media at this point) might not appreciate all its implications.

The story is not simple, and drags out over many years. This is an attempt to give a brief history of the issue and explain why it is important.

The BBC Charter

The BBC is a somewhat peculiar institution in that it is funded by what is essentially a tax, but not controlled by the government. It operates under a charter which makes it independent of the government, and free to operate in whatever way it sees fit, subject to some fundamental constraints. One of the most important of these is that of impartiality. The BBC is required to be impartial. This has been one of its greatest strengths and the reason why it came to be regarded as the benchmark against which all other news organizations were judged. Since it has no shareholders, and gets its money independently of what it reports about the government, it is in a unique position in the entire world.

Adherence to the charter, in spirit as well as its letter is what made the BBC respected/great.

Impartiality is spelled out in the charter. If there are multiple sides to any story, the BBC is required to present all sides of the story.

The Charter Broken

It became apparent some years ago that the BBC was being less than impartial on the subject of Global Warming (I will stick with that name, even though it has gone through several marketing make-overs and name changes). Only one side of the story was appearing in BBC output, that of the impending end of the world if “something” was not done. Now, there are many very respectable scientists who disagree with this, and have serious questions about the validity of the models and their results upon which these predictions are based.

As time progressed, empirical evidence, actual physical measurements, began to disagree with the model predictions, and research into the data and methods being used started to show some unsettling discrepancies.

Rather than explore these, the BBC essentially closed ranks, ignored these and doubled down on the side of predictions of doom (if you don’t all pay up one way or another).

Questions were raised about the requirement of the charter to explore all sides of any story. When it became obvious that these questions would not go away, the BBC Trust (those responsible for ensuring that the charter is observed) produced a report. There is one famous paragraph which is the center of today’s problems for the BBC:

The BBC has held a high-level seminar with some of the best scientific experts, and has come to the view that the weight of evidence no longer justifies equal space being given to the opponents of the consensus. But these dissenters (or even sceptics) will still be heard, as they should, because it is not the BBC’s role to close down this debate. They cannot be simply dismissed as ‘flat-earthers’ or ‘deniers’, who ‘should not be given a platform’ by the BBC. Impartiality always requires a breadth of view: for as long as minority opinions are coherently and honestly expressed, the BBC must give them appropriate space. ‘Bias by elimination’ is even more offensive today than it was in 1926. The BBC has many public purposes of both ambition and merit – but joining campaigns to save the planet is not one of them. The BBC’s best contribution is to increase public awareness of the issues and possible solutions through impartial and accurate programming. Acceptance of a basic scientific consensus only sharpens the need for hawk-eyed scrutiny of the arguments surrounding both causation and solution. It remains important that programme-makers relish the full range of debate that such a central and absorbing subject offers, scientifically, politically and ethically, and avoid being misrepresented as standard-bearers. The wagon wheel remains a model shape. But the trundle of the bandwagon is not a model sound.

On the basis of this, the BBC gave essentially no exposure to those with a viewpoint contrary to “the consensus”, and when it did, it tended to choose the least articulate and least well prepared of them, contrasted with very articulate and well prepared proponents of the consensus.

Before looking at the central issue here, it is interesting to observe that science does not operate by consensus. Even the report's own example, “flat-earthers”, was once the scientific consensus. Consensus has no part in real scientific debate. Science is based upon evidence. The output of models, while useful in formulating a hypothesis, is not in itself evidence.

But back to the core subject. The paragraph begins with this assertion:

The BBC has held a high-level seminar with some of the best scientific experts, and has come to the view that the weight of evidence no longer justifies equal space being given to the opponents of the consensus.

There were some who questioned what self-respecting group of scientists would agree with that statement. When pushed to be more specific about this meeting, the BBC said that it was a seminar held in 2006.

Being interested in who the scientists were that would formulate such a statement, Tony Newbury asked for more information; in particular, who attended this meeting. The BBC refused to divulge that information. Tony then issued a request under the Freedom Of Information Act (FOIA) asking for the same information.

After a long series of refusals to deliver the information as the law requires, the case was taken to court. The court case was interesting. On one side was Tony, an old-age pensioner with very limited means; on the other, the BBC had retained no fewer than six barristers to represent it in court. The legal bill for these has been estimated at £40,000 per day.

After some rather strange statements by the judge, expressing a distinct bias towards the BBC's case, the final verdict was that the information need not be revealed because, for these purposes, the BBC (totally funded by tax) was a private organization.

The verdict is likely to be appealed, simply on the basis that there is not, and should not be, any possibility that the BBC can be considered a private organization.

But at that point, it appeared that there was little possibility of the names of the attendees at this rather important meeting ever being revealed.

The WayBack Machine

At this point, we will take a small diversion to mention a project that few people are really aware of. For many years, a non-profit organization based in San Francisco, California has been taking snapshots of many servers on the Internet. It is essentially a history project, documenting the evolution of the web. It is possible to go back many years and request a page on a given web server as it appeared on a specific date. Not all pages and not all servers are covered, but enough to make this a very useful research tool.
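For anyone wanting to try this kind of research themselves, the WayBack Machine exposes a simple public JSON API for finding the archived snapshot of a page closest to a given date. The sketch below is only illustrative: the example page URL is hypothetical, and this is not necessarily how Maurizio did his digging.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://archive.org/wayback/available"

def extract_snapshot(data):
    """Pull the closest archived URL out of the API's JSON reply, if any."""
    snap = data.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

def closest_snapshot(page_url, timestamp):
    """Ask the WayBack Machine for the snapshot of page_url nearest
    to timestamp (YYYYMMDD format). Returns the archive URL or None."""
    query = urlencode({"url": page_url, "timestamp": timestamp})
    with urlopen(f"{API}?{query}") as resp:
        return extract_snapshot(json.load(resp))

# Hypothetical usage:
# closest_snapshot("example.org/seminar-report", "20060101")
```

If a snapshot exists, the returned URL points into the `web.archive.org` archive, frozen as the page appeared near the requested date.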

Enter Maurizio Morabito

Maurizio was looking around to see what he could find on this infamous seminar and found a tantalizing reference to it, which ended up pointing nowhere. He had the idea of using the WayBack Machine to locate the document.

A version of the complete document was found there, containing the names of all the participants. It appears that these had been published on the Internet, then, when questions began to be asked, the document had been heavily edited to remove the participants' names in one case, and deleted in others (for example, the broken link that Maurizio found).

The website on which this was located (International Broadcasting Trust) was interesting in itself, as were some associated documents found there.

It appears that the BBC deliberately obstructed, and spent something of the order of £100,000 of taxpayer money to suppress the release of information previously freely available on the Internet.

One has to wonder why.

It is also clear that this meeting was not organized for the purposes stated. It is much more likely that some justification was needed for the failure to observe the charter, and this meeting was selected to be that justification, even if it served some other purpose entirely.

Why This is Important

A quick look reveals that the attendees at this seminar were not “the best scientific experts” at all, but mostly representatives from groups with a vested interest in perpetuating the global warming story.

In addition, there were high-level representatives from almost every branch of BBC programming.

Since this was obviously not what the BBC had stated (a meeting with the best climate scientists), what was its purpose? It is also interesting to note that the attendees and purpose of the meeting were misstated, under oath, by several BBC executives. They appear to have committed deliberate perjury. One can only wonder if there will be any repercussions from this. Ordinarily, it might have been “overlooked”, but with the current focus on the BBC over its handling of sexual predator issues, there may be a less forgiving attitude.

So what was the meeting about? There was a clue in the climategate emails:

date: Wed Dec  8 08:25:30 2004
from: Phil Jones <p.jones@uea.xx.xx>
subject: RE: something on new online.
to: “Alex Kirby” <alex.kirby@bbc.xxx.xx>

At 17:27 07/12/2004, you wrote:

Yes, glad you stopped this — I was sent it too, and decided to
spike it without more ado as pure stream-of-consciousness rubbish. I can well understand your unhappiness at our running the other piece. But we are constantly being savaged by the loonies for not giving them any coverage at all, especially as you say with the COP in the offing, and being the objective impartial (ho ho) BBC that we are, there is an expectation in some quarters that we will every now and then let them
say something. I hope though that the weight of our coverage makes it clear that we think they are talking through their hats.
—–Original Message—–

Prof. Phil Jones
Climatic Research Unit

The aim is stated fairly clearly in the document that Maurizio unearthed:

The aim
The aim of the seminars is to change minds and hearts. We want to talk about the
developing world in a way that is interesting, engaging and provocative, so that the BBC
participants and independent producers come away convinced that this is an area which
their programmes should no longer ignore. We are not pitching ideas and have no
guarantees that specific programmes will be commissioned on these issues. Our goal,
therefore, is to bring to life stories and issues from the developing world. We shall not be
talking in detail about tv coverage so we do not need participants to have a detailed
knowledge of British television.

It appears the aim, which seems to have been achieved, was to insinuate the global warming meme into every thread of every class of programming produced by the BBC, and to ensure minimal to no airtime for anyone with a different viewpoint. Anyone listening to or watching any amount of BBC programming will appreciate how ingrained references to global warming have become: it is mentioned at every appropriate opportunity (and at some totally inappropriate ones too).

This is propaganda that Goebbels would recognize and appreciate: force ideas into everyday life, get them unconsciously assimilated. The BBC has become the propaganda wing of the (lunatic) Green fringe.

Not only is this a most egregious trashing of the BBC charter, but it has undoubtedly cost the British (and probably other) taxpayers untold millions and diverted government policy into what may well be the biggest waste of money ever. There are also those who have suffered and died through being unable to afford heating, due to the effects of this atrocity.


2m/70cm Installation in 2012 Grand Cherokee

If you are interested in ham radio and you buy a new car, the question of whether you want to install a radio in the car comes up, and if you do, how and where it will fit.

For some people, these are easy questions to answer: Yes, and wherever is most convenient.

But if you are like me, you may have an aversion to making holes in a brand new car, and also worry at least a little about the look of the final installation.

I have a Yaesu FT8800 radio which I had installed in my previous car (a 2005 Jeep Grand Cherokee). In that case, the radio was mounted under the dash, and power came from a cable running through the engine compartment firewall directly to the battery. The antenna was mounted on a K400 mount secured to the edge of the hood, a few inches from the rear, and the antenna cable was run around the driver's door weather seal directly into the engine compartment.

When I started looking at the new Jeep (2012 Grand Cherokee), there were a few problems with simply using the same scheme.

The first problem was that the dash construction has changed significantly. On the 2005 version, the dash under the steering wheel was fairly thick plastic, hinged at the bottom with clips at the top. A firm tug would open it up, and it would fold down exposing all of the wiring, and access to the firewall. On the 2012, the plastic is much thinner, and is held in place with screws and (many) clips. It is a non-trivial task to open that up.

Next, I looked in the engine compartment to see if there were any conveniently unused holes through the firewall, or if it were possible to piggy-back on existing cable holes.

The immediate answer was no. No spare holes, and suspiciously few things actually passing through the firewall, which now seemed to consist of two well-spaced metal barriers.

But the biggest surprise was … no battery.

A bit of Googling informed me that on these cars the battery is now inside the passenger compartment, under the passenger seat.

Ok, so how do I get to it? Well, the official Chrysler method seems to begin with “remove the passenger seat”. Hmm… really??  Yes. Really.

I took a look: the passenger seat is also home to the audio system amplifier, and since the seat is electrically operated and heated, there are quite a few connectors that would need to be undone before you could even start to remove it.

However, it appears that the instructions had been written with non-electric seats in mind. The electric seats have much more travel, and most importantly, can be raised up, giving more room to get access.

Moving the seat right forwards and up gave enough access to allow the one edge of the battery compartment cover to be lifted. Moving the seat right back gave access to the other end of the cover, and wriggling fingers under it and lifting popped off that end of the cover, revealing the battery.

Now, how to connect to it? Unlike previous cars, there are no convenient nuts and bolts securing the cables. In this case, there is what appears to be a plastic wedge arrangement which is tightened with a nut and bolt. Of course, the plastic does a fine job of insulating the securing nut and bolt from the terminal. Closer inspection revealed a secondary connection on the positive terminal, so loosening that off allowed a spade terminal to be slid under and tightened back up for a good connection to the positive side of the battery. For the negative connection, there was no hope of connecting directly to the terminal, so a connection was made to the car body via the nuts securing the battery clamp. That worked fine.

The FT8800 has a detachable face, and when I bought it, it came with a remote mount kit to mount the face separately from the radio.

I looked around for somewhere to mount the face, and eventually settled on the idea of using the compartment with a door, in the middle of the dash console below the radio and heater controls.

For the radio body, I explored several options, but decided that under the driver’s seat was probably going to be the best location.

There is a similar panel to the battery cover under the driver’s seat. Lifting this revealed a number of connectors to pass wiring looms to the outside of the car. This provides a relatively secure location to mount the radio mounting bracket without having to make holes in the bodywork.

The power cable was fairly easily passed across the center console by removing the plastic panels on each side, and running the cable in front of the gear selector. The cable tucks up under the plastic panels on each side and so is not visible.

The small compartment chosen for the radio head has no easy location to fix the plastic holder for the head. It does have a rubber tray in the base, which fits fairly tightly into a shaped depression at the bottom. I used a piece of metal bent to fit, with each end sticking up: one end to attach the radio bracket, and the other to attach to the rear of the compartment. Exterior-quality double-sided tape was used to secure the bracket to the rubber mat, to the radio head bracket and to the rear of the compartment. A small hole, just big enough to pass the connector on the cable connecting the head to the radio body, was drilled in the rear of the compartment (invisible unless you know it is there and go looking for it with a light).

The same panels removed to run the power cable gave access to this connecting wire and allowed it to run alongside the power cable.

There is room to place the microphone inside and close the door when not in use.

The next obstacle to face was mounting the antenna. Purists would go straight to drilling a hole in the center of the roof and installing an NMO connector. I am not enthusiastic about making holes in the roof of my new car, and besides, it has a moon-roof and sun-roof, so the majority of the roof is glass.

I initially looked to my K400 mount. But this requires a flat edge around 4″ long. When I looked, this car is amazingly … curved. Hardly a straight edge anywhere. The only flat edge on the hood would be pretty close to the front. Not only would that look odd, but getting the cable into the engine compartment would be a challenge.

I eventually settled on the idea of using a glass-mount antenna, but various obstacles prevented me from going in that direction. I eventually decided that a section of flat metal about 2″ long on the rear door might work. Especially since there is a convenient hole with rubber plug at the top left of the body underneath the door.

I obtained a K412S mount, which is similar to the K400 but only requires about 1.5″ of space to mount. This fit perfectly.

The hard part was getting the cable from the hole under the door, to the space behind the plastic trim at the roof level. I could get a piece of plastic “string” through, but there was just not space to pull the SMA connector through. After about an hour of trying, I gave in and removed more plastic trim around the top and left of the rear door. This enabled me to thread the cable through fairly easily. When it got to the rear door, I simply pulled the door seal away, and tucked the cable behind the exposed interior trim, down the side of the car, under the door sill and out to the radio.

I initially had some doubts about whether the radio would be really audible in that position, or if I would need to look at obtaining and mounting an external speaker. As it turns out, the radio is perfectly audible.

The antenna position is not ideal, but in practice seems to work very well. I don’t think that I would want to mount my larger (5/8 wavelength) antenna on that small mount, but the 1/4 wave seems to work just as well for most purposes.
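For reference, quarter-wave whip lengths are easy to estimate with the standard amateur rule of thumb (length in feet is roughly 234 divided by frequency in MHz, the factor already folding in a typical end-effect shortening). The frequencies below are just illustrative mid-band values, not the specific frequencies I operate on:

```python
def quarter_wave_inches(freq_mhz):
    """Approximate quarter-wave whip length in inches, using the
    common amateur rule of thumb 234 / f(MHz) feet, i.e. 2808 / f
    inches. The 234 figure (rather than the free-space 246)
    folds in a typical ~5% end-effect shortening."""
    return 2808.0 / freq_mhz

# Illustrative mid-band frequencies for the 2m and 70cm bands:
for f_mhz in (146.0, 446.0):
    print(f"{f_mhz:.0f} MHz: about {quarter_wave_inches(f_mhz):.1f} inches")
```

That works out to roughly 19 inches on 2m and a little over 6 inches on 70cm, which is why a 1/4 wave whip sits comfortably on a small lip mount like the K412S.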


The only slight disadvantage to having the antenna in this position is that the glass door panel can't be opened without unscrewing the antenna. In practice, I virtually never used the glass door panel on the previous car, so I don't expect this to be a significant problem.


Gun myths and disinformation

A friend posted a link to an article pushing gun control in the wake of the Aurora, Colorado killings, and I promised a response to that. The comment section of Facebook is not really suitable for something of this length, so I am choosing to compose my response here, and will post a link to this article in the Facebook thread.

I will begin by saying that I think it highly inappropriate to use such an event to push a personal agenda, especially at such an early stage where full facts are not known, victim’s families are suffering, and emotions are running high — unless of course, an emotional response is the aim. If that is the aim, it says much about the character and political morals of the writer.

Facts surrounding this are mostly based upon media reports and leaks from ill-informed “police sources”. For example, initial reports claimed that an AK-47 had been used. Hardly surprising, since the illiterate US media tend to call any semi-automatic rifle an AK-47, totally ignoring the fact that an AK-47 is a true assault weapon, capable of fully automatic fire, and not readily available to the general public in the USA.

There were also claims that the perpetrator was wearing full body armor. It now appears that he was actually wearing what marketing people call a “tactical ballistic vest” and “ballistic leg protectors”. What that translates to in common English is a nylon vest (so-called ballistic nylon, a trademark of Cordura, the sort used to construct back-packs, purses, flight bags etc.) with pockets to hold magazines, a flashlight, a radio, a water bottle and so on. The leggings are typically used for thermal and abrasion protection. Throat protectors fall into two categories: one, found on real armor, extends the protection around the neck and is most often used in scenarios such as bomb disposal, and rarely outside those specific applications because it is uncomfortable and restricts movement too much; a much lighter-weight version is available to protect against knife attack and/or low-velocity shrapnel. Gloves are to protect against things like barbed wire and cold weather, and to allow (to put it bluntly) beating people up without damaging your hands and leaving incriminating evidence.

So the AK-47 turns out not to be an AK-47, and the body armor turns out not to be anything that would stop anything beyond (perhaps) an air rifle pellet.

It now appears that the rifle actually played a minor role, that most damage was done by a shotgun and a pistol, so concentrating on the rifle appears strange.

On to examining what was actually written in that article.

Lets begin with:

I do not understand people who support public ownership of assault style weapons like the AR-15 used in the Colorado massacre.

Being pedantic, it sounds as though he doesn’t support government organizations owning AR-15s (public ownership) but is quite ok with private ownership.

He then says “assault style weapons”, so his issue appears to be with the look of the gun more than anything else; he has (apparently) no issue with assault rifles, but only with rifles styled to look like them. Fuzzy thinking indeed.

The term “assault weapon” probably deserves some discussion. The term “assault rifle” was coined in Germany during WWII by none other than Adolf Hitler, when he was shown the Maschinenpistole 43, which became the Sturmgewehr 44 (literally, “storm rifle 44”). Assault rifles have a defined set of characteristics:

  • It must be an individual weapon with provision to fire from the shoulder (i.e. a buttstock);
  • It must be capable of selective fire;
  • It must have an intermediate-power cartridge: more power than a pistol but less than a standard rifle or battle rifle;
  • Its ammunition must be supplied from a detachable magazine rather than a feed-belt;
  • And it should have an effective firing range of at least 300 meters (roughly 1,000 feet).

This is an internationally recognized industry term, which is why, when the US “assault weapon” ban was introduced, that term was used rather than “assault rifle”, since assault rifles are actually already banned. The ban concentrated on cosmetic issues (what the gun looked like) rather than on any functional characteristics. It was an end-run around the constitution, since it didn't actually ban any class of firearms, only a set of cosmetic features.

The key here is the second bullet: it must be select-fire, which means that it must have a switch to select either semi-automatic fire (one pull of the trigger, one shot) or automatic fire (one pull of the trigger, multiple shots). Also note that it does not fire high-power rounds, so the phrase beloved of reporters and their editors (“high-power assault rifle”) is nonsensical.

The term “assault weapon” (as opposed to “assault rifle”) is traceable back to one Josh Sugarmann, director of the National Coalition to Ban Handguns, who wrote a memo which said:

 “…the semiautomatic weapons’ menacing looks, coupled with the public’s confusion ..[that] anything that looks like a machine gun is assumed to be a machine gun – can only increase the chance of public support for restrictions on these weapons

The term was deliberately coined to confuse.

Back to the article under discussion – he throws in the word “massacre” to begin the emotional wind-up.

Not a good start.

He continues:

 Despite these massacres recurring and despite the 100,000 Americans that die every year due to domestic gun violence – these people see no value to even considering some kind of control as to what kinds of weapons are put in civilian hands.

In his “recurring massacres”, he again uses an emotional term, and conveniently forgets other “massacres” that have occurred which did not make use of assault weapons, and in some cases, not even firearms.

The reference to 100,000 dying as a result of domestic gun violence is a complete fabrication. According to the FBI statistics for 2010 (the latest complete statistics available), there were 12,996 TOTAL murders in the USA. Of these, 8,775 were by firearm: 6,009 by handgun and 358 by rifle. To put the number of murders into perspective, there were 33,963 road traffic deaths in 2010.
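The gap between the claimed figure and the FBI numbers is easy to quantify. A quick sketch, using only the figures quoted above:

```python
# FBI Uniform Crime Report figures for 2010, as quoted above.
total_murders = 12_996
firearm_murders = 8_775
handgun_murders = 6_009
rifle_murders = 358
claimed_gun_deaths = 100_000  # the article's figure

print(f"firearm share of all murders: {firearm_murders / total_murders:.1%}")
print(f"rifle share of all murders:   {rifle_murders / total_murders:.1%}")
print(f"claim vs. actual firearm murders: {claimed_gun_deaths / firearm_murders:.1f}x")
```

Rifles of all kinds, never mind AR-15s specifically, account for under 3% of murders, and the article's 100,000 overstates actual firearm murders by more than a factor of eleven.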

After a rambling discussion on the US constitution, in which he carefully avoids mentioning that US federal law specifically states that the militia is every able-bodied man, and that, even if that were not the case, the Supreme Court of the US has ruled that firearms possession is an individual right, we move on to this gem:

What purpose does an AR-15 serve to a sportsman that a more standard hunting rifle does not serve?

Where did “sportsman” come from? There is no reference to that in the Second Amendment to the US constitution, and what is his definition of sport? Why does he think that the only sporting use of firearms is to kill animals? (He seems to have killing on the brain…) Has he considered those sports where these types of rifle are actually required?

Based upon his own words, a standard hunting rifle serves the same purpose as an AR-15; if that is so, does he want to ban those too?

In the same paragraph:

Let’s see – does it fire more rounds without reload? Yes. Does it fire farther and more accurately? Yes. Does it accommodate a more lethal payload? Yes. So basically, the purpose of an assault style weapon is to kill more stuff, more fully, faster and from further away. To achieve maximum lethality.

Does it fire more rounds without a reload? Well, yes, he got that part right. One might ask “compared to what?”, but since he specified a “hunting rifle” we will take that as the benchmark. It may be worth pointing out that the AR-15 is a legal hunting rifle for small game in many states, but he obviously has a pre-conceived notion of what a hunting rifle is.

Does it fire further and more accurately?

Well, let's look back at the definition of an assault rifle: “It must have an intermediate-power cartridge: more power than a pistol but less than a standard rifle or battle rifle.” Hmm… so it doesn't shoot further than the benchmark “hunting rifle”.

For comparison, here is a 5.56mm round (used by the AR-15) compared to a 7.62mm round as used in full-power “battle rifles” (as well as many hunting rifles). It is also worth noting that military sniper rifles (that is, rifles specifically chosen for range and accuracy) are almost invariably versions of standard hunting rifles, most often a Remington 700, and are bolt action, not semi-auto.

“More lethal payload”

The 5.56mm (or .223, if you insist on using inch measurements) is significantly smaller, with significantly less powder behind it. One of the original arguments made for using 5.56 in a military rifle was its DECREASED lethality: more likely to wound than kill, the point being that wounded soldiers absorb much more military resources than dead ones do.

A quick application of Google will find numerous stories of US military personnel on the ground in the Middle East complaining that the 5.56 round is underpowered and not lethal enough. Those who have the ability to choose their own weapons often use the M16's predecessor, the M14, which uses 7.62 ammunition.

So this paragraph is complete, unadulterated garbage. Someone talking through his hat, simply reciting the mantra of gun control groups.

The article only indirectly touched upon the other gun-ban favorite: magazine capacity.

An unattributed “expert” claimed that the 100 round magazine (found practically full in Aurora) would enable someone to fire 50 to 60 rounds in a minute. I don't know where this “expert” comes from, but I can tell you that, with marginal practice, I would be able to fire in excess of 60 rounds in a minute using the small 20 round (30 round is standard) magazines. Whether I would be able to hit anything is a different question, but the same applies to the 100 round drum. There is a reason why these drums are not used by the military (and it's not all related to their tendency to jam). There is also a reason why the current version of the military M16 does not have full-auto fire as an option, only a three-round burst: full-auto fire is notoriously inaccurate and ineffective, even when used by trained professionals. It may look good in films, but then so does Harry Potter's wand.

For those who think that pulling the trigger around once per second is as fast as you can go, and that reloading is slow, take a look at the following video, using the slowest firearm around — a revolver:

This person has also fired 8 rounds from his revolver in one second – that’s 480 rounds per minute.

When gun-ban proponents accuse others of not being willing to engage in discussion, the reason is that they are immune to fact, immune to logic and continue to push an argument based upon falsehood and emotion.

There is no rational discussion with these people.

As for what you need to defend your house, let's finish with a slightly humorous take on the subject:



The Police State takes a half-step backwards

A while ago, I posted an article about a court decision in Indiana which basically said that people were not allowed to defend themselves against police invading their homes, even if such actions were illegal.

It seems that the residents of that state were less than happy with this, and in March of this year the Indiana legislature passed a law to explicitly allow citizens to use deadly force against public servants, including police, who illegally enter their homes. The governor signed this legislation and it is now law.

Predictably, the police are very unhappy with this state of affairs, basically saying that they have a right to expect to go home safely at night, even if that involves the citizens they are supposedly there to protect suffering harm, or even death, at the hands of rogue police officers.

A typical reaction:

“If I pull over a car and I walk up to it and the guy shoots me, he’s going to say, ‘Well, he was trying to illegally enter my property,’ ” said Joseph Hubbard, 40, president of Jeffersonville Fraternal Order of Police Lodge 100. “Somebody is going get away with killing a cop because of this law.”

Somewhat reminiscent of the familiar “I thought he was armed, I feared for my life” excuse that is trotted out almost daily in many cities when police gun down unarmed citizens. Strange how, when it is reversed, it becomes something BAD.


An Open Letter to Advertising Executives

As an executive responsible for managing a large budget and safeguarding the image of the various brands that you manage, you may be interested in one of the principal reasons why I, and I suspect a lot of other people, are not seeing the advertisements that you are paying a lot of money to have shown on prime-time national TV networks. Even more to the point, many of us are actually paying good money to avoid watching the programs that you are betting on to attract attention to your advertisements.

The problem isn't, as you may be thinking, the sheer volume and length of advertising slots during programs. That is an issue, but it's one that we have learned to live with over the years. No, the real issue is that the TV networks are making the programs themselves — you know, the filler material between the ads, the stuff that attracts the eyeballs in the first place — completely unwatchable.

They do this with those incredibly annoying and disruptive animated things at the bottom of the screen. They were bad enough when they started at little more than ticker-tape height, but over time they have grown to fill the bottom third of the screen with highly animated and brightly colored distraction, most often seemingly timed to completely disrupt the atmosphere that the program producer has spent his talents building.

Apparently, the networks think that this is just a free slot they can use to pimp their future programming, without having to use the valuable time that they could otherwise sell to you.

Unfortunately, this makes watching any serious programming impossible. This is why I, and presumably many others, will wait, and actually pay money to watch the exact same program via the Internet.

It's not only about being able to watch on my schedule rather than the networks'. It's not about avoiding your advertising (although that too has its attractions); it is very definitely about avoiding the garbage that the networks insist on slapping on the screen during the program I am trying to watch.

I accidentally tried to watch House on Fox last night. It was impossible not to be continually distracted and lose track.

It left a very bad taste in my mouth, not only for Fox, but for the morons who spend good money paying for advertising slots that I will ensure I never watch again. Back to Amazon and Netflix for me.

I suggest that you, who really control what goes on here with your advertising budgets, apply as much pressure as you can to ensure that programming around your paid-for time does not include this garbage. Then, one day, I may actually see some of your advertising material once again.