History of LDAP directories

 

There is often some confusion as to the exact history of the different directory server versions that were sold by Netscape, iPlanet, Sun Microsystems, AOL, Red Hat and now Oracle.

For anyone interested in the lineage of these different directories, this is my recollection of events, some from the inside, and some from the outside:

Netscape Directory versions 3 and 4 were where the directory server as a commercial product really started to take off. Netscape Directory 3 was based directly on the work of Tim Howes at the University of Michigan. It was really more of an LDAP front-end, with provision for different back-end databases (at least in theory). Netscape Directory 4 recognized that to get good performance there needed to be tight coupling between the front-end (LDAP) and the back-end (database), so the facility of pluggable back-ends was dropped.

At this point, AOL bought Netscape. Wanting only the browser, the website and its attached eyeballs, it formed iPlanet with Sun to offload the other products it had no use for.

Sun took over development, integrated components from its existing directory product, and Directory 5.0 was born. This was a bit of a dead end, with terrible performance mainly due to the replication scheme used.

Directory 5.1 was far more than an incremental change and should really have been 6.0: the unworkable replication was ripped out and replaced with a loose-consistency model, and the schema file format changed. I also remember this being the time that the ACI format changed.

At this point, iPlanet dissolved, and the code was shared by both Sun and AOL.

AOL took the code and tried to sell it as their own directory server. It never sold well, and eventually Red Hat bought it from AOL and open sourced it. 389 was derived from this open-sourced code (389 is the development version; the Red Hat directory is the stabilized commercial/supported version).

Sun continued to evolve the product, and with directory 5.2 had a replication model which supported up to four masters (actually, it would work with more, but the performance implications caused Sun to limit official support to a maximum of four).

This was probably the most successful release in the entire product line. Four masters were enough to cover multiple data centers, replication would work over a WAN, and the directory server itself would scale up to tens of millions of entries with suitable hardware behind it.

Directory 6 was an evolution which tried to resolve some of the limitations of DS 5.2. It removed the four-master limit and used a later version of the Sleepycat database. Scalability was improved.

At this point it became obvious that the fundamental design of the product was the limiting factor in getting significant performance improvements, so a next generation directory server project was started. This being Sun, it had to be written in Java. OpenDS was born. A lot of people were skeptical about performance of a Java DS, but early testing showed some surprisingly good results. Using a more modern back-end DB not only helped performance but improved resilience and reliability too. Fortunately, this happened at a time when Sun were experimenting with open source, so OpenDS was an open source project.

Sun then made what in my opinion was a strategic blunder. Trying to cut costs, they decided to combine their two directory engineering centers into one. They chose to continue with Grenoble, and shut down the Austin group, laying off a group of highly talented directory engineers and marketing people (this is not to say that the Grenoble group were not also talented).

Being unemployed, the Austin (ex)employees looked around for what to do, and UnboundID was created. They had been working with Sun customers for many years, and knew exactly what enterprise customers wanted from a directory, and had seen some of those needs continually slip along the roadmap timeline, or get dropped time after time. They took the OpenDS code and added those items to it (as proprietary extensions).

Back at Sun, DS 6 was supposed to be the end of the line for the C based directory, with DS 7 being based upon the OpenDS project.

There were still a few performance tweaks that could be applied to DS 6, so DS 7 was actually still based upon the legacy code – essentially taking ideas tested out in OpenDS, such as compression of database entries, and back-porting them into the legacy DS code.

OpenDS was still intended to be the future.

Enter Oracle.

It soon became clear that whatever marketing spin they put on it, Oracle just wanted the customers, and not the directory technology. They were going to transition existing DSEE customers to Oracle Internet Directory (Oracle's directory product, sitting on top of an Oracle database), and since OpenDS had no customers, it was dead. At the same time, they put OpenSSO into a state of living death.

There were many customers using Sun's OpenSSO product who were not thrilled at the prospect of losing their investment in OpenSSO, or the forced transition to what many considered to be an inferior product. ForgeRock was formed to provide support and a product evolution roadmap to OpenSSO customers that didn't want to transition to Oracle's access manager (formerly Oblix). OpenSSO (OpenAM) really needs an LDAP server, and being an ex-Sun product it had lots of dependencies on DSEE. ForgeRock needed an open source directory to complement OpenAM. UnboundID was certainly a possibility, but with the strong open source ethic at ForgeRock and the proprietary ethic at UnboundID, the fit was not there. OpenLDAP was another possibility, but although this had followed its own evolutionary path and is a competent LDAP server, it is written in C and would require porting and support specific to each platform.

ForgeRock decided to do their own support of OpenDS. They acquired some of the key talent from the (Sun/Oracle) Grenoble directory engineering center, and OpenDJ was born. Initially, the idea had been to simply participate in the OpenDS community and provide commercial support, but for various reasons it soon became clear that it would be necessary to fork the project. There is still active participation in the OpenDS project, and with both being open source projects, some cross-pollination of ideas.

One of the biggest hurdles faced by ForgeRock (and UnboundID) was that Sun had provided the documentation effort for its open source projects (OpenDS and OpenSSO) and had copyrighted the result, now owned by Oracle. This meant that they were faced with the herculean task of completely re-documenting: in ForgeRock's case, for two products, OpenAM (OpenSSO) and OpenDJ (OpenDS).

—-

Things didn't really go as Oracle had planned with DSEE. Existing customers would not transition to OID in most cases, and with viable alternatives (UnboundID, OpenDJ and OpenLDAP) which did not require building an Oracle database infrastructure and employing DBAs, they continued with DSEE (or ODSEE, as Oracle insists on calling it).

Of course, Oracle recognized what Sun had several years before: that the current code base had reached the end of the line, and that if they wanted to keep the existing customers they had to provide a path which was not OID. So back to OpenDS, plus a few tools to ease the transition from DSEE, and the Oracle Unified Directory came into existence.

Global warming cools

Things are not going well for the purveyors of doom and gloom, otherwise known as Cataclysmic Anthropogenic Global Warming (CAGW).

Their climate models continue to show increasing temperature, but reality continues to disagree: global temperatures remain stubbornly flat, or even fall slightly depending upon how you look at the numbers (see graph on left), while the evil CO2 concentration continues to rise. According to the models, more CO2 means higher temperatures, but it's not happening. What to do?

Well, initially refusal to acknowledge reality seemed to work, but only for so long. As the gap between the virtual reality of the models and actual reality continued to grow, cracks began to appear in the CAGW wall of "settled science" and "consensus".

The first indication was a leak of the next report from the IPCC. This appears to be a deliberate leak, to test the reaction to them beginning to back off from their predictions of the imminent fiery demise of the world if humanity doesn’t go back to living in caves and eating grass while paying huge taxes to finance a world-wide carbon market.

The leaked IPCC report suggests that previous predictions may have been overly pessimistic, and that the next 30 years or so may actually see flat, if not declining, temperatures. Apparently, the natural climate variations that CO2 was supposed to be completely overriding are, in fact, the dominant determinant of climate temperature, masking any CO2 effect. But don't worry! Eventually (when the current generation of "climate scientists" is retired or dead and buried) CO2 will prevail!

However, another study, which was even reported by the BBC (who typically publish nothing which contradicts the CAGW orthodoxy), claims that the CO2 sensitivity, that is, the temperature rise for a doubling in CO2 concentration, is much lower than that being used in climate models. It did so by taking the models themselves, together with the CO2 concentrations from ice cores covering the period back to the last ice age. With the sensitivity set to the values currently being used, the ice sheets would have extended way past the 40-degree latitude they actually reached, right down to zero degrees (the equator), and recovery would have been impossible; the world would still be one big ball of ice. Since that didn't happen, the models are obviously wrong. They could be adjusted to reflect reality by reducing the CO2 sensitivity drastically, to something in the range of 1.7 to 2.0 degrees, which agrees with what many of the more skeptical climate scientists have been saying all along, rather than the previous claims of up to 11 degrees C.

The graph to the left shows the temperature record for central England, which is the longest continuous temperature record in the world, plotted with the CO2 concentration. The CO2 effect is rather difficult to pick out from the gradual temperature rise which is recovery from the cold "little ice age" temperatures. Actual temperature readings are not showing anything like the CO2 sensitivity used in the climate models.

To add to the discomfort of the CAGW merchants, both the public and governments, who were stampeded into action by stories of doom and the idea of being able to levy a tax on the air we breathe, were starting to get a little twitchy. The IPCC, once seen as a rock-solid institution of scientific research, has shown itself to be anything but, using scare stories from non-scientific sources and heavily filtering the actual science it does use, selecting only that science which backs its cause. For a really good explanation of how badly the IPCC is broken, see the book by Donna LaFramboise – "The Delinquent Teenager Who Was Mistaken for the World's Top Climate Expert".

With a meeting coming up in Durban to discuss what to do when the Kyoto protocol expires, this could not have come at a worse time. Combined with economic problems and the dismal failure of those carbon markets which have been established, various countries have made it clear that they will not be signing up for more of the same, let alone a stronger version. The real kicker was Japan, who hosted the original Kyoto discussions, announcing that it will not be signing up to a renewal.

Then bad news from the USA, which now refuses to back the UNFCCC, the parent organization of the IPCC, disagreeing with its funding and structure.

The only light in the CAGW universe seems to be the suicidal decision of the Australian government, alone amongst all countries in the world, to introduce a carbon tax. They rammed this through in time for a G20 meeting, at which they fully expected their lead to trigger many other countries to pledge to follow. Instead, they received pats on the back and comments about how brave they were. The Australian Labor Party committed political suicide for "the cause", and "the cause" reciprocated by doing … nothing.

The bottom also appears to be falling out of the renewable energy pipe dream. Prince Philip went so far as to call the UK's windmills "absolutely useless". The Solyndra affair continues to fester in the US, and there are hints from China that it may close some of the solar panel factories which are uneconomic to continue to run.

To top things off in the run-up to Durban, the same person (or persons) that released the original "Climategate" emails released another batch of 5,000, now referred to as Climategate 2.0. These not only reinforce the case for the unacceptably bad and unscientific behavior of the IPCC scientists, but also shed more light on the relationships with, and behavior of, various other institutions. This includes the BBC, who are seen to be anything but impartial in their reporting; pre-existing relationships with the "independent" investigators of the original Climategate emails; and, somewhat surprisingly, evidence that Professor Phil Jones, far from being a simple bumbling, absent-minded professor, is in fact a rather unpleasant and scheming character. Michael Mann just looks even worse (if that's possible). If there is any justice in the world, this batch of emails will result in a few lost jobs, and possibly a few prison sentences. Traction in the MSM is, as to be expected, somewhat limited, but it is there, and they are obviously finding it much harder to ignore these emails than they did the first batch.

As an added bonus, along with the 5,000 emails came another zip file, heavily encrypted, containing an additional 220,000 emails. Presumably, at some point in the future, the password to unlock them may (or will) be released. The current theory is that the first batch of emails centered on the IPCC and its behavior, and the second set on relations with, and the behavior of, certain institutions; the final batch may well expand the net to include political figures. If that is the case, they are potentially dynamite: a ticking time bomb under highly placed politicians and civil servants.

What we are seeing is quite probably the beginning of the end of the CAGW scam. Rats are deserting the sinking ship, but still trying to do so without endangering their huge cashflow (government grants).

The Delinquent Teenager

If you don’t know much about the IPCC and why you need to worry about it even if it is broken, then you must read this book.

If you think that a global conspiracy to rob whole countries of trillions of dollars and subjugate (almost) the entire population of the world is only the province of James Bond or Jason Bourne, you are wrong. There is one in progress right now that WILL affect you, your children and your children's children (unless you are one of the privileged few).

If you believe that thousands of the world’s top scientists all agree that global warming is attributable to man, and that all life on earth is in danger because of it, then you need to read this.

Donna has taken the time to fully document every claim she makes, unlike those who simply urge you to "Move along, nothing to see here" and tell you that "The debate is over". Reading the book itself is easy, and you can get through it in a couple of days (or even one day if you get hooked). However, you will almost certainly find yourself going back to re-read some sections, and if you start to follow the references, there is a huge amount of background material to get through.


If you need an example of the sort of lies and character assassination that those behind the IPCC employ, you need look no further than the single one-star rating in the Amazon reviews. It was written by someone who had not bothered to read even the first page of the book, and who is using what he sees as his position of authority, due to his qualifications, to attempt to stop people reading this book. Compare his claims against what you read yourself and can validate from multiple sources, many of which are given as references in the book itself, and you begin to get a taste of the vicious conspiracy that this book lays bare. Buy and read this book. Your future and that of your descendants may depend upon your being educated on this subject.

At the moment, the book is only available on Kindle, or as a PDF download (see Donna’s website for links to the PDF). A paper version should be available in the very near future.

Why they want growth

Following on from my previous post on growth, the next obvious question is why do economists, governments and particularly financial institutions want the economy to grow?

To understand that, we need to understand how financial institutions and many individuals make money. Particularly those that make vast amounts of money.

It used to be that they would invest in companies that produced large profits, and shareholders got a share of those profits. Basically, all the profit left after running costs (salaries, taxes, capital investments, R&D etc.) were accounted for was split amongst the shareholders in proportion to the number of shares that they held.

The price of shares was based mainly on the dividends (share of the profits) they paid. The more the dividends, the higher the price people were willing to pay for a share. Share price also had a relatively small (in most cases) component based upon future estimates. If a new product was coming out which was thought to be a winner, for example, people would want to buy the shares before the dividends (and hence the share price) increased. Similarly, if business was expected to decline, the share price would drop, even though dividends were still the same, because it was expected that in future they would decline.

Some financial institutions found that if they guessed right, they could make significantly more money by buying (or selling) shares ahead of the rest of the market.

In other words, they moved from viewing shares as an investment in the company, to simply being betting chips. They had no interest in the company or its customers beyond their influence on the price of the stock.

The financial companies became so enthusiastic about the huge profits that could be made this way that they used their resources to buy a change in the law such that companies were no longer required to share their profits with shareholders. They could keep all the money and use it in pretty much any way they wanted, not to the advantage of their customers, and not to the advantage of those "owners" of the company who just wanted a steady income from their share of the company profits. The "logic" was that shareholder profit was to be derived from the increased value of the stock, not from income from the stock.

The law in most western countries requires that companies be run to the benefit of their shareholders, which means that they now had to concentrate on share price, rather than good products, satisfied customers and long-term stability. As the tech bubble showed, share price is almost totally divorced from actual profit. There are still a large number of companies around that make no real profit, but have astronomically high share prices.

Those companies that do make a profit are required to show growth in that profit each and every quarter. In many cases they have no real use for the profit, and it just accumulates until they blow it in huge chunks acquiring another company to push up their share price.

As noted in the previous article, continuous growth is only sustainable as long as the “growth medium” holds out.

In this case, the growth medium is the economy.

As soon as the economy goes flat, the prospects for growth for companies disappear, and even though a company may still be generating a very healthy profit, chances are that it's not paying dividends, so the share price will flatline, or worse, decline. The financial companies are not interested in making a steady profit; they want to ride the exponential curve of continuous growth in share price.

With a flatline economy, some momentum can be maintained by marketing, some by using the vast reserves of cash for mergers and acquisitions, and customers can be screwed for more and more money, but none of this can last.

Governments need to realize that sustained growth is actually bad. But they have two problems: one is that they are mostly owned by the financial institutions, and the other is that they are themselves hooked on the idea that they will have more money tomorrow than today.

Growth is good. Isn’t it?

With all the wringing of hands and prophecies of doom from politicians, industrialists and TV talking heads when they begin to discuss the fact that growth of the economy is running at a feeble 2% or so, one can only assume that growth is good, and lack of growth is bad.

However, if we accept that the economy should grow, and that the growth should be sustained at some annual percentage rate, we have to accept that we are then looking at an exponential growth, as illustrated in the graph on the left.

The interesting thing about exponential growth is that it is unsustainable. What exponential growth actually means is that over some period, whatever we are measuring the growth of doubles in size. Over the next period it doubles again, then again, then again.

The doubling period is 69.3/n years, where n is the annual growth rate expressed as a percentage (the exact figure is 100 × ln 2 ≈ 69.3147). That's hard to remember, so let's call it 70, which is much easier to remember because it is the average life-span of humans. So a 3.5% growth rate, which is on the low side of where many economists would like to see it, means that the economy doubles in size every 70/3.5 = 20 years. For the USA, the 2010 GDP (which is one way of measuring the size of the economy) was 14,526,550 million dollars. A 3.5% annual growth rate means that in 2030 the GDP would be 29,053,100 million dollars, 58,106,200 million in 2050, and so on.
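The rule-of-70 arithmetic above is easy to check in a few lines of Python (the GDP figure is the one quoted above; the function names are my own):

```python
import math

def rule_of_70(growth_pct):
    """Approximate doubling time in years: ~70 / annual growth rate (%)."""
    return 70 / growth_pct

def exact_doubling_time(growth_pct):
    """Exact doubling time for compound growth: ln(2) / ln(1 + r)."""
    return math.log(2) / math.log(1 + growth_pct / 100)

gdp_2010 = 14_526_550   # US GDP in millions of dollars, as quoted above
rate = 3.5              # annual growth rate, percent

print(rule_of_70(rate))                     # 20.0 years per doubling
print(round(exact_doubling_time(rate), 1))  # ~20.1 years, so 70/n is close

# Doubling every 20 years projects the GDP forward:
for year in (2030, 2050):
    doublings = (year - 2010) // 20
    print(year, gdp_2010 * 2 ** doublings)  # 29,053,100 then 58,106,200
```

The exact formula and the rule of 70 agree to within about half a percent at these growth rates, which is why the shortcut is good enough for back-of-the-envelope projections.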

That's a lot of dollars. That's a lot of production. How is this going to be achieved? In the past, economic growth has been sustained by essentially two things: exponential growth in population, and increased productivity of workers.

Until the 1960s the US population was growing exponentially, but it then leveled off, has stayed relatively constant, and would actually be falling without immigration (the 1960s were when the contraceptive pill became widely available).

So it's clear that population growth can't be counted on to produce more workers to produce more widgets to sell to grow the economy.

Since the beginning of the industrial revolution productivity has been increased by the application of power and technology.

The first stage was mechanization. From the spinning wheel to the spinning jenny, from the horse drawn plough to the tractor and combine harvester, from hand-built coaches to the production lines of Detroit.

Following mechanization came automation: applying advances in electronics and computers to replace many workers with a smaller number overseeing automatic production. Faster production of more widgets by fewer people means higher productivity, since productivity is measured essentially in widgets per person per hour.

These advances had two effects: each worker could produce more, and what was produced was cheaper. Production increased, and so did the market, as items became more affordable, not only in America but in foreign markets.

The problem is, we have reached a point where not only are there no more workers to tap, but technological productivity multipliers have reached their limits. There just isn’t a new technology to produce cars faster and cheaper. There isn’t any new technology to make the same agricultural increases as moving from horse drawn plough to tractor, no more forcing crop yields beyond what is currently achievable with fertilizer and genetic modification.

So how is the economy to grow? Short-term solutions such as making use of spare (and cheaper) population in other countries take up some of the slack, especially since many of these countries' populations are still increasing exponentially. Importing workers helps a little, artificially boosting the population, but longer term these measures are destined to fail. Of course, they really reduce productivity, since the less educated, less skilled (cheaper) workers require more people to do the same work; but by counting only the managers overseeing the "outsourcing" in the head count, and not the much more numerous actual workers, productivity appears to be rising.

Population growth is a problem. There is limited space on the planet, there is limited space to grow food, becoming even more limited when population competes with space to grow food to feed that population.

Nature has ways of taking care of exponential growth. If a single bacterium is placed in a culture medium and allowed to divide, the population will grow at an exponential rate. Eventually, the limited nutrients available, limited space and the bacteria's own waste products cause a catastrophic collapse in the population. In the more general case, there are a variety of mechanisms by which nature stabilises a population: predators, disease, famine and war. Humans are not immune to nature. Relying upon growth in "offshore" populations is not going to last very long.

As the above chart shows, productivity among countries is not uniform. As worker availability becomes more and more of a resource constraint, how can those countries with lower productivity improve? One way is to compete with those with higher productivity for their (offshore/immigrant) workers, and for the energy and raw-material resources which also need to keep pace with exponential growth.

One of the buzzwords of politicians when talking about energy is “sustainable”. In the same breath they often talk about sustainable growth. A little bit of thought shows that once a certain stage of development is reached, there is no such thing as sustainable growth. Growth needs to be zero. Over time, the balance of industries will need to change, some will grow, but others have to contract to compensate. Failure to recognize this leads inevitably to the catastrophic crash that wipes out entire populations.

Humans are mostly incapable of dealing with exponential growth. They are much more familiar with linear processes and almost always underestimate the speed with which the final stages leading to a crash occur. In the video series linked to below there are some examples of the rapidity with which things go bad. One of the examples is to take a jar and place a single bacterium in it. The bacterium will split into 2, each of those will split giving 4, then 8, then 16, 32, 64, 128 and so on: typical exponential growth.

After exactly one hour, the jar is full. What is amazing to most people is to realize that after 59 minutes the jar is only half full. At 58 minutes it is a quarter full, at 57 minutes one eighth. The end comes very rapidly.
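The jar arithmetic can be verified directly: with a doubling every minute and a full jar at minute 60, the fraction full at any earlier minute is just a power of two (a toy sketch using the numbers from the example above):

```python
def fraction_full(minute, full_at=60):
    """Doubling every minute means the jar is 1/2^(full_at - minute) full."""
    return 1 / 2 ** (full_at - minute)

for m in (57, 58, 59, 60):
    print(m, fraction_full(m))   # 0.125, 0.25, 0.5, 1.0

# Even at 55 minutes, with only five doublings left, the jar looks nearly empty:
print(fraction_full(55))         # 0.03125, about 3% full
```

This is why the end comes so quickly: an observer at minute 55 sees a jar that is 97% empty and concludes there is plenty of room left.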

The following link will take you to a page with a series of links to 10 minute videos of a lecture given by Prof. Bartlett. The whole lecture is approximately one hour long. It will be well worth your time to watch the entire series. You will never look at those innocent looking few percent growth rates in the same way again.

http://www.albartlett.org/presentations/arithmetic_population_energy_video1.html


Faster than the speed of light

There have been lots of headlines and quotes from various people about the press release from CERN reporting observations of neutrinos travelling faster than the speed of light. A number of "authorities" have claimed that these results must be wrong, because it's absolutely certain that nothing can travel faster than light.

Of course, most of the media coverage more or less completely ignores the content of the press release from CERN. What they actually said was that over three years and many thousands of experiments, they have consistently observed that the time taken for the neutrinos to travel the 730 km between CERN and Gran Sasso has been 0.00000006 seconds (60 nanoseconds) shorter than light takes to cover the same distance. This is equivalent to the distance being about 20 m shorter than it actually is, so the difference is very small, but measurable and consistent.
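The quoted figures are easy to sanity-check: 60 nanoseconds at the speed of light is roughly 18 m, which is where the "equivalent to about 20 m" statement comes from (a quick sketch; the baseline and timing numbers are those quoted above):

```python
c = 299_792_458.0        # speed of light in m/s
baseline_m = 730_000.0   # CERN to Gran Sasso distance, as quoted above
lead_s = 60e-9           # neutrinos arrived about 60 ns early

light_time = baseline_m / c      # time light needs for the baseline, ~2.4 ms
head_start = lead_s * c          # early arrival expressed as a distance, ~18 m
fraction = lead_s / light_time   # ~2.5e-5: a tiny effect, hence the careful checking

print(round(head_start, 1))      # 18.0
```

A discrepancy of a few parts in 100,000 is exactly the sort of thing that could hide in a timing or distance calibration error, which is why CERN asked outsiders to check before claiming anything.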

The scientists at CERN are doing what all good scientists do when their observations clash with accepted understanding. First, they look for errors and alternative explanations to account for their measurements. After exhausting all the explanations that they can think of, and finding that none of them accounts for the discrepancy, before announcing the demise of one of the cornerstones of current physics, they have made their methodology and data available to others, and asked them to verify the methodology and to examine the data for alternative explanations. Only when their results are confirmed, and quite possibly replicated (or not) by others, will they feel somewhat confident in claiming that there are indeed some particles which can exceed the speed of light.

It's good to see that, at least in some quarters, the scientific method is alive and healthy, with scientists freely and voluntarily sharing their methodology and data with others, and asking them to validate or disprove their findings.

Contrast this with so-called "climate scientists" who, along with the organizations that employ them, spend millions of dollars to avoid having to share any of their methodology or data, and who famously wrote to someone asking for a copy of the data: "Why should I share my data with you when I know that all you will do with it is look for problems".

Of course, looking for problems is exactly how real science works, not by having a few mates read a publication, declare it good, then claiming that the science is settled.

Apparently, England is still special.

Someone using the name Younger Dryas (not a real name, by the way; if you don't know what it refers to, look it up on Wikipedia) has noticed something very odd about the temperature in England.

There is a series of temperature measurements that has been kept for central England since 1772. This is the longest continuous temperature record anywhere in the world. It is based upon real thermometer readings, not tree rings or any other proxy measurements (although, to be accurate, a thermometer is actually a proxy for temperature, since we are not directly measuring temperature but its effect upon the measuring device). This temperature series is known as the Central England Temperature (CET) series. It covers a roughly triangular area between Lancashire, London and Bristol.

The UK MET Office produces temperature series for the whole of England (as well as Wales, Scotland and Northern Ireland).

What "Younger Dryas" did was to take the temperature readings for the CET and subtract the average temperature for some period. This leaves the differences in temperature, known as "temperature anomalies". These are what climate scientists prefer to work with, since they show only the changes (from some arbitrary baseline) and aren't masked by the large value of the actual temperature.

He then took the temperature anomalies generated by the MET Office for the whole of England, and subtracted these from the CET anomalies. Since there is going to be some difference between temperatures over the whole of England as compared to the small part of it covered by the CET, you would expect to see variations up and down around zero as different weather patterns affected different parts of England, and in fact, this is what we do see until fairly recently:

CET compared to whole of England

Around 1992 a large difference appears between the CET temperature and the MET Office temperature for England, with England apparently getting warmer than the CET region.

Intrigued by this, our friend then decided to compare the CET series against those maintained by the MET Office for Wales, Scotland and Northern Ireland.

He was surprised to discover that the very obvious spike in temperature for England was completely absent when he compared the CET series against these other regional series.

For example, the series for Wales compared against the CET series.

Exactly why parts of England outside the CET triangle, but excluding Wales, Scotland and Northern Ireland should be getting hotter every year is something of a mystery.

My initial thought was that it might be due to the fact that the CET series has been “adjusted” for Urban Heat Island (UHI) effects since 1974. This would lower the actual temperature readings, and if the temperatures for the whole of England were not so adjusted the effect would be to show England getting warmer.

However, it’s clear from the graph that the divergence doesn’t begin in 1974, but around 1992. Also, any such effect would be apparent when comparing against the other regions too.

Strangely, this effect kicks in just about the time that the “Oh no! We are all going to die! CO2 is burning up the planet!” craze kicked in.

The New Zealand MET Office has been caught adjusting their temperatures to rise every month. Surely the UK MET Office would NEVER stoop to such underhanded tricks?

CET and Wales anomalies

Original thread is here: http://theweatheroutlook.com/twocommunity/default.aspx?g=posts&t=4732


Just when you thought life couldn’t get any worse.

In a previous post, I described some of the setbacks that those pushing the idea of man-made global warming (or whatever today’s name is) were facing. Things have continued to go downhill at an ever-increasing rate. If you are committed to the idea of man-made global warming (AGW), life sucks.

First came an opinion poll in the UK which showed that the British Government faces a backlash to their AGW policies caused by the rapid and steep increases in energy prices, with the prospect of worse to come.

Only 25% of the population are in favor of continuing the current policies if it means increased energy prices.

Reuters, 25 July 2011

This was accompanied by a paper published in Remote Sensing, which shows huge discrepancies between satellite measurements and the output of the global warming models. Of course, the discrepancies are in the direction of showing that some of the basic assumptions used in the models vastly exaggerate the potential for future warming.

The paper is available here.

The reaction was predictable – when there is a clash between the models and reality, then there has to be some problem with reality.

Really.

Then there is another paper published in Proceedings of the National Academy of Sciences, which tackles the claim that global warming has been absent for the last 10 years, while CO2 levels continue to rise, because of all those coal-fired power stations (you know, the ones that we have to shut down and replace with windmills!) that China is firing up (about one per week).

Of course, claiming that more coal-fired power plants reduces global warming while saying that we have to shut down coal-fired power plants to reduce global warming strikes even the dimmest minds as maybe a rather strange idea. So to explain this, the claim is that Chinese coal is different. Very special coal, which creates lots more sulfate particles which reduce incoming sunlight intensity.

Well, this paper knocks that idea firmly on the head.

Again, this research is based upon actual observations and measurements, so again it is refuted on the basis that models must be right, and reality is obviously flawed.

That argument is used a lot, isn’t it?

Next comes a paper published in Nature Geoscience, which looks at the AGW proponent claim that hydro-electric reservoirs are huge emitters of greenhouse gases (CO2 and methane). The Green Lobby doesn’t like hydro-electric generation, because it falls under the heading of “renewable” but actually produces large quantities of cheap electricity, whereas their view of the world requires small quantities of hyper-expensive electricity, so they came up with the idea that the reservoirs cause global warming. Somehow, these reservoirs are aware that the water is intended to produce electricity, as opposed to being for drinking, washing etc.

Anyway, this paper takes a real scientific look at the idea and determines that the actual emissions are approximately 1/6 of those claimed.

Next comes a report from the University of Copenhagen, which examines the claim that there is a “tipping point” for Arctic ice, beyond which there is no possibility of recovery, and once reached, will ensure that the Arctic becomes totally ice free, forever.

This is patent nonsense, of course, because the Arctic has been ice free in the past, and since we have ice there now, it obviously “recovered” — though recovery implies that there should be ice at the North pole, which is a bit of a stretch. There is absolutely no such requirement.

Anyway, this study, to be published in Science, looks at the last 10,000 years and determines that there is no “tipping point” for Arctic ice.

Next, a poll by Rasmussen finds that 69% of people believe that Climate Scientists have very likely falsified global warming research data and findings.

Another nail in the coffin comes from Prof. Murry Salby, Chair of Climate at Macquarie University. He takes a look at the claim that the rise in CO2 that we see is all due to man. The argument was that it is possible to deduce that CO2 comes from fossil fuels rather than other sources by looking at the ratio of two carbon isotopes (C12 and C13). Prof. Salby takes a close look at this and comes to the conclusion that atmospheric CO2 is most likely increasing due to increasing temperature, not temperature increasing due to rising CO2 — which agrees with observations based upon ice cores covering many thousands of years, which have always shown a CO2 rise lagging a temperature rise. This was even visible on Al Gore’s graphs, which were actually shown reversed to make it appear that CO2 led temperature rises, unless you paid very close attention to the graph’s X axis.

There is a very good write up on Prof. Salby’s work on Jo Nova’s website.

Perhaps the most telling comment he made after doing his research is this one:

“Anyone who thinks the science of this complex thing is settled is in Fantasia.”

All this almost makes you feel sorry for those who have hitched their fortune, credibility and finances to the AGW hype.

Taming the Lion

Apple has released their latest version of OS X, named Lion.

Having downloaded and installed this a week ago, I have spent a number of hours fighting with this beast, trying to disable most of the new “helpful” features which are anything but helpful.

Given the success of Apple’s other operating system iOS, which runs on the iPhone and iPad, Apple seems to have decided that the two operating systems should share a common interface.

Now if Apple had paid attention, they would have realized that this was exactly the reason why Microsoft have done so appallingly badly in the smart phone and tablet markets. Microsoft tried to make everything look like Windows, and that just doesn’t translate to the small screen (hand held devices). Now Apple are trying the same trick, and unless they learn their lesson very quickly, they will destroy the gains they have made on the desktop and laptop market.

So what is so bad about this feline monstrosity, and how do we go about taming it?

The first thing that hits you in the face like a dead fish is that they reversed the scrolling direction. They forgot the metaphor of all desktop systems, right back to the time they ripped off the design from Xerox – you scroll the screen by grabbing the scroll bar and sliding it up or down, where up is the beginning of the document, and down takes you to the end. On the small screen it’s different: the metaphor there is that you grab the document and slide that.

What this translates to is that on Lion, you move your cursor down and the document goes up (!). The scrollbar goes in the opposite direction to the cursor, unless you actually grab the scrollbar, in which case everything goes the right way. This is obviously somewhat disconcerting, so to stop the distraction of the errant scrollbar, they hid it. Well, then they obviously realized that the scrollbar performs other functions, such as giving you an idea of the length of the page, and your location in it, so hiding the scrollbar was not going to work well.

So they hide it, except when you are scrolling.

The answer to this is that in the System Preferences there are checkboxes to restore the correct scroll direction, and to show the scrollbar. Actually, the scrollbar shows in most applications anyway; it’s only Apple’s apps that have been modified to allow hiding, where it flips in and out of existence.
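For anyone who prefers the command line, the same two settings can also be changed with `defaults`. This is a sketch, not gospel: it assumes the `com.apple.swipescrolldirection` and `AppleShowScrollBars` global keys that Lion’s preference panes are understood to write, and some applications may need a restart (or a logout) before they notice:

```shell
# Restore the traditional scroll direction (assumed key: com.apple.swipescrolldirection).
# false = classic desktop scrolling; true = Lion's "natural" scrolling.
defaults write -g com.apple.swipescrolldirection -bool false

# Keep scrollbars permanently visible (assumed key: AppleShowScrollBars).
# Recognized values are "Automatic", "WhenScrolling" and "Always".
defaults write -g AppleShowScrollBars -string "Always"
```

Deleting the keys with `defaults delete -g <key>` puts you back on the defaults, same as re-ticking the boxes in System Preferences.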

Recent versions of OS X have implemented gestures. That is, using multiple fingers on the MacBook Pro trackpad to achieve various functions. These are generally good, things like tapping once with one finger on the pad for a “left click”, tapping with two fingers for a “right click”, using two fingers moving up and down to scroll, using two fingers moving left and right to move back and forward a page.

In Lion, they not only added more, they changed some of them, so the two-finger left and right only works in some contexts. Then they added new features, such as full-screen mode, which takes some multi-finger gymnastics to get out of, and back into. This invites lawsuits from people with not enough fingers due to various accidents, and from those with arthritis who will have great difficulty performing these gestures. Actually inviting a much simpler gesture which only requires one finger…

Again, you have some control over these gestures via System Preferences. For example, you can turn on an option to allow two- or three-finger paging, and then three-finger paging works (almost) everywhere. Turning gestures off is possible, but then there are going to be some states you will find yourself in which may be somewhat hard to escape from.

Another evil change is restoring the last state of any application. This is a royal pain. Most times you fire up a text editor, it’s to edit a new document, not the last one you wrote. It can also be slightly annoying if your boss asks you to look at something, you fire up the editor, and it displays a copy of your cover letter for a job application to another company, or you start up your browser to find that when you let your brother use it, he was browsing three legged midget porn sites.

Apple say you have control on a per-application basis, but in reality this means that when you quit the application you have to check a box to tell it not to remember the current state. There is no way to (for example) tell Safari never to open on the last site visited.

There is an option to turn this off globally, but it’s hidden in the options settings for Time Machine (the Apple backup software), so not trivial to find.
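If you would rather not go spelunking through the Time Machine options, there is a command-line route that is commonly reported to flip the same global switch. A sketch, assuming the `NSQuitAlwaysKeepsWindows` global default (applications already running keep their old behavior until restarted):

```shell
# Globally disable "restore windows when re-opening applications"
# (assumed key: NSQuitAlwaysKeepsWindows).
defaults write -g NSQuitAlwaysKeepsWindows -bool false
```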

But possibly the worst abomination is file versioning. What this does is to save the current state of a file you are working on every few minutes. It will try to save this to your TimeMachine backup volume if it is connected, but if not, it will store it to your local disk. What this does for you is to eat huge quantities of disk space, and you have no clue where the space is going. It also keeps your disk from going to sleep, and so gobbles battery up rapidly on a laptop.

Of course, not every application does this, only the ones that Apple have converted. So it’s not implemented in the filesystem, where something like this really belongs if you are going to do it, so you never really know whether your work is being autosaved by whatever application you are using.

Another interesting facet of this is that you no longer get a simple “save” or “save as” option in (converted) applications, but a confusing array of options asking if you want to create new versions of the file etc.

An added bonus is that if you don’t touch a file for a while (supposedly 2 weeks by default), it gets locked, so next time you go to work on it, you are again asked confusing questions about whether you want to unlock it, create a copy etc. There also appears to be a bug. It is merrily locking files much less than two weeks old on my system.

In the pre-release version of Lion, there was a checkbox to turn off versioning hidden away in the Time Machine options. In their infinite wisdom, Apple removed that option in the released version.

Fortunately, it is still possible to turn this stuff off, but it requires running a command from the command line, as root:

# tmutil disablelocal

This stops backup copies being stored on your local disk.

Execute this, and your disk goes wild for a few minutes, deleting all the saved versions. All your vanished disk space returns, and the disk now happily goes to sleep and your battery lasts that much longer.
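Should you later want the local snapshots back (they do have their uses on a laptop away from its backup disk), tmutil provides the matching subcommand; as with disablelocal, it needs to run as root:

```shell
# Re-enable local Time Machine snapshots (run as root).
tmutil enablelocal
```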

There are a few unkind people who have compared Lion to Vista. It isn’t Apple’s Vista, but it is certainly a creditable attempt.


Back to the dark ages

The world was on an upwards path. The hand to mouth existence of the past was just a memory for many. Cities were being built, universities were spreading knowledge and libraries were storing that knowledge for future generations. Trade was spreading, and publicly financed sanitation projects were driving disease and pestilence back into the darkness. War was something that happened far away, at the edges of the empire.

Then something happened. The Roman empire collapsed and was overrun by barbarians. The world descended into an age of ignorance, superstition and fear. The Dark Ages had begun, and would last for 1,000 years before the renaissance (around 1500 AD) slowly re-established civilization, and put the world back on course.

Exactly what caused the collapse is not entirely clear because much of the written history of the period was destroyed.

This was not a unique event. Previous great civilizations in Egypt and Greece had gone the same way. Undoubtedly the people alive even as the descent into chaos began never thought that it could happen to their civilization. Too much invested, a world class army, trade and influence covering unthinkable distances.

There was no single event that triggered the fall, it was a long term degeneration. The lack of political will in Rome allowed the military to degenerate to the point that when the Huns forced the Visigoth migration, there was nothing to stop them flooding the empire’s borders and ending up with the sack of Rome in 410 AD. In 476 the last Roman emperor, Romulus Augustus, abdicated. Not a big deal in itself, since he held no real power either politically or militarily, but effectively he was the one who, as he left, put out the lights on the Roman Empire.

Modern historians like to play down how bad things were, even to the point of rejecting the name “Dark Ages”, but in fact it truly was darkness that descended.

But that is just ancient history. There is no way the world can go any direction but onward and upward, is there?

Well, I might argue that we are already on the downwards slope.

Let’s look at a bit more history. When Victoria came to the throne in 1837, it was in an England that had not really changed for the last 1,000 years. Someone transported from an earlier period would not have found much changed. People lived off the land using the same farming techniques that previous generations had used. Trade was carried by wind powered ships.

By the time of her death, Victoria had seen the rise of England to dominate the globe, driven by an industrial revolution which had replaced wooden ships with iron, sails by steam, muskets by rifles, machine guns and artillery. Medical practices began to actually become effective. Electrical power distribution was on the horizon. The internal combustion engine was being fitted into cars, trucks and buses. Radio was in its infancy; one year after her death the first transatlantic radio transmission was made by Marconi. Three years after her death the first powered flight was made by the Wright brothers.

A huge change in one lifetime.

In the next lifetime, even more changes took place. Antibiotics meant that previously fatal diseases could now be cured, immunization brought plague outbreaks under control, electricity was in most people’s homes, radio and television became ubiquitous, the power of the atom was harnessed producing weapons capable of leveling entire cities and generating limitless power, jet engines made mass air travel possible, Yuri Gagarin orbited the Earth, starting a new exploration phase that ended with men walking on the moon, computers began to become truly general purpose and available as consumer items, faster than sound commercial flight began, the network which would evolve into the Internet was created.

The rate of change was exponential. Science fiction became reality, or was shown to be hopelessly short-sighted.

So where are we now?

An image posted by a Facebook friend (on left) probably illustrates this quite well. The thing to notice is that there really isn’t anything new there. The cell phone has become smaller and offers more features, but it’s not really that much different; it’s still a cell phone. The car hasn’t changed much: more bells and whistles and clear-coat paint, but essentially the same. The game console is still … well … a game console. The PC has evolved into a laptop, and has much more power, but is still just a personal computer.

The space shuttle has … well … gone.

Where are all the new things which didn’t exist in some form or other 30 years ago?

The stream of new inventions has dried up and been replaced by “innovation”, which is basically just re-applying or adding bells and whistles to already existing things.

Not only has the creation of new inventions and concepts dried up, but in some cases we are actually moving backwards.

We used to have supersonic commercial air transport. It is no more.

We used to have the means to put men on the moon. But no more, it was replaced with something that could only reach low earth orbit, destined to be itself replaced with what is actually little more than a glorified bottle-rocket. The people that knew how to put men on the moon have retired or died. The methods used to produce some of the materials they used are now unknown. The programs they used are stored (if not destroyed) on media for which readers no longer exist, and even if the media could be read, the processors on which the programs ran no longer exist.

There are even a number of people that now believe that there never were people walking on the moon.

Malaria was under control, and heading for extinction. It’s now back in full swing, killing millions every year, and making the lives of millions more a living hell.

Cheap farm machinery allowed third world countries to begin to produce enough food to keep their populations fed and healthy, even to build up stocks to see them through lean times. The rising cost of fuel will soon stop that.

We had cheap and abundant power; slowly but surely the power systems are degrading, with power outages becoming more rather than less common. We also have the prospect of power becoming so expensive that we will go back to the time when people dreaded the onset of winter, with the prospect of illness and death from the bone-chilling cold and damp.

We are moving from the age of atomic power to the age of windmills, a technology that never really worked, and won’t now.

We had the possibility of personal transport which we could use to drive from one side of a continent to another. It is now rapidly coming to the stage where using that transport simply to get to and from work may be no more than a dream.

We have gone from walking into a room and flicking a switch to instantly light it, to stumbling around in the semi-darkness waiting for the feeble glow of our CFLs to grow into the harsh monochromatic light that we are now forced to live with. The supposed savings they produce are burned up (and more) by leaving them on to avoid the long warm-up time, and by having to replace them seemingly more frequently than the old incandescent bulbs, since they expire if turned on and off too often.

The evidence is all around that technologically and sociologically things have come to a halt, and may even be going backwards.

The great armies built to maintain peace are disintegrating. The USSR is no more, England is finding it difficult to provision even minor engagements in the Middle East. The US military power is more and more dependent upon technological superiority, at a time when domestic technology is on the decline. The US doesn’t even have the capacity to manufacture its own LCD displays.

The Visigoths may no longer be a threat to civilization, but their modern barbarian counterparts are continually present at the fringes, and announce their continued presence with random acts of terrorism.

Invasion is taking place, destabilizing societies. Continual influx from external societies is necessary for any healthy civilization; it’s the sociological equivalent of new DNA in the gene pool. But just as infusing new DNA by mass rape is not a good idea, there is a maximum rate at which foreign culture and people can be absorbed. Western society is well beyond those limits, building up tinder-box conditions which once ignited will be very difficult to suppress.

When the Roman Empire faded, its place was taken by the Church, which was not the warm and welcoming Church of today, but an organization typified by the Spanish inquisition and brutal suppression of any ideas of which they didn’t approve. They were responsible for holding back scientific progress as Galileo and his compatriots discovered.

The Church’s likely equivalent in the event of a new Dark Age may well be the transnational corporation. Failing that, there are many other pseudo religions (Green, Gaia etc.) who see their role as being to reduce the world population to what they consider manageable proportions, and to ensure that those populations employ only green-approved technology.

Pray to whatever gods you believe in that it’s the transnationals that take over. If it’s the other group, Pol Pot’s Cambodia is going to look like a holiday camp.