Perhaps even older than the perennial argument about the relative superiority of 9 mm vs .45 ACP is the argument about the viability of the revolver vs the semi-auto as a self-defense weapon.
There are still those who claim that the revolver is a better choice for some classes of people, most notably women. The argument is that a revolver is simpler and easier to operate. Presumably, the implication is that women are also simple and mechanically inept. Not only is this untrue (at least in my experience), but it is an invitation to (at minimum) a poke in the eye with a sharp stick.
So let's take a look at the relative advantages:
Capacity. Even the smallest semi-auto is competitive in terms of capacity. Small revolvers firing decent-power ammunition will typically hold 5, or maybe 6, rounds. A small semi-auto will typically hold 10 rounds; even the minute Ruger LC9 9mm holds 7+1.
Reloading. Reloading a semi-auto typically involves pressing a button to drop the empty magazine, sliding a full magazine into place, then releasing the slide. Reloading a revolver typically involves pressing a button to release the cylinder, swinging the cylinder out, hitting the ejector to extract the used cases, loading fresh rounds (typically one by one) into the cylinder, and finally closing the cylinder. This can be sped up by the use of speed-loaders or moon-clips, the only disadvantage of these being their bulk.
Physical size. One of the problems with a revolver is that there just isn’t much you can do about the diameter of the cylinder. For a given capacity, a semi-auto is always going to be smaller.
The slide. The slide on a semi-auto is probably the most intimidating part. Having that chunk of metal whiz back at blinding speed 1/8″ above your hand is disconcerting until you get used to it. Then there is the problem of racking the slide. The spring on some guns makes this a challenging task for the uninitiated. However, I contend that with a bit of coaching, I can get people who claim that they just can't do it happily racking their slides within a couple of hours. It is mostly technique, although there will undoubtedly be some people with real physical limitations for whom this will always be a problem.
The cylinder gap. The gap between the front of the cylinder and the barrel leads to a blast of hot gas (flame) and potentially small pieces of metal blasting out with each shot. Unless you fire a revolver in the darkness, this is often unnoticed until someone gets a finger or hand in the way. As with the semi-auto slide, dealing with this is simply a matter of training yourself to keep your hands well away.
Mechanical complexity. The revolver is typically touted as being mechanically simpler. In fact, it is arguably considerably more complex. Leaving aside the trigger/hammer/sear, which is reasonably consistent across revolvers and semi-autos, a (non-1911) semi-auto consists of basically a chunk of metal (the slide), the barrel and a spring. You can't get simpler than that. If you want, you can add in a box and spring for the magazine. A revolver is more complex. As you start pulling the trigger, the cylinder has to unlock so that it can rotate. The cylinder has to rotate the next cartridge into line with the barrel, accurately, to a precision of a thousandth of an inch or so. The hammer is rising at the same time. Before the hammer can fall, the cylinder has to be locked in place again, and only then does the hammer fall. There is a lot of precision placement and timing going on during that trigger pull. From the outside a revolver may look simple, but internally it is relatively complex.
Jamming. Semi-autos seem to find an endless variety of ways in which to jam. In reality, they are all variations on a couple of themes: extracting and ejecting the empty case, and feeding the next round from the top of the magazine. Short of the dreaded double-feed, most jams can be fixed by the slap-rack-bang technique. Revolver jams are usually due to a single cause: the trigger has very little mechanical advantage over the cylinder. Just try holding the cylinder between two fingers and pulling the trigger — you can't. So the revolver depends upon a very freely moving cylinder (when unlocked). Small amounts of dirt from just about any source, in the wrong place, will make the cylinder rotation stiff and the trigger pull impossible.
Ammo problems. There are two (rare but important) ammo problems to consider. The first is a squib load – too light a charge of powder. The result is the same for revolver or semi-auto: a bullet lodged in the barrel, and the distinct possibility of losing at least a finger or two if you pull the trigger again. The second is a hang-fire. On the range these are easily and safely dealt with: just keep the gun pointed down range for 30 seconds, and if it doesn't go bang, it is safe to remove the dud round and continue. In a self-defense situation you can't do this. With a semi-auto, you just rack the slide – taking care to keep fingers and eyes away from the open action in case it does go off. With a revolver, you really can't pull the trigger again, because the fizzling round will rotate to a position where the bullet has nowhere to go. If it does fire, it will probably take the side out of the cylinder, and maybe half your hand with it. All you can do is a full eject/reload. Rare as hang-fires are with modern commercial ammo, they do happen, and this, above all, is probably why I would not use a revolver for personal defense.
Benchmarking an LDAP server can be more difficult than it may seem at first sight. Benchmarking several different LDAP server products for comparison purposes can be even more complex.
The basic problem is that unless care is taken, a benchmark test can end up measuring something other than the LDAP server's performance characteristics: typically a bottleneck in the supporting infrastructure (server cache, hardware, OS, file system, TCP/IP stack, network infrastructure, or a combination of these), or the performance of the LDAP client(s) creating the load.
Even when care is taken to avoid or at least minimize these problems there is often a temptation to load the server to the maximum to see what its extreme performance is like. This is usually done by sending nose-to-tail requests over multiple connections.
Unfortunately, this often yields some very unhelpful results.
In a real production environment, care will be taken not to run servers at their limits. In fact, careful system design will try to ensure that any predictable traffic spikes will be somewhat less than the maximum capacity of the system.
In this article we examine the effect that the number of connections to an LDAP server can have on benchmark results for different types of traffic.
The systems used in the following set of tests are 2-CPU, 4-core 2.53GHz machines with 24GB of memory running CentOS 6.2. The LDAP server is configured with a 16GB cache and loaded with one million entries. All the entries and indexes fit into memory. Beyond configuring the cache, no tuning was performed, as would typically be the case for initial benchmarking runs. Similar characteristics can be expected with virtually any modern LDAP server.
Searches
A typical benchmark will consist of using multiple clients, each running some number of threads, and sending requests as fast as possible over each connection to the LDAP server. The results obtained this way can be deceiving. A typical curve of number of connections vs. request rate (throughput) looks like this:
What stands out is that with nose-to-tail requests on each connection, throughput maximum is reached with ~30 connections. In fact, as the number of connections increases, throughput actually drops slightly. Looking at the request response times is instructive.
Once maximum throughput (around 30 connections) is reached, traffic is being queued somewhere, most likely in a combination of the work queue within the LDAP server (awaiting worker threads) and possibly within the TCP/IP stack(s) of the client and/or server machines.
Without taking care over what was being measured, a simple interpretation of a benchmark run with 600 connections would conclude that this server is capable of around 74,000 searches per second with a response time of around 8.5 ms.
In reality, if too many connections are not used, it is capable of 75,500 searches per second with a response time of 0.5 ms. Not a big difference in the number of requests handled, but a very big difference in response time (roughly 16x).
The decrease in the number of requests handled and the increase in response times as connections are added beyond the maximum capacity point are almost entirely due to the overhead of handling the additional connections, which contribute nothing to throughput but do add connection-handling overhead and request queuing time.
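For anyone who wants to reproduce this style of test without a dedicated load generator, the following is a minimal sketch of a nose-to-tail search client written with the Python ldap3 library. The host, base DN, uid naming pattern and attribute list are placeholders standing in for the real test configuration, not the actual values used here; the point is simply the pattern of one thread per connection, each sending its next request the instant the previous response arrives.

# Minimal nose-to-tail search load generator (sketch only; ldap3 library assumed,
# host, base DN and uid pattern are placeholders).
import threading
import time
from ldap3 import Server, Connection

HOST = "ldap://ldap.example.com:389"   # placeholder
BASE = "dc=example,dc=com"             # placeholder
ATTRS = ["sn", "cn", "mail"]           # the three attributes returned in the search test

def worker(results, n_requests):
    conn = Connection(Server(HOST), auto_bind=True)   # one connection per thread
    latencies = []
    for i in range(n_requests):
        start = time.time()
        # Nose-to-tail: the next request goes out as soon as this response arrives.
        conn.search(BASE, "(uid=user.%d)" % (i % 1000000), attributes=ATTRS)
        latencies.append(time.time() - start)
    conn.unbind()
    results.append(latencies)

def run(n_connections=30, n_requests=10000):
    results, threads = [], []
    for _ in range(n_connections):
        t = threading.Thread(target=worker, args=(results, n_requests))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    lats = [l for per_conn in results for l in per_conn]
    print("requests: %d  mean latency: %.2f ms"
          % (len(lats), 1000.0 * sum(lats) / len(lats)))

if __name__ == "__main__":
    run()

Varying n_connections in a script like this (and dividing the total request count by the elapsed wall-clock time to get throughput) is all that is needed to produce the connections-vs-throughput curve discussed above.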
Authentication
If we look at timings of a typical authentication sequence consisting of searching for an entry based upon an attribute value (uid) then performing a bind against the DN of the entry located by the search, we see a similar curve (response time is for the entire search/bind sequence).
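For reference, the sequence being timed is roughly the following – a sketch using the Python ldap3 library with placeholder host, base DN and credentials. In the benchmark itself the search and bind are issued over connections that are already open, rather than opened per authentication as they are here for clarity.

# Sketch of the authentication sequence: search for the entry by uid, then bind
# as the DN returned by the search. ldap3 library; all names are placeholders.
from ldap3 import Server, Connection

HOST = "ldap://ldap.example.com:389"   # placeholder
BASE = "dc=example,dc=com"             # placeholder

def authenticate(uid, password):
    server = Server(HOST)

    # Step 1: search for the user's entry to find its DN.
    search_conn = Connection(server, auto_bind=True)
    search_conn.search(BASE, "(uid=%s)" % uid, attributes=["cn"])
    if not search_conn.entries:
        search_conn.unbind()
        return False
    user_dn = search_conn.entries[0].entry_dn
    search_conn.unbind()

    # Step 2: bind as that DN with the supplied password.
    bind_conn = Connection(server, user=user_dn, password=password)
    ok = bind_conn.bind()
    bind_conn.unbind()
    return ok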
Again, the “sweet spot” for this particular HW/OS/Server combination is ~30 connections carrying nose-to-tail traffic.
There is a gradual degradation in throughput as the number of connections is increased. This would lead us to suppose that there may well be a fairly dramatic increase in response times, as there was for the search operations.
As indeed we do see in this graph.
For this sort of benchmark to be meaningful, there need to be several runs to determine the response characteristics as above. Even then, it is still not a really useful test since in production no system would be designed to be carrying maximum supportable traffic on each LDAP server instance.
In reality, there would be multiple instances, probably behind a load balancer to ensure that under normal conditions each received an amount of traffic well within its capabilities.
But what if we can’t have that much control over the number of connections? In that case we may want to look at how the throughput and response time varies if we limit the authentication rate.
It is perfectly feasible to limit traffic rates with decent load balancers and/or proxy servers, so this is not an unrealistic test. Picking some reasonable value, in this case 5,000 authentications per second, we vary the number of connections.
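If a load balancer or proxy is not available to do the limiting, the same effect can be approximated in the load client itself by pacing the requests on each connection so that the aggregate works out to the target rate. A sketch, under the same placeholder assumptions as the earlier examples:

# Pace each client thread so that the aggregate load is a fixed target rate,
# regardless of how many connections are open. Sketch only; placeholder names.
import threading
import time
from ldap3 import Server, Connection

HOST = "ldap://ldap.example.com:389"   # placeholder
BASE = "dc=example,dc=com"             # placeholder

def paced_worker(per_conn_rate, duration):
    interval = 1.0 / per_conn_rate                 # seconds between requests on this connection
    conn = Connection(Server(HOST), auto_bind=True)
    next_send = time.time()
    end_time = next_send + duration
    while time.time() < end_time:
        conn.search(BASE, "(uid=user.1)", attributes=["cn"])   # stand-in for the search/bind pair
        next_send += interval
        delay = next_send - time.time()
        if delay > 0:
            time.sleep(delay)                      # wait for the next scheduled send time
    conn.unbind()

def run(target_rate=5000, n_connections=100, duration=60):
    per_conn = target_rate / float(n_connections)  # e.g. 5,000/s over 100 connections = 50/s each
    threads = [threading.Thread(target=paced_worker, args=(per_conn, duration))
               for _ in range(n_connections)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()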
There is no perceptible degradation in throughput, as we would expect, since we know from the previous tests that the server is capable of much higher throughput than this.
Response times remain acceptable, although this curve does clearly illustrate that managing many connections does have a measurable (but probably insignificant) impact.
Modifications
MOD requests, particularly on a system with relatively slow file storage as on this one (a single internal disk), are typically limited more by disk I/O bandwidth than by anything else. So we would expect to see different response curves.
In fact, they turn out to be quite similar, with maximum throughput being reached with a relatively low number of connections:
MOD operations are inherently slower, so the lower maximum request rate is not a surprise.
Response times are also heavily influenced by the number of concurrent connections to the server.
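For completeness, the MOD requests used in this kind of test look something like the following – again a sketch with the Python ldap3 library, where the DN, attribute and value are arbitrary placeholders:

# Sketch of a single MOD operation as used in a write-oriented load test.
# ldap3 library; DN, attribute and value are placeholders.
from ldap3 import Server, Connection, MODIFY_REPLACE

conn = Connection(Server("ldap://ldap.example.com:389"),
                  user="cn=Directory Manager", password="password",   # placeholders
                  auto_bind=True)

# Replacing a single attribute value forces the server to update the entry and
# any affected indexes on disk, which is where the I/O cost comes from.
conn.modify("uid=user.1,ou=people,dc=example,dc=com",
            {"description": [(MODIFY_REPLACE, ["modified by load test"])]})
conn.unbind()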
Other Factors
When pushing servers to their limits, where they (hopefully) will not be operating in a production environment, it is worth noting that there are other factors which can make a noticeable difference to performance.
For example, in the search test above, three attributes were returned (sn, cn, mail).
What happens if we only return one attribute (mail)?
Overall, the effect is marginal, but quite measurable.
Normal logging operations also become noticeable at the limits. For example, here is the same authentication test as before, but with the access log turned off:
Note that this is for authentications, search and bind operations only, no write activity. The effect would almost certainly be more pronounced if the same (slow) disk was used for both database and logs.
Other factors related to logging which can have a significant impact on performance are the type of logging performed (write to file vs. write to an RDBMS vs. write to syslog), the level of logging, and the number of logs being maintained.
Benchmarks – How To
The most useful benchmarks are based upon production traffic patterns, with the same mix/rate of all types of requests that will be used in practice.
It is not always possible to determine this, but best estimates are much better than measuring individual request types, or some trivial mixture.
If the test is to determine the suitability of some product to replace an existing system, using the same request/rate mix gives a base to compare the existing system to a proposed replacement.
Once the system is characterized for the expected traffic, rates and number of connections can be increased, but always try to change these independently, determining the best number of connections to achieve the maximum throughput.
Next, determine the expected maximum throughput, which hopefully will be significantly less than the server limit. Some experimentation with numbers of connections will soon determine if there is a maximum that you do not want to exceed, and careful tuning of connection pools can ensure that this is not exceeded in practice.
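On the application side, the simplest protection is a hard cap on the number of connections the client will ever open, for example via a bounded connection pool. Most LDAP SDKs, proxies and load balancers provide this facility; the minimal sketch below (placeholder host, ldap3 library again) just illustrates the idea.

# A minimal bounded connection pool: the application can never hold more than
# 'size' connections to the LDAP server, no matter how many threads ask.
# Sketch only; the host is a placeholder.
import queue
from ldap3 import Server, Connection

HOST = "ldap://ldap.example.com:389"   # placeholder

class LdapPool:
    def __init__(self, size=8):        # tune 'size' to stay below the server's sweet spot
        self._pool = queue.Queue(maxsize=size)
        server = Server(HOST)
        for _ in range(size):
            self._pool.put(Connection(server, auto_bind=True))

    def search(self, base, flt, attrs):
        conn = self._pool.get()        # blocks if all connections are busy
        try:
            conn.search(base, flt, attributes=attrs)
            return list(conn.entries)
        finally:
            self._pool.put(conn)       # return the connection for reuse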
On load generation
In order to be certain that what is measured is the LDAP server characteristics, and not those of the LDAP client(s), some care needs to be taken in understanding the client. For example, using SLAMD it is tempting to use the “Mixed” client to measure a mix of MOD and SEARCH traffic. This will often produce somewhat disappointing results, due not so much to the LDAP server as to limitations in the SLAMD client. Much better results are typically achieved by running two SLAMD jobs in parallel, one performing SEARCH operations and one performing MOD operations.
When testing a large, load-balanced system, several machines should be used to host clients, and care taken to ensure that CPU and/or network bandwidth is not exceeded on the LDAP server, the LDAP clients, and all intermediate network segments and network devices.
To achieve maximum throughput, LDAP client threads should be restricted to a small multiple of the number of CPUs on the machine on which they run.
If you are interested in ham radio and you buy a new car, the question of whether you want to install a radio in the car comes up, and if you do, how and where it will fit.
For some people, these are easy questions to answer: Yes, and wherever is most convenient.
But if you are like me, you may have an aversion to making holes in a brand new car, and also worry at least a little about the look of the final installation.
I have a Yaesu FT8800 radio which I had installed in my previous car (a 2005 Jeep Grand Cherokee). In that case, the radio was mounted under the dash, and power came from a cable running through the engine compartment firewall, directly to the battery. The antenna was mounted on a K400 mount secured to the edge of the hood, a few inches from the rear, and the antenna cable was run around the driver's door weather seal directly into the engine compartment.
When I started looking at the new Jeep (2012 Grand Cherokee), there were a few problems with simply using the same scheme.
The first problem was that the dash construction had changed significantly. On the 2005 version, the dash under the steering wheel was fairly thick plastic, hinged at the bottom with clips at the top. A firm tug would open it up, and it would fold down, exposing all of the wiring and access to the firewall. On the 2012, the plastic is much thinner, and is held in place with screws and (many) clips. It is a non-trivial task to open that up.
Next, I looked in the engine compartment to see if there were any conveniently unused holes through the firewall, or if it were possible to piggy-back on existing cable holes.
The immediate answer was no. No spare holes, and suspiciously few things actually passing through the firewall, which now seemed to consist of two well-spaced metal barriers.
But the biggest surprise was … no battery.
A bit of Googling informed me that on these cars the battery is now inside the passenger compartment, under the passenger seat.
Ok, so how do I get to it? Well, the official Chrysler method seems to begin with “remove the passenger seat”. Hmm… really?? Yes. Really.
I took a look. The passenger seat is also home to the audio system amplifier, and being an electrically operated, heated seat, there are quite a few connectors that would need to be undone before you could even start to remove it.
However, it appears that the instructions had been written with non-electric seats in mind. The electric seats have much more travel, and most importantly, can be raised up, giving more room to get access.
Moving the seat right forward and up gave enough access to allow one edge of the battery compartment cover to be lifted. Moving the seat right back gave access to the other end of the cover, and wriggling fingers under it and lifting popped off that end of the cover, revealing the battery.
Now, how to connect to it. Unlike previous cars, there are no convenient nuts and bolts securing the cables. In this case, there is what appears to be a plastic wedge arrangement which is tightened with a nut and bolt. Of course, the plastic does a fine job of insulating the securing nut and bolt from the terminal. Closer inspection revealed a secondary connection on the positive terminal, so loosening that off allowed a spade terminal to slide under, and tighten back up for a good connection to the positive side of the battery. For the negative connection, there was no hope of connecting directly to the terminal, so a connection was made to the car body via the nuts securing the battery clamp. That worked fine.
The FT8800 has a detachable face, and when I bought it, it came with a remote mount kit to mount the face separately from the radio.
I looked around for somewhere to mount the face, and eventually settled on the idea of using the compartment with a door, in the middle of the dash console below the radio and heater controls.
For the radio body, I explored several options, but decided that under the driver’s seat was probably going to be the best location.
There is a similar panel to the battery cover under the driver’s seat. Lifting this revealed a number of connectors to pass wiring looms to the outside of the car. This provides a relatively secure location to mount the radio mounting bracket without having to make holes in the bodywork.
The power cable was fairly easily passed across the center console by removing the plastic panels on each side, and running the cable in front of the gear selector. The cable tucks up under the plastic panels on each side and so is not visible.
The small compartment chosen for the radio head has no easy location to fix the plastic holder for the head. It does have a rubber tray in the base, which fits fairly tightly into a shaped depression at the bottom. I used a piece of metal bent to fit, with each end sticking up: one end to attach the radio bracket, and the other to attach to the rear of the compartment. Exterior-quality double-sided tape was used to secure the bracket to the rubber mat, to the radio head bracket and to the rear of the compartment. A small hole, just big enough to pass the connector on the cable connecting the head to the radio body, was drilled in the rear of the compartment (invisible, unless you know it is there and go looking for it with a light).
The same panels removed to run the power cable gave access to this connecting wire and allowed it to run alongside the power cable.
There is room to place the microphone inside and close the door when not in use.
The next obstacle to face was mounting the antenna. Purists would go straight to drilling a hole in the center of the roof and installing an NMO connector. I am not enthusiastic about making holes in the roof of my new car, and besides, it has a moon-roof and sun-roof and so the majority of the roof is glass.
I initially looked to my K400 mount. But this requires a flat edge around 4″ long. When I looked, this car is amazingly … curved. Hardly a straight edge anywhere. The only flat edge on the hood would be pretty close to the front. Not only would that look odd, but getting the cable into the engine compartment would be a challenge.
I then settled on the idea of using a glass-mount antenna, but various obstacles prevented me from going in that direction. I eventually decided that a section of flat metal about 2″ long on the rear door might work, especially since there is a convenient hole with a rubber plug at the top left of the body underneath the door.
I obtained a K412S mount, which is similar to the K400 but only requires about 1.5″ of space to mount. This fit perfectly.
The hard part was getting the cable from the hole under the door, to the space behind the plastic trim at the roof level. I could get a piece of plastic “string” through, but there was just not space to pull the SMA connector through. After about an hour of trying, I gave in and removed more plastic trim around the top and left of the rear door. This enabled me to thread the cable through fairly easily. When it got to the rear door, I simply pulled the door seal away, and tucked the cable behind the exposed interior trim, down the side of the car, under the door sill and out to the radio.
I initially had some doubts about whether the radio would be really audible in that position, or if I would need to look at obtaining and mounting an external speaker. As it turns out, the radio is perfectly audible.
The antenna position is not ideal, but in practice seems to work very well. I don’t think that I would want to mount my larger (5/8 wavelength) antenna on that small mount, but the 1/4 wave seems to work just as well for most purposes.
The only slight disadvantage to having the antenna in this position is that the glass door panel can't be opened without unscrewing the antenna. In practice I virtually never used the glass door panel on the previous car, so I don't expect this to be a significant problem.
There is often some confusion as to the exact history of the different directory server versions that were sold by Netscape, iPlanet, Sun Microsystems, AOL, Red Hat and now Oracle.
For anyone interested in the lineage of these different directories, this is my recollection of events, some from the inside, and some from the outside:
Netscape Directory versions 3 and 4 were where the directory server as a commercial product really started to take off. Netscape Directory 3 was based directly on the work of Tim Howes at the University of Michigan. It was really more of an LDAP front-end, with provision for different back-end databases (at least in theory). Netscape Directory 4 recognized that to get good performance there needed to be tight coupling between the front-end (LDAP) and the back-end (database), so the facility of pluggable back-ends was dropped.
At this point, AOL bought Netscape and, only wanting the browser, the website and its attached eyeballs, formed iPlanet with Sun to offload the other products it had no use for.
Sun took over development, integrated components from its existing directory product, and Directory 5.0 was born. This was a bit of a dead end, with terrible performance, mainly due to the replication scheme used.
Directory 5.1 should really have been called 6.0: the huge difference (ripping out the unworkable replication and replacing it with a loose consistency model) and the different schema file format made it much more than an incremental change. I also remember this being the time that the ACI format changed.
At this point, iPlanet dissolved, and the code was shared by both Sun and AOL.
AOL took the code and tried to sell it as their own directory server. It never sold well, and eventually Red Hat bought it from AOL and open-sourced it. 389 was derived from this open-sourced code (389 is the development version; Red Hat Directory Server is the stabilized commercial/supported version).
Sun continued to evolve the product, and with Directory 5.2 had a replication model which supported up to four masters (actually, it would work with more, but the performance implications caused Sun to limit official support to a maximum of four).
This was probably the most successful release in the entire product line. Four masters was enough to cover multiple data centers, replication would work over a WAN, and the directory server itself would scale up to tens of millions of entries with suitable hardware behind it.
Directory 6 was an evolution which tried to resolve some of the limitations of DS 5.2. It removed the four master limit and used a later version of the SleepyCat database. Scalability was improved.
At this point it became obvious that the fundamental design of the product was the limiting factor in getting significant performance improvements, so a next generation directory server project was started. This being Sun, it had to be written in Java. OpenDS was born. A lot of people were skeptical about performance of a Java DS, but early testing showed some surprisingly good results. Using a more modern back-end DB not only helped performance but improved resilience and reliability too. Fortunately, this happened at a time when Sun were experimenting with open source, so OpenDS was an open source project.
Sun then made what in my opinion was a strategic blunder. Trying to cut costs, they decided to combine their two directory engineering centers into one. They chose to continue with Grenoble, and shut down the Austin group, laying off a group of highly talented directory engineers and marketing people (this is not to say that the Grenoble group were not also talented).
Being unemployed, the Austin (ex)employees looked around for what to do, and UnboundID was created. They had been working with Sun customers for many years, and knew exactly what enterprise customers wanted from a directory, and had seen some of those needs continually slip along the roadmap timeline, or get dropped time after time. They took the OpenDS code and added those items to it (as proprietary extensions).
Back at Sun, DS 6 was supposed to be the end of the line for the C based directory, with DS 7 being based upon the OpenDS project.
There were still a few performance tweaks that could be applied to DS 6, so DS 7 was actually still based upon the legacy code – essentially taking ideas tested out in OpenDS, such as compression of database entries, and back-porting them into the legacy DS code.
OpenDS was still intended to be the future.
Enter Oracle.
It soon became clear that whatever marketing spin they put on it, Oracle just wanted the customers, and not the directory technology. They were going to transition existing DSEE customers to Oracle Internet Directory (Oracle's directory product sitting on top of an Oracle database), and since OpenDS had no customers, it was dead. At the same time, they put OpenSSO into a state of living death.
There were many customers using Sun's OpenSSO product who were not thrilled at the prospect of losing their investment in OpenSSO, or the forced transition to what many considered to be an inferior product. ForgeRock was formed to provide support and a product evolution roadmap to OpenSSO customers that didn't want to transition to Oracle's access manager (formerly Oblix). OpenSSO (OpenAM) really needs an LDAP server, and being an ex-Sun product it had lots of dependencies on DSEE. ForgeRock needed an open source directory to complement OpenAM. UnboundID was certainly a possibility, but with the strong open source ethic at ForgeRock and the proprietary ethic at UnboundID, the fit was not there. OpenLDAP was another possibility, but although it had followed its own evolutionary path and is a competent LDAP server, it is written in C and would require porting and support specific to each platform.
ForgeRock decided to do their own support of OpenDS. They acquired some of the key talent from the (Sun/Oracle) Grenoble directory engineering center, and OpenDJ was born. Initially, the idea had been to simply participate in the OpenDS community and provide commercial support, but for various reasons it soon became clear that it would be necessary to fork the project. There is still active participation in the OpenDS project, and with both being open source projects, some cross-pollination of ideas.
One of the biggest hurdles faced by ForgeRock (and UnboundID) was that Sun had provided the documentation effort for its open source projects (OpenDS and OpenSSO) and had copyrighted the result, now owned by Oracle, which meant that they were faced with the herculean task of completely re-documenting. In ForgeRock's case, for two products: OpenAM (OpenSSO) and OpenDJ (OpenDS).
—-
Things didn't really go as Oracle had planned with DSEE. Existing customers would not transition to OID in most cases, and with viable alternatives (UnboundID, OpenDJ and OpenLDAP) which did not require building an Oracle database infrastructure and employing DBAs, they continued with DSEE (or ODSEE as they insist on calling it).
Of course, they recognized what Sun had several years before: that the current code base had reached the end of the line, and that if they wanted to keep the existing customers they had to provide a path which was not OID. So it was back to OpenDS, plus a few tools to ease the transition from DSEE, and the Oracle Unified Directory came into existence.
There have been lots of headlines and quotes from various people about the press release from CERN, reporting observations indicating that neutrinos have been seen travelling faster than the speed of light. There have been a number of "authorities" claiming that these results must be wrong, because it's absolutely certain that nothing can travel faster than light.
Of course, most of the media coverage more or less completely ignores the content of the press release from CERN. What they actually said was that over three years and many thousands of experiments, they have consistently observed that the time taken for the neutrinos to travel the 730km between CERN and Gran Sasso has been 0.00000006 seconds (60 nanoseconds) shorter than light takes to cover the same distance. This is equivalent to the distance being about 20m shorter than it actually is (60 ns multiplied by the speed of light is roughly 18m), so the difference is actually very small, but measurable and consistent.
The scientists at CERN are doing what all good scientists do when their observations clash with accepted understanding. First, they look for errors and alternative explanations to account for their measurements. After exhausting all the explanations that they can think of, and not finding that any of them account for the discrepancy, before announcing the demise of one of the cornerstones of current physics they have made their methodology and data available to others, and asked them to verify the methodology and to examine the data for alternative explanations. Only when their results are confirmed, and quite possibly replicated (or not) by others, will they feel somewhat confident in claiming that there are indeed some particles which can exceed the speed of light.
It's good to see that, at least in some quarters, the scientific method is alive and healthy, with scientists freely and voluntarily sharing their methodology and data with others, and asking them to validate or disprove their findings.
Contrast this with so-called "climate scientists" who, along with the organizations that employ them, spend millions of dollars to avoid having to share any of their methodology or data, and who famously replied to someone asking for a copy of the data: "Why should I share my data with you when I know that all you will do with it is look for problems".
Of course, looking for problems is exactly how real science works, not by having a few mates read a publication, declare it good, then claiming that the science is settled.
Apple has released their latest version of OS X, named Lion.
Having downloaded and installed this a week ago, I have spent a number of hours fighting with this beast, trying to disable most of the new “helpful” features which are anything but helpful.
Given the success of Apple’s other operating system iOS, which runs on the iPhone and iPad, Apple seems to have decided that the two operating systems should share a common interface.
Now if Apple had paid attention, they would have realized that this was exactly the reason why Microsoft have done so appallingly badly in the smart phone and tablet markets. Microsoft tried to make everything look like Windows, and that just doesn’t translate to the small screen (hand held devices). Now Apple are trying the same trick, and unless they learn their lesson very quickly, they will destroy the gains they have made on the desktop and laptop market.
So what is so bad about this feline monstrosity, and how do we go about taming it?
The first thing that hits you in the face like a dead fish is that they reversed the scrolling direction. They forgot the metaphor of all desktop systems, right back to the time they ripped off the design from Xerox – you scroll the screen by grabbing the scroll bar and sliding it up or down, where up is the beginning of the document, and down takes you to the end. On the small screen it's different: the metaphor there is that you grab the document and slide that.
What this translates to is that on Lion, you move your cursor down and the document goes up (!). The scrollbar goes in the opposite direction to the cursor, unless you actually grab the scrollbar, in which case everything goes the right way. This is obviously somewhat disconcerting, so to stop the distraction of the errant scrollbar, they hid it. Well, then they obviously realized that the scrollbar performs other functions, such as giving you an idea of the length of the page and your location in it, so hiding the scrollbar was not going to work well.
So they hide it, except when you are scrolling.
The answer to this is that in the System Preferences there are checkboxes to restore the correct scroll direction, and to show the scrollbar. Actually, the scrollbar shows in most applications anyway; it's only Apple's apps that have been modified to allow hiding, where it flips in and out of existence.
Recent versions of OS X have implemented gestures. That is, using multiple fingers on the MacBook Pro trackpad to achieve various functions. These are generally good, things like tapping once with one finger on the pad for a “left click”, tapping with two fingers for a “right click”, using two fingers moving up and down to scroll, using two fingers moving left and right to move back and forward a page.
In Lion, they not only added more, they changed some of them, so the two-finger left and right swipe only works in some contexts. Then they added new features, such as full-screen mode, which takes some multi-finger gymnastics to get out of, and back into, inviting lawsuits from people with not enough fingers due to various accidents, and from those with arthritis who will have great difficulty performing these gestures. Actually, it invites a much simpler gesture which only requires one finger…
Again, you have some control over these gestures via System Preferences. For example, you can turn on an option to allow two- or three-finger paging, and then three fingers work (almost) everywhere. Turning them off is possible, but then there are going to be some states you find yourself in which may be somewhat hard to escape from.
Another evil change is restoring the last state of any application. This is a royal pain. Most times you fire up a text editor, it's to edit a new document, not the last one you wrote. It can also be slightly annoying if your boss asks you to look at something, you fire up the editor, and it displays a copy of your cover letter for a job application to another company; or you start up your browser to find that when you let your brother use it, he was browsing three-legged midget porn sites.
Apple say you have control on a per application basis, but in reality this means that when you quit the application you have to check a box to tell it not to remember the current state. No way to (for example) tell Safari to never open on the last site visited.
There is an option to turn this off globally, but it's hidden in the options settings for Time Machine (the Apple backup software), so it is not trivial to find.
But possibly the worst abomination is file versioning. What this does is to save the current state of a file you are working on every few minutes. It will try to save this to your TimeMachine backup volume if it is connected, but if not, it will store it to your local disk. What this does for you is to eat huge quantities of disk space, and you have no clue where the space is going. It also keeps your disk from going to sleep, and so gobbles battery up rapidly on a laptop.
Of course, not every application does this, only the ones that Apple have converted. It's not implemented in the filesystem, where something like this really belongs if you are going to do it at all, so you never really know whether your work is being autosaved by whatever application you are using.
Another interesting facet of this is that you no longer get a simple “save” or “save as” option in (converted) applications, but a confusing array of options asking if you want to create new versions of the file etc.
An added bonus is that if you don't touch a file for a while (supposedly 2 weeks by default), it gets locked, so the next time you go to work on it, you are again asked confusing questions about whether you want to unlock it, create a copy etc. There also appears to be a bug: it is merrily locking files much less than two weeks old on my system.
In the pre-release version of Lion, there was a checkbox to turn off versioning hidden away in the Time Machine options. In their infinite wisdom, Apple removed that option in the released version.
Fortunately, it is still possible to turn this stuff off, but it requires running a command from the command line, as root:
# tmutil disablelocal
This stops backup copies being stored on your local disk.
Execute this, and your disk goes wild for a few minutes, deleting all the saved versions. All your vanished disk space returns, and the disk now happily goes to sleep and your battery lasts that much longer.
There are a few unkind people who have compared Lion to Vista. It isn’t Apple’s Vista, but it is certainly a creditable attempt.
The world was on an upwards path. The hand to mouth existence of the past was just a memory for many. Cities were being built, universities were spreading knowledge and libraries were storing that knowledge for future generations. Trade was spreading, and publicly financed sanitation projects were driving disease and pestilence back into the darkness. War was something that happened far away, at the edges of the empire.
Then something happened. The Roman empire collapsed and was overrun by barbarians. The world descended into an age of ignorance, superstition and fear. The Dark Ages had begun, and would last for 1,000 years before the renaissance (around 1500 AD) slowly re-established civilization, and put the world back on course.
Exactly what caused the collapse is not entirely clear because much of the written history of the period was destroyed.
This was not a unique event. Previous great civilizations in Egypt and Greece had gone the same way. Undoubtedly the people alive even as the descent into chaos began never thought that it could happen to their civilization. Too much invested, a world-class army, trade and influence covering unthinkable distances.
There was no single event that triggered the fall; it was a long-term degeneration. The lack of political will in Rome allowed the military to degenerate to the point that when the Huns forced the Visigoth migration, there was nothing to stop them flooding across the empire's borders, ending with the sack of Rome in 410 AD. In 476 the last Roman emperor, Romulus Augustus, abdicated. Not a big deal in itself, since he held no real power either politically or militarily, but effectively he was the last one out, turning off the lights on the Roman Empire.
Modern historians like to play down how bad things were, even to the point of rejecting the name “Dark Ages”, but in fact it truly was darkness that descended.
But that is just ancient history. There is no way the world can go any direction but onward and upward, is there?
Well, I might argue that we are already on the downwards slope.
Let's look at a bit more history. When Victoria came to the throne in 1837, it was in an England that had not really changed for the previous 1,000 years. Someone transported from an earlier period would not have found much changed. People lived off the land using the same farming techniques that previous generations had used. Trade was carried by wind-powered ships.
By the time of her death, Victoria had seen the rise of England to dominate the globe, driven by an industrial revolution which had replaced wooden ships with iron, sails with steam, and muskets with rifles, machine guns and artillery. Medical practices began to actually become effective. Electrical power distribution was on the horizon. The internal combustion engine was being fitted into cars, trucks and buses. Radio was in its infancy; one year after her death the first transatlantic radio transmission was made by Marconi. Three years after her death the first powered flight was made by the Wright brothers.
A huge change in one lifetime.
In the next lifetime, even more changes took place. Antibiotics meant that previously fatal diseases could now be cured, immunization brought plague outbreaks under control, electricity was in most people's homes, radio and television became ubiquitous, the power of the atom was harnessed, producing weapons capable of leveling entire cities and generating limitless power, jet engines made mass air travel possible, Yuri Gagarin orbited the Earth, starting a new exploration phase that ended with men walking on the moon, computers began to become truly general purpose and available as consumer items, faster-than-sound commercial flight began, and the network which would evolve into the Internet was created.
The rate of change was exponential. Science fiction became reality, or was shown to be hopelessly short-sighted.
So where are we now?
An image posted by a Facebook friend (on the left) probably illustrates this quite well. The thing to notice is that there really isn't anything new there. The cell phone has become smaller and offers more features, but it's not really that much different; it's still a cell phone. The car hasn't changed much, more bells and whistles and clear-coat paint, but essentially the same. The game console is still … well … a game console. The PC has evolved into a laptop, and has much more power, but is still just a personal computer.
The space shuttle has … well … gone.
Where are all the new things which didn't exist in some form or other 30 years ago?
The stream of new inventions has dried up and been replaced by “innovation”, which is basically just re-applying or adding bells and whistles to already existing things.
Not only has the creation of new inventions and concepts dried up, but in some cases we are actually moving backwards.
We used to have supersonic commercial air transport. It is no more.
We used to have the means to put men on the moon. But no more; it was replaced with something that could only reach low earth orbit, destined to be itself replaced with what is actually little more than a glorified bottle-rocket. The people that knew how to put men on the moon have retired or died. The methods used to produce some of the materials they used are now unknown. The programs they used are stored (if not destroyed) on media for which readers no longer exist, and even if the media could be read, the processors on which the programs ran no longer exist.
There are even a number of people that now believe that there never were people walking on the moon.
Malaria was under control, and heading for extinction. It's now back in full swing, killing millions every year, and making the lives of millions more a living hell.
Cheap farm machinery allowed third world countries to begin to produce enough food to keep their populations fed and healthy, even to build up stocks to see them through lean times. The rising cost of fuel will soon stop that.
We had cheap and abundant power; slowly but surely the power systems are degrading, with power outages becoming more rather than less common. We also have the prospect of power becoming so expensive that we will go back to the time when people dreaded the onset of winter, with the prospect of illness and death from the bone-chilling cold and damp.
We are moving from the age of atomic power to the age of windmills, a technology that never really worked, and won’t now.
We had the possibility of personal transport which we could use to drive from one side of a continent to another. It is now rapidly coming to the stage where using that transport simply to get to and from work may be no more than a dream.
We have gone from walking into a room and flicking a switch to instantly light it, to stumbling around in the semi-darkness waiting for the feeble glow of our CFLs to grow into the harsh monochromatic light that we are now forced to live with. The supposed savings they produce are burned up (and more) by leaving them on to avoid the long warm-up time, and by having to replace them seemingly more frequently than the old incandescent bulbs because they expire if turned on and off too often.
The evidence is all around that technologically and sociologically things have come to a halt, and may even be going backwards.
The great armies built to maintain peace are disintegrating. The USSR is no more. England is finding it difficult to provision even minor engagements in the Middle East. The US military power is more and more dependent upon technological superiority, at a time when domestic technology is on the decline. The US doesn't even have the capacity to manufacture its own LCD displays.
The Visigoths may no longer be a threat to civilization, but their modern barbarian counterparts are continually present at the fringes, and announce their continued presence with random acts of terrorism.
Invasion is taking place, destabilizing societies. A continual influx from external societies is necessary for any healthy civilization; it's the sociological equivalent of new DNA in the gene pool. But just as infusing new DNA by mass rape is not a good idea, there is a maximum rate at which foreign culture and people can be absorbed. Western society is well beyond those limits, building up tinder-box conditions which, once ignited, will be very difficult to suppress.
When the Roman Empire faded, its place was taken by the Church, which was not the warm and welcoming Church of today, but an organization typified by the Spanish inquisition and brutal suppression of any ideas of which they didn’t approve. They were responsible for holding back scientific progress as Galileo and his compatriots discovered.
The Church's likely equivalent in the event of a new Dark Age may well be the transnational corporation. Failing that, there are many other pseudo-religions (Green, Gaia etc.) who see their role as being to reduce the world population to what they consider manageable proportions, and to ensure that those populations employ only green-approved technology.
Pray to whatever gods you believe in that it's the transnationals that take over. If it's the other group, Pol Pot's Cambodia is going to look like a holiday camp.
An announcement by the American Astronomical Society probably not only puts a final nail in the coffin of AGW, but sets up a lot of people for a big U-turn.
It has long been a claim of Anthropogenic Global Warming (AGW) advocates that the Sun has no significant effect on the temperature of the Earth, and that the Sun's output is constant and never changes. This assumption is one of the fundamental constants in climate models in current use.
Reality tells a different story. The Sun is a horribly complex system, not really well understood by scientists. It is much more than just a big ball of (fusion) fire in the sky. It undergoes many dynamic events on regular cycles. The one that most people are at least peripherally aware of is the sunspot cycle, an approximately 11-year cycle in which the number of sunspots varies, and along with it the particle emissions which cause aurorae and radio propagation changes on Earth.
Variations in this cycle have been documented for around 400 years. During that time, variations in the intensity of sunspots have also been observed. There are two well-documented minima, during which there were no, or very few, sunspots, even at what should have been the height of the 11-year sunspot cycle.
The biggest of these was the Maunder Minimum, which lasted from roughly 1645 to 1715; the other, known as the Dalton Minimum, ran from 1790 to 1830.
These minima coincided with some very cold periods (which AGW proponents have tried very hard to pretend never happened) – the period known as the Little Ice Age corresponds to the Maunder Minimum.
Paintings from the period document cold not seen in modern times, such as this painting by Pieter Bruegel in 1565:
and this painting of the frozen river Thames in 1677:
The Dalton Minimum was not as deep and lasted a much shorter time. However, there were cooling effects, such as a 2.0°C decline over 20 years measured at a weather station in Oberlach, Germany, and also the “Year Without a Summer” (1816), during which 1,800 people are reported to have frozen to death in New England.
Back beyond the earliest records of sunspot observations, there are indications, based upon analysis of C14 in tree rings, of another minimum, known as the Spörer Minimum, which lasted for approximately 90 years and coincided with abnormally low temperatures.
So what, you may ask, does this have to do with the AAS and today?
The AAS announcement included this:
Some unusual solar readings, including fading sunspots and weakening magnetic activity near the poles, could be indications that our sun is preparing to be less active in the coming years.
The results of three separate studies seem to show that even as the current sunspot cycle swells toward the solar maximum, the sun could be heading into a more-dormant period, with activity during the next 11-year sunspot cycle greatly reduced or even eliminated.
The evidence for this is fairly clear. The predictions for the maximum number of sunspots for the current cycle (24) have been reduced again and again. There are also measurements of the magnetic effects of the quietening sun, such as this graph showing the weakening magnetic field of the Sun.
Another interesting observation is the following pair of graphs. They basically show the weakening of the observed sunspots.
The brightness of the sunspots is increasing, and the magnetic field they produce is weakening. When the intensity reaches 1 and the magnetic field drops to 1500 gauss, sunspots will no longer be observable.
Based upon past experience, it is reasonable to assume that if/when the Sun does go quiet, we can expect to see some significant falls in temperature.
It may take a while to sink in, but rather than concentrating policy on making energy too expensive to heat homes even now, and on developing crops to produce ethanol, effort should be going into building energy resources that will actually work and produce far more energy than is currently available, and crop development should be concentrating on crops which can successfully be grown in lower temperatures and in growing seasons which may be as much as 60 days shorter.
Failure to do so, and continuing to build solar farms that stop working when covered by snow and windmills that will freeze (and wouldn’t produce enough energy even if they did turn) will doom many millions of people to starvation and death by freezing.
———
As an afterthought, here are the NASA predictions for the current sunspot cycle, starting in 2007 through to today.
Note how not only does the amplitude drop, but things get pushed further and further out into the future:
This is a story of how a fairly simple thing can turn into a long drawn out saga.
In the following, there are some gross simplifications in some of the descriptions of how things work; this is necessary to keep this article from expanding into a book.
One of my hobbies is shooting, and one of the easiest guns to customize for specific types of shooting (target, competition etc.) is the AR-15. The AR-15 is an interesting gun: it is a relative of the military M16 and M4 rifles and shares a lot of their hardware, minus the full-auto/burst/machine gun bits. This makes that hardware relatively cheap and easy to obtain. The ammunition is likewise relatively cheap, although with vast quantities being diverted to the Middle East recently, cost and availability have been something of a problem.
The relatively small 5.56mm cartridge it uses means fairly light recoil, which is helped by the design of the rifle, which has the stock directly in line with the barrel rather than below it as in classic rifle designs. This light recoil, along with its semi-automatic operation, makes it very easy to shoot. There is, however, one small problem – it is LOUD.
The answer to a loud gun is fairly simple, fit it with a silencer (or more technically correct, a suppressor). Physically an easy thing to do. Practically, at least in the USA, more difficult.
The US government, in its infinite wisdom, has watched some old gangster movies and determined that a silencer makes guns just go “pffttt”, which encourages silent murder or something, and that if people are going to be shot, then they must be shot with a LOUD gun. For that reason, in the US silencers are very highly regulated and subject to a $200 tax.
After jumping through all the hoops and paying my $200 tax, I obtained a suppressor for my AR-15. Problem is, attached to the existing rifle the combination is rather long and unwieldy.
For the answer, look to how the military solved this problem – the M4, which is a stubby version of the M16 with a collapsible stock and short barrel.
So, just build one with a short barrel – right?
Hang on a second … the US government, in its infinite wisdom, has decreed that rifles must have a barrel length of at least 16″. If you fit a barrel of 15.95″, you do not pass go, do not collect $200 and go straight to jail. Apparently, that 50 thousandths of an inch makes the difference whereby you could conceal the rifle by stuffing it down your trousers, or something, but 50 thousandths of an inch longer, and it becomes impossible … what was that? handguns don't have long barrels? and you could conceal one of them easier? … hmm … you are just confusing the issue! go away!
It is possible to build a short-barreled rifle, but (you guessed it!) it's highly regulated, and subject to a $200 tax.
So, after going through all the background checks … one to buy the gun, which is apparently not enough to then fit it with a short barrel, and the one you passed to get a silencer … well … um … that was for a silencer, not a short barrel! so another to prove you are not an abuser of short barrels … and forking over another $200, then you can fit your short barrel.
So we are there … nice easy to shoot gun, relatively quiet (no, not the “pfftt” as in the movies, but now at a level that won’t perforate eardrums) and a manageable size.
Just one small problem …
After shooting it for a few minutes, you end up looking like a refugee from the Black and White Minstrel Show (if you are old enough to know what that is — if not, Wikipedia is your friend).
This is due to the design of the gun. As with most semi-automatic rifles, there is a small hole tapped into the barrel to let a small amount of the very high-pressure gas driving the bullet escape and be used to operate the mechanical system which ejects the empty cartridge case and loads a new cartridge into the chamber. In many guns the gas is directed into a cylinder and drives a piston which pushes a mechanical linkage to do the work. In the AR-15/M16 the gas is taken directly into the internal workings of the gun, and directly drives the mechanics.
This design has always been a point of discussion. In one way, it is an elegant design, removing gas cylinders, pistons and mechanical linkages. In another view, deliberately diverting hot gas and soot into the precision components of the gun is asking for trouble. In practice, these guns probably do need more regular cleaning than, say, an M1, M14 or AK47, but that is offset by not needing the much more complex cleaning operation of piston-operated systems, and by having fewer moving parts to go wrong.
The other small problem is the AR-15 charging handle. This is used to initially cock the gun, and it sits at the back and top – right in front of your face. A fitted silencer causes higher than normal back-pressure in the barrel, so even more gas flows back into the works. This gas has to go somewhere once it has finished driving the eject-reload cycle, and a large proportion of it ends up escaping around the charging handle, carrying black soot with it.
The insides of a suppressed AR-15 get very dirty, very quickly – and so does the face of the shooter.
There are some things you can do to help with this. The cheapest is to use some black silicone gasket sealing compound applied carefully around the charging handle, making certain that the handle remains free to move.
Well, it sort of works, for a while. It also looks a complete mess. Not a good engineering solution at all.
Next, there is a replacement charging handle with some extra bits molded onto it to deflect the gas. This sort of works – not entirely, but it makes a big difference. It also makes a big hole in your pocket, though: it is exorbitantly expensive for what it is, and it does nothing for the accumulating grunge inside the gun.
The final solution is to rip out the existing gas system and replace it with a gas piston and an operating rod to move the mechanics. There are several manufacturers that build complete piston-driven AR-15 uppers. They tend to be very expensive. Then there are manufacturers of add-on gas piston systems.
I chose this route, and selected the Osprey gas piston system – mostly because it is simple (and IMHO, simple is usually better), but also because it seemed to address one of the most common complaints about add-on gas systems: that they make the gun unreliable.
Now there are different degrees of reliability that people look for in rifles. I am not going to be taking mine to war, dragging it through swamps and deserts and needing it ALWAYS to go bang when the trigger is pulled. However, I don’t want to be having to pull it to pieces to find out why it jammed every five minutes either.
The Osprey system is really designed for military use, and the claim is that it actually improves reliability. One of the worst enemies of semi-auto (and full auto for that matter) guns is sand. Osprey produced this video to show off their system’s resistance to sand:
By the way – notice how long and unwieldy his full size M16 with suppressor is?
So I bought one of these kits and fitted it to my shorty rifle:
Worked perfectly, and no more black face. The only thing that I didn’t like much was the hand guards that come with the kit. They maintain the same style as the traditional hand guard, but are larger to accommodate the piston assembly. Just a bit too chunky for my liking.
You can see the bottom half of the hand guard in this photo, along with the piston assembly sitting on top of the barrel.
So began the search for hand guards that would fit with an Osprey gas system in place.
There were people who managed to get various ones to fit, but this usually seemed to involve the use of a Dremel tool … not too appealing.
Then I saw a note on the Osprey website about a set being manufactured by Midwest Industries specifically for use with an Osprey gas piston.
I have used hand guards from MI before, and know that they are of good quality. The hand guard being built for the Osprey is of the “tactical” variety, with accessory mounting rails on top, bottom and both sides. For those people that like hanging lights, lasers, whatever off their guns, these are wonderful. For me, they are mostly just sharp edges for my hands.
While poking around the MI website I came across another, simpler (and cheaper!) hand guard which seemed just what I needed: the SS free-float hand guard (free-float just means that the hand guard doesn’t touch the barrel – this is a good thing). The listing also said that it fits with the Osprey system, amongst others.
A while later, when I had saved up enough pennies, I went to Brownell’s website (Brownell’s is probably the best-known gunsmithing supplier in the US). They had the hand guard in stock, so I placed my order.
When the package arrived, I assumed this would be about a 30 minute job. Yes, I had to remove the front sight (which can be a royal pain to do), but even so, I had all the right tools – plain sailing.
Except for those taper pins holding the front sight on. They would not move. Typically, the answer to this is to use a bigger hammer. Even that didn’t work. What eventually did work was using a blow-torch to heat everything up, then a couple of whacks with the big hammer and both pins moved.
Well past my 30 minute estimate, I could actually start. I fitted the new barrel nut – this replaces the original and is threaded on the outside so that the hand guard screws onto it. Next I put the gas piston and front sight back into place.
Hmmm… that much wider barrel nut leaves only about 1/4″ for the piston to move. Nowhere near enough.
What am I missing?
Back to the MI website, where I discovered that they no longer claimed that it would work with the Osprey gas system!
I checked on Brownell’s website; they still had the original text, saying that it would.
In fact, it probably will work in some cases. There are three different-length gas systems on AR-15s, chosen mainly according to barrel length. With the two longer systems this would probably work fine, but there is no way with the short system I have.
I was preparing to pack it all up and return it, probably getting credit towards buying the more expensive “tactical” version (which would work), when I had a last desperate idea: call Osprey and see if it’s possible to get an op-rod with a longer rod section and a shorter connector to the piston. It looked like this might be possible.
I fired off an email, and an hour or so later got a reply from the general manager of Osprey saying that this was a problem they had seen before, and yes, there is an alternate op-rod. Unfortunately, they couldn’t just give me one; I would have to buy it.
Hand over my credit card number, and one is in the mail.
Hopefully, this will be the end to the long saga of ringing ears and dirty faces.
More news to follow, once I have received the package.
—–
A week later:
The new op-rod arrived. Fitting was a matter of 30 seconds, re-assembling the rifle took all of 5 minutes. Test fired, and it works perfectly.
At half the price, I might have been a little more generous in my rating (but not by much).
The book starts by giving a brief history of the product, in all its various forms and insists upon pointing to Oracle as being the keeper of the flame for OpenSSO, which is hard to swallow given their stated intention to kill it, removing public access to binaries and to a lot of information. I know the author works for them, but really!