On UA sniffing, browser detection, and Alex's post

If you’ve been paying attention at all during the past week, you’ve probably come across Alex Russell’s recent treatises on the cost of feature detection and one possible solution [1]. Alex is one of the smartest folks I’ve ever met, and I’ve always admired his willingness to share his opinion in any forum, regardless of the popularity of the thought or the quality of the responses he’d receive. While I can’t say I always agree with his conclusions, I can say that I respect how he arrives at them. And this is why I feel bad when his points of view get misrepresented to the point of confusion.

The beginning

In his first post on the subject, Cutting the interrogation short [1], Alex makes several claims:

  1. Feature detection is not the panacea for cross-browser solutions
  2. Some feature detection techniques incur a performance hit that isn’t always reasonable
  3. The set of available features for known browsers is known

I don’t find anything terribly controversial about these claims, and further, I believe them all to be correct and easily verifiable. The second point is actually the key to understanding his position.

The fastest-running code is the code that performs the fewest operations. As a good programmer, and certainly one who wishes to deliver the best user experience, it is your job to complete any given task using the fewest operations. Feature detection necessarily introduces additional operations to determine the appropriate set of next operations.
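
To make that cost concrete, here’s a minimal sketch (the helper name is mine) of even the simplest kind of feature detection: a couple of existence checks that run before the real work ever starts.

```js
// A minimal sketch -- even the cheapest feature test adds operations
// that execute before the actual work does.
function addListener(element, type, handler) {
    if (element.addEventListener) {               // extra check #1
        element.addEventListener(type, handler, false);
    } else if (element.attachEvent) {             // extra check #2 (older IE)
        element.attachEvent("on" + type, handler);
    }
}
```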

While I’ve never been opposed to simple feature detection, such as determining whether a given function or property exists, I’ve openly opposed the long and involved feature detection techniques [2] employed by some libraries and developers, especially when performed as an upfront evaluation of multiple features, such as those found in Modernizr [3]. As someone who’s worked on several large-scale, highly trafficked web sites, I’ve always made it a point to avoid this type of methodology for performance reasons.
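
For contrast, here’s an illustrative example of the more involved kind of test – this is not Modernizr’s actual code – where a DOM node must be created and inspected just to answer a single question:

```js
// Illustrative only, not Modernizr's actual code. "Involved" tests often
// have to create DOM nodes and inspect them, which costs far more than a
// simple existence check.
function supportsBoxShadow() {
    var div = document.createElement("div"),
        props = ["boxShadow", "MozBoxShadow", "WebkitBoxShadow"],
        i;

    for (i = 0; i < props.length; i++) {
        // A supporting browser exposes the style property as a string.
        if (typeof div.style[props[i]] == "string") {
            return true;
        }
    }
    return false;
}
```

Run a few dozen tests like this upfront, before the page needs any of the answers, and the cost adds up quickly.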

The proposal

Alex’s proposal for improving the performance of feature detection was to determine and then cache the results of feature tests for known browsers. This would allow libraries to skip past the long and time-consuming feature detection code when the results are already known. The approach requires a certain level of user-agent detection [4] to serve up the correct feature detection set.
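
In rough code, the idea might look something like the following sketch. To be clear, this is my illustration, not Alex’s implementation; the UA check, feature names, and cache contents are all placeholders.

```js
// Results pre-computed offline for a browser whose feature set is frozen.
var knownResults = {
    ie6: { addEventListener: false, canvas: false, nativeJSON: false }
};

// The slow path: live feature tests for browsers we don't recognize.
function runFeatureTests() {
    return {
        addEventListener: !!document.addEventListener,
        canvas: !!document.createElement("canvas").getContext,
        nativeJSON: typeof JSON != "undefined"
    };
}

// Deliberately narrow: only claim to recognize a frozen, older browser.
// (Opera used to masquerade as IE, hence the extra check.)
function detectKnownBrowser(ua) {
    if (/MSIE 6\.0/.test(ua) && !/Opera/.test(ua)) {
        return "ie6";
    }
    return null;
}

var browser = detectKnownBrowser(navigator.userAgent),
    features = browser ? knownResults[browser] : runFeatureTests();
```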

Now, I’ve been (in)famous for saying in the past that I don’t believe user-agent detection is bad or evil or that it breaks the spirit of the Web or any such thing – I’ve simply stated that it’s a technique you should know in general and understand where and when it’s appropriate to use. I’ll say this again: you use user-agent detection when you want to identify the browser being used. That’s it. Feature detection, on the other hand, is used when you want to determine whether a feature is available for use. These are two different techniques with two very different use cases.
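
A quick illustration of the two use cases (the specific checks are mine):

```js
// User-agent detection answers "which browser is this?"
var isIE6 = /MSIE 6\.0/.test(navigator.userAgent);

// Feature detection answers "can I safely use this feature?"
var hasNativeJSON = typeof JSON != "undefined" &&
                    typeof JSON.parse == "function";
```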

The proposal from Alex is to use user-agent detection to load the results of feature tests run in a particular user-agent while leaving regular feature detection for browsers that are unknown entities. Let’s face it: Internet Explorer 6’s feature set is not changing, so if you can accurately detect this browser, it makes sense to preload its feature set.

I would also augment Alex’s proposal with the same caution that I apply to user-agent sniffing, which is to only identify previous versions of browsers (not current ones). You cannot be certain that a feature set is frozen for a particular browser until the next version is released. Case in point: Internet Explorer 8 shipped with a native JSON implementation that didn’t match the final ECMAScript 5 specification; this was later fixed in a patch [5]. At that point in time, Internet Explorer 8 was the most recent release, so it would only be reasonable to cache results from Internet Explorer 7 or earlier.
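
As a sketch of that caution (again, my illustration): if Internet Explorer 8 were the current release, the static cache would only be trusted for strictly older versions.

```js
// Only trust the static cache for versions older than the current
// release -- their feature sets can no longer change.
function canUseStaticCache(ua) {
    var match = /MSIE (\d+)\./.exec(ua);
    return match !== null && parseInt(match[1], 10) < 8;  // 8 = current
}
```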

What he didn’t say

Almost as interesting as what Alex did say is what he didn’t say. Mostly because people immediately started hinting that he actually was saying the things that he didn’t say. This is an incredibly frustrating yet unbelievably common occurrence on the web that I’ve also dealt with. Not that Alex needs anyone coming to his rescue, but I do want to outline the things he never said in his posts:

  1. He never said that user-agent detection is better than feature detection
  2. He never said that feature detection is bad and shouldn’t be used
  3. He never said that user-agent detection is awesome and should always be used
  4. He never said his proposal is the only solution

As tends to happen with controversial topics, people have been latching on to one or two sentences in the entire post rather than trying to absorb the larger point.

My opinion

I was asked by a colleague last week what I thought about Alex’s proposal. Since I had only skimmed the two posts, I decided to go back and read them carefully. First and foremost, I think Alex accurately outlines the problems with the current feature detection craze, which can be summarized neatly as “all feature detection, all the time” or, even more succinctly, “feature detection, always.” He’s correct in pointing out that this approach doesn’t pay close enough attention to the performance overhead of running a bunch of feature tests upfront.

Generally, I like the idea of having pre-built caches of feature tests for older, known browsers such as Internet Explorer 6 and 7. We absolutely know the issues with these browsers and neither the issues nor the browsers are going away anytime soon. I’m less convinced of the need to cache information for other classes of browsers, especially those that update with regular frequency. For instance, I think it would be wasteful to do such caching for Chrome, which auto-updates at such a dizzying pace that I can’t tell you off the top of my head which version I’m running on this computer.

At this point, I’m more in favor of Alex’s proposal than I am against it. I do believe there’s value in caching feature detection results for known entities; however, I believe the number of UAs for which that should be done is small. I would target two sets of browsers: older ones (IE6/IE7) and specific mobile ones. Interestingly, both groups share the trait of running code more slowly than modern desktop browsers. Keeping a small static cache designed to optimize for the worst-performing browsers makes the most sense to me; beyond that, I would run additional feature tests only on an as-needed basis – running each test on the first attempt to use the feature and then caching the result dynamically.
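
A minimal sketch of that as-needed approach (the function names are mine): each test runs the first time its feature is requested, and the result is cached for every subsequent call.

```js
var featureCache = {};

// Run the supplied test at most once; serve the cached result afterward.
function hasFeature(name, test) {
    if (!(name in featureCache)) {
        featureCache[name] = test();
    }
    return featureCache[name];
}

// Usage: the test doesn't run until the first time canvas is needed.
if (hasFeature("canvas", function() {
    return !!document.createElement("canvas").getContext;
})) {
    // safe to use <canvas> here
}
```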

I’m sure there’s a sweet spot of cached feature data that can be found by focusing primarily on the outliers, especially browsers with slower JavaScript engines (IE6) or those running on low-powered devices (mobile). Of course, as with every theory, this approach would have to be tested in real-world scenarios to figure out the exact savings. Personally, I think it’s worth investigating.

  1. Cutting the interrogation short, by Alex Russell
  2. JavaScript feature testing
  3. Modernizr
  4. Performance innumeracy & false positives, by Alex Russell
  5. An update is available for the native JSON features in Internet Explorer 8
