Webkit Browsers and JavaScript Tracking Pixels - or How I Learned to Stop Worrying and Love setTimeout

2010/6/21 16:12

I do not doubt the blazing speed of WebKit-based browsers, but the key to WebKit's speed will probably force a rethink of many web design patterns.

Google Chrome, Safari, and future WebKit-based browsers utilize parallel processing much more aggressively than their counterparts. I've seen the term "parallel processing" thrown around a bit, generally to mean that resources are loaded "asynchronously". So let's say you have a code block like:

function loadImage() {

    var im = new Image();
    im.src = 'http://mysite.com/image/directory/image.gif';

    alert('Your image is ' + im.height + ' pixels tall');
}

In Gecko (Firefox) based browsers, the alert will show an image height. However, in WebKit browsers, the JavaScript parser won't wait until the image is fully loaded before executing the alert statement. The height will then be some value that equates to zero.
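
A more reliable way, in any browser, is to wait for the image's onload event before reading its dimensions. A sketch (the loadImageAndReport name and the report callback are my own; the URL is the same placeholder as above):

```javascript
// Read the height only after the image has actually loaded.
// onload fires once the resource is available, so this behaves
// the same in Gecko and WebKit.
function loadImageAndReport(src, report) {
    var im = new Image();
    im.onload = function () {
        report('Your image is ' + im.height + ' pixels tall');
    };
    im.src = src; // assign src after onload so the handler can't be missed
}

// usage in a page:
// loadImageAndReport('http://mysite.com/image/directory/image.gif', alert);
```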

That's all well and fine, but WebKit browsers don't exactly do parallel processing either... Consider this code block, which eschews the Image object altogether and instead creates an 'img' element:

<div id="destinationDiv">
    Code will wind up here....
</div>

<script type="text/javascript">
function loadImageAndLeave() {

    var im = document.createElement('img');
    im.src = 'http://mysite.com/tracking/pixel.jpg';
    document.getElementById('destinationDiv').appendChild(im);

    window.location = 'http://click.through.location.com/';
}
</script>


In this case calling loadImageAndLeave() may create the image tag on the page, but it generally won't reach out to the foreign server to request the image file. You may wonder why a web site would want to make an image call directly before exiting the page, but 'pixeling' is a pretty common method of tracking user interaction with a web site. Tracking pixels have a few advantages that will keep them popular:

  • Images are exempt from cross-domain checks. Your web site can include images from foreign servers, and the browser won't throw security exceptions.
  • JavaScript has an image helper built in: the aforementioned Image object, which lets you load the resource without ever actually rendering it on the page. Though most tracking pixels are just that, a 1x1 image and transparent to boot, many people would rather not risk distorting the layout by adding elements to the page.
  • It really doesn't matter what's used for the endpoint of a logging request. Some file on an internal or foreign server needs to be called.
  • Images are simple HTML elements; many developers would have more confidence in their clients' ability to place an image on their page than, say, a script tag. Pity the poor fool with an XHTML Strict web site who tries to put script tags within the body tag.
  • Similarly, it's easy to slip an image tag into various components; many of your clients may not actually have access to the full destination web site. Take, for instance, ad networks, tracking views on your forum posts, or huge corporations where every department gets access only to a small portion of the company's site. Alongside this, there may be restrictions on which HTML tags you're allowed to submit to these systems. Image tags generally get a pass.
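
As a concrete illustration of the pattern, here's a minimal pixel helper of the kind these systems use. This is a sketch; the firePixel name, the cb cache-buster parameter, and the URLs are all made up for this example:

```javascript
// Fire a tracking pixel by pointing an Image at the logging endpoint.
// The image is never added to the document, so the layout is untouched.
function firePixel(baseUrl, params) {
    var pairs = [];
    for (var key in params) {
        if (params.hasOwnProperty(key)) {
            pairs.push(encodeURIComponent(key) + '=' + encodeURIComponent(params[key]));
        }
    }
    // cache-buster so the browser doesn't satisfy the request from cache
    pairs.push('cb=' + new Date().getTime());

    var im = new Image();
    im.src = baseUrl + '?' + pairs.join('&');
    return im.src;
}

// usage in a page:
// firePixel('http://mysite.com/tracking/pixel.jpg', { event: 'click' });
```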

If the purpose of the loadImageAndLeave() function is to fire a tracking pixel before navigating to a foreign site, you don't need to wait for the image to finish loading before continuing to the destination; you do, however, need the request to actually be sent out.

Why does this happen? I don't know the official reason, but it seems that WebKit browsers take parallel processing one step further: they queue up the external resources that a function, or sequence of functions, requests during execution, and don't actually load those resources until the process finishes without leaving the page. That way, if a function sequence results in the user navigating away from the page, the browser never needs to load the external resources at all.

If my assumptions are right, this seems like something that could fall under "speed enhancements".

If you're here, reading this, you've probably run up against this problem yourself. How do you break this behavior and force WebKit browsers to behave like Gecko? I've found the best way is to use setTimeout to break up the various calls.

So an alternative to loadImageAndLeave() would be:

function loadImage() {

    var im = new Image();
    im.src = 'http://mysite.com/tracking/pixel.jpg';

    setTimeout(leave, 500);
}

function leave() {
    window.location = 'http://click.through.location.com/';
}

Using setTimeout in the loadImage function will force the browser to wait 500 milliseconds before exiting the page. More importantly, setTimeout, like setInterval and eval, breaks the execution cycle, meaning that all those queued resources will start to load. Once again, for tracking pixels you don't really care whether the browser finishes loading the image file, just that the request reaches the servers.
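
A variant I'd lean toward is racing the pixel's onload against the timeout, so the page leaves as soon as the request completes instead of always burning the full delay. A sketch (the firePixelAndLeave name and the URLs are my own):

```javascript
// Leave as soon as the pixel request finishes, or after maxWaitMs
// as a fallback, whichever comes first.
function firePixelAndLeave(pixelUrl, destination, maxWaitMs) {
    var left = false;
    function leave() {
        if (left) return; // make sure we only navigate once
        left = true;
        window.location = destination;
    }

    var im = new Image();
    im.onload = leave;  // request completed
    im.onerror = leave; // request failed; no point waiting any longer
    im.src = pixelUrl;

    setTimeout(leave, maxWaitMs); // fallback if the pixel never answers
}

// usage in a page:
// firePixelAndLeave('http://mysite.com/tracking/pixel.jpg',
//                   'http://click.through.location.com/', 500);
```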

I hope there's a much better way to do this, since although setTimeout looks graceful to my eyes, it's a relatively expensive operation. What would be ideal would be for WebKit browsers to offer a "synchronous processing" option that could be activated in code.