I’m not an avid blogger, and when I do post it’s rarely tech-related. But recently I have had cause to investigate the effects of Internet Explorer’s garbage collection routines on performance, and I thought it would be useful to summarize some findings.
Eric Lippert posted about the internals of IE’s garbage collector back in September 2003, though he skimmed over the important bits, which were later noted in the comments. The crux of the problem is that IE’s script engine uses allocation counts to decide when to run the GC: a collection is triggered after 256 variable allocations, 4096 array-slot allocations, or 64 KB of string allocations. Not only are allocations a poor indicator of garbage, but these limits are low enough that any decent-sized application is going to make the GC run pretty regularly.
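To make the thresholds concrete, here is an illustrative sketch of the three allocation types. The loop bodies and counts are mine, not the engine’s exact accounting; they just show how quickly ordinary code reaches each limit.

```javascript
// Illustrative only: three allocation types that IE6's JScript engine
// counts toward its GC thresholds, per Lippert's description.

// 1. Variable/object allocations: 256 trigger a collection.
var obj;
for (var i = 0; i < 256; i++) {
  obj = { index: i };              // each literal is a fresh allocation
}

// 2. Array slot allocations: 4096 slots trigger a collection.
var arr = [];
for (var j = 0; j < 4096; j++) {
  arr[j] = j;                      // each assignment fills a new slot
}

// 3. String allocations: 64 KB of string data triggers a collection.
var s = "";
var chunk = new Array(65).join("x"); // a 64-character string
for (var k = 0; k < 1024; k++) {
  s += chunk;                      // 1024 * 64 = 64K characters total
}
```

A tight loop over object literals, like the first one above, burns through the 256-variable budget many times over, which is why even modest scripts trigger collections constantly.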
To compound this problem, the running time of the garbage collection routine depends on the size of the working set (O(N^2) as described in Lippert’s article, though the results below show a linear relationship). So as your application gets bigger, garbage collection runs slower.
Back in the day this didn’t really matter, but as web applications get more complex there is the potential to hit a performance wall. More code being executed means the garbage collector runs more frequently, and because applications keep more state on the client and have larger code bases, the object graph the garbage collector has to traverse gets bigger.
To demonstrate the effects of this on performance, I’ve used a simple benchmarking function that creates 5000 object literals with random properties and then sorts them. The function is run on a simple HTML page pre-populated with a further O objects, each with P properties, all of which remain in scope throughout.
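A minimal sketch of the create-and-sort test follows. The function names (`makeObjects`, `runTest`) and the choice of sort key are my assumptions; the actual test page may differ in detail.

```javascript
// Build `count` object literals, each with `props` random-valued
// properties named p0, p1, ...
function makeObjects(count, props) {
  var objects = [];
  for (var i = 0; i < count; i++) {
    var obj = {};
    for (var p = 0; p < props; p++) {
      obj["p" + p] = Math.random();
    }
    objects.push(obj);
  }
  return objects;
}

// Time one create-and-sort pass: 5000 literals, sorted on one property.
function runTest() {
  var start = new Date().getTime();
  var objects = makeObjects(5000, 50);
  objects.sort(function (a, b) { return a.p0 - b.p0; });
  return new Date().getTime() - start; // elapsed milliseconds
}
```

Every call to `makeObjects` performs hundreds of thousands of allocations, so on IE6 a single pass trips the GC thresholds many times, and each collection must also walk the O pre-populated objects that never leave scope.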
The following results show the mean execution time of the create-and-sort test as O increases for constant P=50 on Firefox 2.0 (red) and IE6 (blue) on the same computer.
Try the test for yourself.
Now, the test environment is quite contrived in that it creates the literals as homogeneous global variables in a simple scope, but the effects are the same if you create objects dynamically, with scope chains exposed via event handlers and closures.
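As a small illustration of the closure case (the names here are mine): a handler’s scope chain keeps everything in its enclosing function alive, so state you never touch again still counts toward the working set the GC must traverse.

```javascript
// The returned function's scope chain retains `data` for as long as
// the handler itself is reachable, even though nothing else refers
// to it -- so the GC must traverse those 1000 objects on every pass.
function makeHandler() {
  var data = new Array(1000);
  for (var i = 0; i < data.length; i++) {
    data[i] = { index: i };
  }
  return function () {
    return data.length; // closure over `data`
  };
}

var handler = makeHandler();
```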
I doubt there are many web applications big enough to be seriously impacted by these problems, but it is worth bearing in mind, since performance is strongly linked to the adoption of web apps.
Microsoft issued a hotfix that lets you increase the allocation thresholds, which gives a significant performance boost, but forcing all your users to patch IE isn’t a viable solution. IE7 appears to have solved the problem with dynamic allocation thresholds that scale with the size of your application, but rollout of IE7, particularly to corporate users, is likely to be slow for the rest of 2007. The other options can be painful: optimize your application by reducing code size, and find the balance between the performance gained by keeping state local and keeping your working set to a manageable size. Always explicitly dispose of objects when they are no longer needed by removing event handlers and dereferencing properties.
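A sketch of what explicit disposal might look like, assuming IE’s `attachEvent`/`detachEvent` API; the `Widget` name and wiring are illustrative, not a library pattern from the article.

```javascript
// A component that wires up an event handler and knows how to
// release it. The handler closes over `self`, creating exactly the
// kind of JS <-> DOM reference cycle that bloats IE's working set.
function Widget(element) {
  this.element = element;
  var self = this;
  this.onClick = function () { self.clicked = true; };
  if (element.attachEvent) {
    element.attachEvent("onclick", this.onClick);
  }
}

// Call dispose() when the widget is no longer needed: remove the
// handler and null out references so the GC has less to traverse.
Widget.prototype.dispose = function () {
  if (this.element && this.element.detachEvent) {
    this.element.detachEvent("onclick", this.onClick);
  }
  this.onClick = null;  // dereference the handler closure
  this.element = null;  // break the JS <-> DOM cycle
};
```

Calling `dispose()` before dropping your last reference to the widget ensures the handler, its scope chain, and the element all become collectable together.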
Eric Lippert’s 2003 post: “How Do The Script Garbage Collectors Work”
Microsoft Support article: “You may experience slow performance when you view a Web page that uses JScript in Internet Explorer 6”