Use CSS display:table for Layout

I had this post in draft since October 2008. I thought I’d redesign this blog site using display:table and explain that in a series of posts, starting with this one. But I never found the time for the redesign!

Still, I have been using display:table on a much larger site for about 6 months now, so I thought I might as well post this as it is, and perhaps follow up with more examples later.

Also, about 3 weeks after I started this draft, Rachel Andrew and Kevin Yank wrote a book on CSS display:table for layout, as well as a useful summary article! I do recommend looking at that article for finer details.

No need for css float for layout in modern browsers

For a few years now, web developers doing CSS-based layouts have used floats or absolute positioning to lay out web sites, avoiding non-semantic HTML <table>s.

While doable, extra hoops often have to be jumped through (mostly for IE), and some seemingly simple things (like equal-height columns) can be harder than necessary.

However, there is a simpler solution: the CSS values display:table, display:table-row, display:table-cell, etc. are all usable today across Firefox 2+, Safari 3+, Opera 9+ and IE8.


Consider the following HTML:

	<div id="header">
		<!-- header -->
	</div>

	<div id="content-body-wrapper">
		<div id="content-body">
			<div id="primary-nav">
				<!-- some navigation column here -->
			</div>
			<div id="secondary-nav">
				<!-- some additional column here -->
			</div>
			<div id="content">
				<!-- main content here -->
			</div>
		</div>
	</div>

	<div id="footer">
		<!-- footer -->
	</div>

And the CSS to style it to get equal height columns:

/* (the width values below are illustrative; the display values are the important part) */

#content-body-wrapper {
	display: table;
	width: 100%;
}

#content-body {
	display: table-row;
}

#primary-nav, #secondary-nav, #content {
	display: table-cell;
}

#primary-nav, #secondary-nav {
	width: 20%;
}

#content {
	width: 60%;
}
And that’s it!

The above is just the layout bit of the CSS. Here are some screenshots (click for full size) with content and very basic styling just to see the equal height column effect:

The first is with Firefox 3 and the second with IE8.

You can actually omit extra divs, even the one that gets display:table, and the browser is required to create anonymous table boxes for you.
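For example (the class names here are just illustrative), two sibling divs styled only as cells still render as a single-row table, because the browser wraps them in anonymous row and table boxes:

```css
/* neither a display:table nor a display:table-row element is declared;
   the browser generates anonymous table and row boxes around these cells */
.sidebar, .main-column {
	display: table-cell;
}
```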

Isn’t using table for layout wrong?

This is not the same as using the structural html table elements for layout purposes — that indeed is an inappropriate use for tables.

This is using CSS to give a table-like display, which is fine as it leaves the HTML (and document structure) intact.

What about IE 7 and 6?

IE7 and 6 of course remain problems, but you can use conditional comments and give them older techniques that attempt to achieve this.

Some limitations or issues

However, some limitations of CSS display:table that I have come across include these:

  • Lack of colspan/rowspan equivalents
  • Like HTML tables, a CSS table cell can expand in width (as well as height) based on its content.

I noticed the second one when using elements like <pre>, even with overflow:auto. I had expected that, just as inside floated columns with assigned widths, over-wide <pre> blocks would get horizontal scrollbars.

Instead, as with HTML tables, they push out the cell they are in. The only workaround I knew was to give such elements a px or em width. (The same applies to large images in a cell.)
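A sketch of that workaround (the selector and width values are illustrative):

```css
/* give wide content an explicit width so it gets a scrollbar
   instead of stretching the table-cell it sits in */
#content pre {
	width: 40em;     /* a px or em value suited to your layout */
	overflow: auto;
}
```

It may also be worth experimenting with table-layout:fixed on the display:table element, which makes column widths independent of cell content.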

That being said, display:table seems a lot cleaner!

CSS3 has an advanced layout module in the works, but there are not any browser implementations of it (that I am aware of), so in the interim this could be a useful approach.

More info


Update, May 2010: Translated into Belarusian by Ucallweconn.

Google App Engine as your own Content Delivery Network

24 Ways has an excellent article on using Google App Engine as your own Content Delivery Network, showing you how easy it is to set one up.

A CDN is a network of servers around the world to serve content from your site from the nearest physical location. All the large sites (Yahoo, Google, Amazon, etc) use them.

After reading the above post, I was also curious to find out whether Google App Engine helps with the following:

  • Compression
  • Expires headers and versioning

A comment in the original post implies compression is on by default, which is what I’d expect.

Using far future expires headers can be a great performance boost for your site, and is easy to apply to assets such as images, CSS, and JavaScript files.

At the same time, if you change such a file you want to be sure your repeat visitors will not see the old one because it was cached far into the future.

So, with a simple URL Rewrite Rule we can make things like /css/version-1/site.css point to /css/site.css.

If you update site.css, you can change the version number. Browsers will not have a file from this new path in their cache so will download it and cache it into the future again.

Is it possible to do this with Google App Engine? It looks promising.
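A rough sketch of how this might look in App Engine's app.yaml (the paths, regex and expiration value are my guesses, not something from the 24 Ways article):

```yaml
handlers:
# map /css/version-<anything>/site.css onto the single css/site.css file,
# served with a far-future expiry; bump the version segment on each change
- url: /css/version-[^/]+/(.*)
  static_files: css/\1
  upload: css/(.*)
  expiration: "365d"
```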

If I can find a spare moment, I may try this out on this site (which, admittedly, doesn't get enough traffic to make it worth bothering about!), and then on a site that takes up most of my spare time and gets a LOT of traffic, where it would actually be worth doing!

Jonathan Snook’s jQuery Background Animation as a Plugin

Jonathan Snook recently posted a really neat background animation technique using jQuery. This was something I was looking for and it seemed like a good candidate for a jQuery plugin.

So, following on from my recent post about turning jQuery code into richer, unit testable plugin code, I thought I’d describe the quick process of doing so here. (It’s worth reading Snook’s post first though!)

The general steps discussed to achieve this are as follows:

  1. Add additional keyboard accessibility
  2. A first attempt plugin
  3. A plugin that might be unit testable

Add additional keyboard accessibility

I posted a comment on Snook’s post to say additional focus and blur events should do the trick, so here is an example, applied to one of the four demo menus he had, with those events added:

$('#d a')
    .css( {backgroundPosition: "0 0"} )
    .mouseover(function() {
        $(this).stop().animate({backgroundPosition:"(0 -250px)"}, {duration:500})
    })
    .mouseout(function() {
        $(this).stop().animate({backgroundPosition:"(0 0)"}, {duration:500})
    })
// I added these event handlers:
    .focus(function() {
        $(this).stop().animate({backgroundPosition:"(0 -250px)"}, {duration:500})
    })
    .blur(function() {
        $(this).stop().animate({backgroundPosition:"(0 0)"}, {duration:500})
    });

A first attempt plugin

The above code as a plugin might look something like this:

(function($) {
    $.fn.animatedBackground = function(options) {

        // build main options before element iteration by extending the default ones
        var opts = $.extend({}, $.fn.animatedBackground.defaults, options);

        function startAnimation() {
            $(this).stop().animate(
                {backgroundPosition: opts.backgroundPositionStart},
                {duration: opts.durationStart});
        }

        function stopAnimation() {
            var animationConfig = { duration: opts.durationEnd };
            if (opts.complete)
                animationConfig.complete = opts.complete;
            $(this).stop().animate(
                {backgroundPosition: opts.backgroundPositionEnd},
                animationConfig);
        }

        // for each matched element, set the start position and attach the handlers
        return $(this)
            .css( {backgroundPosition: opts.backgroundPositionInit} )
            .mouseover(startAnimation)
            .focus(startAnimation)
            .mouseout(stopAnimation)
            .blur(stopAnimation);
    };

    // plugin defaults
    $.fn.animatedBackground.defaults = {
        backgroundPositionInit : "0 0",
        backgroundPositionStart : "(0 0)",
        backgroundPositionEnd : "(0 0)",
        durationStart : 500,
        durationEnd : 500,
        complete : null
    };
})(jQuery);
The above is just a quick 2 minute thing — I am sure with more thought the plugin options could be made even more flexible. But this will do for the purpose of this post.

For each of the 4 demo menus Snook provided, you could then call them as follows:

    $('#a a')
        .animatedBackground({
            backgroundPositionInit : "-20px 35px",
            backgroundPositionStart : "(-20px 94px)",
            backgroundPositionEnd : "(40px 35px)",
            durationEnd : 200,
            complete : function(){
                $(this).css({backgroundPosition: "-20px 35px"});
            }
        });

    $('#b a')
        .animatedBackground({
            backgroundPositionStart : "(-150px 0)",
            backgroundPositionEnd : "(-300px 0)",
            durationEnd : 200,
            complete : function(){
                $(this).css({backgroundPosition: "0 0"});
            }
        });

    $('#c a, #d a')
        .animatedBackground(
            { backgroundPositionStart : "(0 -250px)" }
        );

(Examples c and d are combined with one selector, while a and b each have more complex options.)

In the “simple” cases (c and d) a very small amount of code is needed to use the plugin. For a and b, if you were only going to use this once, it might be questionable whether the plugin is worth the effort!

Unit testable plugin?

Some plugins might be so small that unit testing them may not seem beneficial or worth the effort. In this particular case, it is not clear if it is necessary. However, for the purpose of this post at least it may be a useful exercise. So, these might be some things to bear in mind:

  • The bulk of the plugin relies on animate() which works asynchronously. Unit testing asynchronous calls can be tricky with QUnit. More importantly, we are not trying to unit test animate() but our plugin code instead.
  • The function handler for each mouse/focus/blur event could be made into a default plugin function
  • Unit tests can then replace the default function with a mock function to confirm that the rest of the plugin works with the various configuration options passed in.

To achieve the above, a simple step might just be to make the private startAnimation() and stopAnimation() methods public.

This can be done in a few ways: e.g., keep those private methods and have them call the public ones, or, wherever the private ones are called, call the public ones instead.

The two public methods would look something like this:

$.fn.animatedBackground.startAnimation = function($el, opts) {
    $el.stop().animate(
        {backgroundPosition: opts.backgroundPositionStart},
        {duration: opts.durationStart});
};

$.fn.animatedBackground.stopAnimation = function($el, opts) {
    var animationConfig = { duration: opts.durationEnd };
    if (opts.complete)
        animationConfig.complete = opts.complete;
    $el.stop().animate(
        {backgroundPosition: opts.backgroundPositionEnd},
        animationConfig);
};
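The pattern of a replaceable public implementation can be sketched without jQuery at all. In this framework-free sketch (all names are illustrative, not from the real plugin), the code calls its own public hook, so a test can swap that hook for a mock:

```javascript
// A plugin-like object whose animation routine is a public, replaceable hook.
var animatedBackground = {
    // public default implementation; tests may replace it
    startAnimation: function (el, opts) {
        el.position = opts.backgroundPositionStart; // stand-in for animate()
    },
    attach: function (el, opts) {
        var self = this;
        // the handler goes through the public hook, not a private closure
        el.onmouseover = function () { self.startAnimation(el, opts); };
        return el;
    }
};

// In a unit test, install a mock so we test our wiring, not the animation:
var calls = [];
var original = animatedBackground.startAnimation;
animatedBackground.startAnimation = function (el, opts) { calls.push(opts); };

var el = {};
animatedBackground.attach(el, { backgroundPositionStart: "(0 -250px)" });
el.onmouseover();           // simulate the user event
console.log(calls.length);  // 1

animatedBackground.startAnimation = original; // restore after the test
```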


Here’s a page with a unit testable version of the plugin, which also includes the original menu examples.

Was it worth adding extra code to make it unit testable?

The testable plugin version is a bit larger than the original (though minification and gzipping would remove a lot of the difference).

Was it therefore worth changing in this way from the original?

In my opinion, the initial plugin version would probably suffice, especially if likely to be used across a few small projects.

If, on the other hand, you were going to use it in a more critical scenario, then unit testing what you can could be useful.

A principle of test-driven development is to write unit tests first. In this case, as it was existing code, it seemed okay to do it in the order described above. Furthermore, it can feel tricky to stick to that principle religiously, and writing unit tests afterwards might be okay if the plugin is smallish, perhaps?


So, many thanks to Jonathan Snook for his post. That technique is useful for me in some other projects.

This post hopefully shows that even small snippets of code can be turned into a plugin, sometimes unit testable ones. Whether that is worth your efforts depends on your need and audience.

Turn your jQuery code into a richer, unit testable, plugin

I find myself increasingly using jQuery as my JavaScript framework of choice.

Its byline of “write less, do more” really seems apt.

But sometimes, by writing just that little bit extra, you can do even more.

For example, I often try to do the following:

  • Make most jQuery code into reusable plugins
  • Use the jQuery plugin development pattern for added flexibility
  • Use QUnit to unit test JavaScript
  • Combine the two approaches to drive out a richer API for the plugin

By unit testing with QUnit, I find I often need to trigger additional events or add additional code from within the plugin so the test can be meaningful.

But this extra code isn’t only useful for testing, it becomes a useful part of the plugin’s API, improving its functionality and flexibility without sacrificing maintainability and readability of the code.

I’ll try to demonstrate that in this post.

Make most of your jQuery code into reusable plugins

In many cases, where there is a block of jQuery code initializing something, it is a candidate for a plugin.

It took me a little while before I gave jQuery plugins a go, thinking it would be complex and I wouldn’t have time. But it turns out to be really simple, elegant and useful for both simple and complicated scenarios.

Example to create a dynamic side note toggler

Let’s take a simple example for illustration: suppose I have some HTML that acts as an aside, or side note, whose appearance I want to toggle.

Let’s say we agree on this kind of HTML format (or microformat) for it:

<p>Some text before the side note.</p>

<div class="side-note">
	<p>Any HTML could go here including</p>
	<ul>
		<li>Bulleted lists</li>
		<li>Tabular data</li>
	</ul>

	<div class="side-note">
		<p>Even nested side notes, if that is of any use!</p>
	</div>
</div>

<p>Some text after the side note.</p>

The first way I’d do it might be something like this:

$(document).ready(function() {
    $('.side-note').each(function() {
        $(this)
            .addClass('dynamic-side-note') // so the click handler below can find it
            .wrap('<div class="dynamic-side-note-container"></div>')
            .before('<h3 class="toggler"><a href="#">Side note:</a></h3>')
            .parent().find('> h3.toggler > a').click(function() {
                $(this).parents('.dynamic-side-note-container').eq(0).find('> .dynamic-side-note').slideToggle();
                return false;
            });
    });
});

I could have added a click event handler to the h3 header above, but using an anchor adds keyboard accessibility.

(The CSS selector could also be improved, e.g. to narrow it down to only side-notes in some content div, e.g. $('#content .side-note').)

Here is a working example

As a simple jQuery plugin

The above works. But, we can move most of the above code into a jQuery plugin.

Why bother? We gain a bit more flexibility, such as the ability to use any selector rather than relying on a side-note class. We could also pass in other parameters, such as the text for the side note toggler/header.

As a first attempt, the plugin might look something like this:

$.fn.sideNotes = function() {
    // returning this way allows chaining
    return $(this)
        .addClass('dynamic-side-note') // so the click handler below can find it
        .wrap('<div class="dynamic-side-note-container"></div>')
        .before('<h3 class="toggler"><a href="#">Side note:</a></h3>')
        .parent().find('> h3.toggler > a').click(function() {
            $(this).parents('.dynamic-side-note-container').eq(0).find('> .dynamic-side-note').slideToggle();
            return false;
        });
};

We then just need to invoke the plugin:

jQuery(function() {
    $('.side-note').sideNotes();
});

(Note how you could use any selector now not just .side-note as above.)

Here is a working example using the simple plugin

Plugins also make it easier to pass in more options for configuration. The next example uses a useful pattern for plugin development that also shows a nice way to handle plugin options.

Use the jQuery plugin development pattern for added flexibility

The post, A Plugin Development Pattern (for jQuery), from the Learning jQuery blog is really useful.

It provides a good way to write plugins that also get the following features:

  • Configurability by passing in options
  • Default options to keep invoking code small and neat
  • A closure where you split your code out into manageable private functions, etc., if you need
  • Ensuring that your plugin supports chaining
  • And more

Using some of those ideas, here is what we might come up with for the sideNotes plugin:

(function($) {
    $.fn.sideNotes = function(options) {

        // build main options before element iteration by extending the default ones
        var opts = $.extend({}, $.fn.sideNotes.defaults, options);

        // for each side note, do the magic.
        return $(this)
            .wrap('<div class="dynamic-side-note"></div>')
            .before('<h3 class="toggler"><a href="#">' + opts.sideNoteToggleText +'</a></h3>')
            .parent().find('> h3.toggler > a').click(function() {
                $(this).parents('.dynamic-side-note').eq(0).find('> .side-note').slideToggle();
                return false;
            });
    };

    // plugin defaults
    $.fn.sideNotes.defaults = {
        sideNoteToggleText : 'Side note:'
    };
})(jQuery);

And invoking the plugin is the same as before:

jQuery(function() {
    $('.side-note').sideNotes();

    // or overriding the default side note toggle text:
    // $('.side-note').sideNotes({ sideNoteToggleText : 'As an aside:' });
});

Here is a working example using the plugin pattern

Unit testing the jQuery plugin

As the plugin code is reasonably well encapsulated, we can create some unit tests for this plugin.

Unit testing jQuery (as with any code) gives you confidence in maintaining it. For example, when refactoring, unit tests give you confidence that you can do so without breaking things. Plugins, almost by definition, are good candidates for unit testing.

Examples of unit tests for this plugin

Our example side note plugin is quite simple, so the unit tests will likely be small in number. Types of tests we might include are the following:

  • Test that we can expand/collapse a side note
  • Test that we can expand/collapse a side note many times and ensure the toggle state reported is correct each time
  • Test that when the side note plugin has run, all the side notes are collapsed initially
  • Test that we can change the toggle text to something else

And so on.

Remember, the unit tests should not test the slideToggle() method we happened to use (as that should be unit tested itself, and we may use other ways of toggling the side note in the future). Instead, we just need to unit test our code and plugin functionality.

So ideally, we might even want to “mock” the slideToggle(), if necessary.

Use QUnit to unit test JavaScript

QUnit is a unit testing framework, used internally by jQuery itself for all its core JavaScript, and now opened up and documented for others to use, too. It is a simple framework to support the creation and execution of tests.

The tests are run in a browser. You could automate it against all your target browsers by using something like Selenium.

The framework is in its early days (a setup and teardown set of methods would be nice, for example), but is rich enough to get started with it.

Here is a QUnit test page for the side note plugin

(I included a manual test area, as this can be handy where visual confirmation is needed or where automated testing of some parts is not possible or easy. Having it all in one place is convenient.)

Using unit tests and plugins helps create a richer API for the plugin

To write testable code, you may find you need to provide more hooks in the code. Yet, you probably don’t want to pollute your code so much that it is detrimental to performance or maintainability.

Triggering events on the plugin for added flexibility

In the unit test example page, how did we manage to get unit tests to confirm a side note had been toggled when we didn’t have any callback for it?

The first idea people might have is to let the plugin user pass a callback in the options when invoking the plugin, something like this:

jQuery(function() {
    $('.side-note').sideNotes({
        toggled : function() { console.log('side note was toggled'); }
    });
});

That looks useful; the unit test can provide a callback when it invokes the plugin.

However, that limits us to just one observer, the one that invoked the side note plugin in the first place.

But we can go one step further: trigger events. This allows more than one observer to watch for the event. This is achieved using jQuery’s trigger() and bind() methods.

Having unit tests watch for these events does not distort the plugin code; it enhances its API, making the event useful for a real plugin user, should they need it.

In our side note plugin example, when we call slideToggle() we can trigger an event when the toggling has completed:

// all the stuff that gets to the
// slide toggle bit comes here!
.slideToggle( function() {
    $(this).trigger('sideNoteToggled');
});

If your code cares when this happens, you can bind to this event, something like this:

$('.side-note').bind('sideNoteToggled', eventHandlerGoesHere);

(A fuller example below uses trigger() to also pass the expanded/collapsed state in conjunction with some ARIA information.)

Providing default implementations that can be replaced (or mocked for unit testing)

One particular challenge I had with unit testing this side note example was that the slideToggle() method used by the plugin, internally runs asynchronously (using setTimeout etc).

Writing normal unit test code means the test can finish before the animation has run.

Even though I tried using setTimeout() in the tests themselves to wait for the side note to finish toggling, and passed the fastest possible speed to slideToggle(), it wasn’t always right, nor consistent across browsers.

This gave me the opportunity to do a few things:

  • Encapsulate the call to slideToggle in another method
  • Make that method the default implementation, but overrideable
  • Use this to provide a mock slide toggler for testing purposes

With jQuery’s plugin architecture, providing a default implementation is quite straightforward:

$.fn.sideNotes.toggle = function() {
    $(this).slideToggle(function() {
        $(this).trigger('sideNoteToggled');
    });
};

Mocking the slideToggle() call is nice because we don’t want to test external code in our unit tests; that should have been unit tested by whoever wrote it (the jQuery team, I imagine!). I’ll have to look at their unit tests when I get a moment to see how they overcome the asynchronicity issue. (There is a mechanism providing stop() and start() methods for AJAX testing, and I tried it here, but still wasn’t getting consistent results.)

In our example, we can overwrite the default toggle method to simply use toggle() instead of slideToggle(), which is a non-animated version that runs and completes immediately.

This makes writing the testing code a bit simpler, too. (There are one or two bits that didn’t feel too great, such as the way I chose to expose the post-toggle action, but that is perhaps for another day to sort out!)

To override the default implementation, you can redefine sideNotes.toggle in your code, something like this:

$.fn.sideNotes.toggle = function() {
    // your own implementation (or a mock for testing) goes here
};
Example QUnit test code

So, taking the above considerations into account, this is what we might have in our QUnit unit test code.

This code block shows a test util object to help instrument the test, and then at the bottom, an example of a mock object to replace the slideToggle() call.

    var testUtils = {
        isExpandedCount : 0,
        isCollapsedCount : 0,
        defaultTestOptions : {
            selector : '#example .side-note',
            sideNoteOptions : {},
            speed : 'fast'
        },
        // a crude setup like method, which QUnit currently doesn't have
        init : function(options) {
            var opts = $.extend({ sideNoteOptions : {} }, testUtils.defaultTestOptions, options);
            // tests assume the relevant HTML is present on the page the test is running
            $(opts.selector)
                .sideNotes(opts.sideNoteOptions)
                .bind('sideNoteToggled', testUtils.sideNoteToggled);
            return $(opts.selector);
        },
        sideNoteToggled : function(event, isExpanded) {
            isExpanded ? testUtils.isExpandedCount++ : testUtils.isCollapsedCount++;
            $(event.target).trigger('sideNoteToggleCaptured');
        },
        // until there is an explicit tear down method this will have to do
        reset : function() {
            $('#example .side-note').unbind('sideNoteToggled');
            testUtils.isExpandedCount = 0;
            testUtils.isCollapsedCount = 0;
        }
    };

    // mock the slideToggle. We are not testing that.
    function mockSideNoteToggle(options) {
        // toggle() shows/hides immediately, so the test is not left waiting
        // for an animation; then fire the usual completion callback
        $(this).toggle();
        $.fn.sideNotes.toggled.call(this);
    }

    // keep the original implementation if we want to use it later
    $.fn.sideNotes.originalToggler = $.fn.sideNotes.toggle;

    // override the default toggle method with the mock
    $.fn.sideNotes.toggle = mockSideNoteToggle;

We can then write unit tests, such as this one (note this code would go inside the anonymous function used above):

module("Single toggle tests");

test("Test side note is expanded when toggled", function() {
    var $sideNote = testUtils.init();

    $sideNote.each(function() {
        equals($(this).attr('aria-expanded'), 'false', "Side note starts as collapsed");
    });

    $('#example')
        .find('.side-note:eq(0)').bind('sideNoteToggleCaptured', function() {
            equals(testUtils.isExpandedCount, 1, "A node has been expanded");
            equals(testUtils.isCollapsedCount, 0, "No nodes have been collapsed after initialization");
            equals(this['aria-expanded'], true, "Node’s aria state is expanded");
        })
        .end()
        .find('.toggler > a:eq(0)').click();
        // calling click() is like simulating the user action to start the test
});

In the above example, the following is happening:

  1. The call to init() creates/initializes the sideNote plugin
  2. We then assert that the ARIA state reports each side note is not expanded
  3. We then find each side note and bind to the sideNoteToggleCaptured event (which is raised by the unit test utility shown earlier, not by the plugin itself)
  4. We then simulate a user action by finding the toggler and clicking it
  5. The simulated click() makes the plugin eventually trigger the sideNoteToggled event, which the testUtils object will catch.
  6. The test util will catch that event and update the various testing counters and itself trigger the sideNoteToggleCaptured event.
  7. Finally at this point, we can run our assertions, such as confirming the expected number of times the side note was expanded/collapsed and what the expected ARIA state should be (in this example seen by the use of the equals() function).

See the full example for more unit tests.

The additional tests in the full example also show the side note being created dynamically per test run, so you don’t always have to set up the HTML manually, exactly as needed, for every test.

The unit test code in that example could probably be improved further by refactoring those test() functions into a helper that takes callbacks to run when the test completes, how many clicks to invoke, etc., but that is for another time!

Final plugin code

Here is the final plugin code (with some additional options not discussed above):

(function($) {
    $.fn.sideNotes = function(options) {

        // build main options before element iteration
        var opts = $.extend({}, $.fn.sideNotes.defaults, options);

        // iterate and process each matched element
        return $(this)
            .attr('aria-expanded', false)
            .addClass('dynamic-side-note') // so doToggle below can find it
            .wrap('<div class="dynamic-side-note-container"></div>')
            .before('<' + opts.toggleElement + '><a class="toggler" href="#">' + opts.sideNoteToggleText +'</a></' + opts.toggleElement + '>')
            .parent()
                .find(opts.toggleElement + ' > a.toggler')
                    .click(doToggle)
                .end()
            .end();

        // example of private method
        function doToggle() {
            $(this).parents('.dynamic-side-note-container').eq(0)
                .find('> .dynamic-side-note').each( function() {
                    $.fn.sideNotes.toggle.call(this, opts);
                });

            return false;
        }
    };

    // plugin defaults
    $.fn.sideNotes.defaults = {
        sideNoteToggleText : 'Side note:',
        speed : 'normal',
        toggleElement : 'h3'
    };

    // default implementation for the toggler (public, i.e. overrideable)
    $.fn.sideNotes.toggle = function(options) {
        $(this).slideToggle(options.speed, $.fn.sideNotes.toggled);
    };

    // default callback when toggle completed (public, i.e. overrideable)
    $.fn.sideNotes.toggled = function() {
        this['aria-expanded'] = this['aria-expanded'] === true ? false : true;

        $(this).trigger('sideNoteToggled', this['aria-expanded']);
    };
})(jQuery);

Are the additional lines of code for a plugin worth it?

The plugin used in the unit tested example is larger in terms of lines of code than the very first attempt, above.

In this case, the line count is still quite small, and as plugins get larger the percentage difference is likely to shrink (the size difference is more noticeable in smaller code samples, such as this contrived side note example).

Minifying and gzipping the JavaScript, plus using far-future Expires headers for better caching, would further reduce the difference in file sizes between the two approaches.

Depending on your needs, this may be a reasonable trade-off in return for additional flexibility.


So, in many cases, jQuery code can be made reusable by making it into a plugin. Any time you find yourself writing a block of code, say inside a $(document).ready() block, consider converting it into a plugin and calling it.

When making the plugin, consider the following:

  • Provide various options for flexibility
  • Write unit tests to test your plugin
  • Trigger important events inside your plugin to aid unit testability, and in doing so, increase the flexibility of your plugin even more.

Once you get the hang of this, it is probably better to write the unit tests before the actual plugin code. This helps focus on what is needed and what the plugin should expose in terms of capability. To be honest, at times I have found it easier to retrofit a plugin with unit tests; I probably need to be more disciplined and write tests first!

(While there are a number of enhancements that could be added to this, the main thing I’d probably do is rename the plugin to something more generic than just a side note toggler, but I will leave that for the reader to do, and remember to refactor the unit tests accordingly!)

Hope that is useful?

Google to host a number of JavaScript libraries

Google just announced their AJAX Library API, where Google will host many major JavaScript frameworks for you, such as jQuery, Prototype, Mootools, Dojo, etc.

This will allow you to write web pages that refer to those scripts rather than copies on your own site, reducing your bandwidth, but also leveraging the infrastructure capabilities of Google, such as their content distributed network (which means users would be served those files from a location much closer to them), properly compressed, minified, cacheable files, etc.

In addition, if your visitors have been to other sites using the same technique, they would not need to download the same libraries all over again, and with increasingly rich web sites, these files can get quite large.

So, if you use jQuery, you can use this in your web pages:

<script type="text/javascript" src=""></script>

Of course, this is not for everyone, and it may be worth bearing in mind some concerns raised by Peter Michaux, who also provides a useful way to handle the scenario where you think access to Google from the user’s location might be restricted (e.g. blocked from a paranoid office!):

<script type="text/javascript" src=""></script>
<script type="text/javascript">
  if (!window.jQuery) {
    // load from your own server
    document.write('<script type="text/javascript" src="/js/jquery.min.js"><\/script>');
  }
</script>
Still, it could be a useful technique to further improve the download/load performance for a number of sites.

A number of months back, Yahoo was the first to announce something like this, offering to host their YUI library for you, so it is good to see others getting into this.

How to fix huge text in Firefox 3 Beta 5 on Kubuntu 8.04

Firefox 3.0 beta 5 on Kubuntu 8.04 (I don’t know about other Linux distributions) renders some text way too big. It turns out to be an issue when using points as your font-size units in CSS (generally, you should use relative font units anyway!).

The quick fix

You can fix this by doing the following:

  1. Type about:config in the Firefox address bar
  2. Look for the setting called layout.css.dpi. The default value is -1.
  3. Change it to 96
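If you prefer a file-based tweak, the same preference can (as far as I know) be set persistently via a user.js file in your Firefox profile directory:

```js
// user.js — read at startup; equivalent to the about:config change above
user_pref("layout.css.dpi", 96);
```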

There may be a deeper problem: Firefox on Linux is a GTK application and so uses GNOME’s settings. Kubuntu uses KDE, so KDE’s DPI settings do not seem to apply, and I don’t know where/how to set the GNOME ones while running Kubuntu.

Some forums suggest a few things that have not worked. While the above change in Firefox has worked, ideally I’d like to change the GNOME settings instead. Anyone have any ideas on that?

Getting Firebug working properly, too

To get Firebug to work properly (YSlow 0.9.5b1 seems to stop it working, which is a shame):

  1. Uninstall YSlow if you have it
  2. Uninstall any existing Firebug
  3. Install DOM Inspector if not already installed
  4. Get Firebug 1.2 alpha (1.1 beta did not seem to work properly for me)
  5. Restart as prompted

I tried to reinstall YSlow, but no joy. That’s a bummer, as YSlow is useful. Until they fix it, you can remote into a PC/Mac running it, or use a virtual machine running something that does!

Some additional thoughts/notes

The Mozilla knowledge base article on layout.css.dpi says that the default of -1 means to use the OS DPI or 96, whichever is *greater*, although Kubuntu/KDE reports 96! Somewhere there must be a higher value, but I am too new to Linux to know where to find it.

I initially suspected this was a problem specific to Firefox on Linux. However, when I used the vector graphics application Inkscape to export an SVG to a bitmap, I noticed that the DPI it reported was something like 116dpi. Inkscape is also a GTK application, so I now think it is a DPI setting used by GNOME.

Unless someone knows how to set those things while using Kubuntu, I have to make my change in Firefox itself (and see what happens when the final version is released). A few forums and posts suggested things like running the GNOME settings daemon at start-up, using the KDE GUI for GTK settings, etc., but none of those seemed to make any difference. Any ideas, anyone?

I can live with this for now. (I just need to figure out how to get those ugly close buttons replaced with something else; anyone know how to get GNOME apps to use a different icon set or theme when running Kubuntu?)

Accessibility 2.0 Conference

I attended the Accessibility 2.0: A Million Flowers Bloom conference, held in London on 25th April 2008.

While I won’t summarize all the things that the session speakers said (there are some links towards the end for that), here are some key things I took away from each presenter:

Open Data — Keynote Presentation from Jeremy Keith

This guy is usually quite funny, but this time he decided to read something he had written. I thought this was going to be a long and dry session, but instead it was very interesting, looking at the importance of

  • Open data for accessible information (to everyone) and for digital preservation
  • Standards for format longevity and innovation (bred by constraints)

He was approaching accessibility from the perspective of accessible data for everyone.

Assistive Technologies and AJAX by Steve Faulkner

Steve Faulkner is a noted accessibility expert and has lots of detailed information about accessibility on his blog (and in particular, some decent research on how screen readers work in different scenarios). His examples mainly came from Twitter, the micro-blogging tool.

There were some interesting examples of how WAI ARIA live regions would help make some simple features accessible, such as the text countdown that shows how many characters you have left to type, common on many sites where you fill in text areas.

(One of his points about the problem of using the abbr element’s title attribute to put in an ISO 8601 date time format eventually led to a public argument during the final panel session between Mike Davies and Jeremy Keith!)
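The character-countdown example can be sketched as a WAI-ARIA live region. The markup, ids and limit below are made up for illustration; aria-live="polite" is the real attribute that asks assistive technology to announce updates without interrupting the user:

```javascript
// Hypothetical markup this assumes:
//   <textarea id="status"></textarea>
//   <span id="counter" aria-live="polite">140</span>

// Pure helper: how many characters the user has left.
function charactersLeft(maxLength, text) {
  return maxLength - text.length;
}

// Browser wire-up: update the live region as the user types, so a screen
// reader announces the remaining count without stealing focus.
if (typeof document !== 'undefined') {
  var field = document.getElementById('status');
  var counter = document.getElementById('counter');
  field.onkeyup = function () {
    counter.textContent = charactersLeft(140, field.value);
  };
}
```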

Fencing in the Habitat by Christian Heilmann

Christian Heilmann works for Yahoo but has long blogged about accessibility, JavaScript and more.

This talk was about how not to approach accessibility. He has already published his slides and notes for the talk, so all I will add here are my main take-away points:

  • There’s no need to scare people into doing accessibility any more
  • It’s generally easy (despite a lot of misconceptions) and helps so many people. He described the joy some elderly people felt when they were given a Wii without any knowledge of how to use it; very quickly they were having immense fun playing with it! Accessibility is more than guidelines; it is also about usability.

Rich Media and Web Applications for People with Learning Disabilities by Antonia Hyde

This was a really interesting talk about how people with learning disabilities, a group that is often left behind in accessibility implementations, are affected by the web.

A number of instructive videos showed how some people can struggle with common features on sites.

She ran out of time but put up her slides on her blog.

User Generated Content by Jonathan Hassell

Jonathan Hassell is the head of Audience Experience and Usability for BBC Future Media & Technology. He also leads the BBC Usability & Accessibility team so had quite a lot of interesting things to say.

User generated content is of course a major issue in modern web sites when it comes to concerns like standards and accessibility.

One of the strategies Hassell mentioned was a moderator approach, accepting that it is really hard to get end users to learn how to create accessible content. This way, a moderator adds accessibility and other improvements once the content has been created.

This is quite an expensive way to do things, he accepted, and sometimes it will not be possible.

The other problem is that, as more content moves away from the basic text-based formats of the web, assistive technologies will struggle even more to provide accessible alternatives. While not impossible, the implications include the following:

  • Those formats require accessibility be built in, or have alternatives
  • Content producers will become increasingly responsible for accessible content

The other point he stressed, like many others, is that accessibility is a misleading word; it is really an issue of usability.

As content becomes more multimedia, the additional challenge he finds is how to make such multimedia accessible in innovative ways. One example used 3D and stereo sound in a Flash-based children’s game where players had to push a train carriage from the left part of the screen to the right-hand side; the audio feedback would still allow a child who is blind or has poor eyesight to experience more of the game than usual. Another game showed sign language being used.

Another interesting point he made was regarding JavaScript: As a number of people have noted for a long time, there is a common misconception that JavaScript is an accessibility no-no. Hassell noted that it helped make the new BBC home page more accessible to screen readers, rather than less.

A Case Study: Building a Social Network for Disabled Users by Stephen Eisden

This talk was about the user testing, accessibility audits and technology decisions that came with building a social network for disabled users called the Disability Information Portal (DIP). After evaluating a few different approaches, they settled on WordPress as the basis of their network, which is currently at the pilot stage.

Eisden noted that an accessible site is generally a more usable site, too.

Tools & Technologies to Watch or to Avoid by Ian Forrester

Ian Forrester runs BBC’s Backstage, a web site for designers and developers.

This promised to be an excellent session, but for some reason they only gave him about 20 minutes and he had some 80 slides to get through! He had to cut it really short, which was disappointing. I had the opportunity to talk to him a bit more after the sessions were over, and that was really interesting. He has a good interest in XSLT too :)

Some of the points he did manage to get across were how some Web 2.0 sites make it hard enough for normal users to do things, let alone people with additional needs. Examples included hard-to-understand user licenses and terms & conditions; Flash videos and captioning; and using various sites to import contacts or export data.

Accessibility 2.0 Panel Discussion chaired by Julie Howell

This panel was an interesting enough discussion, chaired by one of the most prominent people in the accessibility circles. Nothing to really add here that is not captured in the notes by others listed below.

Slides and notes from each session

AbilityNet, who hosted the conference, will be putting up videos and transcripts of each session shortly.

Jeremy Keith, one of the speakers, and well known for his work on DOM Scripting (I think he coined that term?) was taking notes during the session and posted them as follows:

  1. Open Data by Jeremy Keith.
  2. Making twitter tweet by Steve Faulkner.
  3. Fencing in the Habitat by Christian Heilmann.
  4. Rich Media and Web applications for people with learning disabilities by Antonia Hyde.
  5. User-generated Content by Jonathan Hassell.
  6. A case study: Building a social network for disabled users by Stephen Eisden.
  7. Tools and Technologies to Watch and Avoid by Ian Forrester.
  8. Accessibility 2.0 panel discussion with Mike Davies, Kath Moonan, Bim Egan, Jonathan Hassell, Antonia Hyde and Panayiotis Zaphiris, moderated by Julie Howell.

Christian Heilmann also did some live-blogging and posted his notes up on the Yahoo Developer Network. He also includes a number of additional links to speakers, and other live-bloggers.

Summarising the whole day

A common theme running through the day was that web accessibility is not just for disabled people; it is about usability and ensuring as wide an audience as possible can access the content.

Jeremy Keith also had a useful summary:

All in all, it was a great day of talks with some recurring points:

  • Accessibility is really a user-experience issue.
  • Guidelines for authoring tools are now more relevant than guidelines for content.
  • Forget about blindly following rules: nothing beats real testing with real users.

Jeremy Keith, Open Data and Accessibility, April 26, 2008

Searching for apps and commands by typing rather than clicking; is typing the new clicking?

Microsoft recently announced an add-on to Office 2007 that lets people search for commands by typing them in if they can’t find them in the new Ribbon user interface.

Office 2007 search for commands add-on lets you type the command you are looking for

I find it interesting that a number of interfaces are now offering “shortcuts” to mouse clicking everywhere.

While desktop search applications already provide this kind of convenience for finding files, such an interface for finding commands is quite interesting, to me.

I have noticed this more in products typically used by technical people.

For example, on the Mac at various places, such as the System Preferences, you can type for the settings you want to adjust and the application will highlight possible matches for you:

Mac OSX System Preferences allows you to type for a feature and it will highlight matches

KDE 4, a Linux desktop environment, has a Start-like menu (called Kickoff) where you can type the program you want to find:

Vista has a similar thing from its Start menu:

Various software development tools from the excellent Resharper add-in for C# in Visual Studio, to API documentation readers, all offer some way to type in the class, file, method etc you are looking for. And such features are sometimes an efficient alternative to clicking through large trees of information:

Resharper's handy type pop up allows you to start typing the name of a class you want to open

Someone wrote a very useful jQuery API guide where the main form of navigation is to begin typing the API feature you are looking for:

jquery-api documentation allows you to start typing for functionality and the api adjusts accordingly
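The core idea behind all of these type-to-find interfaces can be sketched in a few lines: filter a list of command names down as the user types. The command names below are made up for illustration:

```javascript
// Return the commands whose names contain the typed query, ignoring case.
function matchCommands(commands, query) {
  var q = query.toLowerCase();
  return commands.filter(function (name) {
    return name.toLowerCase().indexOf(q) !== -1;
  });
}

// Each keystroke narrows the list further, e.g.:
// matchCommands(['Page Margins', 'Mail Merge', 'Word Count'], 'mar')
// leaves only 'Page Margins'.
```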

The Office add-on for finding commands, while still in early testing, is a feature that targets more than just technical users, so it will be interesting to see how this works out.

Is Microsoft’s announcement a sign of failure of the Ribbon interface? I don’t think so; it may be more a reflection of the challenge of getting users who are so used to a well-established product to do things dramatically differently.

As people get more used to searching on the web, perhaps that metaphor is making its way back into more traditional user interfaces, and typing is no longer seen as a poor usability option compared to clicking around with a mouse.

(To clarify, I don’t expect, or want to imply, that typing/command lines etc should replace window managers and pointing/clicking interfaces; instead, there seems to be some appropriate situations where typing looks more efficient than clicking, often in places where the GUI was there because of the advantages of pointing and clicking. Or, at least, typing helps augment a GUI, reducing barriers.)

This reminds me of my first job (technically a placement year/internship at Northern Telecom in London, but it was such a good experience I consider it a proper job!) in 1996: looking at the HP-UX box on my desk (HP’s UNIX machine), trying to get it all set up, I asked the person opposite how to use the file manager GUI to do a certain thing; she just looked at me, smiled, and said, “just use the console; typing is a lot more efficient!”

Typing seems to have become the new mouse clicking…

IE8 meta switch switch!

A little while back the web development blogs were abuzz with Microsoft’s announcement that IE 8 will, by default, render in IE7 mode, so as not to “break the web.”

I also had a post on the implications of that meta switch.

Well, it seems that the IE team have reversed that decision: IE8 will now, by default, interpret web content in the most standards-compliant way it can.
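For sites that relied on the old behaviour, IE8 still honours an explicit opt-in to legacy rendering via the X-UA-Compatible meta tag (an equivalent HTTP header also works):

```html
<!-- Ask IE8 to render this page as IE7 would; omit the tag to get IE8's
     default, most standards-compliant mode. -->
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
```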

Continue reading