CSS Sprites are Stupid – Let's Use Archives Instead! (Firefox Demo)

[Image: a simple sprite sheet]

Update: Good news, everyone! A proper proposal for this kind of thing is already on its way. :)

CSS sprites are a nuisance

While CSS sprites offer nice performance benefits (fewer connections, less overhead), they are troublesome in every other respect. Packing lots of tiny images with different dimensions into one bigger image is fiddly and very hard – NP-hard, in fact. But that's the smallest issue. It doesn't need to be perfect after all. Deflate will happily squish all that extraneous empty space down to virtually nothing.

One of the real problems is CSS. It just isn't flexible enough to let you do everything you might want to do with your sprite sheet. For example, repeating parts of the image isn't possible. And if you want to display a sub region of an image in the upper right of some element, it only works if that sub region sits in the lower left of the sprite sheet. So, the best you can do is use an element sized to match the sub region and use background-position to slide the image around, but that usually means extra markup for something that should be very simple.

But it doesn't stop there. The real nightmare is maintenance. A year or two and several iterations later it becomes very hard to figure out which parts of the sheet are used. Where can you add one? Which one can be removed? How many places do you have to fix if you move this column a bit up? It all becomes a mess of fuzzy uncertainty.

CSS sprites are a huge time sink. They waste my time, they waste your time, and they also waste the time of millions of other front-end developers. It's about time to do something about it.

Archives to the rescue

I thought about the issue for a while and using archives was the first thing that came to mind. Here are the key features in a nutshell:

  • images are easy to update, add, and remove
  • no arranging involved
  • no artificial restrictions (they are just like usual images after all)
  • still a single file to download
  • easy automation

Just update your files, archive them anew, and you're good to go. It really can't get any easier than that. The automation part is also very intriguing. The content management system can, for example, check whether one of the CSS files has changed and, if so, aggregate them, minify them, and then build the archive from a list of the referenced images. It's very straightforward and requires very little processing power.
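To make the automation part concrete, here is a minimal sketch of such a build step, written in Java with the standard java.util.zip package. (The asset names are made up and the change detection is left out; this is just the archiving step.)

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipOutputStream;

    public class BuildArchive {
        public static void main(String[] args) throws IOException {
            // Hypothetical asset list; the entries end up in the archive
            // in exactly this order.
            List<Path> assets = List.of(
                    Path.of("favicon.ico"),
                    Path.of("style.css"),
                    Path.of("logo.png"),
                    Path.of("script.js"));

            try (ZipOutputStream zip = new ZipOutputStream(
                    Files.newOutputStream(Path.of("site.jar")))) {
                zip.setLevel(9); // strongest Deflate setting
                for (Path asset : assets) {
                    zip.putNextEntry(new ZipEntry(asset.getFileName().toString()));
                    Files.copy(asset, zip); // compresses the file into the current entry
                    zip.closeEntry();
                }
            }
        }
    }

As the comment hints, the order in which the files are written is the order in which they appear in the archive – which becomes relevant below.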

But there is more!

The fun doesn't stop there though. You can put everything into the archive! Favicon, scripts, style sheets, and all your (layout) images. Well, you can even add the document itself, but that won't be practical for most use cases and it also comes with some additional implications. So, we'll ignore that case for now. (You'll see what I mean when you see the example code.)

But think about it. Everything is in one archive and there is also the document (gzipped of course). That's only 2(!) files for people with an unprimed cache. It really can't get any better than that. Well, each content image adds another connection, but that's alright.

Additionally, you always get compression for your JS and CSS files. Of course you can gzip them, but that doesn't work for everybody. About 15% of the visitors of larger websites in the United States have gzip disabled – or more accurately: their Accept-Encoding headers are scrambled by proxies or stupid anti-virus software in order to turn it off. Pretty outrageous, isn't it? But wouldn't it be nice to make their lives a tad better? ;)

Solid vs. non-solid archives

While solid archives provide better compression, non-solid archives are better suited for this task. In a solid archive all files are compressed as one big data block, which comes with a few implications. First and foremost, you need everything up to a file's position in the archive before you can extract that one file. Additionally, you have to decompress all of that data even if you only want that single file.

In a non-solid archive all files are compressed individually, which means you can take a file as soon as it's ready and update the rendering. This also means we can influence the rendering by changing the order of the files in the archive. E.g. favicon first, then CSS, then the logo, then the supporting layout graphics, then icons, and finally the script stuff.

Another benefit of non-solid archives is the per-file checksums. If a download error occurs, you can a) detect it and b) resume at the right spot. Pretty nifty. Well, download errors shouldn't occur either way, but those poor 56k dudes will be happier.

Demo time!

Interestingly, Firefox supports JAR archives, which are basically just Zip files with a different file extension. It's some Netscape 4 (or so) leftover, which was meant to be used for signed pages. (A poor man's HTTPS or something? I really don't know.) Fortunately it's still there and it seems to work pretty well.

ZIP/JAR files are non-solid archives which use Deflate compression (just like GZip, PNG, or SWF), by the way. Each file gets its own checksum and a tiny header with the name of the file. So, theoretically, everything we need for progressive rendering is there. I haven't tested whether Firefox actually does that, though, since it isn't really important at this stage.
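In case you want to convince yourself that sequential extraction works, Java's ZipInputStream does exactly that – it walks from one local file header to the next and never touches the index at the end. A quick sketch (the URL is made up):

    import java.io.IOException;
    import java.net.URL;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;

    public class ProgressiveRead {
        public static void main(String[] args) throws IOException {
            URL url = new URL("http://example.com/test.jar"); // hypothetical archive
            try (ZipInputStream zip = new ZipInputStream(url.openStream())) {
                ZipEntry entry;
                // Each entry is ready as soon as its header and Deflate
                // stream have arrived - no need to wait for the whole file.
                while ((entry = zip.getNextEntry()) != null) {
                    byte[] data = zip.readAllBytes(); // decompressed bytes of this entry
                    System.out.println(entry.getName() + ": " + data.length + " bytes");
                }
            }
        }
    }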

But enough of that. Check the quick and dirty demo! The demo shows one image, one CSS image, and one favicon from the JAR. (The favicon doesn't work in the online version, since this site's favicon was already loaded – it works fine in the offline version though.)

Note: If you have NoScript installed set noscript.forbidJarDocuments to false. That was a workaround for some vulnerability which was fixed many moons ago with the release of Firefox 2.0.0.10.

Let's take it apart

It uses the jar: URI scheme, which is surprisingly simple. It starts off with "jar:" followed by the relative or absolute URL of the JAR, followed by an exclamation mark and a forward slash ("!/"), followed by the path of the resource inside the JAR.

For example:

<img src="jar:test.jar!/img1.png" alt="img1" width="32" height="32"/>

The same with an absolute URL:

<img src="jar:http://kaioa.com/b/0907/test.jar!/img1.png" alt="img1" width="32" height="32"/>

Of course you can also use directories inside that JAR. E.g. it could look like:

<img src="jar:example.jar!/images/icons/img1.png" alt="img1" width="32" height="32"/>

The CSS image was defined like this:

background:transparent url(jar:test.jar!/img2.png) 0 0 no-repeat;

And the favicon like this:

<link rel="shortcut icon" type="image/x-icon" href="jar:test.jar!/img3.ico"/>

As you can see, it's pretty easy stuff. However, there is one catch: the JAR file needs to be served with the right MIME type. (This isn't required if it's loaded locally, though.) The MIME type needs to be either application/java-archive or application/x-jar. The file extension doesn't matter, by the way – only the MIME type is important.

Here is how to set it for Apache (.htaccess):

AddType application/java-archive jar

Refer to this guide if this step wasn't elaborate enough for your taste. The Live HTTP Headers extension might also be helpful, by the way.

Moving forward

There isn't really anything like graceful degradation in this case. The only thing you can do is load a super simple vanilla CSS and then override it with the one from the archive if possible. That really doesn't look like a good option.

What we really need is support for this kind of archive loading in all modern browsers. It doesn't need to be the jar: URI scheme, but that one works and it's also a de facto standard used in a bunch of products. Additionally, the format is suitable for this kind of task and all potential submarine patents expired many, many years ago.

I don't really know to which spec this kind of thing belongs. HTML5? CSS3? None of those? Either way I want to see it as soon as possible everywhere. It's simple stuff and there are huge benefits. What's not to love about this? :)

Download: jardemo.zip (2kb)

Comments

Excellent!

Well spotted. I fear this may be Firefox-specific and something to do with their chrome/plugin system? I can confirm it doesn't work in Safari, which isn't surprising. I wonder how long it would take the Internet Explorer development team to implement something similar – or if we could perhaps use some kind of multi-part document, the same way Safari and IE support saving a page to a multi-part web archive?

interesting

The next time I do some work for someone, I will try to employ this technique... :)

Cheers

re: Excellent!

>I wonder how long it would take the internet explorer development team to implement something similar[...]

Going by their track record (CSS 1.0 and PNG)... I would say... 10 years. :]

re: interesting

>I will try and employ this technique

It isn't ready for prime-time yet (it only works with Firefox!), mind you.

"hard"

Good job solving a problem that doesn't exist.

You start off on a false pretense:

"Putting lots of tiny images with different dimensions into one bigger image is fiddly"

1. Who says they are "tiny" and so what if they are? (ever heard of ZOOM? WOW! Now they are BIG!)
2. If the images have different dimensions you are DOING IT WRONG
3. The most images you should ever have in a sprite is 3 or see the last part of #2.

"Solution" Sucks

My tests:

FF3 = works, and so I assume also FF2?

IE6 = fails

IE8 = fails

Opera (latest) = fails

Chrome (latest) = fails

Safari (latest) = fails

re: "hard"

So, Google is doing it wrong? :)

re: "Solution" Sucks

This is obviously a proof of concept. Also note that the title points out that it's Firefox-only.

re: "hard"

Why would you never have more than 3 images in a sprite? Why would you never have images of varying dimensions in the same sprite?

It sounds like you're thinking of CSS Sprites as restricted to only navigational elements (normal, active and hover states) rather than for more general use. This article is trying to solve the problem introduced by the latter, as even small and simple designs can easily have a dozen or more images in a single sprite.

re: re: "hard"

>Why would you never have more than 3 images in a sprite?

Fewer connections. For each connection you get the handshake, you send the request, you send cookies perhaps, then you get the response header and the data, bla bla... and there is also the problem that TCP always starts off slow (thus 1x20kb is quite a bit faster than 20x1kb – even without all that extra junk).

>It sounds like you're thinking of CSS Sprites as restricted to only navigational elements (normal, active and
>hover states) rather than for more general use.

Huh? You're the one who suggests that there should be at most 3 images in a sprite sheet.

Very cool

This would be wonderful if it were more broadly supported by the browsers. HTTP requests are the worst culprit in slow-loading pages, especially as pages become more resource-intensive (i.e. more supporting files), and considering browsers typically only download 2 files concurrently (the 2-connection limit) and block when they hit a script resource – so you're down to 1 connection for a moment...

Caching Implications???

Many of your complaints can be mitigated – most often by breaking that one sprite that does everything into a reasonable compromise: reduce the number of requests by grouping like things together, and don't try to put all 100 images, buttons, text, repeating backgrounds, and other things into the same image file.

"You can put everything into the archive! Favicon, scripts, style sheets, and all your (layout) images."

For all the requests you'd save, this model sounds like there would be many caching issues to contend with. Even just sticking to images [and sprites that try to do everything in a single file are prone to this as well]: if you tweak just one parameter in one script file, you then need to push out a new jar file and have your users redownload the entire build. For your 3-byte string change you've now negated what could be a couple hundred k.

I'd also like to see some real world responsiveness comparisons here. With issues like page rendering, script blocking and the like I don't know if it would really be advantageous to jar up all the JS files together, and even more so jar up all the css, js and image files together and have a browser wait for the larger single download before being able to render much of anything.

Jar files [like data URIs] may someday have a real place in creating well performing web sites, but there's a time and a place for everything - including sprites.

- Chris Casciano

re: Caching Implications???

Yes, with sprite sheets there are always trade-offs. You always need to find the balance between perfection and maintainability. With archives you won't have to think about this aspect anymore.

>[...]if you tweak just one parameter in one script file you then need to push out a
>new jar file and have your users redownload the entire build.

Changes are rare and most users have an unprimed cache anyways. With today's fat sites that puny cache doesn't last very long. The cache does still improve loading times when the user goes from one page to the next though.

>[...]and have a browser wait for the larger single download before being able to render much of anything.

As I said it's technically possible to do progressive rendering. Each file can be decompressed as soon as the deflate stream is complete (there is one for each file in the archive). And you can also change their order if you like.

Progressive rendering

Actually, zip files, and thus jar files, store a "central directory" at the end of the file which is used to find the data streams inside the archive. You cannot reliably scan a zip file from the start and hope to extract its contents; you need to start from the end to be sure to get it right. (There are inconsistencies which prevent you from seeking from one header to the next.)

Thus, no browser will display anything progressively from a jar file. Sorry.

re: Progressive rendering

Yes, there is an index at the end. However, it isn't needed to decode the files. Over in some older article I explain what a Zip file looks like. You can decompress any file as soon as you get the end of its Deflate stream. In fact, that's basically what I did over there – I went past the file header and ripped the "naked" Deflate stream out of the file. I didn't even bother looking at the index.
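If you're curious, the scanning part is only a handful of lines. A rough sketch (it assumes the compressed size is present in the local file header, i.e. no data descriptor, and that the entry is Deflate-compressed and smaller than the output buffer):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.zip.DataFormatException;
    import java.util.zip.Inflater;

    public class NakedDeflate {
        public static void main(String[] args) throws IOException, DataFormatException {
            ByteBuffer buf = ByteBuffer.wrap(Files.readAllBytes(Path.of("test.jar")))
                                       .order(ByteOrder.LITTLE_ENDIAN); // zip is little-endian
            if (buf.getInt(0) != 0x04034b50) // local file header signature ("PK\3\4")
                throw new IOException("no local file header at offset 0");
            int compressedSize = buf.getInt(18);      // offset 18: compressed size
            int nameLen = buf.getShort(26) & 0xffff;  // offset 26: file name length
            int extraLen = buf.getShort(28) & 0xffff; // offset 28: extra field length
            int dataStart = 30 + nameLen + extraLen;  // fixed part of the header is 30 bytes

            // nowrap=true means raw Deflate - zip stores the stream "naked",
            // without the zlib header and trailer.
            Inflater inflater = new Inflater(true);
            inflater.setInput(buf.array(), dataStart, compressedSize);
            byte[] out = new byte[1 << 20];
            int n = inflater.inflate(out);
            System.out.println("decompressed " + n + " bytes of the first file");
        }
    }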

Zip streams can contain data

Zip streams can contain data fields after the deflate stream, too, and the layout of these extra data fields is inconsistent between implementations. These and other things will break your scanning. You can try to work around them, but you will end up with a big mess.

re: Zip streams can contain data

From the 6.3 specs:

  Overall .ZIP file format:

    [local file header 1] <- these headers have a signature
    [file data 1]
    [data descriptor 1]
    . 
    .
    .
    [local file header n]
    [file data n]
    [data descriptor n] <- we are only interested in the stuff up to this point
    [archive decryption header] 
    [archive extra data record] 
    [central directory]
    [zip64 end of central directory record]
    [zip64 end of central directory locator] 
    [end of central directory record]

As you can see there is no problem at all.

Edit:

Just to be clear: it doesn't need to be ZIP, nor does it need to use the jar: URI scheme. This stuff just happened to work in one browser and it also seems to meet all requirements. This is only a proof of concept which shows how it could work and what a possible implementation could look like.

c'mon, why is google held as the standard?

What is popular is not always right; what is right is not always popular.

Just because Google does it doesn't mean it's right (or wrong).

Just like a recent article where they advised against CSS image replacement (despite the fact that Google does it themselves).

Source: http://yoast.com/google-speed-sprites/

re: re: re: "hard"

>Huh? You're the one who suggests that there should be at most 3 images in a sprite sheet.

No, he didn't. He was asking the parent poster why he thinks there should be at most 3 images per sprite.

>Why would you never have more than 3 images in a sprite?

See?

.. which is precisely why it

.. which is precisely why it sucks. it isn't much of a solution if it's targeted to a minority of users.

i'm not the original poster of the comment you replied to, btw

re: c'mon, why is google held as the standard?

Nah... all the front-end guys and gals are over at Yahoo.

I just picked Google for this particular example, because I really like that one. They flush the results page early. And if you take a look at the waterfall diagram, you can see that the sprite sheet finished downloading even before the complete markup of the page arrived.

This gives us the really impressive "BANG! - and it's there" effect we all love to see.

re: re: re: re: "hard"

Oh whatever. :)

re: .. which is precisely why it

And once more... this is just a proof of concept.

Resource packages to the rescue!

Hi,

Interesting article. For the past few weeks here at Mozilla, we've been working on a proposal called "resource packages", which are similar to what you are doing here, but with the additional bonus of having a fallback mode for other browsers. I have reached out and met with Steve Souders from Google, and we are contacting the other browser makers once we have the proposal in a bit better state.

You can view an early draft here: http://limi.net/articles/resource-packages

— Alexander Limi · Firefox User Experience

re: Resource packages to the rescue!

Excellent news!

Tell Steve I really like his new book. It's plain awesome. ;)

Edit: The proposal is very clean and I really like the way it works. It's much better than my quick and dirty hack.

Edit2: And yes, the index really is at the end. However, if you intend to load all files anyways you won't need it. Each file name is stored twice (once in the local file header and once in the index). The purpose of the index is to speed up seeking. E.g. if the ZIP contains thousands of files and you want to access a specific one, you won't have to go from one local file header to the next until you find it. Instead you jump to the correct location right away with the help of the index.
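Incidentally, Java's standard library exposes both access patterns, which illustrates the difference nicely (a sketch; test.jar and img1.png are from the demo):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipFile;
    import java.util.zip.ZipInputStream;

    public class TwoAccessPatterns {
        public static void main(String[] args) throws IOException {
            // Random access: reads the central directory at the end of the
            // file and jumps straight to the requested entry.
            try (ZipFile zf = new ZipFile("test.jar")) {
                ZipEntry e = zf.getEntry("img1.png");
                System.out.println("via index: " + e.getSize() + " bytes");
            }

            // Sequential access: ignores the index and simply walks from one
            // local file header to the next - a partial download is enough.
            try (ZipInputStream zin = new ZipInputStream(new FileInputStream("test.jar"))) {
                for (ZipEntry e; (e = zin.getNextEntry()) != null; )
                    System.out.println("via scan: " + e.getName());
            }
        }
    }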

"Proof of Concept"

If this is proof of concept, perhaps the article shouldn't arrogantly mock the present method that actually works across browsers.

re: "Proof of Concept"

I "arrogantly" mocked it, because it's needlessly complicated.

This is used internally by Firefox, and to distribute extensions

It's got nothing to do with signed pages or what have you – otherwise it would long have been deprecated and removed. It's simply there because it's used internally by Firefox (and all other XUL apps), and to bundle extensions (.xpi files are actually zip files, just like .jar).

re: This is used internally by Firefox, and to distribute extens

http://www.mozilla.org/security/announce/2007/mfsa2007-37.html

"The jar: URI scheme was introduced as a mechanism to support digitally signed web pages, enabling web sites to load pages packaged in zip archives containing signatures in java-archive format."

Edit: That Netscape bit might be complete bollocks though. I'm not really sure where I got that from. :)

Well, I have actually written

Well, I have actually written zip decompressors, and let me tell you, there ARE problems. A lot of apps DON'T follow the Zip specs properly. If you try to decompress a zip file without consulting the central directory, you WILL run into problems.

re: Well, I have actually written

>A lot of apps DON'T follow the Zip specs properly.

7zip does and so does kzip/zipmix. If progressive loading fails, you can still just load the whole thing in one go. Or, well, fall back to the non-archived files (see Alexander Limi's proposal).

It would be better to use a

It would be better to use a clean new format. Zip is kind of a mess in many ways. But that leaves you with the problem of not having tool support.

Sadly, none of the existing popular archive formats are really optimal for this. RAR is easy to load sequentially, but it's proprietary and messy in its own way. 7z is an utter nightmare to parse correctly. Tar can only be compressed solidly. And while XAR would work well, it is immature and seems to be kind of abandoned. It doesn't even have a spec.

Re: new compression format

No one wants a new compression format.

> Tar can only be compressed solidly.

What do you mean by that? Of all the formats listed, tar is extremely simple (I've written parsers for it before and they fit in one page of C code) and it's also meant to be streamed (its name is "Tape Archive", after all!), so you can definitely get to files before the whole archive is loaded. The one annoyance is that it has no index, so the software would have to build an in-memory index (ever so slightly more complicated) or run through the archive each time looking for a file (possibly slow if there are a lot of lookups).
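To back that up with something concrete, a bare-bones walker really is just a header read plus an offset calculation. A sketch (in Java rather than C, assuming a plain uncompressed ustar archive with no extensions):

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class TarWalk {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream("site.tar"))) {
                byte[] header = new byte[512]; // tar is made of 512-byte blocks
                while (true) {
                    in.readFully(header);
                    if (header[0] == 0) break; // zero block marks the end of the archive
                    String name = new String(header, 0, 100).trim(); // NUL-padded name
                    long size = Long.parseLong(new String(header, 124, 11).trim(), 8);
                    System.out.println(name + ": " + size + " bytes"); // size is octal ASCII
                    long padded = (size + 511) / 512 * 512; // data is padded to full blocks
                    in.skipBytes((int) padded);
                }
            }
        }
    }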

-David

JAR URI scheme not even registered

A clean new format might be good, or at least reliable documentation. The JAR URI scheme is not even registered at http://www.iana.org/assignments/uri-schemes.html.

Support

Considering pretty much all browsers (new, and many older versions) support gzip, would it not be better to use that to compress images into an archive file? It looks like FF 3.5 has support for JAR files, but no other browsers do.

GWT ImageBundle

I'm currently using GWT ImageBundle and it has been working quite well with almost no overhead :)

http://code.google.com/p/google-web-toolkit/wiki/ImageBundleDesign

re: Support

Most of us already do use GZip. E.g. on this site the document itself is gzipped and so are the aggregated JS and CSS files. I outlined the effects in some older article. The thing is... Gzip only works on single files. So, it doesn't help you with the number of connections.

The history of jar archives.

First, a few definitions: JAR is "Java Archive", basically how Java code is distributed in library format. When you compile Java from source code, you create binary class files, which then (for convenience's sake) can be essentially zipped up into one library, a jar file. A jar file with some special files in a special directory structure is a WAR file, or Web Archive – these are the files that Java application servers use, so you can compile a war file on your local machine, deploy it to your app server, and it'll run. Another extension of war is an EAR file, an Enterprise Archive.

Anyways, I digress. Java support was originally added in Netscape Navigator 2.0. Netscape 3.0 updated the version of Java and included a technology called "LiveConnect", which allowed developers to call Java code from JavaScript – basically, if someone clicked on a hyperlink, you could (for example) start a plugin playing. This sounds fairly basic, but was a very big deal back in the day.

Now, there were some shortcomings of LiveConnect, mainly that anything you really wanted to do with Java/JavaScript you couldn't, because of the security sandbox. No opening of sockets, no writing to the filesystem; there were a lot of limitations. (As an aside, this was right around the time that ActiveX really got crucified because of its weak code-signing, allowing any plugin to basically do anything. Here's where you saw Microsoft really start to fail, security-wise.)

So, Netscape came up with an idea for "Communicator" 4.0. If you included your Javascript inside a signed jar (basically, adding a certificate to the file) you could perform actions outside of the security sandbox. Save files, read files, run applications on your local machine, etc. Decent idea, but this was when Netscape essentially ran out of time. The implementation was half-assed, and it was difficult to sign and deploy code in a timely manner. So it fell out of practice, and my guess is that Firefox has that code in their codebase more as legacy support than anything else.

In any case, there are a few reasons why this isn't such a great idea; you already know about the lack of support from any other browser, the other reasons are that jar files take a nontrivial amount of time to uncompress and save onto the filesystem. One might argue that the round-trip time of an HTTP connection takes more time, but if you're pipelining requests the overhead isn't that much more. There are so much easier ways to achieve a faster round-trip time, especially if you're dealing with a ton of images on one page.

Resource package issue

As I already wrote on Twitter there is one issue with the trailing index. While it's possible to decode the files progressively without having the index, you don't know which files need to be downloaded as usual since you don't know yet which files are overridden by the archive.

This could be addressed by an index file at the very start of the archive, but this approach looks pretty silly. Blocking the download of other resources until the archive finished downloading also doesn't look like a good option.

It's probably a good idea to use a new, simple format. That way the encoding of the file names could also be addressed (always UTF-8).

By the way, the reason why Zip's index is at the end is that it allows files to be added to the archive without rewriting it completely. That's surely beneficial in general, but it doesn't really matter for this use case, since these archives will always be relatively small.

CSS Sprites done right

CSS sprites are a means to overcome the "only 2 requests per server" barrier in many browser and server implementations of HTTP (the HyperText Transfer Protocol, used to transmit HTML, CSS, JS, images, and even JARs).

So if you put all the images into your archive – why not put all the other stuff into it, too? A page could be 1 file alone, containing everything it needs. Why don't we do that? Because it collides with another barrier: caching.

Another hindrance is that your effort only works in 1 browser – and is somewhat unlikely to be adopted by others. So CSS sprites are what we'll be stuck with for a long time – unless SVGs with different layers get supported in all modern browsers, so we can stick every single image inside a layer.

So why don't we find techniques to improve the use of CSS sprites? One could, for example, use repeat in at least one direction – the PNG format will shrink vertical or horizontal lines of a single color to less than 10 bytes, so figure out which direction is less used and put the other one to use in your sprite – or figure out how to use the other direction without the repeat.

Greetings, LX

HTTP 1.2

I just had a new idea. Since CSS sprites are used to overcome a limitation of the HTTP specification, why not improve the latter: HTTP 1.2 could feature multipart requests – and since most browsers have the HTTP implementation in a single lib or module, it might be just as easy to support this even in older browsers.

Greetings, LX

jar: protocol as ZIP viewer and compressed site handler

I recently stumbled upon the jar: protocol in a Mozilla bug report; someone used it to refer to test cases uploaded in a single ZIP file. Sun came up with the protocol years ago to access Java archives, http://docs.sun.com/source/819-0913/author/jar.html, but I don't think it's ever been part of a browser standard.

jar: lets you view the index and individual files of a ZIP file within the browser, which is a hella nifty feature. Just put jar: in front of the URL and !/ on the end. This always works for files you've downloaded (file:/// URLs), and it works for archives on the net, although either the web server has to send the right MIME type or you have to set network.jar.open-unsafe-types to true in Firefox's about:config.

You can browse entire compressed web sites using this technique, which is pretty exciting for distributing courseware and Wikipedia slices, as in the OLPC project. Here's a complete Bible; all the links seem to work:
jar:http://wiki.laptop.org/images/d/df/Bible-en.xol!/bible-en/files/index.htm

It breaks on password-protected ZIP files, it doesn't handle accented characters in the archive's name or in filenames inside the archive, it gets confused saving directories out of the archive, and Firefox doesn't recognize jar: consistently as a protocol so settings for pop-ups or cookies don't work well. Still, it's a very, VERY cool ancient feature! I filed a dupe bug to expose this as a "View archive in Firefox" option. -- skierpage

Yes, this was originally

Yes, this was originally introduced by Netscape to have signed JavaScript similar to signed Java. Signed Java code can ask you for permission to access your local file system and similar - and Netscape implemented the exact same feature for JavaScript. That code in Firefox is considered legacy that is worth getting rid of (http://groups.google.com/group/mozilla.dev.platform/browse_thread/thread...).

But whatever the original purpose of the jar: protocol, it is used by Firefox and extensions to speed up file access, so it will definitely stay. It is also occasionally used by web applications as a speed-up measure though there the lack of browser support is usually problematic. So I am really looking forward to resource packages.

-- Wladimir Palant

IE dll's

I think MS uses DLLs in Internet Explorer to extract images – check the source of a 404 page in IE:

res://ieframe.dll/info_48.png

But the security may only apply to that (particular) DLL. As a workaround, it may be possible to embed your own images in a DLL called ieframe.dll – or one of the other safe havens – and reference it on a different server path. Off to do some experiments,

Paul, Webdistortion

It don't work in Safari,

It don't work in Safari, Chrome, IE7, IE8... so it means we don't use it.

re: It don't work in Safari,

Of course you shouldn't use it anywhere. Look, this is merely a proof-of-concept which was used to outline the benefits of the archive approach.

Automatic CSS Sprites creation will be the winner

I think in the future there will be a lot of tools for automatic sprite combining. Right now there are several that are more or less useful, e.g. SpriteMe from Steve Souders ( http://spriteme.org/ ) or Web Optimizer ( http://code.google.com/p/web-optimizator/ ).

re: Automatic CSS Sprites creation will be the winner

These tools surely can help, but they can't address all issues. E.g. you can't put icons and gradients in the same file, because CSS doesn't allow you to repeat (or stretch) a sub region of an image. You also can't mix and match different image types.

You'd also need such a tool in whatever language your projects are using. Otherwise you can't integrate it seamlessly.

Resource packages remove all these restrictions and they are also far easier to create (in any language). Additionally, you can also put CSS, JS, and other files into them. Whatever you may need in the future - you will be able to use this infrastructure. That's what makes them so interesting.

(I hope I'll get around to writing a followup [or two] soonish.)

re: re: Automatic CSS Sprites creation will be the winner

This all is a very good idea, but before we have at least 50% coverage in browsers this can't be implemented in production. Right now it works only in Fx – that's about 25% – so data:URI + mhtml: looks better in this scope.

Keep thinking the good thoughts

Don't let the fools bring you down. I've heard many ideas like this over the last decade – good ideas from bright minds – and rather than trying to help or nurture the idea and improve the world, a few people just tell you why it won't work. Google has become what it is today, in part, from people asking "how can we...?" instead of saying "we can't because...".

Like so many bits of the net, there are so many fools who didn't read the article and then want to post obvious problems that you addressed in the article. I am not a front end designer, however I understand enough of the problems to appreciate you trying to find a better solution.

Good enough is not great, and until we have great ideas, methods, and items, we need to keep improving the good-enough ones. IMHO Microsoft is guilty of leaving concepts at good enough and not improving them to great concepts, while Apple and Google seem to keep striving for better. If you reply to my opinions of Apple, Google, or Microsoft, we'll all know you either didn't read my entire post or are an idiot, as I'm telling you now: stick to posting facts. I used them only as examples, so don't start a flame war if you disagree with my examples.

Think to the future: web apps published in a single archive seem like a wise idea to me; it will help when managing different versions, or hosting them side by side. The push for higher-level programming languages, cross-platform execution, XML, and several other technologies is to make them more robust and less dependent on a strict format. Using CSS sprite sheets IMHO is a cute trick, and can be made to work, but it's a hack in most cases and should be replaced with a better standard, like resource bundles, archives, or any number of other ideas. For years I used vi and edit/notepad to write HTML because the tools – WYSIWYG editors – wrote such awful and bloated HTML. On a 14.4k connection it made a huge difference. With the cell phones of today having faster speeds than that, it's pointless now; however, with larger web sites, the need for clean, well-written sites has remained.

Too bad web standards don't have an End of Life of, say, 10 years. It's rather silly to keep supporting old methods forever, and in some cases it only makes it harder to build new hardware, software, and standards.

Thanks for listening to an old man rambling.
John Stephens
