I initially wrote this post for our corporate Dextrose blog at http://blog.dextrose.com, but due to its technical nature, I republished it here to get it a bit more attention. Enjoy!

Since my Google tech talk went up (http://www.youtube.com/watch?v=_RRnyChxijA), there has been quite some controversy about the decision to render scenes in the Aves Engine with plain old HTML instead of the popular new Canvas tag. I wanted to elaborate a bit on the reasoning behind all this.

Canvas doesn’t play well with images

Unfortunately, a typical isometric scene in the Aves Engine happens to use lots and lots of different images, and with Canvas there's friction in the API when working with images: to draw an image to the canvas, you first need to construct it via DOM methods (new Image()), set its source, wait until it has loaded, and only then render it to the canvas. This step is painful even if you cache the images later on, and, most importantly, it horribly slows down the Canvas interface, whatever the reasoning behind it may be.
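The preloading dance described above can be sketched roughly like this. The cache and the `loadSprite` helper are illustrative names, not actual engine API:

```javascript
// Minimal sketch of the preloading step Canvas forces on you.
const spriteCache = {};

function loadSprite(src, onReady) {
  if (spriteCache[src]) {        // already decoded: reuse it immediately
    onReady(spriteCache[src]);
    return;
  }
  const img = new Image();       // must go through the DOM first
  img.onload = function () {
    spriteCache[src] = img;      // pay the load cost only once
    onReady(img);                // only now is ctx.drawImage(img, x, y) safe
  };
  img.src = src;
}
```

Every single sprite has to take this asynchronous detour before it can appear on the canvas at all.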

In fact, even with precached DOM representations of the images, our basic tests have shown that Canvas-based rendering of an isometric scene is almost always 2-3x slower than just dumping out a big HTML string with class names linked to external stylesheets. External stylesheets simply seem to be a much better cache when combined with HTML output than the DOM-to-Canvas path.
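The "big HTML string" approach we benchmarked against looks roughly like this sketch (the `tile` class and markup shape are made up for illustration): the string itself carries only positions and class names, while the external stylesheet supplies the background images and lets the browser handle the caching.

```javascript
// Sketch of the HTML-string approach: every tile becomes a div whose class
// names map to background images defined in an external stylesheet.
function renderTiles(tiles) {
  const html = [];
  for (const t of tiles) {
    html.push('<div class="tile ' + t.type + '" style="left:' + t.x +
              'px;top:' + t.y + 'px"></div>');
  }
  return html.join('');
  // caller then does a single write: viewport.innerHTML = renderTiles(tiles);
}
```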

Canvas-based redrawing is painful

There's a lot of motion going on in games based on the Aves Engine. They're mostly full screen (or at least extended to the window size) and feature character sprite animations and transitions. Say a character moves 20 pixels to the left. It's easy to clear the whole canvas and rerender it, but also extremely slow. Implementing your own partial redrawing system, however, is extremely difficult. Here's roughly what the engine would need to do:

  1. Compute the outer boundary of everything that changed, i.e. the old character position and the new one combined into a single rectangle
  2. Redraw this region with all elements not in motion during this redraw – characters, objects etc. The engine needs to keep track of the positions of all elements in the viewport at all times.
  3. Now that the region is cleared, place the character on top again at its new position
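The steps above can be sketched as follows (the rectangle shapes and names are illustrative, not engine code):

```javascript
// Step 1: the dirty region is the union of the old and new character rects.
function unionRect(a, b) {
  const x = Math.min(a.x, b.x);
  const y = Math.min(a.y, b.y);
  return {
    x: x,
    y: y,
    w: Math.max(a.x + a.w, b.x + b.w) - x,
    h: Math.max(a.y + a.h, b.y + b.h) - y
  };
}

function intersects(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// Steps 2-3: clear only the dirty rect, then repaint every tracked element
// that touches it, back to front, with the moving character painted last.
function redrawDirty(ctx, dirty, elements) {
  ctx.clearRect(dirty.x, dirty.y, dirty.w, dirty.h);
  for (const el of elements) {
    if (intersects(el, dirty)) ctx.drawImage(el.img, el.x, el.y);
  }
}
```

Even in this toy form you can see the bookkeeping involved: the engine must know the bounds of every element at all times just to answer the `intersects` question.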

This should give you a rough idea of the steps involved. I'm not saying it is impossible – this is in fact how most efficient rendering engines do redraws. It's just very, very time-consuming and difficult, and we haven't had the chance to explore it yet. The browser rendering engines, on the other hand, already do this work for us, which is really convenient for the time being.

Action surfaces

If you have seen one of our demonstrations or videos, you have also seen one of the most distinctive features of the Aves Engine: our action surfaces. They allow you to place generic HTML content onto any kind of surface, transformed into the isometric projection. This works thanks to the ability to apply 2D matrices to elements via CSS Transforms (WebKit, Gecko, Opera) or the matrix filter (Trident [IE]).
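As a hedged sketch of what that transformation looks like: the coefficients below are the classic 2:1 isometric values, an assumption for illustration rather than the engine's actual math.

```javascript
// Push a DOM element into the isometric plane with a 2D matrix, covering the
// vendor-prefixed CSS Transform properties and the Trident matrix filter.
function applyIsoTransform(el) {
  const m = 'matrix(1, 0.5, -1, 0.5, 0, 0)'; // a, b, c, d, tx, ty
  el.style.webkitTransform = m; // WebKit
  el.style.MozTransform = m;    // Gecko
  el.style.OTransform = m;      // Opera
  // Trident (IE) takes the same matrix through its filter syntax; note that
  // M12/M21 are swapped relative to the a, b, c, d ordering above.
  el.style.filter = "progid:DXImageTransform.Microsoft.Matrix(" +
    "M11=1, M12=-1, M21=0.5, M22=0.5, sizingMethod='auto expand')";
}
```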

Action surfaces obviously also need to be considered in the layering of objects. If a character is walking in front of a wall, the wall's action surface needs to stay behind the character. With HTML-based rendering this is pretty easy, as we simply use z-index to control the layering, and the action surface lives in the actual container element.
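In a sketch, the z-index layering can be as simple as this (the rule of deriving depth from the projected y coordinate is illustrative, not necessarily the engine's actual one):

```javascript
// Depth-sorting sketch: in an isometric scene, an element that projects lower
// on the screen is closer to the viewer, so its z-index can be derived
// directly from the projected y coordinate.
function assignDepths(elements) {
  for (const el of elements) {
    el.node.style.zIndex = String(Math.round(el.screenY));
  }
}
```

Because characters and action surfaces are all just DOM elements in the same container, this one rule layers them against each other for free.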

With Canvas, however, this gets pretty tricky. First of all, there's no way to render HTML content (or a snapshot of it) to a canvas. Taking screenshots of portions of the screen and drawing them to a canvas was possible some time ago in Firefox – and I believe still is for XUL – but it's disabled in a web context for security reasons (you could snap the contents of file inputs, etc.). That leaves only one apparent choice:

  1. Collect the action surfaces that need to be displayed in the current render
  2. Count how many distinct layers you would need
  3. Split the Canvas render into multiple <canvas> elements, one per layer, and slide the DOM-based action surfaces in between the partial renders
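The three steps above could be sketched like this (the layer-stack structure is hypothetical, not something we have built):

```javascript
// Sort the action surfaces by depth, then alternate transparent <canvas>
// slices and DOM surfaces so each surface sits between two partial renders.
function buildLayerStack(actionSurfaces) {
  const sorted = actionSurfaces.slice().sort(function (a, b) {
    return a.depth - b.depth;
  });
  const stack = [{ kind: 'canvas' }];   // everything behind the first surface
  for (const s of sorted) {
    stack.push({ kind: 'surface', surface: s });
    stack.push({ kind: 'canvas' });     // slice rendered above this surface
  }
  return stack; // n surfaces => n + 1 transparent canvas layers
}
```

Every extra action surface costs another canvas element, and the scene content has to be partitioned across all of them on every render.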

Thankfully, canvas supports see-through transparency, but this is still a pain nonetheless. I've never seen a system like this implemented and working. We'd be happy to be the first but, again, we haven't tackled it for the alpha version of the engine.

Again, we're still experimenting all the time with possible rendering improvements, and this post was mainly meant to give you some insight into how we do R&D and base our assumptions on actual tests. For us, it's not so much about the technology – it's about delivering a truly sophisticated gaming experience. We'll do our best to squeeze the most out of the open web stack, and we look forward to jumping on Canvas when the time comes!
