Phaser 4 Dev Log 7

Published on 26th May 2020

It's been a month since the last Dev Report, so I felt it was high time I updated everyone on progress. Aside from taking some time out to finish and release Phaser 3.23 I have been laser-focused on Phaser 4 development. Anyone tracking the repo will have seen over 520 commits since Dev Report 6, which is a lot of ground to cover. I'm not going to be able to detail everything that has happened, but I'll certainly highlight the most important and interesting changes.

Farewell Nano

The most important change is that I'm no longer working specifically on Phaser 4 Nano. All development is taking place in the @phaserjs/phaser repo, and I will mothball the Nano repo when I get closer to release, to avoid any confusion. The reason I'm not calling it Nano any longer is that there's no longer any need to differentiate between v4 and v4 Nano.

The whole point of Nano was that it had a vastly reduced feature set in order to achieve nice and compact build sizes. Yet, due to the modular way in which I've built v4, as long as you only pick a handful of Game Objects, you'll get a "Nano" build by default anyway!

Every last piece of v4 is a module, most of which work entirely independently, without any hooks elsewhere in the API. As a result, all of the initial work I undertook making sure build sizes were tiny still holds today, even though there are now far more features available than ever before.

I have vastly overhauled the WebGL renderer, restructured how Game Objects work and inherit from each other, added in World events, tightened up lots of core areas and still, due to the design decisions made, it's perfectly possible to build the full core WebGL system in under 12 KB (min / gz) and the Canvas one in under 8 KB.

It's very nearly at the point where the core API is fixed. This means that no matter what extra features are added, they will never increase the size of a core build. Of course, the more types of Game Object you add, and the more features such as tweens and shaders you use, the bigger your build will become. Yet, unlike previous versions, where "everything and the kitchen sink" was included and it was up to you to remove the parts you didn't want, this time around it's a much more logical approach: only include the parts you need from the beginning, and let your build tools handle the rest.

Multiple Entry Points

A few weeks back I restructured the way Phaser 4 is built so that it generates multiple entry points. This took some serious Rollup kung-fu, and I had to write a couple of custom plugins to achieve it, but the end result was well worth it! Multiple entry points are also known as Direct Imports, an emerging pattern that is especially important for libraries providing many independent utility functions, as it allows users to import independent parts of the library from separate files. Does that sound like how v4 is structured? Good!

Direct imports have a few key advantages, the most important of which is that bundlers need to analyze less code, which of course makes bundling faster. It means it doesn't matter if I drop thousands of new modules into the v4 API: the bundler at your end only needs to look at the modules you've imported, because each one of them is a unique entry point. Lukas Taegert wrote a brilliant post about this, which I would urge you to read; however, I've copied the most pertinent part below:

"It can be very beneficial to our users if we provide direct imports for independent parts of our library. One way of doing this could be to just distribute the source files together with our library and instruct our users to import from there. This can lead to nasty issues, though, if different parts of the users’ code import our library in different ways. Imagine one module imports the upper function from "fancy-case" while another imports it from"fancy-case/src/upper". Even though it is technically the same code, these are now two very distinct functions and upper will end up twice in the user's code.

This may not sound too problematic but imagine what happens if we store some persistent state in a variable next to the upper function (definitely not a recommended practice, but it happens) or if the user relies on comparing references to our upper function. Suddenly we are facing a myriad of weird, hard-to-track bugs. Also, the untouched source code did not benefit from any optimizations such as scope-hoisting or tree-shaking or any transformations applied to the code via plugins like rollup-plugin-babel.

Rollup offers a simple but powerful solution: You can designate the independent parts of your library as additional entry points.

You see that our originally eight modules have been reduced to five chunks, one for each entry module and an additional chunk that is imported by several of the other chunks. Depending on the format you look at, the chunks simply import or require each other without any additional management code added or any code duplication. To avoid duplications and thus the potential issues with duplicated state or references mentioned above, Rollup applies a “coloring” algorithm that assigns an individual color to each entry module and then traverses the module graph to assign each module the “mixed” color of all entry points that depend on it.

In our example, both the red entry module upper.js as well as the blue entry module lower.js depend on constants.js and shiftChar.js so those are assigned to a new purple chunk. main.js and upperFirst.js only depend on other entry modules and thus do not further change the coloring."
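
For context, designating additional entry points in Rollup really is as simple as the post says. Here's a minimal sketch of a multi-entry config, using the file names from the quoted example (this is not the actual v4 build config, which layers custom plugins on top):

```js
// rollup.config.js - minimal multi-entry sketch, using the file
// names from the quoted example (not the actual v4 build config)
export default {
    input: {
        main: 'src/main.js',
        upper: 'src/upper.js',
        lower: 'src/lower.js',
        upperFirst: 'src/upperFirst.js'
    },
    output: {
        dir: 'dist',
        format: 'es'
    }
};
```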

It took a bit of work, but I refactored the v4 build process to take advantage of this chunk coloring, and the end result is significantly quicker and smaller builds! Plus, you can now pull in just a single module from anywhere in the API and use it, without inheriting anything else or duplicating functions, which is great because it means you could pull Phaser functions into all kinds of non-game apps.
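
As an illustration, something like this becomes possible, even outside of a game. Treat the module path below as hypothetical - check the repo for the real entry points:

```js
//  Pull a single function out of the API - the bundler only ever
//  analyzes this one entry point. The path is illustrative.
import { Between } from '@phaserjs/phaser/math/Between';

//  A random integer between 1 and 100, no game instance required
console.log(Between(1, 100));
```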

If you wish to track progress, please stick a Watch on the Phaser 4 repo (and, of course, read these Dev Logs :) If you'd like to have a play with the current build of v4, either pull from the repo or grab it from npm, where version 0.0.22 was published just today.

Right, enough with the dry module stuff, let's see what's new in the API!

WebGL and Canvas Renderer Updates

The WebGL renderer has been given a complete overhaul since last time. Previously, the renderer class itself was responsible for pretty much everything. Now, it hands those responsibilities off to dedicated systems. I've created three systems so far: the FBO System, for managing frame buffer objects; the Shader System, for creating, installing, and binding shaders; and the Texture System, for handling the creation, disposal, and modification of WebGL textures.

I also split virtually everything out into its own function. So, rather than being a massive class, the WebGL Renderer is now a really slim state manager that uses the different systems to handle the rendering process. As I add more features to the renderer, they will take the shape of new systems - for example, a Stencil System for masking - rather than new methods in a single class.

All of the draw operations have been split into functions too, which Game Objects now call directly. For example, the Sprite Game Object will call the `BatchTexturedQuad` function during render, rather than calling a method on the renderer such as `batchSprite`. This has two benefits. First, multiple Game Objects can share common draw functions if they don't require any special kind of rendering. Second, if a Game Object is created that does require a unique way of drawing, such as a Graphics / primitives object, it will only ever be tied to its own drawing functions. The renderer doesn't have to gain a thousand-line `drawGraphics` function, for example. Instead, it's the Graphics Game Object that is responsible for passing the renderer instance to the draw function it uses.
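
To give a feel for the shape of this pattern, here's a sketch. The method name and signatures are illustrative, not the final API:

```js
//  Sketch only: a quad-based Game Object hands the renderer to a
//  shared draw function, rather than the renderer owning a
//  'batchSprite' style method for every object type.
class QuadThing extends Container
{
    renderGL (renderer)
    {
        //  Shared by any Game Object that renders as a textured quad
        BatchTexturedQuad(this, renderer);
    }
}
```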

I'm very pleased with how things have been separated so far. I can add lots of different drawing methods or custom shaders without them bloating the size of the renderer. Only those Game Objects that require them will ever talk to them.

WebGL isn't the only renderer to have been worked on, either. I always promised that a Canvas Renderer would be included, and work has started on that. It's now happily rendering all of the current set of Game Objects and, as you'd expect, it's a much smaller file size too. There's still work to be done, especially with the new layers (see below), but it already renders sprites and text as quickly as possible.

Default Origin

In Phaser 2 the default origin (or anchor, as it was called then) of a Sprite was 0 x 0, i.e. the top-left. In Phaser 3 this changed to the center (0.5 x 0.5), which is both common in game frameworks and handy for things like rotation. This is very much a Marmite topic, though: it tends to polarise opinion on which is the "correct" one. There is, of course, no correct one. However, thanks to the way that Config options work in v4, it was trivial for me to add this:
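
It looks something like the following. Treat the import paths as illustrative - the API is still settling:

```js
import { DefaultOrigin, Parent, Size } from '@phaserjs/phaser/config';
import { Game } from '@phaserjs/phaser/Game';

//  DefaultOrigin(0, 0) gives every Game Object a top-left origin
new Game(
    Size(800, 600),
    Parent('gameParent'),
    DefaultOrigin(0, 0)
);
```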

Here you can see we're passing the `DefaultOrigin` function to the Game class. This sets the global default origin to whatever values you pass to the function, in this case 0 x 0. This means that any Game Objects created by v4 will have their origin set to the top-left by default, without you needing to apply it to them one by one.

If you don't pass in `DefaultOrigin` then it'll simply default to 0.5 as in Phaser 3.

Voila! Both camps kept happy with one tiny 221-byte function :)

Render Layer

A couple of Dev Logs ago I talked about how the renderer keeps track of the dirty state of Game Objects. If nothing is dirty, the renderer can skip a whole frame and reuse the contents of the previous one. This has a fantastic impact on battery life on mobile devices.

This feature is, of course, still present. However, two new types of cache have been added since then. The first of these is a Game Object called the Render Layer. To explain further, it's important to know how Phaser 4 is structured. The Game Object hierarchy is very simple and straightforward.

At the base level you've got the Game Object class. This class contains all of the core elements that _any_ Game Object requires. This includes a reference to the World, a reference to the parent Game Object (if any), a collection of Children, Transform, Bounds, and Input components, and a few important flags that allow you to control updating and rendering.

Everything extends from this base Game Object class.

The Container class extends from it and adds in a bunch of handy transform management methods, such as `setScale` or `setSkew`, and a number of getters and setters. Every Game Object in Phaser 4 can have children, not just Containers, but the class exists to make life a little cleaner and easier for you.

The Sprite class extends from Container. It adds in the ability to set a texture and frame, along with the associated vertex data, but little else. After all, it has most of what it needs already inherited.

And that's where it stops. Game Object -> Container -> Sprite is as deep as it goes for arguably the most common Game Object you'll use. Each level exposes extra features, relevant to that level only. I have done away with all of the mixins and components from both v2 / v3 and earlier builds of v4. They're just straight, clean classes with a few properties hanging off them now. This will make them much easier to navigate from a documentation point of view. No more battling to get mixin JSDocs to merge properly with the classes they've been applied to (as in v3), or higher-level classes with literally hundreds of properties and methods hanging off them (as in v2).

It's also much easier for you to extend into your own Game Objects, as you won't have to worry so much about accidentally overwriting an important internal property. All of the core features a Game Object needs are present: the ability to update, both itself and its children; the ability to render, should it need to; and various flags to keep track of state.
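
As a quick sketch of what extending looks like (the import path and `update` signature are illustrative):

```js
import { Sprite } from '@phaserjs/phaser/gameobjects/Sprite';

//  A custom Game Object: extend Sprite and layer on your own
//  behavior, without fear of clobbering internals
class SpinningCoin extends Sprite
{
    constructor (x, y)
    {
        super(x, y, 'coin');
    }

    update (delta, time)
    {
        super.update(delta, time);

        this.rotation += 0.01 * delta;
    }
}
```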

So, where do Layers fit into this?

A Layer is a way for you to group your Game Objects _without_ impacting their transforms. You can add Sprites or Containers to a Layer and then take advantage of the various display functions available to you, such as `FindChildByName` or `GetFurthestChild`, having them scan the children of the Layer. But the children won't inherit any transforms from the Layer, meaning it's a purely semantic grouping, rather than one that influences display position.
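
A sketch of the idea, using the display functions named above (signatures are illustrative):

```js
//  A purely semantic grouping: the children keep their own world
//  transforms. Names and signatures here are illustrative.
const hud = new Layer();

hud.addChild(scoreText);
hud.addChild(pauseButton);

//  The display functions can scan the Layer's children:
const pause = FindChildByName(hud, 'pauseButton');
```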

Building upon this, I added in Render Layers. A Render Layer works in the same way as a normal Layer, in that you add children to it and their transforms are never impacted by it. Unlike a Layer, however, during the render pass those children are rendered to a texture that the Render Layer owns. At the end of the pass, the Render Layer's texture is then rendered to the game. Here's an example of creating one:
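
Roughly like this, based on the star demo discussed below (constructor signatures are illustrative):

```js
//  500 star Sprites, drawn into the layer's texture and cached
//  from then on. Signatures are illustrative.
const layer = new RenderLayer();

for (let i = 0; i < 500; i++)
{
    const x = Math.random() * 800;
    const y = Math.random() * 600;

    layer.addChild(new Sprite(x, y, 'star'));
}

world.addChild(layer);

//  The logo sits outside the layer and renders normally
world.addChild(new Sprite(400, 300, 'logo'));
```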

And here's the result:

Of course, visually it doesn't really look any different than if you'd just made 500 star sprites directly. So, why bother adding this extra step? There are a couple of reasons it can be useful. First, the Render Layer texture is fully cached. The Render Layer intelligently tracks its children and knows exactly which of them are 'dirty' each frame. If none of its children are dirty, they are all skipped for rendering. Instead, the Render Layer just presents its previously cached texture to the game renderer to use.

The benefits of this should be immediately obvious. If you've got a game where the background map or scenery never, or rarely, changes, then simply making those Sprites children of a Render Layer will automatically cache them. For example, say you've got a background full of a few hundred stars and planets. Rather than render every one of them each frame, they could be grouped into a Render Layer, which will cache them, potentially dramatically cutting down the quantity of WebGL operations. This works for Canvas, too, letting you cut down on drawImage calls.

You can see this in the example above if you look at the WebGL calls:

Here we can see how the Render Layer drew its prepared texture (with all the stars) to the canvas frame buffer and then drew the logo Sprite over the top. This took one texture swap and one drawElements call, with a tiny buffer of just 6 vertices. We could have drawn all of the stars directly, and they would have been batched, but it still would have meant uploading 3006 vertices each frame for the same result. On Canvas, it would have saved 499 drawImage calls, too, which is a potentially huge saving. The more complex the contents of your Render Layer become, especially with regard to heavily texture-swapping Game Objects, or those with deeply nested children, the bigger the saving you get.

Although v4 already cached the entire scene, which was the process I reported in Dev Log 5, all it took was one moving or animated sprite to break that cache. Render Layers, however, allow for the caching of complex or busy parts of the scene graph, letting you trade a bit of memory for savings in object iteration, GL state changes, and draw calls.

It's similar to a feature that existed in Phaser 2, where you could call `cacheAsBitmap` and it would render the Display Object and all of its children to a texture. The difference is that the old feature never checked the state of the children. Should a new child be added, or an existing one removed, or a texture or position changed, it had no idea about this and would carry on rendering the old 'cached' bitmap instead. Render Layers take this concept and evolve it to its natural conclusion.

Having all the contents of a layer as a texture has other benefits, too, such as using that texture elsewhere in your game: as the texture for another Game Object, saved out as a PNG, or as the input for a shader. As you can see, there are all kinds of practical applications beyond just being a cache.

Effect Layer

Those of you who've been working with WebGL for a while will probably have perked up already at the mention of the Render Layers drawing all of their children to their own frame-buffer-backed texture. Because the next step on from that is the Effect Layer.

An Effect Layer extends from the Render Layer, but adds its own post-render handler that does one very important thing: it allows you to set shaders that act upon the contents of the layer. What's more, it still supports the Render Layer cache, and it allows multi-pass shaders, for some really nice effects.

Effect Layers are created in the same way as Render Layers. Here's a basic example:
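
Something like the following sketch. The Sprite names are placeholders and the signatures are illustrative:

```js
//  Created just like a Render Layer
const fx = new EffectLayer();

//  The background Sprite goes on the display list directly...
world.addChild(background);

//  ...while the rest go into the Effect Layer
fx.addChild(ball);
fx.addChild(paddleLeft);
fx.addChild(paddleRight);

world.addChild(fx);
```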

We've loaded a few assets and created some Sprites. The background Sprite is just added to the display list directly, but the rest are added to the Effect Layer.

If we run the code as-is, we just get this static Scene:

But, what if we drop a shader into the mix?
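
Something like this - the Shader constructor shape is illustrative, and `plasmaFragmentSource` is just a GLSL string defined in the source file:

```js
//  Create a Shader from a fragment source string
const plasma = new Shader({ fragmentShader: plasmaFragmentSource });

//  Add it to the Effect Layer's shaders array
fx.shaders.push(plasma);
```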

Here we're creating a new Shader object, passing in our fragment shader, which is just a string of code defined in the source file, and then adding it to the Effect Layer's shaders array. If we re-run the exact same test, we now get a nice plasma shader applied to the children of the Effect Layer:

Great stuff :)

We can use any fragment shader, and there are several helpful uniforms and attributes set up ready for us. Here's the same Scene but running with a pixelate shader:

It's only impacting the children of the Effect Layer and we didn't have to mess around with a custom pipeline or anything! Should you choose to move a Sprite into, or out of the Effect Layer, it will just render normally again. There's no resetting of filters.

It gets better, though. The Effect Layer `shaders` property is an array for a reason. Let's combine the plasma and pixelate shaders:
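
Continuing the sketch from before (again, the constructor shape is illustrative):

```js
const pixelate = new Shader({ fragmentShader: pixelateFragmentSource });

//  Both passes run in order: plasma first, then pixelate over its
//  output (assuming a fresh shaders array)
fx.shaders.push(plasma, pixelate);
```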

All we did was add them both to the shaders array. The Effect Layer now looks like this:

You don't have to stop at two shaders, either. You can chain as many as you like, which is really useful if you want to run a blur shader across both the x and y axes, or run an effect such as a glow or outline shader.

All of the cache benefits of the Render Layer still apply, too. If the children haven't updated, then it'll use their cached texture and just apply the shader to that. Indeed, unless the shader itself updates in real time, you can even cache its results. For example, a color-matrix manipulation shader, or a sepia or grey-tone shader, could be used to tint a complex background scene, and it'll stay cached until you next update it.

More importantly, both the Render and Effect Layers work in conjunction with the Render List. When a World is asked to render itself, it starts by doing a depth-first search across its children in order to build up a render list. This list can then be worked upon. For example, if a large part of the list is cached, those entries are removed from it. Or, if you want to sort a section of the list - for example, to have a Container sort its children by, say, a z property - this is now very easy to accomplish.
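
As a hypothetical illustration of that last point (`z` is a custom property here, not built-in API):

```js
//  Hypothetical: order a Container's children by a custom 'z'
//  value, so they enter the render list in that order
container.children.sort((a, b) => a.z - b.z);
```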

Depending on which type of World you're using, the Render List can be processed further. For example, in the default World, Game Objects are automatically culled based on their intersection with the Camera. This can prevent potentially tens of thousands of objects from being added to the Render List, which in turn means they don't get rendered to the Render or Effect Layers either.

With camera culling, layer caching, and two levels of renderer caching all built in by default, this will easily be not only the fastest version of Phaser yet, but also the least likely to overwhelm devices. Yes, it will still require some effort on your part to structure your game carefully, but all of the core components are in there and working right now, to make your games as efficient as they can be.

That's it for this Dev Report. I feel like I barely scratched the surface of what's new, but I need to get some coding done as well :) I didn't even touch upon the motion system, tweens, the new input system, or the plans I have for handling physics. But those can come soon. Right now, it's time to crack on with development and push hard towards the first beta release, so you can all start playing with this.

If you've any questions, feel free to ask them in the comments or catch me on Discord in the phaser4 channel.
