Phaser 4 Dev Log 6
A lot has changed since the last Dev Report and progress has been rapid and fruitful, so I wanted to keep you all updated.
Before I get started with this post I need to tell you about a great new Phaser 3 book published by Ourcade. It shows you how to make an Infinite Jumper using modern JavaScript. The steps are clear, the code is well structured and best of all, the book is completely free! Grab a copy from the Ourcade site.
The Past has revealed to me the structure of the future
There have been a number of significant structural changes to the project. Originally, I had been creating each Phaser 4 module as its own package under the @phaserjs organization in npm. They were all stored in a monorepo on GitHub, but published as individual packages. This is quite a common approach and you'll often read about larger sites or companies doing this. However, it's not without its problems.
In theory, this should have been the right way to do it. In practice, it was a complete nightmare. For a start, npm isn't set up to cope with this, which means you need to employ a third-party tool like Lerna to ease the strain. Lerna itself is quite opinionated in how it works (and rightly so), so it comes with its own tools to help you use it, such as the Lerna Wizard.
Once you've got this workflow down, you hit the first issue. What is the main thing you want to do when changing code in a module? Test it, of course. The two primary ways to test are traditional unit testing and actual production testing (i.e. coding a proper Phaser example). Unit tests are all well and good for checking modules in isolation, or even a few together, but they don't, for example, help catch the kind of WebGL issues that arise because you incorrectly calculated an array buffer offset once a certain number of Sprites have been added to the batch.
Production tests pull in from lots of different modules, hundreds of them in fact. This leads to the most fundamental issue of all: if you want to test that your most recent changes actually worked, you need to publish each module you updated first, then pull those into your test project before rebuilding it. It's a painstaking process, to put it mildly, and utterly flow-destroying.
The common way to get around this is to use npm link. This allows you to link a package to a shared space on your local filesystem. Your test projects can be linked to these shared modules as well. The result is that you can edit your code in Module A of Project A, which has been linked locally, and Project B can see that code change immediately without needing to run an npm update first. It skips the whole npm publish step of the flow.
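As a rough sketch, the typical npm link flow looks like this (the package and directory names below are illustrative, not the actual Phaser 4 layout):

```shell
# In the module you are developing (hypothetical package name)
cd packages/sprite
npm link                     # registers a global symlink to this folder

# In the test project that consumes it
cd ../../my-test-project
npm link @phaserjs/sprite    # points node_modules at the local copy
```

From then on, edits to the module are visible to the test project immediately, with no publish step in between.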
At least, in theory, it should. In practice, it's a complete nightmare. The link that is created is extremely brittle. The moment you commit and push Module A, the link is broken and needs creating again from both ends. Equally, VS Code gets really confused about which files should be pulled in from where and often fails to see updates. I typically like building my tests using TypeScript in watch mode, so I can make a small change and have it rebuild and reload. Except with a linked module, the watcher never recognized a change in the code, so I'd have to abort the watch and manually build again.
For small production tests that perhaps pulled in only four modules, it was a headache, but I coped. When I started creating bigger tests that needed 20+ linked modules, it became unbearable. It was, quite frankly, completely unworkable.
I read a really fascinating blog post by Jonathan Lai, a production engineer at Etsy. In his post, he details how the Etsy site relies on over 12,000 modules, and the act of bundling it into something a developer can actually test was taking webpack over 41 minutes and consuming 32 CPU cores and 64 GB of RAM. It's an extreme example because of its scale, and it took their team weeks of optimization to get the build under 5 minutes. That's an incredible achievement, of course, but I get frustrated if a build takes longer than 5 seconds. I couldn't possibly wait for 5 minutes.
The whole stack felt like a house of cards just waiting to come crashing down. Perhaps it's because I mostly work on Windows, where there's no real concept of symlinks (which is what npm link uses) that made it so brittle. Maybe on a Mac, the experience is better. But for me, it was sapping both my motivation and my progress. When you cannot rely on your tools, or trust that they're actually building the code you think they should be building, it's time to take stock and try another approach.
These were just pure workflow problems, too. They didn't even include issues configuring TypeScript and Rollup. I've spent countless days this year working and reworking my configs to bend them to my will. It doesn't help when you try to use the official Rollup TypeScript plugin, only to find it has major bugs such as this one that didn't exist in the previous version. When the simple act of upgrading an essential package by a point release breaks everything, it makes you really worry about the state of modern web development.
This is why I was working in the Phaser 4 Nano repo. I wasn't using a single Phaser 4 module, because the whole process was so painful. All of the code was local and easy to maintain. I could edit, build and get instant results without any issues. Yet I knew that long term, this wouldn't work. It may be ok for Nano, which has a really small API size, but it wouldn't be possible for Phaser 4 itself.
For my own sanity, something needed to change.
Enter the monopackage
The changes I've made are as follows: I have deleted every @phaserjs package. Technically, I emailed npm and asked them to do it, because we're not worthy to have such control. The end result is the same. It's a clean fresh slate.
I then created just one new package: @phaserjs/phaser. Inside this package are all of the dist files from the new Phaser repo. I guess you could consider it a monopackage, rather than a monorepo. It's one single package, but it contains hundreds of clearly namespaced modules, with zero external dependencies.
New packages can, of course, be created for the examples, templates, guides and such-like. But Phaser as a whole now lives in one single place. Since making this change, it has brought order and calm back into my dev life. If I'm making a large set of sweeping changes, I can use npm link to test it quickly, without the need to publish. But, being one package, publishing is easy and doesn't require any extra tools to help with the process.
From the end-user point of view, it's a lot simpler to ingest as well. You install one single package and that's it, you have everything you need right there. You don't need to go package hunting, wondering if perhaps the 'Sprite' package is out of date compared to the 'Texture' package, and so on. You don't need to keep all of your package versions updated, either. You only need to worry about one of them.
This doesn't mean I have to put every single Phaser-related thing into this one package, though. For now, you could consider it a 'core' package, and there is plenty of room to put more specialist code, such as Spine support, into its own package, to help keep things isolated. Fundamentally, though, I want as much as possible stored in the one place. It makes life easier for me and you alike.
Having made the change to a 'single package' system, my workflow improved overnight. I was finally developing quickly and without interruption. New features dropped fast and it was a really good place to be.
The original plan had been to release Phaser 4 Nano: a specially cut-down 'small' version of Phaser 4 with a tiny footprint and a focus on just a few core areas of the framework. Because of all the issues with package management, Nano was built entirely in isolation. I didn't share any of the code I had already written because it was more hassle than it was worth. This is why Nano's progress was so fast. Now that I have fixed the workflow and moved to a monopackage, Nano can be built from all of those modules, without duplicate code. That is what I've done this week. All of the progress I made with Nano, plus all of the hard work I put into the Phaser 4 modules, have been combined in one single place.
Farewell God classes
I've written in the past about how moving to modules helped avoid having God Classes, yet in Phaser 3 there are still a lot of such classes. Yes, they're created by pulling the relevant code in from small focused modules, but the resulting API scale is still the same.
As I looked through the Phaser 4 code I realized I was doing the same thing again. The code was tiny and packaged, but the classes were sucking in lots of these modules. A good example is the Container class. It had a bunch of methods such as `addChild`, `addChildAt`, `swapChildren` and so on. These are all common operations that you may wish to perform on a Container. By having them as class methods, I was prescribing what _I thought_ was important to the Container. I doubt very much that any of you have ever used `swapChildren`, for example. I'm not sure I ever have. Yet it was a 'core' method.
The fundamental change I made over the last few weeks was to train myself to not think like this. Classes are now bundles of the absolute core properties and methods only. The Container class has a `children` property, as you need to store them somewhere. But it doesn't have any methods at all that deal with managing those children. Instead, those are pulled in from the modules as _you_ need them, not as I think you may need them. Here's a simple example:
Here you can see that I've imported the `AddChild` module. This module is a simple, strongly typed function that takes any Parent as its first argument and then as many children as you like. Internally it uses a spread operator.
The children are, as you'd expect, added to the parent. As children can only belong to a single parent at any one time, they are cleanly removed from any previous parent they may have had. Finally, their transforms are updated to take the new parent into account.
Of course, it's quite a safe bet that you'd always need to use `AddChild`, so you could argue that it should be part of the Container class. However, it doesn't make sense to have multiple rules like this, as I'd be back into the situation of determining what is important and what isn't. By simply making _everything_ a module, there are no assumptions. It's less for the end developers to have to commit to memory, too: "Hmm, do I look in the class API docs or the modules docs for this?!". The answer will now always be "Unless it's a property, look at the modules".
There's another benefit, too. Smaller file sizes. I was very pleased with the sizes before, but by taking virtually every method out into its own module, they are truly tiny now. The code builds to just 8.2 KB (min + gz):
And remember, this is including the full WebGL Renderer, Scene Manager, Game loop and other parts! I will be working on those soon, to allow bits to be swapped in and out. For example, the above code doesn't use the Texture Manager at all. So there's no need for it to be present.
Wot no Scene?
There are some other important, yet subtle changes in the code above you may not have picked up on.
Did you notice how the Sprites no longer have the `Scene` passed to them as their first parameter?
After working through the code I realized that it just wasn't needed. There isn't any point during its creation where a Sprite needs to know about the Scene it belongs to. I've come up with a new way for Game Objects to access the internal Game systems they require and it doesn't need a `scene` reference to work.
There is another large benefit to this: A Sprite instance can be moved from one Scene to another Scene, without needing to do anything to it. You just `AddChild` it to a parent (such as the World) in the new Scene and it is added without any fuss or need to destroy and re-create it again.
Game Config
In Phaser 3, the Game Config has a staggering 96 options you can set! These are all stored in an internal Config object, which any plugin can access. Even if you don't need Keyboard input, it will still store default keyboard config settings anyway. It's overwhelming, and it's quite hard to code against, too, as the TypeScript auto-completion for an object with 96 properties is massive.
When I was working on Lazer a few years ago, I came up with the idea of each unique option, or bundle of options, to be set via dedicated config functions instead. It made a lot of sense to revisit this, so I ported them over to Phaser 4. It means you can now do the following:
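The idea can be sketched as follows. The setting names echo the post's description, but the implementation shown is a hypothetical illustration of the pattern, not the actual Phaser 4 source:

```typescript
// Illustrative sketch of the config-function pattern described above.
type ConfigCallback = (config: Map<string, unknown>) => void;

// Each setting is a small function returning a callback that writes
// only the options it owns into the game's config.
function Size(width: number, height: number, resolution: number = 1): ConfigCallback {
    return (config) => {
        config.set('width', width);
        config.set('height', height);
        config.set('resolution', resolution);
    };
}

function Parent(parentId: string): ConfigCallback {
    return (config) => config.set('parent', parentId);
}

function Scene(scene: object): ConfigCallback {
    return (config) => config.set('scene', scene);
}

class Game {
    readonly config = new Map<string, unknown>();

    constructor(...settings: ConfigCallback[]) {
        // Only the settings you import and pass in are ever stored
        settings.forEach((apply) => apply(this.config));
    }
}

class Demo {}

const game = new Game(Size(800, 600), Parent('gameParent'), Scene(new Demo()));
```

Settings that are never imported never reach the config, so there's no dead weight for features you don't use.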
Each Config setting can be included from the Config namespace and you pass as many of them as you need to the Game constructor. In the code above you can see we've set the size of the canvas and the parent DOM element, and given it a Scene to start with.
There are a few benefits to this. The obvious one is that, unless you import it, the config setting isn't part of the bundle. The second is that you can combine multiple properties together. For example, the Size setting can take width, height, and resolution. Previously, this would have been 3 separate properties in the config object. You also get code insight from the functions:
And when docs are added they'll appear in this, too.
The final benefit is the ability to ship a whole bunch of config setting functions without worrying about the size. And you can create your own, too. As long as they adhere to the config function interface, you can drop whatever you like in here.
More than anything, it's a quality-of-life improvement, but these days, frameworks live and die on those.
Module Power
I've already discussed the benefit of using modules in this dev log, however, I wanted to cover another feature they have, which could change the way you approach structuring your game code.
Not only are modules stand-alone, but lots of them now also accept multiple Game Objects to work upon. Here's an example:
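A hedged sketch of the multi-object pattern, covering two of the modules mentioned below (again, the types and internals are illustrative, not the real Phaser 4 API):

```typescript
// Illustrative sketch: module functions that accept any number of
// Game Objects at once, in the style described below.
interface TextLike {
    font: string;
    padding: number;
}

function SetFont<T extends TextLike>(font: string, ...texts: T[]): T[] {
    texts.forEach((text) => { text.font = font; });
    return texts;
}

function SetPadding<T extends TextLike>(padding: number, ...texts: T[]): T[] {
    texts.forEach((text) => { text.padding = padding; });
    return texts;
}

const play = { font: '', padding: 0 };
const options = { font: '', padding: 0 };
const credits = { font: '', padding: 0 };

// A larger font for the main button, a smaller one for the rest
SetFont('32px Arial', play);
SetFont('20px Arial', options, credits);

// The same padding for all three, in a single call
SetPadding(8, play, options, credits);
```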
We create 3 Text objects in the way you'd normally expect, by creating an instance of the Text Game Object. There are quite a number of Text modules included. In the code above we import `SetFont`, `SetPadding` and `SetBackgroundStyle`. In this example, we're creating 3 UI buttons for a basic title screen. We want the Options and Credits buttons to have a smaller font size, so we pass them to `SetFont`. We want them all to have the same padding and rounded background, though, so we pass them all to those functions.
Finally, they're added to the World, so they render. The end result looks like this:
It's not the most glorious of screens, yet I hope you agree that the approach with which it was built was nice and clean. Rather than hunt around for the right method to use on the Text class and then set it for each of the 3 'buttons', we instead can just pass the ones we want to be updated directly to the relevant functions.
It's not just Text that works like this, either. All Game Object modules work in the same way. This becomes particularly powerful if you want to do something like assigning an animation to a Sprite. You can call the `AddAnimation` function once, but pass in whatever Sprites you wish to have the animation.
Over time, as the Phaser 4 API expands, more and more modules like this will be added.
Dynamic Textures
As you saw in the first example in this dev log, we were creating a texture dynamically at runtime using the `SolidColorTexture` function. This just spat out a single color block, which we applied to a Sprite. As you may expect, though, it's not the only type of texture we can create. There's also a Grid Texture, a Render Texture, a Canvas Texture and a Pixel Texture.
Indulge me a little bit here, ok? We're going to get a tiny bit retro :) A Pixel Texture lets you create a texture from a pre-defined palette of 16 colors and an array of hex values. For each value in the array, a 'pixel' is drawn in that color. The array can be any size, but you'll typically want to keep it small, i.e. 32 x 32 or less.
Here's an example:
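As a hypothetical sketch of how such a generator could decode the data (the decoding function and the tiny frame below are my own illustration, not the actual Phaser implementation; only the first four PICO-8 palette entries are shown):

```typescript
// Illustrative sketch: decode an array of hex strings against a
// 16-color palette, with '.' meaning a transparent pixel.
const palette: string[] = [
    '#000000', '#1D2B53', '#7E2553', '#008751',
    // ...the remaining 12 palette entries would follow
];

function decodePixels(data: string[]): (string | null)[][] {
    return data.map((row) =>
        row.split('').map((char) =>
            char === '.' ? null : palette[parseInt(char, 16)] ?? null
        )
    );
}

// A tiny dummy frame, not the princess from the post
const frame = decodePixels([
    '..11..',
    '.1221.',
]);
```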
By looking at the data array you can start to make out the shape of the sprite. Run it, and this is what you get:
Our princess is definitely in this castle :)
Here we've got a teeny 8x8 data array using the PICO-8 palette that generates our princess Sprite. Periods / full stops in the data mean a transparent pixel.
This feature has actually existed in Phaser since version 2, but it's very much an edge case, so I felt a bit guilty including it in Phaser 3 as part of the standard API. Yes, you could exclude it with a custom build, but lots of people didn't, so it sat there wasting space.
As I was creating the Texture generators for Phaser 4, I brought this fun little thing across, and it now lives happily in the textures folder waiting for someone to use it. If you don't, it won't take up any space. If you do, have fun :)
Scene Types
I also changed the way you create and work with Scenes. There is no longer just one standard Scene to extend, but a selection of them. In all the demos above you'll notice I extend from `StaticScene`. A Static Scene is one in which the camera never moves. Think of games like Pacman, Bejeweled, 2048 or most card games. The camera is fixed to the spot and doesn't move around. In games like this, there is usually no need to perform any camera culling because pretty much everything you can see is on-screen at the same time. A simple 'visible' check is enough.
Recalculating and checking the bounds of every single Game Object can get really expensive, so there's no need to do it unless you know it's needed. And for a Static Scene, it isn't. To this end, the Static Scene has a fixed camera and a fixed-size World, too. This allows it to skip internal checks, and the Static Camera is a lot smaller in size, as it effectively has far less to actually do.
Of course, there is the opposite of this too. You can create a Scene in which you know for a fact the World is going to be larger than the screen, and in those cases, Camera culling is enabled by default, along with the ability to move the camera around. Culling can make a dramatic difference to the speed of your games and it's something I'll be demonstrating in the next Dev Log. I've also included a toggle, so you can enable and disable it as required, on a per-Scene basis. This helps you really optimize performance.
As well as Game Objects being able to be culled for rendering, you can also do the same for updating. The update bounds are usually significantly larger than the rendering bounds, but if a Game Object falls outside of the update bounds you can stop it from updating at all. Therefore things like Animated Sprites won't update if they are far enough away from the Camera, effectively going to sleep and taking no CPU time. In this way, you can 'bring to life' a whole section of your game once the update bounds hit it and then start rendering it once the camera bounds hit it. Both bounds can be set to whatever size you require.
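The two-bounds idea can be sketched like this (an illustrative structure under my own assumptions, not the actual implementation):

```typescript
// Illustrative sketch: a render cull against the camera view and an
// update cull against a larger, expanded set of bounds.
interface Bounds { x: number; y: number; width: number; height: number }

function intersects(a: Bounds, b: Bounds): boolean {
    return a.x < b.x + b.width && a.x + a.width > b.x &&
           a.y < b.y + b.height && a.y + a.height > b.y;
}

// The update bounds extend some margin beyond the camera view,
// so objects 'wake up' before they scroll into sight.
function expand(bounds: Bounds, margin: number): Bounds {
    return {
        x: bounds.x - margin,
        y: bounds.y - margin,
        width: bounds.width + margin * 2,
        height: bounds.height + margin * 2,
    };
}

const camera: Bounds = { x: 0, y: 0, width: 800, height: 600 };
const updateBounds = expand(camera, 400);

// A sprite just off the right-hand edge of the camera
const sprite: Bounds = { x: 900, y: 100, width: 64, height: 64 };

const shouldRender = intersects(camera, sprite);       // off-screen: skip the draw
const shouldUpdate = intersects(updateBounds, sprite); // nearby: keep animating
```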
The final thing I explored briefly, but haven't yet settled on, is the use of a kd-tree or a spatial hash to partition your World objects. I found that with a kd-tree enabled I was able to populate the World with hundreds of thousands of Sprites and maintain performance. The downside is that kd-trees don't retain the display order that comes with iterating a traditional display list, so I'll need to find a way to enable a depth sort as part of the process. Sorting can be quite expensive, so there is always a trade-off between the cost of the sort and kd-tree vs. just rendering anyway. Even so, you'll have the option and it will be up to you to make the call. It's very game-specific anyway.
Next Time
I'm very pleased with where I am, in terms of features completed and project structure. I'm also pleased that lots of the hard work I put into Lazer (i.e. pre v3) is finally making its way into Phaser. It no longer matters how esoteric or edge-case a feature is: as long as it works and has documentation, I can include it in the monopackage. It's entirely up to you if you import it or not! Which is the way it should have always been.
It also means I'm very open to receiving pull requests from the community to add new features. Again, as long as they adhere to our editor and lint standards and are documented, there will be no issue in including them. I won't have to spend ages evaluating if they're the 'right fit' or not.
At the moment, all of the example code is written assuming you'll import packages directly. I know quite a lot of you like to just whack the phaser.js file into a script tag and hack away. You have not been forgotten :) There will always be a Phaser bundle of some kind that you can use like this. For now, though, I will focus entirely on the module approach in the hope I can bring as many devs over to this style of coding as possible, if you're not there already, of course.
Thanks for reading this far, it's been quite the dev log! I really cannot wait to get the first release out so everyone can start playing. Despite doing some crazy hours, there are still only so many hours in the day and I've been working full-pelt in all of them. As always, I'll keep you as updated as I can here.
A lot has changed since the last Dev Report and progress has been rapid and fruitful, so I wanted to keep you all updated.
Before I get started with this post I need to tell you about a great new Phaser 3 book published by Ourcade. It shows you how to make an Infinite Jumper using modern JavaScript. The steps are clear, the code is well structured and best of all, the book is completely free! Grab a copy from the Ourcade site.
The Past has revealed to me the structure of the future
There have been a number of significant structural changes to the project. Originally, I had been creating each Phaser 4 module as its own package under the @phaserjs organization in npm. They were all stored in a monorepo on GitHub, but published as individual packages. This is a quite common approach and you'll often read about larger sites or companies doing this. However, it's not without its problems.
In theory, this should have been the right way to do it. In practice, it was a complete nightmare. For a start, npm isn't set-up to cope with this. This means you need to employ the use of a 3rd party tool like Lerna to ease the strain for you. Lerna itself is quite opinionated in how it works (and rightly so), so it comes with its own tools to help you use it, the Lerna Wizard.
Once you've got this workflow down you hit the first issue. What is the main thing you want to do when changing code in a module? It's to test it, of course. The two primary ways to test are traditional unit testing and actual production testing (i.e. coding a proper Phaser example). Unit tests are well and good for checking modules in isolation, or even a few together, but they don't, for example, help catch the kind of WebGL issues that arise because you incorrectly calculated an array buffer offset once a certain number of Sprites have been added to the batch.
Production tests pull in from lots of different modules, hundreds of them in fact. This leads to the most fundamental issue of all - if you want to test that your most recent changes actually worked, you need to publish each module you updated first, then pull those into your test project before rebuilding it. It's a painstaking process to put it lightly and is utterly flow destroying.
The common way to get around this is to use npm link. This allows you to link a package to a shared space on your local filesystem. Your test projects can be linked to these shared modules as well. The result is that you can edit your code in Module A of Project A, which has been linked locally, and Project B can see that code change immediately without needing to run an npm update first. It skips the whole npm publish step of the flow.
At least, in theory, it should do. In practice, it's a complete nightmare. The link that is created is extremely brittle. The moment you commit and push Module A, the link is broken and needs creating again from both ends. Equally, VS Code gets really confused about what files should be pulled in from where and would often fail to see updates. I typically like building my tests using TypeScript in watch mode, so I can make a small change and have it rebuild and reload. Except with a linked module, it never recognized this as a change in your code, so you'd have to abort the watch and manually build it again.
For small production tests, that perhaps pulled in only 4 modules, it was a headache but I coped. When I started creating bigger tests, that needed 20+ linked modules, it became unbearable. It was, quite frankly, completely unworkable.
I read a really fascinating blog post by Jonathan Lai, who's a production engineer for Etsy. In his post, he details how the Etsy site relies on over 12,000 modules and the act of bundling it, into something a developer can actually then test, was taking webpack over 41 minutes and consuming 32 CPU cores and 64 GB of ram. It's an extreme example, because of its scale, but it took their team weeks to optimize so it would build in under 5 minutes. That's an incredible achievement of course, but for me, I get frustrated if the build takes longer than 5 seconds. I couldn't possibly wait for 5 minutes.
The whole stack felt like a house of cards just waiting to come crashing down. Perhaps it's because I mostly work on Windows, where there's no real concept of symlinks (which is what npm link uses) that made it so brittle. Maybe on a Mac, the experience is better. But for me, it was sapping both my motivation and my progress. When you cannot rely on your tools, or trust that they're actually building the code you think they should be building, it's time to take stock and try another approach.
These were just pure workflow problems, too. They didn't even include issues configuring TypeScript and Rollup. I've spent countless days this year working and reworking my configs to bend them to my will. It doesn't help when you try to use the official Rollup TypeScript plugin, only to find it has major bugs such as this one that didn't exist in the previous version. When the simple act of upgrading an essential package by a point release breaks everything, it makes you really worry about the state of modern web development.
This is why I was working in the Phaser 4 Nano repo. I wasn't using a single Phaser 4 module, because the whole process was so painful. All of the code was local and easy to maintain. I could edit, build and get instant results without any issues. Yet I knew that long term, this wouldn't work. It may be ok for Nano, which has a really small API size, but it wouldn't be possible for Phaser 4 itself.
For my own sanity, something needed to change.
Enter the monopackage
The changes I've made are as follows: I have deleted every @phaserjs package. Technically, I emailed npm and asked them to do it, because we're not worthy to have such control. The end result is the same. It's a clean fresh slate.
I then created just 1 new package: @phaserjs/phaser. Inside this package are all of the dist files from the new Phaser repo. I guess you could consider it as being a monopackage, rather than monorepo. It's one single package, but it contains hundreds of clearly namespaced modules, with zero external dependencies.
New packages can, of course, be created for the examples, templates, guides and such-like. But Phaser as a whole now lives in one single place. Since making this change it has bought order and calmness back into my dev life. If I'm making a large set of sweeping changes I can use npm link to test it quickly, without the need to publish. But, being one package, publishing is easy and doesn't require any extra tools to help with the process.
From the end-user point of view, it's a lot simpler to ingest as well. You install one single package and that's it, you have everything you need right there. You don't need to go package hunting, wondering if perhaps the 'Sprite' package is out of date compared to the 'Texture' package, and so on. You don't need to keep all of your package versions updated, either. You only need to worry about one of them.
This doesn't mean I have to put every single Phaser related thing into this one package, though. For now, you could consider it a 'core' package and there is plenty of room to put more specialist code, such as Spine support, into their own packages, to help keep things isolated. Fundamentally though, I want as much as possible stored in the one place. It makes life easier for me and you alike.
Having made the change to a 'single package' system, my workflow improved overnight. I was finally developing quickly and without interruption. New features dropped fast and it was a really good place to be.
The original plan had been to release Phaser 4 Nano. A specially cut-down 'small' version of Phaser 4 with a tiny footprint and a focus on just a few core areas of the framework. Because of all the issues with package management, Nano was built entirely in isolation. I didn't share any of the code I had already written because it was more hassle than it was worth. This is why Nano's progress was so fast. Now that I have fixed the workflow and moved to a monopackage, Nano can be built from all of those modules, without having duplicate code. That is what I've done this week. All of the progress I made with Nano, plus all of the hard work I put into the Phaser 4 modules, have been combined in one single place.
Farewell God classes
I've written in the past about how moving to modules helped avoid having God Classes, yet in Phaser 3 there are still a lot of such classes. Yes, they're created by pulling the relevant code in from small focused modules, but the resulting API scale is still the same.
As I looked through the Phaser 4 code I realized I was doing the same thing again. The code was tiny and packaged, but the classes were sucking in lots of these modules. A good example is the Container class. It had a bunch of methods such as `addChild`, `addChildAt`, `swapChildren` and so on. These are all common operations that you may wish to perform on a Container. By having them as part of the class methods, I was prescribing what _I thought_ was important to the Container. I doubt very much if any of you have ever used `swapChildren`, for example. I'm not sure I ever have. Yet it was a 'core' method.
The fundamental change I made over the last few weeks was to train myself to not think like this. Classes are now bundles of the absolute core properties and methods only. The Container class has a `children` property, as you need to store them somewhere. But it doesn't have any methods at all that deal with managing those children. Instead, those are pulled in from the modules as _you_ need them, not as I think you may need them. Here's a simple example:
Here you can see that I've imported the `AddChild` module. This module is a simple, strongly typed function, that takes any Parent as its first argument and then as many children as you like. Internally it uses a spread operator.
The children are, as you'd expect, added to the parent. As children can only belong to a single parent at any one time, they are cleanly removed from any previous parent they may have had. Finally, their transforms are updated to take the new parent into account.
Of course, it's quite a safe bet that you'd always need to use `AddChild`, so you could argue that it should be part of the Container class. However, it doesn't make sense to have multiple rules like this, as I'd be back into the situation of determining what is important and what isn't. By simply making _everything_ a module, there are no assumptions. It's less for the end developers to have to commit to memory, too: "Hmm, do I look in the class API docs or the modules docs for this?!". The answer will now always be "Unless it's a property, look at the modules".
There's another benefit, too. Smaller file sizes. I was very pleased with the sizes before, but by taking virtually every method out into its own module, they are truly tiny now. The code builds to just 8.2 KB (min + gz):
And remember, this is including the full WebGL Renderer, Scene Manager, Game loop and other parts! I will be working on those soon, to allow bits to be swapped in and out. For example, the above code doesn't use the Texture Manager at all. So there's no need for it to be present.
Wot no Scene?
There are some other important, yet subtle changes in the code above you may not have picked up on.
Did you notice how the Sprites no longer have the `Scene` passed to them as their first parameter?
After working through the code I realized that it just wasn't needed. There isn't any point during its creation where a Sprite needs to know about the Scene it belongs to. I've come up with a new way for Game Objects to access the internal Game systems they require and it doesn't need a `scene` reference to work.
There is another large benefit to this: A Sprite instance can be moved from one Scene to another Scene, without needing to do anything to it. You just `AddChild` it to a parent (such as the World) in the new Scene and it is added without any fuss or need to destroy and re-create it again.
Game Config
In Phaser 3 the Game Config file has a staggering 96 options you can set! These are all stored in an internal Config object, which any plugin can access. It doesn't matter if you don't need Keyboard input, it will still store default config settings for the keyboard anyway. It's overwhelming and it's quite hard to code against, too, as the TypeScript auto-completion for an object with 96 properties is massive.
When I was working on Lazer a few years ago, I came up with the idea of setting each unique option, or bundle of options, via dedicated config functions instead. It made a lot of sense to revisit this, so I ported them over to Phaser 4. It means you can now do the following:
Each Config setting can be included from the Config namespace and you pass as many of them as you need to the Game constructor. In the code above you can see we've set the size of the canvas, the parent DOM element and given it a Scene to start with.
There are a few benefits to this. The obvious one is that, unless you import it, the config setting isn't part of the bundle. The second is that you can combine multiple properties together. For example, the Size setting can take width, height, and resolution. Previously, this would have been 3 separate properties in the config object. You also get code insight from the functions:
And when docs are added they'll appear in this, too.
The final benefit is the ability to ship a whole bunch of config setting functions without worrying about the size. And you can create your own, too. As long as they adhere to the config function interface, you can drop whatever you like in here.
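For those curious how a config function interface like this might hang together, here's a hedged sketch. The `Size` and `Parent` names mirror the settings mentioned above, but the `GameConfig` shape and the implementations are invented purely to illustrate the pattern:

```typescript
// Hypothetical config shape, for illustration only.
interface GameConfig {
    width: number;
    height: number;
    resolution: number;
    parent: string;
}

// A config setting is just a function that mutates the config.
type ConfigCallback = (config: GameConfig) => void;

// Each setting bundles related properties together.
function Size(width: number, height: number, resolution: number = 1): ConfigCallback {
    return (config) => {
        config.width = width;
        config.height = height;
        config.resolution = resolution;
    };
}

function Parent(parent: string): ConfigCallback {
    return (config) => {
        config.parent = parent;
    };
}

class Game {
    config: GameConfig = { width: 800, height: 600, resolution: 1, parent: '' };

    constructor(...settings: ConfigCallback[]) {
        // Only the settings you actually pass (and import) are applied.
        settings.forEach((apply) => apply(this.config));
    }
}

const game = new Game(Size(512, 512), Parent('gameParent'));
```

Any function matching the callback signature can be dropped in, which is what makes user-created config settings possible.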
It's a quality-of-life improvement more than anything, but these days frameworks live and die on those.
Module Power
I've already discussed the benefit of using modules in this dev log, however, I wanted to cover another feature they have, which could change the way you approach structuring your game code.
Not only are modules stand-alone, but lots of them now also accept multiple Game Objects to work upon. Here's an example:
We create 3 Text objects in the way you'd normally expect, by creating an instance of the Text Game Object. There are quite a number of Text modules included. In the code above we import `SetFont`, `SetPadding` and `SetBackgroundStyle`. In this example, we're creating 3 UI buttons for a basic title screen. We want the Options and Credits buttons to have a smaller font size, so we pass them to `SetFont`. We want them all to have the same padding and rounded background, though, so we pass them all to those functions.
Finally, they're added to the World, so they render. The end result looks like this:
It's not the most glorious of screens, yet I hope you agree that the approach used to build it was nice and clean. Rather than hunt around for the right method to use on the Text class and then call it for each of the 3 'buttons', we can instead just pass the ones we want updated directly to the relevant functions.
It's not just Text that works like this, either. All Game Object modules work in the same way. This becomes particularly powerful if you want to do something like assigning an animation to a Sprite. You can call the `AddAnimation` function once, but pass in whatever Sprites you wish to have the animation.
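As a sketch of this multi-object style, here's what a function in the spirit of `SetFont` could look like. The `TextLike` shape and the function body are assumptions for illustration, not Phaser's actual `SetFont`:

```typescript
// A minimal stand-in for a Text Game Object.
interface TextLike {
    font: string;
    size: number;
}

// Apply the same font settings to every object passed in,
// returning them so calls can be chained or composed.
function SetFont<T extends TextLike>(font: string, size: number, ...objects: T[]): T[] {
    for (const obj of objects) {
        obj.font = font;
        obj.size = size;
    }
    return objects;
}

const play = { font: '', size: 0 };
const options = { font: '', size: 0 };
const credits = { font: '', size: 0 };

// One call for the big button, one call for both smaller ones.
SetFont('Arial', 32, play);
SetFont('Arial', 24, options, credits);
```

The rest parameter is what lets one call configure as many Game Objects as you like.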
Over time, as the Phaser 4 API expands, more and more modules like this will be added.
Dynamic Textures
As you saw in the first example in this dev log, we were creating a texture dynamically at runtime using the `SolidColorTexture` function. This just spat out a single color block, which we applied to a Sprite. As you may expect, it's not the only type of texture we can create, though. There's also a Grid Texture, a Render Texture, a Canvas Texture and also a Pixel Texture.
Indulge me a little bit here, ok? We're going to get a tiny bit retro :) A Pixel Texture lets you create a texture from a pre-defined palette of 16 colors and an array of hex values. For each value in the array, a 'pixel' is drawn in that color. You can have any size of array, but you'd typically want to keep it small, i.e. 32 x 32 or less.
Here's an example:
By looking at the data array you can start to make out the shape of the sprite. Run it, and this is what you get:
Our princess is definitely in this castle :)
Here we've got a teeny 8x8 data array using the PICO-8 palette that generates our princess Sprite. Periods / full stops in the data mean a transparent pixel.
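To show the decoding idea, here's a small sketch of how such a generator might map the data to palette colors. The `DecodePixelData` helper is my own invention; the palette values are the standard PICO-8 colors:

```typescript
// The 16-color PICO-8 palette, indexed 0-F.
const palette: string[] = [
    '#000000', '#1D2B53', '#7E2553', '#008751',
    '#AB5236', '#5F574F', '#C2C3C7', '#FFF1E8',
    '#FF004D', '#FFA300', '#FFEC27', '#00E436',
    '#29ADFF', '#83769C', '#FF77A8', '#FFCCAA'
];

// Each string is one row of pixels. Each character is a hex digit
// indexing the palette, and '.' means a transparent pixel.
function DecodePixelData(data: string[]): (string | null)[][] {
    return data.map((row) =>
        row.split('').map((char) =>
            char === '.' ? null : palette[parseInt(char, 16)]
        )
    );
}

// A tiny 6x4 sample, not the actual princess data.
const pixels = DecodePixelData([
    '..99..',
    '.9FF9.',
    '.9FF9.',
    '..99..'
]);
```

From there a real generator would write each color into a canvas or texture buffer, scaled up to whatever pixel size you want.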
This feature has actually existed in Phaser since version 2 but it's a very edge-case, so I kind of felt guilty including it in Phaser 3 as part of the standard API. Yes, you could exclude it with a custom build but lots of people didn't, so it sat there wasting space.
As I was creating the Texture generators for Phaser 4 I brought this fun little thing across and it now lives happily in the textures folder waiting for someone to use it. If you don't, it won't take up any space. If you do, have fun :)
Scene Types
I also changed the way you create and work with Scenes. There is no longer just one standard Scene to extend, but a selection of them. In all the demos above you'll notice I extend from `StaticScene`. A Static Scene is one in which the camera never moves. Think of games like Pacman, Bejeweled, 2048 or most card games. The camera is fixed to the spot and doesn't move around. In games like this, there is usually no need to perform any camera culling because pretty much everything you can see is on-screen at the same time. A simple 'visible' check is enough.
Recalculating and checking the bounds of every single Game Object can get really expensive, so there's no need to do it unless you know it's needed. And for a Static Scene, it isn't. To this end, the Static Scene has a fixed camera and a fixed size World, too. This allows it to skip internal checks, and the Static Camera is a lot smaller in size as it effectively has far less to actually do.
Of course, there is the opposite of this too. You can create a Scene in which you know for a fact the World is going to be larger than the screen, and in those cases, Camera culling is enabled by default, along with the ability to move the camera around. Culling can make a dramatic difference to the speed of your games and it's something I'll be demonstrating in the next Dev Log. I've also included a toggle, so you can enable and disable it as required, on a per-Scene basis. This helps you really optimize performance.
As well as Game Objects being able to be culled for rendering, you can also do the same for updating. The update bounds are usually significantly larger than the rendering bounds, but if a Game Object falls outside of the update bounds you can stop it from updating at all. Therefore things like Animated Sprites won't update if they are far enough away from the Camera, effectively going to sleep and taking no CPU time. In this way, you can 'bring to life' a whole section of your game once the update bounds hit it and then start rendering it once the camera bounds hit it. Both bounds can be set to whatever size you require.
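The two-bounds idea can be sketched like this. The rectangle sizes, names and the `Cull` helper are all illustrative assumptions, not Phaser internals:

```typescript
interface Rect {
    x: number;
    y: number;
    width: number;
    height: number;
}

// Simple point-in-rectangle check.
function Contains(bounds: Rect, x: number, y: number): boolean {
    return x >= bounds.x && x <= bounds.x + bounds.width &&
           y >= bounds.y && y <= bounds.y + bounds.height;
}

// The camera sees 800x600; objects keep updating within a larger
// 1600x1200 area centered on it, then sleep beyond that.
const renderBounds: Rect = { x: 0, y: 0, width: 800, height: 600 };
const updateBounds: Rect = { x: -400, y: -300, width: 1600, height: 1200 };

function Cull(sprite: { x: number; y: number }) {
    return {
        update: Contains(updateBounds, sprite.x, sprite.y),
        render: Contains(renderBounds, sprite.x, sprite.y)
    };
}
```

A sprite just off the right of the screen would still animate (inside the update bounds) but skip rendering, while one far away would do neither.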
The final thing I explored briefly, but haven't yet settled on, is the use of a kdtree or a spatial hash in order to partition your World objects up. I found that with a kdtree enabled I was able to populate the World with hundreds of thousands of Sprites and maintain performance. The downside is that kdtrees don't retain the display order that comes with iterating a traditional display list. So I'll need to find a way to enable a depth sort as part of the process. Sorting can be quite expensive, so there is always a trade-off between the cost of the sort and kdtree vs. just rendering anyway. Even so, you'll have the option and it will be up to you to make the call. It's very game-specific anyway.
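For readers unfamiliar with the technique, here's a minimal spatial hash sketch showing why queries stay fast at scale: objects are bucketed by grid cell, so a query only touches nearby cells rather than the whole World. This is a generic textbook version, not Phaser's implementation (and note it doesn't preserve display order, which is exactly the trade-off described above):

```typescript
// A minimal spatial hash: buckets items by grid cell.
class SpatialHash<T extends { x: number; y: number }> {
    private cells = new Map<string, T[]>();

    constructor(private cellSize: number) {}

    private key(x: number, y: number): string {
        return `${Math.floor(x / this.cellSize)},${Math.floor(y / this.cellSize)}`;
    }

    insert(item: T): void {
        const key = this.key(item.x, item.y);
        const bucket = this.cells.get(key);
        if (bucket) {
            bucket.push(item);
        } else {
            this.cells.set(key, [item]);
        }
    }

    // Return everything in the cells overlapped by the query rectangle.
    query(x: number, y: number, w: number, h: number): T[] {
        const found: T[] = [];
        const size = this.cellSize;
        for (let cx = Math.floor(x / size); cx <= Math.floor((x + w) / size); cx++) {
            for (let cy = Math.floor(y / size); cy <= Math.floor((y + h) / size); cy++) {
                const bucket = this.cells.get(`${cx},${cy}`);
                if (bucket) {
                    found.push(...bucket);
                }
            }
        }
        return found;
    }
}

const hash = new SpatialHash<{ x: number; y: number }>(64);
hash.insert({ x: 10, y: 10 });
hash.insert({ x: 500, y: 500 });

// Only the sprite near the origin falls inside this query rectangle.
const visible = hash.query(0, 0, 100, 100);
```

The results come back in bucket order, so a depth sort would have to run afterwards, which is where the sort-vs-cull cost trade-off comes in.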
Next Time
I'm very pleased with the place I'm in, in terms of features completed and project structure. I'm also pleased that lots of the hard work I put into Lazer (i.e. pre v3) is finally making its way into Phaser. It no longer matters how esoteric or edge-case a feature is. As long as it works and has documentation, I can include it in the monopackage. It's entirely up to you if you import it or not! Which is the way it should have always been.
It also means I'm very open to receiving pull requests from the community to add new features. Again, as long as they adhere to our editor and lint standards and are documented, there will be no issue in including them. I won't have to spend ages evaluating if they're the 'right fit' or not.
At the moment, all of the example code is written assuming you'll import packages directly. I know quite a lot of you like to just whack the phaser.js file into a script tag and hack away. You have not been forgotten :) There will always be a Phaser bundle of some kind that you can use like this. For now, though, I will focus entirely on the module approach in the hope I can bring over as many devs to this style of coding as possible, if you're not there already, of course.
Thanks for reading this far, it's been quite the dev log! I really cannot wait to get the first release out so everyone can start playing. Despite doing some crazy hours, there are still only so many hours in the day and I've been working full-pelt in all of them. As always, I'll keep you as updated as I can here.