Phaser 3 - Dev Log - October 2020
Woo! I'm getting so close to the release of 3.50 I can almost taste it :) There are literally just a couple of points left on my list that I need to look at, and then I can finally wrap this beast up. However, I wanted to post a new Dev Log before then, because, as usual, a lot has happened since the last one. Onwards! ...
As in the previous log, I've spent a long time working on issues reported on GitHub. Last time I wrote, I had knocked the issue total down from well over 200 to 130. I've been working hard on them since then and they now stand at a very healthy 63 (the total is a little higher, but the rest are feature requests, not bugs).
This is a great number to be at. A lot of the remaining issues relate to Audio, which I haven't touched yet; most of them stem from the recent changes made in iOS, which have screwed with audio in lots of games, across lots of frameworks, not just Phaser. I have decided that I absolutely need to recode the audio API. However, I intend to do this in Phaser 4 and will then back-port it to Phaser 3 in the near future.
Will 3.50 be the final version of Phaser 3?
I've been asked this a number of times on Discord, so figured I'd address it here as it's not a simple answer. First of all, no, it won't be the final 3.x version, because I fully intend to carry on supporting Phaser 3. There are so many large changes in 3.50 that I'm quite sure it will generate bugs I haven't encountered during my testing, so I'll of course resolve those.
However, I'm not expecting any more 'sweeping' API changes in Phaser 3's lifetime (other than perhaps the Audio API). Yes, I'll carry on fixing bugs, and almost certainly adding features where relevant, but once 3.50 is out, most of my attention will be on Phaser 4. Phaser 3 and 4 will overlap in the same way that Phaser 2 and 3 did (and still do!). Eventually, 4 will get all of my attention and I'll likely hand v3 over to the community, but that won't be for a long time yet.
For as long as people keep backing me on Patreon, I'll keep working to improve Phaser 3 and building out our collective future in Phaser 4.
Now, on with the 3.50 updates since the last Dev Log...
Massive Mesh Overhaul
The Mesh Game Object has existed in Phaser since v3.0.0. It was meant as a way for you to construct your own Game Objects from your own vertices and texture data. Whereas a Sprite uses a standard quad, with a Mesh you were under no such limitation and could define the vertices and UVs exactly how you wanted them.
However, it really didn't offer any assistance in this regard, which I always thought was a shame. So, I took some time to rework the whole class and it's now properly useful and really quite powerful. Perhaps once I've explained what it can now do, you may start to think of some uses for it in your own games.
At its core, a Mesh Game Object consists of a series of Face objects. A Face consists of 3 Vertex objects and a bunch of helper methods. Both of these classes are new in 3.50 and help encapsulate all of the data needed for each. For example, a Vertex object is essentially a Vector3 along with extra properties for UV coordinates and translated positions. When the Mesh is rendered, it loops through all of its faces and, if a face can be seen by the Camera, adds it to the WebGL batch.
Let's create a Mesh from some simple hardcoded vertex data:
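Something along these lines; the texture key is a placeholder and the exact values are only illustrative. The important part is that we define four vertices, their UVs, and six indices making up the two triangles of a quad:

```js
// 'zombie' is a placeholder texture key, loaded in preload()
const mesh = this.add.mesh(400, 300, 'zombie');

// Four vertices in model space, their UV coordinates, and six indices
// describing the two triangles (faces) of a quad
const vertices = [
    -1, 1,   // top-left
     1, 1,   // top-right
    -1, -1,  // bottom-left
     1, -1   // bottom-right
];

const uvs = [
    0, 0,
    1, 0,
    0, 1,
    1, 1
];

const indices = [ 0, 2, 1, 2, 3, 1 ];

mesh.addVertices(vertices, uvs, indices);

// Pull the view back a little so the quad fits comfortably on screen
mesh.panZ(7);
```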
Here's a screenshot from an example with this running in it:
As you can see from the debug overlay, we've got a nice little zombie doggie rendering in a Mesh (and yes! indexed vertices are now finally supported by default). As it stands like this, it's absolutely no different to using a Sprite. However, because it's a Mesh, we can manipulate those vertices. By reading the mouse wheel we can call the Mesh `panZ` method, which allows us to zoom the Mesh in and out. By implementing a pointer move handler, we can also call `panX` and `panY` to allow the vertices to be dragged around.
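Here's a rough sketch of what those handlers might look like; the scroll and drag factors are arbitrary and may need flipping or scaling for your set-up:

```js
// Zoom the Mesh in and out with the mouse wheel
this.input.on('wheel', (pointer, over, deltaX, deltaY, deltaZ) => {
    mesh.panZ(deltaY * 0.01);
});

// Drag the view around while the pointer button is held down
this.input.on('pointermove', (pointer) => {
    if (pointer.isDown)
    {
        mesh.panX(pointer.velocity.x * 0.01);
        mesh.panY(pointer.velocity.y * 0.01);
    }
});
```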
This YouTube video demonstrates it in action.
Mesh Grid Generator
While it is perfectly possible to populate a Mesh by supplying it with arrays of raw vertices, as we did above, there are a couple of helpful new methods in 3.50 to make this process easier for you. For a start, there is a new function available in the Geometry Mesh namespace called `GenerateGridVerts`. This allows you to easily create vertices in a grid layout. Let's take our zombie dog from before, but this time use the generator to put him on a 6x6 grid:
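Something like this; the texture key and the panZ distance are placeholders:

```js
const mesh = this.add.mesh(400, 300, 'zombie');

// Build a 6x6 grid of faces over the texture
Phaser.Geom.Mesh.GenerateGridVerts({
    mesh: mesh,
    texture: 'zombie',
    widthSegments: 6,
    heightSegments: 6
});

mesh.panZ(7);
```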
Here's our new grid in action:
As you can see, when the grid is rotated it no longer distorts, even at extreme angles, unlike our hand-crafted one above that only used 2 faces. The more vertices you have, the cleaner an effect you can create.
The grid generator is more powerful than this, though. You can customize the size, the number of segments, the colors, alpha, and even set the texture to tile if you wish. Here's an example of a grid using an array of colors:
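A sketch of the idea; the color values, grid size, and the use of Phaser's built-in `__WHITE` texture (so the vertex colors show through untinted) are my assumptions here, not the exact code from the example:

```js
const mesh = this.add.mesh(400, 300);

// No real texture, just vertex colors cycling across the grid
Phaser.Geom.Mesh.GenerateGridVerts({
    mesh: mesh,
    texture: '__WHITE',     // Phaser's built-in plain white texture
    width: 4,
    height: 4,
    widthSegments: 6,
    heightSegments: 6,
    colors: [ 0xffffff, 0x0000ff, 0x00ff00, 0xff0000 ]
});

mesh.panZ(7);
```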
The result being:
This works because the vertices can be colored. Every vertex in the Mesh has its own unique color and alpha value. So if you don't apply a texture, but do give some colors, you can get neat results like the above picture. Of course, you can also mix them, using both a texture and colors, in which case it tints the end result:
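For instance (again, the values are illustrative):

```js
// A texture plus an array of colors: the colors tint the texture
Phaser.Geom.Mesh.GenerateGridVerts({
    mesh: mesh,
    texture: 'zombie',
    widthSegments: 6,
    heightSegments: 6,
    colors: [ 0xffffff, 0x00ffff ]
});
```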
Which looks like this:
Mesh OBJ Loader
Ok, so grids are quite fun. But to unleash the full power of the Mesh you really need to be able to import those vertices from outside of Phaser. As a result, there is now a complete Wavefront OBJ 3D model loader built into 3.50. This allows you to prepare your models in a 3D package, export them as triangulated OBJ files, and use them directly in Phaser.
The loader even supports loading a Wavefront material file, so you don't even need to texture your models. Here's a quick example. I used the superb Asset Forge app by Kenney, added a cube primitive, and applied a material:
I then exported it as an obj file and used the new OBJ Loader in 3.50:
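The file names below are placeholders, but the flow is: load the obj (and its material) in `preload`, then feed the parsed data into a Mesh. The scale factor is arbitrary:

```js
// preload()
this.load.obj('cube', 'assets/cube.obj', 'assets/cube.mtl');

// create()
const mesh = this.add.mesh(400, 300);

// Key of the loaded obj data, followed by a scale factor
mesh.addVerticesFromObj('cube', 0.1);

mesh.panZ(10);
```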
That's all there is to it! Viewing the scene now shows us our lovely textured cube:
Depending on your 3D skills, you're not limited to just primitives like this. I suck at modeling, so here's a nice skull I bought from a stock model site :)
And a video of it in action: https://youtu.be/Y42Ak5GXhmU
The Mesh Game Object now has the concept of a mini-camera built into it. Using this you can manipulate what you're looking at, in terms of the vertices being rendered. It also has three new properties, `modelPosition`, `modelRotation` and `modelScale`, which allow you to adjust, as you can guess from their names, the position, rotation, and scale of the vertices in the Mesh. That is how I got the skull to rotate in the video above.
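As a sketch, spinning a model is just a case of nudging `modelRotation` in the Scene's `update` method (the speed here is arbitrary):

```js
update ()
{
    // modelRotation is a Vector3, so you can spin the model on any axis
    this.mesh.modelRotation.y += 0.02;
}
```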
Now while this undoubtedly looks quite cool, it's very important to understand the limitations of what you're seeing here. This is not a 3D engine. Not even a 'mini' one. The Mesh is being drawn by projecting the vertices into a 2D orthographic space. The vertex positions are calculated at runtime (not in the shader), and while there is some smart internal logic to handle the 'dirty' state of the Mesh, it's still not how modern 3D games work.
Typically, a 3D engine will load all of the geometry into a buffer and then use shader uniforms to upload the transform matrices for the model and camera, leaving the shader to calculate the final position of the vertices. The Mesh works in the opposite way: if you call, say, the `panZ` function, the Mesh will flag itself as being dirty, and then during the next pre-render step it will recalculate the positions of all of its vertices by iterating through them and applying the transform math. This works well if the Mesh object is going to remain static, in that once calculated the vertex data is never touched again until it needs to be, and because we're not using a uniform for the model transform, we can batch all of the verts in with everything else in the Scene.
There are also no lights and no depth buffer. The Mesh determines whether a face is visible using a counter-clockwise winding test and hides those that are not. This works fine most of the time, but for complex models, or ones featuring tightly packed or intersecting polygons, it can become a bit of a z-fighting nightmare. So, if you can't display complex models with the Mesh, what should you be thinking about using it for? This is best demonstrated with an example.
Mesh Race Track
I downloaded the Racing Kit (https://www.kenney.nl/assets/racing-kit) set of 3D models published by Kenney. These are great because they contain lots of really low poly models featuring road segments that just 'snap' together nicely. They also let me demonstrate where a Mesh becomes more powerful than regular textured Sprites.
First, I loaded all the track parts I wished to use:
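Something along these lines; the file names are placeholders for whichever Racing Kit pieces you want to use:

```js
// preload()
this.load.setPath('assets/obj/racing/');

this.load.obj('straight', 'roadStraight.obj', 'roadStraight.mtl');
this.load.obj('corner', 'roadCornerSmall.obj', 'roadCornerSmall.mtl');
this.load.obj('crossing', 'roadCrossroad.obj', 'roadCrossroad.mtl');
this.load.obj('bend', 'roadBend.obj', 'roadBend.mtl');
```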
Then, I created just one single Mesh object and used the `addVerticesFromObj` method to add a new piece of track to it:
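Something like this (the position values are illustrative):

```js
// create() -- one Mesh holds every piece of the track
const mesh = this.add.mesh(400, 300);

mesh.addVerticesFromObj('straight', 1, 0, 0, 0, Phaser.Math.DegToRad(90), 0, 0);
```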
The parameters are the obj data key, a scale (1), the x, y, and z values and then x, y, and z rotation. Because of the way the objects are drawn and the angle of our camera, I'm rotating the pieces by 90 degrees on the X. This part isn't required for all models, it's just needed for this specific set-up.
I then keep calling the method, adding new pieces, and adjusting the x, y, and z coordinates for each one. Here are a few more:
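Again, the coordinates are just illustrative; each call drops another piece into the same Mesh:

```js
mesh.addVerticesFromObj('straight', 1, 1, 0, 0, Phaser.Math.DegToRad(90), 0, 0);
mesh.addVerticesFromObj('corner', 1, 2, 0, 0, Phaser.Math.DegToRad(90), 0, 0);
mesh.addVerticesFromObj('straight', 1, 2, 0, 1, Phaser.Math.DegToRad(90), 0, 0);
```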
After all the pieces are added, let's take a look at the final track from above:
Sweet, that looks quite fun to drive on :) But what's the benefit of a Mesh over just using separate PNGs for each piece of road? Well, for a start, you can zoom in as far as you like and it doesn't lose quality:
This just isn't possible with a normal texture, because it would need to be at a very high resolution to still look crisp when zoomed in that far; the Mesh doesn't have to worry about that. Also, all of the road pieces here are just 171 kB in total, and that's unzipped!
You may have noticed that in the shot above the Mesh is only rendering 436 faces out of a total of 10,136. That's because the Mesh has culling built in: if a Face falls outside of the range of the Scene Camera, it doesn't get rendered. If I turn on debugging you can see each of the faces in the Mesh:
So really, when used like this, you can think of a Mesh as a way to lay out lots of "Sprites" however you like, with full control over the vertices and textures, and culling built in. Plus, of course, you can do this by just changing the rotation:
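For example, tilting or spinning the entire track is just a matter of adjusting the Mesh's model rotation (the values here are arbitrary):

```js
// Tilt the whole track, then slowly spin it each frame in update()
this.mesh.modelRotation.x = Phaser.Math.DegToRad(-45);
this.mesh.modelRotation.z += 0.005;
```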
Pretty neat, right? You just can't do that with regular Sprites :)
Have a play with the above demo on the Labs.
Of course, you could lay out the whole track in a 3D package, rather than building it chunk by chunk as I did here, but I feel that doing it this way demonstrates another benefit of a Mesh - the ability to add vertices to it at any point. For a real game, this method could be used for run-time track generation.
There's quite a lot more to the Mesh than I've covered here, but these are the most important features. The class is now completely finished, including all documentation, so feel free to experiment with it in the latest beta :)
AVIF Support
AVIF is a new image file format supported by most current browsers, and it offers some stunning compression quality. I won't go into the full details behind it, because Jake Archibald wrote a great post on the topic which I urge you to read. However, I'm pleased to say that you can load AVIF files directly into Phaser and they'll just work, out of the box :)
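Assuming the browser supports the format, loading one is no different to any other image (the key and path here are placeholders):

```js
// preload()
this.load.image('forest', 'assets/forest.avif');

// create()
this.add.image(400, 300, 'forest');
```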
Have a look at this demo to see the difference for yourself!
Tilemap Updates including Isometric, Hexagonal, and Staggered Tilemaps
When Phaser 3.0.0 was completed it had built-in support for tilemap rendering. This was achieved via the Tilemap class and a whole bunch of supporting functions, allowing you to easily get Tiled, CSV, and Impact maps running in your games. One notable feature it didn't have, though, was support for isometric tilemaps. This is one of those feature requests that crops up on a fairly regular, although somewhat low-key, basis.
Several unofficial isometric plugins were released for Phaser 3, with varying levels of success. Then the community put a bounty on the feature and someone stepped forward and built it. Originally, I was going to have this feature available under a compiler flag, so you specifically had to package it into your build. But after spending days working on it and tidying things up, it made sense to just include it outright. So, as of Phaser 3.50, you will now be able to import isometric, hexagonal, and staggered isometric tilemaps from the Tiled Map Editor, directly into Phaser.
Here's a picture of the new isometric tilemap system running:
Play with the above example here. Use cursor keys to move the camera.
Loading an isometric map is identical to loading a regular tilemap. Tiled stores the extra information required in its standard map data, which 3.50 now parses out for you.
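Here's a minimal sketch, assuming a map exported from Tiled alongside its tileset image; the keys, tileset name, and layer name are placeholders:

```js
// preload()
this.load.image('tiles', 'assets/iso-tileset.png');
this.load.tilemapTiledJSON('map', 'assets/iso-map.json');

// create()
const map = this.make.tilemap({ key: 'map' });
const tileset = map.addTilesetImage('iso-tileset', 'tiles');
const layer = map.createLayer('Ground', tileset);
```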
As mentioned, it also supports hexagonal maps:
And staggered isometric maps, too.
Functions such as `getTileAtXY` and so on have all been updated to work across the different map orientations. The only obvious area that you have to handle yourself is physics, because Arcade Physics is built entirely around rectangles and cannot cope with the concept of overlapping tiles, as you get with isometric layouts. Even so, this adds a powerful new feature to the API which has been requested for a long time now.
As I was working through merging isometric support I took the opportunity to revisit the Tilemap API and make some quite dramatic changes. From the start, there had always been a Tilemap class, which was your base class, responsible for managing all of the tilemap data. This class would then create layers, either Static or Dynamic layers, which handled the rendering of all the tiles on that specific layer.
Internally the two were quite different. Static layers would build up often quite large internal array buffers, blasting the whole lot to the GPU each frame. There was no culling, and once the layer was created you couldn't then modify it, such as via Tilemap methods like `fill` and `shuffle`. The concept was that you traded flexibility for speed. If you needed to be able to modify the layer data on the fly, or wanted to use tile culling, then the Dynamic Tilemap Layer was the counterpart to use.
The introduction of the Multi Pipeline, however, changed all of this. Previously, a Static Layer would upload its buffers every frame, one per unique tileset used by the layer. This worked because Phaser assumed that every single thing inside of it would use the same texture ID of zero. The Multi Pipeline blows this concept out of the water: it became more costly to flush the batch and upload the tilemap buffer than any saving gained from having it all pre-calculated in the first place.
As this dawned on me I decided to tidy things up. So in 3.50, there are no longer Static and Dynamic Tilemap Layers. There is just the new Tilemap Layer object, a consolidation of the best features from them both. They fully support the new rendering orders needed for isometric maps. They support batching with the Multi Pipeline. They support tile culling and, most importantly of all, they support all of the features Dynamic layers had, meaning you can adjust tile properties (such as tinting a specific tile) or edit and modify any part of the map at any point. Personally, I also think it makes the API a lot cleaner, too.
Fundamentally it means you will need to update your code, but the changes are tiny:
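Here's the kind of change involved; the layer and tileset names are just examples:

```js
// Before 3.50 you had to pick the layer type when creating it:
//
//   const layer = map.createStaticLayer('World', tileset);
//   const layer = map.createDynamicLayer('World', tileset);

// From 3.50 onwards there is just one call:
const layer = map.createLayer('World', tileset);
```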
That's it. You just use the new `createLayer` method instead. You no longer have to worry about whether the layer is dynamic or static; you can just perform any operation the Tilemap class offers on it. Internally, quite a bit has changed, and it also allowed me to really tidy up the documentation, too (filling in lots of blanks). I'm very pleased to have made the change and hope it improves developer quality of life, on top of the cool new features it adds.
Grant for the Web
I'm very pleased to announce that Phaser was selected to receive a Grant for the Web, along with 34 other mid-level recipients. Mid-level grants were funded at between $15,000 and $50,000 USD. The grant is to be used to focus on building Web Monetization tools for Phaser, so that game developers can take advantage of the new Web Monetization features landing in browsers such as Puma.
You can read more about it here: https://www.grantfortheweb.org/blog/2020-mid-grantees and it's great news for Phaser on several levels. First, I'm really excited to explore what this can mean for game devs, who traditionally have to rely on advertising for their games to earn anything. This provides a more direct means of revenue that is more seamless for the players. The grant will fund the development of a Phaser plugin and associated tutorials, and of course it helps fund the development of Phaser itself for a good while, too.
Download v3.50 Beta 9
As you can see, Beta 9 contains even more changes than the previous beta. There's lots more than I've covered above! I could really do with your help testing it, though. The more eyes on it, the more solid the final release will be.
The new v3.50 Beta 9 is available from both npm and GitHub.
You can get it from npm using the beta tag:
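```bash
npm install phaser@beta
```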
You'll find pre-built bundles to download on the releases page, or you can check out the master branch and build it yourself.
Note that it does not have updated TypeScript defs yet, so if you want to use those, please pull down the repo and use `npm run tsgen` to build new ones locally.
If you find an issue, report it to me on Discord, or (even better) open it as an issue on GitHub and tag it "3.50 Beta 9".
Thank you to everyone who has been a part of 3.50. It's the biggest update Phaser has ever had and I'm really pleased with it so far. If you helped make it this good, via a GitHub issue, or even a suggestion on Discord, then thank you! One of the reasons it's taking so long to complete is that you keep reporting things, which is great :) Keep at it!
I've been collecting together a bunch of new demos for the October Backers Examples Pack, so all backers, look out for that landing shortly. I'm really looking forward to releasing 3.50, but I want to get it just right, so I will take whatever time I need to do that. I appreciate your patience, and hopefully you can clearly see from this dev report that it's going to be something special.