The PlayCanvas team is super excited to announce Editor support for glTF GLB conversion of model and animation imports.
This gives developers an order of magnitude reduction in load times compared to the JSON format, while keeping a similar gzipped download size.
Using the Stanford Dragon model (2,613,679 vertices, 871,414 triangles), we can compare GLB and JSON parse times on a 16 inch MacBook Pro.
The JSON format took over 3 seconds just to parse the data, with a peak memory usage of ~498 MB and a gzipped package size of 28.1 MB.
GLB speeds ahead, taking only 0.193 seconds, which is 17x faster, with a peak memory usage of ~25.2 MB and a gzipped package size of 25.7 MB 🚀!
That’s a huge time saving and means applications will become snappier and more responsive for users, especially content-heavy games and product showcases.
We will be deprecating JSON as the default format for model and animation files. Newly created projects will default to converting to GLB, and in existing projects, this can be enabled in the project settings:
If you would like to replace your current JSON assets with GLB, the User Manual has more information about the process to migrate over.
The conversion to GLB supports importing multiple animations in a single FBX, which will help improve content workflows.
Remember our awesome glTF 2.0 Viewer? It is now integrated into the Editor so you can inspect any GLB asset in your project. Just right-click a GLB asset and select ‘Open In Viewer’!
Here is an example of a character with and without soft body cloth simulation running in PlayCanvas:
By default, PlayCanvas’ rigid body component system creates an ammo.js dynamics world that only supports generic rigid bodies. Cloth simulation requires a soft body dynamics world (btSoftRigidDynamicsWorld). Currently, there’s no easy way to override this, so for the purpose of these experiments, a new, parallel soft body dynamics world is created and managed by the application itself. Eventually, we may make the type of the internal dynamics world selectable, or maybe even allow multiple worlds to be created, but for now, this is how the demo was structured.
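Setting up such a parallel world can be sketched along these lines (a minimal sketch assuming ammo.js is already loaded; `createSoftBodyWorld` is an illustrative helper name, not engine API, though the `bt*` constructors are standard ammo.js bindings):

```javascript
// Sketch: build a btSoftRigidDynamicsWorld alongside the engine's own
// rigid body dynamics world, managed by the application itself.
function createSoftBodyWorld(Ammo) {
    const config = new Ammo.btSoftBodyRigidBodyCollisionConfiguration();
    const dispatcher = new Ammo.btCollisionDispatcher(config);
    const broadphase = new Ammo.btDbvtBroadphase();
    const solver = new Ammo.btSequentialImpulseConstraintSolver();
    const softBodySolver = new Ammo.btDefaultSoftBodySolver();
    const world = new Ammo.btSoftRigidDynamicsWorld(
        dispatcher, broadphase, solver, config, softBodySolver
    );
    // gravity must be set on both the world and its soft body world info
    world.setGravity(new Ammo.btVector3(0, -9.81, 0));
    world.getWorldInfo().set_m_gravity(new Ammo.btVector3(0, -9.81, 0));
    return world;
}
```

The application then steps this world itself each frame (e.g. `world.stepSimulation(dt, 10)`) in addition to the engine-managed rigid body world.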
Step 2: Implement CPU skinning
PlayCanvas performs all skinning on the GPU. However, we need skinned positions on the CPU to update the soft body anchors (btSoftBody::Anchor) to match the character’s animation. CPU skinning may be supported in future PlayCanvas releases.
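In essence, CPU linear blend skinning replays what the GPU does per vertex: transform the position by each influencing bone matrix and blend the results by weight. A standalone sketch (the data layout — flat arrays with four influences per vertex and column-major 4x4 matrices — is an assumption for illustration):

```javascript
// Transform a point by a column-major 4x4 matrix (w assumed 1).
function transformPoint(m, x, y, z) {
    return [
        m[0] * x + m[4] * y + m[8]  * z + m[12],
        m[1] * x + m[5] * y + m[9]  * z + m[13],
        m[2] * x + m[6] * y + m[10] * z + m[14]
    ];
}

// Linear blend skinning: 4 bone influences per vertex.
function skinPositions(positions, blendIndices, blendWeights, boneMatrices) {
    const out = new Float32Array(positions.length);
    for (let v = 0; v < positions.length / 3; v++) {
        const px = positions[v * 3], py = positions[v * 3 + 1], pz = positions[v * 3 + 2];
        let sx = 0, sy = 0, sz = 0;
        for (let i = 0; i < 4; i++) {
            const w = blendWeights[v * 4 + i];
            if (w === 0) continue;
            const p = transformPoint(boneMatrices[blendIndices[v * 4 + i]], px, py, pz);
            sx += w * p[0]; sy += w * p[1]; sz += w * p[2];
        }
        out[v * 3] = sx; out[v * 3 + 1] = sy; out[v * 3 + 2] = sz;
    }
    return out;
}
```

The resulting world-space positions are what the soft body anchors are updated from each frame.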
Step 3: Patch shaders to support composite simulated and non-simulated mesh rendering
Soft body meshes will generate vertex positions and normal data in world space, so in order to render the dynamically simulated (cloth) parts of character meshes correctly, we have to patch in support by overriding the current PlayCanvas vertex transform shader chunk. In a final implementation, no patching should be necessary, as we would probably add in-built support for composite simulated and non-simulated mesh rendering.
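Overriding chunks is standard engine API (`material.chunks`); the GLSL below is an illustrative stand-in rather than the exact chunk the demo used:

```javascript
// Sketch: override the 'transformVS' chunk on the cloth material so that
// vertices, which the soft body solver already outputs in world space,
// are not transformed by the model matrix again. The GLSL is simplified
// for illustration and is not the engine's real transform chunk.
function patchClothMaterial(material) {
    material.chunks.transformVS = [
        "mat4 getModelMatrix() {",
        "    // soft body vertices are already in world space",
        "    return mat4(1.0);",
        "}",
        "vec4 getPosition() {",
        "    return matrix_viewProjection * vec4(vertex_position, 1.0);",
        "}"
    ].join("\n");
    material.update(); // trigger a shader rebuild
    return material;
}
```

Non-simulated parts of the character keep the stock chunks, which is what makes the composite rendering work.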
Step 4: Implement render meshes to soft body meshes conversion
PlayCanvas character meshes cannot be used directly by the soft body mesh creation functions (btSoftBodyHelpers::CreateFromTriMesh), so they require some conversion; the PlayCanvas vertex iterator was used to access and convert the mesh data. Eventually, this conversion could be done on asset import into the PlayCanvas Editor.
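The conversion can be sketched as follows (assumes the engine’s `pc.VertexIterator` and a 16-bit index buffer; `meshToSoftBody` is an illustrative helper name, not engine API):

```javascript
// Sketch: pull positions out of a pc.Mesh with the vertex iterator and
// feed the triangle mesh to ammo.js' soft body helper.
function meshToSoftBody(mesh, worldInfo) {
    // read vertex positions via the vertex iterator
    const vertices = [];
    const numVerts = mesh.vertexBuffer.getNumVertices();
    const it = new pc.VertexIterator(mesh.vertexBuffer);
    for (let i = 0; i < numVerts; i++) {
        const pos = it.element[pc.SEMANTIC_POSITION];
        vertices.push(pos.get(0), pos.get(1), pos.get(2));
        it.next();
    }

    // read the triangle list (16-bit indices assumed here)
    const ib = mesh.indexBuffer[0];
    const indices = Array.from(new Uint16Array(ib.lock()));
    ib.unlock();

    // build the soft body from the extracted triangle mesh
    const helpers = new Ammo.btSoftBodyHelpers();
    return helpers.CreateFromTriMesh(
        worldInfo, vertices, indices, indices.length / 3, true
    );
}
```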
Step 5: Implement per-bone attachments
PlayCanvas currently doesn’t have a way to attach objects to specific character bones via the Editor (it’s on our roadmap for the coming months!). Therefore, per-bone attachments were implemented in order to attach simplified rigid body colliders to different parts of the character, preventing the cloth from intersecting the character mesh. We use simplified colliders instead of the full skinned character mesh because they are much faster.
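A minimal version of such an attachment can be sketched as a helper that copies a bone’s animated world transform onto a collider entity every frame (the helper name and the bone lookup path via the legacy model component are assumptions for illustration):

```javascript
// Sketch: make `collider` (e.g. a kinematic rigid body entity) follow a
// named bone of an animated character. Call the returned function from a
// script's postUpdate, after animation has been evaluated for the frame.
function makeBoneFollower(collider, character, boneName) {
    // look the bone up once in the character's render hierarchy
    const bone = character.model.model.getGraph().findByName(boneName);
    return function update() {
        collider.setPosition(bone.getPosition());
        collider.setRotation(bone.getRotation());
    };
}
```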
If you are feeling adventurous, you can find the prototype source code for the example above in this PlayCanvas project:
Are you shipping your PlayCanvas app or game in just one language? You may be preventing international users from enjoying it! Today, we are happy to announce the arrival of a localization system built right into the PlayCanvas Editor!
The system works in tandem with PlayCanvas’ text element component and it’s super easy to use. The text element interface now provides a ‘Localized’ property; when checked, you can enter a Key instead of a Text string.
The Key is the string used to look up a localized string based on the user’s currently selected locale. The localized string data is stored in JSON assets and is documented, along with the rest of the system, here. You can even preview your localized User Interface by choosing a locale in the Editor Settings panel:
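For reference, a localization JSON asset pairs locales with key/message tables. A minimal two-locale example (the `GREETING` key is made up for illustration):

```json
{
    "header": { "version": 1 },
    "data": [
        { "info": { "locale": "en-US" }, "messages": { "GREETING": "Hello!" } },
        { "info": { "locale": "fr-FR" }, "messages": { "GREETING": "Bonjour !" } }
    ]
}
```

A text element whose Key is set to `GREETING` then displays the message matching the user’s currently selected locale.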
We look forward to playing your newly localized games!
Cambridge/Santa Monica, August 1 2019 – Arm and PlayCanvas are announcing the open sourcing of the renowned Seemore WebGL demo. First published in 2016, the graphical technical demo has been completely rebuilt from the ground up to deliver even more incredible performance and visuals. With it, developers are empowered to build their projects in the most optimised way, as well as to incorporate some of its performant features and components into their own projects.
“I’m so excited to be relaunching the Seemore demo. Open sourcing it in partnership with Arm will bring a host of benefits to the WebGL development community,” said Will Eastcott, CEO of PlayCanvas. “It’s an incredible learning resource that provides many clean, easy to follow examples of some very advanced graphical techniques. Sharing this project publicly is going to help move web graphics forwards for everybody.”
“PlayCanvas and Arm have always strived to push the boundaries of graphics and the original demo is a testament to that,” said Pablo Fraile, Director of Developer Ecosystems at Arm. “It’s encouraging to see how PlayCanvas have advanced mobile web rendering performance since the original demo. This re-release provides a unique resource into graphics for the mobile web that is both easy to follow and incorporate into your own projects.”
The Seemore demo was originally created as a graphical showcase for the mobile browser and takes full advantage of Arm Mali GPUs. It has been upgraded to utilize the full capabilities of WebGL 2, the latest iteration of the web graphics API. Some of the main features of the demo include:
Physical shading with image based lighting and box-projected cube maps.
Stunning refraction effects.
Interpolated pre-baked shadow maps as a fast substitute for real time shadow-mapping.
ETC2 texture compression to ensure that highly detailed textures occupy only a small amount of system memory.
Draw call batching.
Asynchronous streaming of assets to ensure the demo loads in mere seconds (approximately 5x faster than the original version).
Today we are talking to Maks, the Russian (from Latvia) Senior Engineer at PlayCanvas!
How did you get into the video games industry?
I started making games when I was 13 years old and always knew what I wanted. It’s been a long journey, but here I am, making game development better with PlayCanvas.
Can you briefly describe your role at PlayCanvas?
I’m a Full-stack developer and love to be involved in anything specific or generic. Making PlayCanvas service work fast and scale well is what makes me feel good.
What is your favourite aspect of PlayCanvas’ service?
Where do you see web based gaming in the future?
There are so many ways gaming on the web can move forward that we can’t even see where it will be in a few years; we can only guess. The most important thing is well-connected and social games, where just by sharing a link you can invite your friends to challenge your record or even play with you in real time.
How is PlayCanvas going to change the way people make games?
Collaboration, and the fact that you can make games straight away and test them out in minutes on hundreds of users, like your Twitter followers. It’s something so powerful. We can’t predict what users will come up with when they are so accelerated by those features.
Can you describe one interesting thing about yourself?
I care about what’s going on around me and always get obsessed by the things I work on; I want to get as much as possible from my efforts.
The Quick Fire Round (this is where things get a little interesting)