
6 posts tagged with "ar"


glTF Viewer 4.0 Adds WebGPU Support

· 4 min read

We're thrilled to announce the launch of the open source glTF Viewer 4.0, an update that supercharges your 3D model viewing experience with powerful features and support for the latest web technologies!

glTF Viewer 4.0

"Cyber Samurai" by KhoaMinh is licensed under CC BY 4.0.


This new release is chock-full of enhancements aimed at providing more realistic, insightful, and versatile viewing options for your glTF files. Let's dive into the headline features of glTF Viewer 4.0.

New WebGPU Renderer

WebGPU Logo

Topping the list of today's updates is support for WebGPU! WebGPU heralds a new era in graphics and compute capabilities, offering enhanced performance and efficiency. Users can now select WebGPU as their default renderer, and don't worry if your platform doesn't support it yet - the viewer gracefully falls back to WebGL 2, and subsequently WebGL 1, depending on API availability. Note that WebGPU support is considered beta for the moment and you'll need to proactively enable it and refresh the viewer to check it out:
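The fallback order can be illustrated with a small sketch (illustrative only, not the viewer's actual code; in the PlayCanvas engine this selection happens when the graphics device is created):

```javascript
// Pick the first renderer the platform supports, in the viewer's
// preference order: WebGPU, then WebGL 2, then WebGL 1.
// 'supported' is a hypothetical capability map, e.g. { webgl2: true }.
function pickRenderer(supported) {
    const order = ['webgpu', 'webgl2', 'webgl1'];
    return order.find((api) => supported[api]) || null;
}
```

For example, `pickRenderer({ webgl2: true, webgl1: true })` returns `'webgl2'` on a browser without WebGPU support.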

Enable WebGPU for glTF Viewer

Also make sure you're running the viewer in a browser that supports WebGPU. At the time of writing, this means Google Chrome!

Enhanced WebXR AR Mode

Take your 3D models into the real world with our revamped WebXR Augmented Reality (AR) mode! Available currently on Android devices, this enhanced AR mode lets you view any model in your actual environment, complete with intuitive new controls that allow you to accurately position and rotate objects in the real world. Let's hope Apple decides to roll out WebXR support on iOS soon! 🙏

Frame Selected Node

Navigating large scenes can be a pain - Viewer 4.0 addresses this by allowing you to select a node in the scene via the hierarchy panel on the left. You can then press 'F' on the keyboard to frame the selected node and recenter the orbit camera on its position.
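The framing math behind this can be sketched as follows (a simplified illustration with assumed names, not the viewer's actual implementation): the orbit camera backs off from the node's bounding-sphere center until the whole sphere fits in the camera's field of view.

```javascript
// Distance from the bounding-sphere center at which a sphere of the
// given radius exactly fits a camera with this vertical FOV (degrees).
function frameDistance(radius, fovDegrees) {
    const halfFov = (fovDegrees * Math.PI / 180) / 2;
    return radius / Math.sin(halfFov);
}
```

Recentering the orbit camera on the node's position and setting its orbit distance to `frameDistance(radius, camera.fov)` then frames the node.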

Better Immersion with Projective Sky Dome

"130" by mononofu is licensed under CC BY 4.0.

Experience realistic photographic skies with our new projective sky dome! While previous versions projected skyboxes at infinity, 4.0 introduces a dome-shaped skybox projection that incorporates a flat ground plane. This warps the skybox texture into a more believable appearance, delivering a more authentic and immersive perspective that melds your 3D models with strikingly realistic backdrops.

Debug and Inspect with Render Mode

glTF Viewer Render Mode

Ensuring that developers can seamlessly troubleshoot and inspect glTF files, the new render mode allows you to select and display individual inputs/outputs of the render pipeline, including albedo, emissive, normals, gloss, AO, and more. This new level of insight is invaluable for debugging, making it even easier to work with your glTF data.

Enhanced Realism with VSM Shadows

The addition of Variance Shadow Mapping (VSM) casts your 3D scenes in a new light, literally! Shadows aren't merely aesthetic; they provide context and depth, especially in AR mode, assisting to ground your object naturally within its real-world environment. Explore scenes with a newfound depth and realism that draws viewers into the experience, both in standard and AR viewing modes. Find the new shadow controls in the Light Settings panel:

glTF Viewer Light Settings

Join Our Open Source Community

We're not just excited to share these innovations with you; we're eager to hear your thoughts and welcome your contributions! If there's a feature you're longing for, please don't hesitate to submit your requests.

Better yet, become an active contributor to our codebase! Our open-source community thrives on collaboration and fresh perspectives. So, dive right in, explore the code, and let's shape the future of 3D model viewing together! Your expertise and insights could help shape the next release.



With glTF Viewer 4.0, we're redefining the standards of 3D model viewing. From WebGPU-powered rendering to WebXR-powered AR, this update is designed to inspire, assist, and elevate your work with glTF data.

So stay creative, friends, and we'll see you on the forums! 👋

WebXR AR Made Easy with PlayCanvas

· One min read
Steven Yau
Partner Relations Manager

We are excited to announce the launch of our WebXR AR Starter Kit, available in the New Project dialog today!

New Project WebXR

WebXR is a technology that enables immersive and interactive AR and VR experiences in supported web browsers. This allows us to build memorable, engaging content and share it with just a URL. No installs needed!

The starter kit comes with all you need to kickstart your AR experience for WebXR including:

  • Real world light estimation
  • AR shadow renderer
  • AR object resizing and positioning controls
  • Physics raycasting
  • And more!

Look how quickly you can create AR experiences below!

Pacman Arcade + animation by Daniel Brück is licensed under CC BY 4.0

Try it on your device

Give the Starter Kit a try today - you can use it for free!

PlayCanvas now supports Microsoft volumetric video playback

· 10 min read
Steven Yau
Partner Relations Manager


We are very excited to release our showcase demo for Microsoft Mixed Reality Capture Studios (MRCS) volumetric video technology.

PlayCanvas now supports MRCS volumetric video with a playback library for captured footage at their studios. Watch it on desktop, mobile with AR or even in a WebXR-enabled VR headset, all from a single URL!

The library can be easily added to any PlayCanvas project and used to create fantastic immersive mixed reality experiences.

About Microsoft Mixed Reality Capture Studios

MRCS records holographic video - dynamic holograms of people and performances. Your audiences can interact with your holograms in augmented reality, virtual reality and on 2D screens.

They are experts at capturing holographic video and advancing capture technology, and have been pioneering its applications since 2010.

Learn more about Microsoft Mixed Reality Capture Studios here.

How was this created?

The demo was created with a combination of several tutorials and kits available on the PlayCanvas Developer Site, the MRCS playback library and freely available online assets.

You can find the public project for the demo here. We've removed the URL to the volumetric video file (due to distribution rights) and the proprietary MRCS devkit library. Please contact MRCS to gain access to the library and example videos.

Microsoft Video Playback Library

In the folder 'holo video', you will find the scripts and assets needed for playing back volumetric video. To complete the integration and enable playback, you will need to add the devkit library file named 'holo-video-object-umd.js', which will be provided by MRCS.

Holo Video In Assets Panel

Due to the size and how the data files for the video need to be arranged, they have to be hosted on a separate web server (ideally behind a CDN service like Microsoft Azure).

The 'holo-video-player.js' script can be added to any Entity and be given a URL to the .hcap file. At runtime, the script will create the necessary meshes, materials, etc. to render and play back the volumetric video.

Holo Video Script UI

Expect full documentation to be released soon on our site!

Creating a Multi Platform AR and VR experience

As you see in the video, we've made the experience available to view in the standard browser, AR on WebXR-enabled mobile devices (Android) and VR on devices like the Oculus Quest. iOS support for WebXR is in progress by the WebKit team.

This was done by combining several of our WebXR example projects and the scripts and assets can be found in the 'webxr' folder:

WebXR Folder In Assets Panel

'xr-manager.js' controls how the XR experience is managed and handled throughout:

  • Entering and leaving AR and VR.
  • Which UI buttons to show based on the XR capabilities of the device it is running on (e.g. hides the VR UI button if AR is available or VR is not available).
  • Showing and hiding Entities that are specific to each experience.
  • Moving specific Entities in front of the user when in AR so the video can be seen more easily without moving.

Adding AR

AR mode was added first, taking the 'xr-manager.js' script from the WebXR UI Interaction tutorial as a base. Key changes that had to be made to the project were:

  • Ensuring ‘Transparent Canvas’ is enabled in the project rendering settings.
  • Creating a second camera specifically for AR which is set to render only the layers needed for AR (i.e. not including the skybox layer) and has a transparent clear color for video passthrough.

After copying and pasting the 'xr-manager.js' file from the tutorial project into the demo project, I hooked up the UI elements and buttons to enter AR and added extra functionality to disable and enable Entities for AR and non-AR experiences.

This was handled by adding tags to those Entities that the manager finds and disables/enables when the user starts and exits the XR experiences.

For example, I only want the AR playback controls entity to be available in AR so the tag 'ar' was added to it.
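A minimal sketch of that tag-driven toggle (the entity objects here are simplified stand-ins; in PlayCanvas the tagged entities would be found with `app.root.findByTag('ar')`):

```javascript
// Enable or disable every entity carrying the given tag, e.g. switch
// the 'ar' entities on when an AR session starts and off when it ends.
function setTaggedEnabled(entities, tag, enabled) {
    for (const e of entities) {
        if (e.tags.includes(tag)) {
            e.enabled = enabled;
        }
    }
}
```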

Entity Tagged With AR

There is also an additional tag 'ar-relative' that is used for entities that need to move in front of the user when the floor is found in AR. It provides a much better experience for the user as they don't have to move or look around to find the content.

When the user leaves the AR session, the Entities are moved back to their original positions, which were saved when they entered.
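The save-and-restore flow can be sketched like this (plain objects and illustrative names, not the demo's actual 'xr-manager.js' code):

```javascript
// Move 'ar-relative' entities in front of the user when AR starts,
// remembering each original position, and restore them on exit.
const savedPositions = new Map();

function onArSessionStart(entities, anchorPosition) {
    for (const e of entities) {
        savedPositions.set(e, { ...e.position }); // remember original
        e.position = { ...anchorPosition };       // place in front of user
    }
}

function onArSessionEnd(entities) {
    for (const e of entities) {
        const original = savedPositions.get(e);
        if (original) {
            e.position = original; // put it back where it was
        }
    }
    savedPositions.clear();
}
```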

Adding VR

This was a little trickier than expected as we didn't have a complete example of the needed functionality and it also had to work with the existing AR functionality.

The goal was for the user to be able to move around the holo video and also show controllers that matched the VR input devices being used.

Our Starter Kit: VR has the scripts and functionality to interact with objects, teleport and move around an environment. We can tag entities in the scene with 'pickable' for the VR object picker logic in object-picker.js to test against when the VR input device moves or the select button is pressed.

Pickable And Teleportable Tags

Whether it is an object that we can teleport to or interact with is dependent on the other tags on the Entity.

In this case, the aim was to be able to teleport around the video so an Entity with a box render mesh was added to represent the area and 'pickable' and 'teleportable' tags were added too.
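The tag checks can be sketched as a simple dispatch (the function and return values are illustrative, not the starter kit's actual API):

```javascript
// Decide what selecting an entity in VR should do, based on its tags.
// Only 'pickable' entities are considered by the picker at all.
function selectAction(entity) {
    if (!entity.tags.includes('pickable')) return 'none';
    if (entity.tags.includes('teleportable')) return 'teleport';
    return 'interact';
}
```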

Next up was handling how the controllers should look in VR. The starter kit uses cubes to represent the controllers as they are meant to be replaced with something else by the developer.

VR Controllers

In my case, I wanted to use skinned hands or the representations of the VR controllers instead. Max (who built the PlayCanvas WebXR integration) created a project that does just that: WebXR Controller/Hand Models. And it was just a matter of merging the code and assets together.

WebXR Hand Tracking

Projected skybox

The skybox was obtained from Poly Haven and converted to a cube map with our texture tool. Donovan wrote a shader that projected the cubemap so there was a flat floor that the user could move around in.

It's a nice and easy effect that can be applied in similar scenes without having to build a model or geometry. See the scene without the effect applied (left) and with it (right):

Infinite Skybox | Ground Projected Skybox

The shader code is applied by overriding the global engine chunk in projected-skybox-patch.js on application startup.

World Space UI in VR

In VR, there's no concept of 'screen space' for user interfaces so the playback/exit controls would need to be added somewhere in the world.

It was decided the controls should be placed near the holo-video and would always face the user as, generally, that is where their focus would be.


This was done by simply having UI buttons in world space as offset child Entities of a 'pivot' Entity. The pivot Entity is positioned at the feet of the holo-video and can be rotated to face the VR camera.

Setting Up UI In Editor

There's a script on the pivot Entity that takes a copy of the VR camera position and sets its Y value to be the same as the pivot Entity's. It then looks at that position so that the UI controls always stay parallel to the floor.
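The core of that script is simple vector math, sketched here with plain objects (in PlayCanvas the final step would be the entity's `lookAt` call):

```javascript
// Flatten the VR camera position to the pivot's height; looking at the
// result keeps the UI facing the user while staying parallel to the floor.
function uiLookTarget(cameraPos, pivotPos) {
    return { x: cameraPos.x, y: pivotPos.y, z: cameraPos.z };
}
```

Each frame, the pivot entity would then call `lookAt(target.x, target.y, target.z)` with the returned point.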

The other common place to have UI controls would be somewhere relative to a tracked controller, such as on the left hand/controller. I decided against this because it's not guaranteed that the VR device has two hands/controllers - Google Cardboard, for example, has none.

As the 'floor' is just a projected skybox, a solution was needed to render the shadows of the holo-video onto the scene.

Shadow 'catcher' material

Gustav provided a material shader that would sample the shadow map and make any area that doesn't have a shadow fully transparent.

To make this a bit easier to see, I've shown below where the plane would be positioned. Anywhere white on the floor plane would be fully transparent, as there is no shadow being cast there.

Shadow Receiver Quad | Final Shadow Effect

Other tutorials used

There is other functionality in the experience that has been taken from our tutorial/demo project section and slightly modified for this project.

These include:

  • Orbit Camera for the non-XR camera controls. The orbit camera controls are disabled when the camera entity is disabled so that the camera wouldn't move while in an XR session.
  • Video Textures for the Microsoft video on the information dialog. It was modified so that it would apply the video texture directly to the Element on the Entity it was attached to.

Although not PlayCanvas related, it is worth shouting out: the awesome QR code (that is displayed if the device is not XR compatible) is generated with Amazing-QR. It's able to create colorful and animated QR codes that are more interesting and attractive than the typical black and white versions.

QR Code

Issues found

There were a couple of issues found while this project was being developed. We will be searching for solutions in the near future. For now, we've worked around them in a couple of ways.

In VR, clustered lighting with shadows enabled causes a significant framerate drop. As the shadows in the project are from the directional light and they are processed outside the clustered lighting system, clustered lighting shadows can be disabled with no visual change.
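The workaround is a one-line settings change (property names per the PlayCanvas lighting API at the time of writing; treat this as a sketch):

```javascript
// Directional-light shadows are rendered outside the clustered lighting
// system, so clustered shadows can be switched off with no visual change.
this.app.scene.lighting.shadowsEnabled = false;
```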

The demo uses screen space UI in AR and there's an issue with accuracy of UI touch/mouse events when trying to press UI buttons. This is because, when the user enters AR, the engine uses a projection matrix that matches the device camera so that objects are rendered correctly relative to the real world.

Unfortunately, the screen-to-world projections don't use the projection matrix directly; instead, they use the FOV properties on the camera component. This mismatch is what causes the inaccuracy.

My workaround is to calculate the relevant camera values from the projection matrix on the first AR render frame and apply that back to the camera component. The code can be seen here in xr-manager.js.
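The recovery step relies on the structure of a perspective projection matrix: in a column-major matrix (like `pc.Mat4`), element 5 stores `1 / tan(fovY / 2)`, so the vertical FOV can be read back directly. A sketch of the math (function name assumed, not the project's actual code):

```javascript
// Recover the vertical field of view (in degrees) from element 5 of a
// column-major perspective projection matrix, where m5 = 1 / tan(fovY / 2).
function fovFromProjection(m5) {
    return 2 * Math.atan(1 / m5) * 180 / Math.PI;
}
```

Applying the recovered value back to the camera component's `fov` keeps the screen-to-world math consistent with what is actually rendered.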

Wrapping up

If you have reached here, thank you very much for reading and we hope you have found some useful takeaways that you can use in your own projects!

Useful links:

We would love to get your thoughts and feedback so come join the conversation on the PlayCanvas forum!

Web AR Experiences - Developer Spotlight with Animech

· 7 min read
Associate Partner Support Engineer

Welcome to the third instalment of Developer Spotlight, a series of blog articles where we talk to developers about how they use PlayCanvas and showcase the fantastic work they are doing on the Web.

Today we are excited to be joined by Staffan Hagberg, CMO of Animech.

Hi Staffan, welcome to Developer Spotlight! Tell us about yourself and Animech!

Animech was founded back in 2007, in the city of Uppsala, Sweden. With a mix of 3D artists, engineers, developers, and UI/UX experts, we have a team of 40 people and all the competence in-house. The studio started in the early days of real-time 3D. It was a mix of CAD engineers and developers who realized the power of visualization for selling complex products in the life sciences segment.

Since then, we have visualized pretty much anything you can think of online and offline. We’ve worked in VR, AR, MR, phones, tablets, desktops, and pretty much any other device that has a browser. We have developed VR applications for cars, the first real-time 3D configurator in native WebGL ever developed, one of the world's first configurators for Oculus Rift Devkit and much more.

We have also visualized experiences for hotel safes, medical instruments and lab products for 7 of the 10 largest life science companies, as well as built 3D converters from Unreal to glTF and a bunch of custom tools specially built for PlayCanvas.

Our core business is real-time 3D. We push the boundaries every day trying to invent new ways of using 3D, where our solution makes the difference.

Bathroom Planner for Iconic Nordic Rooms

Why did Animech choose PlayCanvas?

After an extensive search for a WebGL-based engine, we evaluated a few and selected PlayCanvas for its performance, out-of-the-box features, its extensibility and its valuable editor. Our customers expect the highest level of visual quality along with a smooth browsing experience - without the need for an app or plugins. PlayCanvas truly helps us deliver.

As for our artists’ perspective, they think it was (and still is) the most artist-friendly WebGL editor out there, with the added bonuses that it is open source, and supports many important features, such as PBR, tonemapping, render layers, etc.

Did your team face any initial challenges? How did you overcome them?

It's always challenging when customers have high quality and performance expectations. Though, at the same time, that is what drives us. Being able to create stunning 3D experiences linked to real business value is a unique opportunity and challenge. Adding AR to that process helps you to stand out against competitors.

Our particular challenge was to dynamically create an AR model of a procedurally generated mesh as a generic function. Our solution was to create a SaaS service that can take whatever 3D object you're looking at in PlayCanvas and, on the fly, create AR models for both iOS and Android devices (ARKit or ARCore).

You’ve built several Web AR experiences. Can you tell us a little about them and how important you think Web AR is today?

We have been early adopters of both AR and VR, both as standalone applications and on the web. We believe it's important to use AR not as a gimmick, but as an application that provides real value for the user - for example, seeing how a greenhouse would look in your actual backyard. In that sense, Web AR will get more and more important, both as something that stands out but also as something that provides value for users.

Why do you think that your clients want Web AR in their experiences?

To offer something more to their customers - both in marketing value and actual value. To help users make smarter, more informed decisions.

We have also developed our own web-based 3D converter that takes our PlayCanvas 3D models to glTF and USD on the fly. It is a server-side solution that takes everything we develop to AR.

How is building a web experience different from a native experience?

You must optimize for both loading time and performance. The application could be run on a wide range of devices – from several years old phones to high-end desktops.

The application is accessible to a wider audience since they don’t need to install anything.

What are the team's favorite features of PlayCanvas?

As a team consisting of both 3D artists and developers, PlayCanvas’ online editor provides a fantastic way to collaborate, prepare and preview our projects before pairing the solution with a stunning web UI or deploying it as a standalone viewer.

Our 3D artists also enjoy how the editor is robust and easy to use, and how its design promotes collaboration. Powerful material settings (per-texture UV and color channel, vertex colors, blend types, depth test/write, etc.), flexible texture compression and a fast response by the team when reporting bugs and requesting features are also great.

What is on the feature wish list for PlayCanvas this year?

As the future for 3D on the web continues to evolve, we are excited to see support for more accessible 3D formats, such as the glTF standard by the Khronos Group, which PlayCanvas are advocating for as well.

Beyond this, here are some things we look forward to:

  • Node-based shader editor
  • Support for editor extensions
  • Post processing (HDR bloom, chromatic aberration, SSAO, motion blur, color grading, eye adaption, etc.)
  • More customizable asset import options
  • Reflection probes
  • Material instances (see Unreal Engine)
  • Debug visualization (see Unreal Engine’s View Modes)
  • Expose currently hidden options in the editor (detail maps, etc.)

How do you see AR and 3D e-commerce evolve over the next few years?

The possibilities are enormous. The question is when do people actually start using AR. It has been around for many years, lots of interesting solutions and demos have been built, but the real value of AR has not reached the masses yet.

I think we are closing in on that though. Just the other day I was about to buy a new espresso coffee machine. One supplier had an AR model online in the e-store with which I could see that it looked good and covered my needs. With just one static USDZ file. It is such an easy way of helping your customer to make the right decision. Imagine how much value you add if you can see configured 3D models in AR and really see the potential of what you are about to buy.

The next phase would be to configure and change your 3D model directly in AR mode, which would make the experience even stronger.

As the graphics quality gets better and better online and the fashion industry keeps on digitizing their customer journey, AR will probably be the best and easiest way of trying on fashion products like bags, watches, jewelry and clothes. It will reduce faulty orders on a massive scale if you can do a virtual fitting before buying stuff online.

Animech helps our customers to get what they want. Simply put: we empower people to make smart decisions through intelligent visualization.

Thank you, Staffan! Is there anything else you'd like to share?

You can visit our website here and follow us on Twitter! You can also check out our other projects here:

glTF Viewer Arrives on Mobile with AR Support

· 3 min read
Elliott Thompson
Software Engineer

Today we’re excited to announce the next major release of our glTF viewer. This version makes the viewer an ideal tool for reviewing how glTF models render on mobile as well as in augmented reality!


View Models in AR on Mobile

Once a model has been loaded into the viewer on mobile, you’ll be given the option to drop into an augmented reality experience. The mode you get currently differs based on the operating system you’re using.

glTF Viewer AR on iOS | glTF Viewer AR on Android

Quick Look mode on iOS (left) and WebXR mode on Android (right)

On iOS, the model will be loaded with Apple’s AR Quick Look mode (above left), while on Android the model will be placed into your environment using WebXR (above right).

Mobile-Optimized Design

glTF Viewer Mobile Start | glTF Viewer Mobile Controls | glTF Viewer Mobile Hierarchy

It’s now possible to verify the content and rendering of your assets no matter which device you’re working on. The viewer has been redesigned using mobile-first principles, so you can explore glTF content just as well on mobile as you can on desktop. The UI scales up or down depending on the device screen size and takes an uncluttered approach to ensure you can focus on the glTF content itself even on very small screens.

Quickly Load Models on Mobile Devices

When loading PlayCanvas viewer v3.0 on desktop, you’ll be presented with the option to load a glTF model from a URL.

glTF Viewer Start Screen

When this is used, the application will generate a QR code that you can scan to open the current viewer scene on your other devices or share it with others:

Share with QR Code

New PlayCanvas Theme

The latest release of PCUI (v2.7.0) enables the use of additional themes in applications built using it. This allowed us to apply a new color theme to the model-viewer:

New PCUI Theme

The new muted gray tones of this theme should allow users to more readily focus on their model content. Over the coming months, you'll begin to see this new theme applied to more applications in the PlayCanvas ecosystem! Be sure to pass any feedback on to us using the issue tracker of the PCUI library.

Open Source

PlayCanvas is fully committed to an open source strategy and our glTF viewer is therefore made available to you on GitHub. It is a TypeScript application built on the PlayCanvas PCUI front-end framework and, of course, the PlayCanvas Engine runtime.

These open source projects have been years in the making and would not have been possible without the amazing OSS community. So why not explore our various GitHub repositories and consider making some contributions of your own? We also appreciate feature requests and bug reports, so don't be shy!


We hope you find the new and improved glTF viewer useful for your projects. Stay tuned for further updates to it in the coming months!

Building WebAR Experiences - Developer Spotlight with Visionaries777

· 10 min read
Associate Partner Support Engineer

Nissan AR

Welcome to the second installment of Developer Spotlight, a series of blog articles where we talk to developers about how they use PlayCanvas and showcase the fantastic work they are doing on the Web.

Today we are excited to be joined by Frantz Lasorne, co-founder of Visionaries777.

Hi! Let's get started. Firstly, welcome to the developer spotlight! Frantz, if you could just tell me a little bit about yourself and your team and your studio.

My name is Frantz and I'm the co-founder of Visionaries777. Actually, we [founders] are three. We started as two French guys. We studied together in France; Interaction Design, and then we created this company about 10 years ago in Hong Kong.

[We currently employ] Around 35 people. We've been working on AR since 2010. It's our main focus. Within the realm of AR, we are involved in 3D real time AR, and all sorts of XR applications.

Me and my business partner were working at Lego in Denmark before, so we started to work on AR back then in the R&D department. We helped Lego bridge the physical and digital to create hybrid play experiences.

Afterwards, we left Lego and started our own company. Lego hired us as consultants to keep working for them for some time. In the early days, we did a lot of collaborations for marketing and promotional events using AR.

Nowadays, the things we work on are more industrial-focused; automotive or luxury. AR is now a properly matured product, with WebGL, Web AR experiences, VR and so on.

Thank you! I'm curious, why did Visionaries777 choose PlayCanvas?

Before, we always used Unity 3D for any 3D real time project, because they have a huge compatibility of hardware platforms. It's quite nice. The only platform they are lacking is the web.

We were looking around, trying to find what is the best platform to develop WebGL experiences. Then, we saw PlayCanvas, opened the editor, and were surprised how familiar it was for us. I think the people who designed the PlayCanvas editor knew Unity and got inspired in terms of the menu and layout. It's very similar to the way [Unity] works. It's just that it's an editor on the web rather than a desktop application. So for us, it was very easy to do the jump.

So far we've been really happy with all the engine capabilities, the loading, how lightweight it is, et cetera. For us, it's the best platform for developing web experiences.


Awesome! So, were there any initial challenges that you guys faced? How did you guys end up overcoming them?

In terms of challenges? I think mostly the model optimizations; how to get the WebGL experience as small as possible, but retaining maximum visual quality. Most of our clients are either automotive brands or luxury brands, so they are concerned about the product that you are looking at on the screen. It's pointless to show a product that you can see the rough edges of. They won't like it in the end.

That was our struggle at the beginning - to try and find the right balance of optimizing enough, but not too much, and be happy with loading time. So it took some trying to get this right and find the right compromise.

Right now, with our current approach and the tools in our pipeline, we’re quite happy. And it's also why we work with Cartier and are now doing all these products on their websites.


That's very interesting. Visionaries777 has worked on several Web AR experiences. Can you tell me how important you think Web AR is today?

It's very important - but we are back to the same problem that we had in the early 2010s, when we relied on markers for tracking. Now with where we are in WebAR we are still very limited. You need something like an image marker, or a floor with a world target, but it's not as stable as if you use ARKit or ARCore in a native Unity app.

With a standalone Unity application with ARKit and ARCore it's mind blowing what you can do. There's barely any drift, it's super accurate. With web you're still constrained. Tracking is not perfect. There's a lot of drift. So I think the applications we see with the present state of tech are limited, experiences are considered a bit gimmicky. It's getting there, but it still needs to grow.

But at the end of the day, for marketing initiatives, no one wants to install an ad app on their smartphone as a user, as a consumer. You don't want to install a BMW app just to uninstall it three days later because you're done playing around.

These sorts of experiences were fine 10 years ago on an iPhone, but now people have moved on, and have different mindsets - things should be accessible through a web browser directly, not through an app. If it's inside an app, it has to be inside Snapchat, inside Instagram, or inside an app that has more to it than just one AR experience.


Extending a bit from that question, why do you think that the clients you work with want Web AR in their experiences?

Augmented reality has always been exciting for brands to show a product in 3D and also integrate it into [customers’] homes or their driveway. It’s quite appealing for a brand, marketing-wise. Then, for consumers, it's something new, it's fun. You get closer to the product.

It’s key to reduce friction.

You don't have to install things anymore. You download some assets in your web browser, but it's more transparent than going into the store and searching for the app and downloading it.

Brands are definitely interested in WebAR for these reasons, so AR will keep growing. It brings a lot of value. You can try a car in your driveway or you can try a watch on your wrist.

eCommerce in a more immersive way is really the next generation for eCommerce experiences.

When you're building your Web AR experiences, what features does PlayCanvas provide that you think were most helpful?

I think the true value of PlayCanvas is really how they are keeping up to date with all the WebGL standards, improving materials, improving compression, improving loading and so on. And their UI is very easy to use.

When you import your model, it gets converted to the GLB format. This makes it more lightweight, and you don't need to pre-export it as GLB yourself.

On the programming side, it's just JavaScript, so you can do whatever you want. It doesn't give you any presets aside from an orbit camera, but that's not really important; anyone can build more.
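As a rough illustration of the "it's just JavaScript" point, here is a minimal sketch of loading a GLB at runtime through the PlayCanvas asset registry. The `app` instance, URL and callback names are assumptions for the example, not code from the interview:

```javascript
// Sketch: load a GLB container asset at runtime with the PlayCanvas API.
// Assumes a pc.Application instance `app` already exists; the URL is
// hypothetical.
function loadGlb(app, url, onReady) {
  app.assets.loadFromUrl(url, 'container', (err, asset) => {
    if (err) {
      console.error(err);
      return;
    }
    // Container assets expose the imported glTF scene; instantiate it
    // as an entity and attach it to the scene hierarchy.
    const entity = asset.resource.instantiateRenderEntity();
    app.root.addChild(entity);
    onReady(entity);
  });
}
```

In a real project you would call something like `loadGlb(app, 'assets/car.glb', setupOrbitCamera)` once the application has started.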

Cool! So, how would you say that building an HTML5 or a WebGL experience differs from developing a native experience or a native application?

You always have to concern yourself with loading. In some cases, when you develop a web experience, you have to load something quickly for the user to play with right away; the model and the rest get loaded progressively afterwards. Let's say you have a car, and this car has variants with different wheels, roofs, and so on. All these elements need to be loaded, but you shouldn't load everything at once; otherwise, the download would be huge.
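The progressive-loading idea above can be sketched in a framework-agnostic way. The manifest, asset names and `preload` flag here are all hypothetical: the base car body is marked for the initial download, while variant parts stay deferred until the user selects them:

```javascript
// Split an asset manifest into an initial batch (needed for first
// interaction) and a deferred batch (variants fetched on demand).
function planLoading(manifest) {
  return {
    initial: manifest.filter((a) => a.preload),
    deferred: manifest.filter((a) => !a.preload)
  };
}

// Hypothetical manifest for the car example from the interview.
const manifest = [
  { url: 'car-body.glb', preload: true },
  { url: 'wheels-sport.glb', preload: false, variant: 'wheels' },
  { url: 'roof-panoramic.glb', preload: false, variant: 'roof' }
];

const plan = planLoading(manifest);
// Download plan.initial up front so the user can start interacting,
// then fetch entries from plan.deferred when a variant is selected.
```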

I think that's one of the main differences compared to developing a native application, aside from UX/UI, because with web experiences you also need to be concerned about the browser. Are you using it in portrait? On a desktop? Do you need to embed it in an iframe? Is it going to be full screen?
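Those browser questions can be answered at runtime with standard DOM properties. This sketch takes the window object as a parameter (so it can be exercised outside a browser); the function and field names are illustrative:

```javascript
// Gather the environment facts a web experience needs to adapt to,
// using only standard browser APIs.
function describeEnvironment(win) {
  return {
    // Portrait when the viewport is taller than it is wide.
    portrait: win.innerHeight > win.innerWidth,
    // A window embedded in an iframe is not its own top-level window.
    embedded: win.self !== win.top,
    // Whether the document is allowed to enter fullscreen mode.
    fullscreenAvailable: !!win.document.fullscreenEnabled
  };
}
```

In a real page you would call `describeEnvironment(window)` at startup and again on `resize` events to adapt the layout.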

Those are questions that are quite different from a standalone application where you don't need to concern yourself about the surroundings of the app.

So next up: we touched on this in one of the previous questions. You explained how you use some of the editor's features, but is there a feature PlayCanvas provides that is the team's favorite?

To me it's more that we can collaborate. The collaboration aspect of PlayCanvas is quite nice because you have one project and anyone can access it from their desktop machine on the web.

And as an artist, you can populate the scene with assets while the developer works on it in parallel. You can also have someone check things, maybe not edit anything, but do quality control. The collaboration aspect is one of the best features, I would say, and it's what makes [PlayCanvas] so nice to work with.

As part of the interview, but also as a feedback exercise, what is a PlayCanvas feature that would be at the top of your wishlist?

It would be great if the PlayCanvas editor had a feature to assign different texture resolutions for different platforms (mobile or desktop), similar to how Unity does it for different devices. It would make things so much easier to manage rather than handling it with code and tagging, etc.

Thanks for sharing the feedback! Going back to another question, how do you see HTML5 and web experiences evolving over the next few years?

I think it will grow. We see two things at the moment: WebGL experiences and cloud streaming, which is not HTML or WebGL at all. Some brands will choose either a WebGL configurator or a cloud streaming configurator. Those are two different approaches. I tend to prefer WebGL because you get a much crisper image.

Also, once the experience is loaded, it's much more responsive to commands. You're not constrained by streaming latency and video glitches. Those are things that put me off with cloud streaming experiences.

So I think WebGL will continue, especially now with the whole discussion around the metaverse and the file formats needed to move assets between platforms. glTF, USDZ, USD: those files can be translated from one platform to another, so I think there's a lot of potential there.

I think the metaverse will most likely be built in WebGL rather than in cloud streaming, but I could be wrong.

Either way, I believe that in the end, it's very important for brands to start digitizing all their assets. For example, Cartier chose to build a WebGL viewer and recreate every single one of their products in GLB format. I think it's quite smart, because once you have them, you can reuse them anywhere on the web, whether it's on their website, in the metaverse or in a Snapchat AR filter.

I think there are lots of opportunities, and as 5G expands and compression algorithms get more efficient, things are going to be smaller and we'll be able to build richer experiences on the web. I think there's a long, positive future ahead, and cloud streaming is not necessarily going to replace it.

Thank you! Those were all the questions I had. Thank you for your time, Frantz! Is there anything that you would like to promote, a website, a Twitter handle, or a job opening that you would like to share?

We have a website purely focused on product configurators, utilizing WebAR:

And a main website too, where we show all of our work:

We’re also on Twitter! Follow us there: