The Production Metaverse

Presenter: Steve Cronan (5th Kind)
Duration: 9 minutes


Hello, and welcome to the production metaverse. Today we're going to walk through: what is the production metaverse? How is metadata used to orchestrate the pipeline of a studio and a production? What are some of the unique challenges? And how is it used in virtual production across these studios?

Just to start off with some of the foundation of what the production process looks like: it really starts with a script, the foundation of the story. From there you build out research, where you're pulling images from the web, building unique pieces of content with storyboards and concept art, doing scouting and location photos from around the world, set design with architectural drawings, and casting, with a whole range of other content.

Then as you go into shooting, you're going to be capturing all sorts of different resolutions at high frame rates, potentially with multiple cameras across multiple units in multiple locations. It's not uncommon for a production to generate hundreds of terabytes, if not petabytes, in a number of months, and to require that all of that content be uploaded into the cloud and distributed out to multiple vendors, all working on that content in different ways, at different phases, and with different needs in the production process.

Editorial will then cut that all together, connecting all those VFX vendors and all those VFX layers and elements that come together, all the while feeding the marketing machine and the licensing machine up into the studio.

As I said, the foundation really starts with the script. And what's interesting is that a script is a structured document: you have scenes on the left, exterior/interior and day/night labels, story locations, characters, et cetera, all consistently structured in this document.
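Because scene headings follow that consistent structure, they can be parsed mechanically into metadata. As a rough sketch (the field names and pattern here are illustrative, not any particular tool's schema):

```typescript
// Parse a screenplay scene heading like "EXT. RIVERSIDE DOCK - NIGHT"
// into structured metadata fields. Hypothetical schema for illustration.
interface Slugline {
  setting: string;  // "INT" or "EXT"
  location: string; // the story location
  time: string;     // e.g. "DAY", "NIGHT"
}

function parseSlugline(line: string): Slugline | null {
  const m = /^(INT|EXT)\.?\s+(.+?)\s*-\s*(\S.*)$/i.exec(line.trim());
  if (!m) return null;
  return {
    setting: m[1].toUpperCase(),
    location: m[2],
    time: m[3].toUpperCase(),
  };
}
```

Run over every scene heading in a script, something like this yields the first layer of structured metadata that the rest of the pipeline can hang assets off.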

Taking that foundation, we expand out from the story location into, say, the physical location of where that set is going to be shot, or whether they're going to create it on a particular stage. You also have virtual elements with sequences and shots: how is the visual effects department going to break down the information, and how does this all connect together into this actual production metaverse?

Even going deeper into VFX: if you take, say, a particular shot, it's broken up into hundreds, if not thousands, of frames, and there are layers to those frames, maybe a foreground plate or a background plate. That foreground plate could be made up of models and rigs and textures, combined with footage, all coming together just to create a couple of seconds of a beautiful image.

As an example, we could use maybe a foreground plate of a guy on a horse. Whack in a background layer, some smoke and some people and fire, light it, color it, and you get this beautiful image. Now think of that happening thousands, if not tens of thousands, of times, across millions of frames and thousands of shots, distributed globally, to create this one piece of content and surrounding articles.

So what's important is that you get a consistent framework for how to structure that information: from the file metadata, to how files are structured within assets, to the connective tissue of how a character is connected to an actor, leveraging AI analysis to automate as much of that as possible, and connecting a shot to a vendor through external databases like FileMaker, ftrack, or Shotgun.
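A minimal sketch of that connective tissue might look like the following; the entity and field names are hypothetical, not 5th Kind's actual schema:

```typescript
// Cross-entity links: a character points at the actor who plays them,
// and a shot points at the vendor currently working on it.
interface Actor { id: string; name: string }
interface Character { id: string; name: string; actorId: string }
interface Vendor { id: string; name: string }
interface Shot { id: string; code: string; vendorId: string; status: string }

// Resolve a character to the actor playing them.
function actorFor(c: Character, actors: Actor[]): Actor | undefined {
  return actors.find((a) => a.id === c.actorId);
}

// Find every shot currently assigned to a given vendor.
function shotsFor(vendorId: string, shots: Shot[]): Shot[] {
  return shots.filter((s) => s.vendorId === vendorId);
}
```

The point of the framework is that these links stay queryable in both directions, whether they live in one system or are federated across external databases.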

And how you pull in the pillars from all the different phases of production, from casting in pre-production, to camera on set, to editorial in post and the archive, all the way into the metaverse and into entities.

So we leverage all this data to orchestrate the pipeline: status changes and triggering events for transcodes and file transfers, with different applications on different events, all in the service of moving this data as fast as possible to give creatives as much time as possible to make those decisions before they hit that deadline.
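In spirit, that orchestration is an event system keyed on metadata changes. A minimal sketch, with made-up statuses and handlers for illustration:

```typescript
// A status change on an asset fires whatever jobs are registered for that
// status, e.g. queuing a transcode or kicking off a file transfer.
type Handler = (assetId: string) => void;

class Pipeline {
  private handlers = new Map<string, Handler[]>();

  // Register a job to run when an asset enters the given status.
  on(status: string, handler: Handler): void {
    const list = this.handlers.get(status) ?? [];
    list.push(handler);
    this.handlers.set(status, list);
  }

  // Record the status change and trigger the registered jobs.
  setStatus(assetId: string, status: string): void {
    for (const h of this.handlers.get(status) ?? []) h(assetId);
  }
}
```

Registering, say, a transcode job against an "approved" status means nobody has to remember to kick it off by hand; the metadata change itself moves the work forward.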

That's one interesting thing about production: you've got a release date, and the goal is, how can you create the greatest film possible in that amount of time? So functionally, our role is to help facilitate the orchestration, the speed of decisions, and creative flow as much as possible.

And so some of the key challenges that we find in browsers are things like dealing with big file sizes and the need for UDP acceleration of large file transfers distributed globally around the world. When you've got to be uploading many terabytes overnight, and it's required to be delivered to a location for that next artist to work on, you really need the robustness of commonly used tools like (?) and Signiant, but being able to get native support in the browser would be amazing.

Uploading folders, of course, has many challenges in a browser right now. Structured files, like a model that has a certain relationship to a rig and a texture, require those structures to be maintained, which commonly forces people to zip things up and then decompress them after upload.
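One workaround short of zipping is to carry each file's relative path alongside its bytes, the way `webkitRelativePath` exposes it for `<input type="file" webkitdirectory>`, and let the server rebuild the tree. A sketch of the manifest step (the shape of `PickedFile` is an assumption):

```typescript
// A file as picked from a directory input: its path relative to the
// chosen folder, plus its size.
interface PickedFile { relativePath: string; size: number }

// Group files by directory so the server can recreate the folder tree
// without the client having to zip anything.
function manifest(files: PickedFile[]): Record<string, string[]> {
  const byDir: Record<string, string[]> = {};
  for (const f of files) {
    const i = f.relativePath.lastIndexOf("/");
    const dir = i === -1 ? "." : f.relativePath.slice(0, i);
    (byDir[dir] ??= []).push(f.relativePath.slice(i + 1));
  }
  return byDir;
}
```

Sending a manifest like this ahead of the transfer keeps the model/rig/texture relationships intact on the receiving end.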

Those robust transfers across long-haul, globally distributed teams, where you're shooting in New Zealand, your post house is in LA, your VFX house is in the UK, et cetera: just being able to move data really fast is required, especially on location. There's the idea of drop zones: how do you find a big pipe to get the data where you need it as fast as possible?

And we also see a lot of duplication of files. If there were the possibility of a client-side checksum, where you would get a computable fingerprint to realize you've already got that file in your system, because we're moving so many files around to different locations, just being able to exchange a common hash would allow for much faster transfers without duplicate data.

And editing, of course, has the hybrid nature of being on prem, on location, in a studio, and in the home: orchestrating that data and getting it where it needs to be in a reasonable amount of time. This is also driving the need to bring the desktop to the data, or the desktop in the cloud, so the push to run common operating systems and common applications directly in the cloud is accelerating. And there are a lot of challenges dealing with raw media: it's costly, it constantly requires transcoding, and it's very difficult to protect with any form of watermarking.

Now, with playback, we've seen a good evolution in the browsers, as many video technologies accelerate: fragmented MP4s and low-latency live streaming are taking off across this industry right now. But the need for higher bit rates, higher fidelity, 5.1 audio, et cetera, is definitely a growing requirement. Visual and forensic watermarking have always been fundamental to what we do.

We used to be able to have a compiled container, back in the days when Flash was semi-secure, that allowed you to have a DRM-protected stream with a client-side overlay, which let you leverage things like CDNs for optimized streaming. We have lost that, and WebAssembly has not really replaced it in its ability, from a security perspective, to create a secure container. That has required a lot of server-side watermarking, which of course creates more challenges with buffering.

And for VR and 360 video, just being able to play that back with DRM, and to maintain security as we move into more of these virtual worlds, is going to be critical.

And then for distribution: offline tracking tends to come up a lot, and the security question of, when things go offline, how can we keep some protection within the operating system, or in the handoff between the browser and the operating system? There's being able to download folders so that you can maintain those structures, as we talked about with the challenges on upload. And leveraging audio and AI analysis at scale is, I think, going in the right direction, and we'll be leveraging a lot more of that going forward.

So I hope you found that useful as an overview of some of the pipelines and tools used in the orchestration of a studio, and some of the challenges we run into. And, yeah, hopefully we can all work together to make it great.

Workshop sponsor

Adobe
