How to use a 3D object as a mask with a tracked camera?

 

Hi! 

How can I position a 3D object/shape in Aximmetry composer in the scene to mark a specific area and only use it for a mask? And obviously not have the object visible in the final composite. I'm sure this is doable and might figure it out, but it's a tad tricky with a tracked/moving camera. 


To give a concrete example of a use case, I might want to position a 3D plane and align it with the floor exactly where my talent is standing. Then I'd composite this plane as white over a black background to achieve a perfectly tracked floor-area mask for the talent. This would be useful for so many things, like fixing more area-specific keying problems, keying real shadows, color grading, FX in post, whatever. This mask could be used in the real-time comp in Aximmetry as well as recorded separately as a tool for post-production treatment. 

Currently I can create the 3D object and get the tracking data to move the camera that is capturing the object, but I haven't yet figured out how to position and align the object to match a specific spot exactly in the actual scene/studio. 


I would love to learn to do this in Aximmetry composer. 


It would also be great to know if it is possible in UE to position a 3D object in the scene and then only use it as a mask in Aximmetry, without it affecting the final composite in Composer. Sometimes you might decide the spot for the talent already when creating the scene in UE; then it would be convenient to just put a plane or shape at the spot, make it transparent or somehow invisible to the comp, and access it somehow in Composer to turn it into a mask.

Then sometimes it would be necessary to be able to quickly create a floor area mask in composer instead. So interested to learn both workflows if possible. 


Also, if I can get that far, I wonder if it's possible to somehow draw a shape to use as a more complex floor/surface mask positioned in 3D space? I know Aximmetry has tools for painting, and a 2D mask would be sufficient for the task as long as it was laid on the right surface correctly. I'm aware this could be achieved by creating custom 3D shapes in, say, Blender and importing them; this paint method could just be crazy fast and great for certain situations. 


Thanks in advance! 


Emil

   Nestruction Studios

 
Eifert@Aximmetry

Hi Emil,

What you should keep in mind is that (render) cameras are not very resource-intensive in Aximmetry, especially if you are only rendering a plane for a mask with them.

First, you can get the camera's location from the Camera compounds using a Transmit Transformation module, where the From Tunnel pin is set to CAMERA TRANSFORMATION.
You can also get the Camera's focus and aspect in a similar manner using the Transmit Scalar modules.

The advantage of this is that you can do this without opening the linked camera compound.
You can read more about tunnels here: https://aximmetry.com/learn/virtual-production-workflow/preparation-of-the-production-environment-phase-i/scripting-in-aximmetry/flow-editor/pin/#transmit-modules 

You can get the billboard's location from the Control Data collection. It is located in the BA subcollection's Transform key:

If you are using more than 1 billboard per camera, then they are located in the BA, BB, and BC subcollections.

You can read more about collections here: https://aximmetry.com/learn/virtual-production-workflow/preparation-of-the-production-environment-phase-i/scripting-in-aximmetry/flow-editor/collection-for-databases/ 

Now, for example, you can easily render a mask at the location of your billboard from the perspective of your current camera using this flow logic.
I suggest setting the rectangle to Two Sided and the ZY plane so it sits under the billboard. You can position it using the Scene Node's pivot.
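To picture the geometry behind this, here is a plain-Python sketch (outside Aximmetry, using NumPy and an ideal pinhole camera with no lens distortion; all names and conventions are illustrative, not Aximmetry's internals) of rendering a world-space rectangle as a white-on-black mask from a tracked camera's pose:

```python
import numpy as np

def render_quad_mask(corners_world, cam_pos, cam_rot, f_px, width, height):
    """Rasterize a convex world-space quad as a white-on-black mask.

    corners_world: (4, 3) quad corners in order (world coordinates)
    cam_pos: (3,) camera position; cam_rot: (3, 3) world-to-camera rotation
    f_px: focal length in pixels (ideal pinhole, no lens distortion)
    """
    pts = (np.asarray(corners_world, float) - cam_pos) @ np.asarray(cam_rot).T
    if np.any(pts[:, 2] <= 0):                 # quad (partly) behind camera
        return np.zeros((height, width), np.uint8)
    u = f_px * pts[:, 0] / pts[:, 2] + width / 2.0   # project to pixels
    v = f_px * pts[:, 1] / pts[:, 2] + height / 2.0
    yy, xx = np.mgrid[0:height, 0:width]
    pos = np.ones((height, width), bool)   # half-plane tests: a pixel is
    neg = np.ones((height, width), bool)   # inside the convex quad if all
    for i in range(4):                     # edge cross products share a sign
        j = (i + 1) % 4
        cross = (u[j] - u[i]) * (yy - v[i]) - (v[j] - v[i]) * (xx - u[i])
        pos &= cross >= 0
        neg &= cross <= 0
    return (pos | neg) * np.uint8(255)
```

The same mask can then be fed to keyers or recorded, which is essentially what the extra mask camera does inside Aximmetry.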

You could just copy the mask (rectangle) transformation from Unreal into Aximmetry. You can convert the Unreal Coord system into Aximmetry like it is explained here: https://my.aximmetry.com/post/2982-unreal-virtual-camera-in-aximmetry (in the next release there will actually be a compound to make this conversion easier)
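As a rough illustration of what such a conversion involves: Unreal uses a Z-up, left-handed coordinate system measured in centimeters, so a Y-up meter system needs the axes remapped and the units scaled. The exact axis pairing and signs below are an assumption for illustration only; verify against the linked post for the real mapping.

```python
def unreal_to_yup_meters(x_cm, y_cm, z_cm):
    """Illustrative Unreal-position conversion: cm -> m plus an axis
    swap so Unreal's Z (up) becomes Y (up). The pairing of the
    remaining axes, and any sign flips, depend on the actual target
    convention -- treat this as a guess to check, not a reference."""
    return (x_cm / 100.0,   # cm -> m
            z_cm / 100.0,   # Unreal Z (up) becomes Y (up)
            y_cm / 100.0)   # remaining axis; a sign flip may be needed
```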

If you want to actually render a mask in Unreal, that will be quite cumbersome if you don't want that mask to be in the final picture. You could probably only do it with a render texture in Unreal, and that would be quite resource-intensive. Instead, you should just use an additional camera in Aximmetry.

Here I have some tips on how to create masks in my comment: https://my.aximmetry.com/post/1796-how-to-mask-out-an-effect

I am not exactly sure what your aim is with the mask. If you can write a bit more about that, then I can probably write more.
Just to give some ideas of what is possible and what I could write more about:
You could use the B Mask pin of the Unreal node to cut out the occluding objects of your billboard from your mask.
You can open the camera compound and have more than one keyer, as you suggested, and you could even do this in real-time under production.
You could detect if your billboard is within the mask's area and apply different effects according to this.

Warmest regards,



 
Nestruction Studios

Thanks a TON Eifert! You rock! 🤘🤘🤘


"I am not exactly sure what your aim is with the mask. If you can write a bit more about that, then I can probably write more." 


No problem! Ok I'll explain more about what I'm after. 

I come from more of a Fusion Studio/Nuke VFX compositing perspective, where the green-screen footage is just comped as a 2D layer between the BG plate and the foreground elements. I edit videos in DaVinci Resolve, and I want to create as many options for myself as possible to enhance/recreate in DR Fusion the composition I have in Aximmetry during the shoots. Whether or not I'm trying to capture the final result live in Aximmetry, I still value this.

DR has amazing masking tools, but if I want a mask for a certain area of the floor with moving cameras, for example, I would need to animate the shape like crazy or track it, or tediously build the 3D scene in Fusion and import the tracking data etc., which could be a hassle with lens distortion and so on. Also: even if I did create a perfect mask in DR in post, it would mean I don't have it in Aximmetry. 

So, if I could create the mask inside Aximmetry, I could use it straight in Aximmetry for the real-time comp AND I could also record it for post-production! This would be HUGE. With no extra effort, I could automatically record very valuable masks alongside the rest of the footage during the shoots. No extra renderings. This is why I want one node to output the mask, so I can do whatever I want with it: use it for keying and/or FX or whatever in Aximmetry, and output it for recording. 


To the process.. 


Position a 3D object to match a location in the physical studio:

To give a specific example, let's say there is a keyboard on a stand in the studio during a music video shoot. It obviously doesn't move; just the camera does. It has highly reflective parts that cause issues with keying. Now I want to be able to get a mask of roughly the shape of the keyboard that stays in the right place with a moving camera that might orbit like 180° around the keyboardist, so the depth of the 3D object would also need to be in the right ballpark at least. I want to output that mask from a node to both a physical output and also to masker nodes etc. 

So not really billboard related and I would really love to create this in a separate 3D scene to see everything that is happening and have full control. 


Position a 3D object to match a location in the virtual scene:

Even if these two goals could seem completely identical, they still differ in what you're trying to match with what. Imagine this situation: the talent is standing on a kinda small floating platform and you want to keep real shadows. Shadows in the studio would extend outside the boundaries of the floating platform, which is a problem. This could be solved by this 3D mask technique: just position a 3D object to match the location of the virtual platform and use that as a mask for the shadows layer. In this example, you'd need to match it with the virtual scene instead of the physical studio. Could be done exactly the same way maybe, you tell me heh, if the monitoring is just built so that you can see the 3D object and the scene at the same time. This shouldn't be a problem if I can learn to position the objects to a good starting point. Which you just might have solved! 


Hope that clarifies what I'm trying to do :) Also, these were just examples and there might be other ways to solve those issues, but this post is about learning to use the 3D scene to create masks to use within Aximmetry and also to record for use cases outside of Aximmetry. That could be used for so many things! 


"What you should keep in mind is that (render) cameras are not very resource-intensive in Aximmetry, especially if you are only rendering a plane for a mask with them." 


Great! Yes, I was expecting so! That's why I thought it would be a good idea to just create a separate 3D scene in Aximmetry Composer with its own camera and use that scene to render the masks from 3D objects. Doing this would require just the transformation of the tracked cam and a way to correctly position the shapes to actually match certain locations either in the virtual set or in the physical studio, depending on the situation/need.


"First, you can get the camera's location from the Camera compounds using a Transmit Transformation module, where the From Tunnel pin is set to CAMERA TRANSFORMATION." 

I had managed to copy the transformation of the tracked cam to the cam in the new 3D scene, but this would be a better way! The tunnel feature is great! 🙏 Although I don't mind doing the comp inside the compound, already doing that. Cool anyway! 


"You can get the billboard's location from the Control Data collection. It is located in the BA subcollection's Transform key:" 


Good to know! For some productions we do use billboards and that can be helpful for those situations. 


"You could just copy the mask (rectangle) transformation from Unreal into Aximmetry. You can convert the Unreal Coord system into Aximmetry like it is explained here: https://my.aximmetry.com/post/2982-unreal-virtual-camera-in-aximmetry"


That was awesome! Really cool that you're also working on a compound that makes it easier to insert UE transformation values to Ax🤘🤘

As the main problem for me is positioning the 3D object, understanding how to match the coordinates can now solve that. I can create a view where I see the 3D object on top of the scene, copy the transformation values correctly, perhaps with the help of an object in UE, and then move the object around and fine-tune by eye while seeing the virtual scene, the actors, and the 3D object (mask) on top of each other at the same time. Maybe have the object at decreased opacity to see other things under it. That's easy to set up. 


"If you want to actually render a mask in Unreal, that will be quite cumbersome if you don't want that mask to be in the final picture." 


Ok, yeah not surprised. But no problem! Thanks for ruling that out. 


"You could use the B Mask pin of the Unreal node to cut out the occluding objects of your billboard from your mask." 

Now, after I explained more about what I'm trying to do, you might realize this is perhaps not exactly what I'm looking for. Appreciate it nonetheless :) 


"You can open the camera compound and have more than one keyer, as you suggested, and you could even do this in real-time under production." 

Yeah, I'm already using my own compound where I have set up a basic Soft key + Hard key comp and a separate key for real shadows, so three keyers, and it all works perfectly in real time!! Which is just madness to me 😃 I've been doing this in Fusion Studio; I can't comprehend that Aximmetry can do all this in real time 🫣 


As a summary, you already gave me new things to try, and I can probably figure something out with this new advice. If you can think of any more advice on this specific 3D scene topic, however, I'll appreciate all your efforts to the fullest! 


Thanks again! Extremely happy about your response 🙏🤩


Emil


 
Eifert@Aximmetry

Hi,

For Fusion Studio/Nuke VFX compositing, you could record the raw Input only, as it is described here: https://aximmetry.com/learn/virtual-production-workflow/preparation-of-the-production-environment-phase-i/setting-up-inputs-outputs-for-virtual-production/video/recording/how-to-record-camera-tracking-data/#recording
And then you could key it in post-production. You can use the [Common_Studio]:Compounds\Keyers\Keyer__All.xcomp compound to key in postproduction without tracking and extras that the tracked camera compounds have.

One trick here is that you can use Aximmetry in non-real-time, you just need to set the Frame Rate to anything other than realtime in the Video Recorder module. And then Aximmetry will record frames as fast as it can:

This way you can key or render a mask in post-production faster than the length of your recording. Or for example, you could render your scene again in post-production at a higher resolution or graphical settings. You can do this also with tracked camera compounds when you playback the recorded raw input with recorded tracking.

If you need the footage from the perspective of the tracked camera and you want to add extra keying or masks, then you will have to edit the camera compound, and put the extra keying or masks like it is described here: https://my.aximmetry.com/post/87-custom-masks-to-remove-unwanted-element-in
I think you already discovered that.
What you should keep in mind is that once you open the camera compound, it won't update when you update Aximmetry. So it is worth not doing any logic inside the camera compound; instead, use the transmit modules to send data out to the root, do the logic there, and send the final image back into the camera compound using transmit modules. This way, you can revert the camera compound once you update Aximmetry, and you will only need to add the transmit modules again to the newer version of the camera compound. More on linked compounds here: https://aximmetry.com/learn/virtual-production-workflow/preparation-of-the-production-environment-phase-i/scripting-in-aximmetry/flow-editor/compound/#linked-compound
Or instead of the transmit modules, you can do your own linked compound inside the tracked camera compound.

If you need the footage from the perspective of the tracked camera in post-production, but with no virtual scene, then you could just simply disconnect the rendered image from the Unreal node:

And turn off Allow Virtuals:

Light warp won't work in this case and you won't be able to have Unreal lights on the talent.


To make your own custom mask and make it easily placeable, you could copy what the STUDIO's cameras are doing when you set up the green in the virtual studio of the camera compound.

You can find these cameras actually near where you would put the mask inside the camera compound. They are located inside the STUDIO MASK compound:

What you find inside is that there are two cameras, one is rendering the mask and the other is rendering the preview. You can see this preview when you are in the Studio mode:

The trick here is that one model can have multiple shader indexes. And the Camera modules can be set to only render a specific shader index.
This way you can render two different videos of the same model. For example, one renders a solid white model for the mask and one camera renders a helper texture on the model, making it easier to place the mask. Cameras have a Shader Index pin that specifies which shader to render:

And there is a Shader Array module that specifies the different shaders:

In the above picture, I just copied the Studio Mask compound that was already used by the camera's STUDIO. You can do it by alt-clicking on a node; this way the copied node will also copy the connections. I also added a Rectangle module, a Shader Array module, a Basic_Solid shader, and the Measure shader.
After this, you can add many things. For example, add a Rectangle module as ground and only have it visible in shader 2 (the monitor camera). This way you get a better sense of where your mask is in the real world.
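The shader-index idea can be sketched abstractly like this (plain Python, all names illustrative, not Aximmetry's actual API): each model carries one shader per index, and each camera renders only the slot its Shader Index selects.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    shaders: list  # one shader name (or None) per shader index

def render_pass(models, shader_index):
    """A camera set to a given Shader Index 'sees' only that slot;
    models with nothing in that slot are skipped entirely."""
    return [(m.name, m.shaders[shader_index]) for m in models
            if shader_index < len(m.shaders)
            and m.shaders[shader_index] is not None]

# Index 0: solid white for the mask camera.
# Index 1: a helper texture for the monitor/preview camera.
mask_rect = Model("mask_rect", ["solid_white", "measure_grid"])
ground = Model("helper_ground", [None, "measure_grid"])  # preview only
```

With this setup the mask camera (index 0) renders only the white rectangle, while the preview camera (index 1) also shows the helper ground plane, which is the two-views-of-one-model trick described above.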


In the case of your keyboard example, where you want to mask a keyboard in your real-world studio, you can quite easily position the mask on your keyboard. Just move your camera tracker or your camera with the tracker on it near the keyboard. And you already get a position quite close to your keyboard. Then you could just use a measuring tape from your camera/tracker to your keyboard if you want to be very accurate.

In the case where you want to place the mask based on your Unreal scene, there are several things I think you could consider:

One more trick you might be interested in, you can have different keying based on the talent's keyed image. For example, this way you can have separate keying near the talent's feet if you want to keep or not keep the shadows there. For this, you strongly key the talent image, then use the Bounds Finder module to locate roughly where the talent's feet start. Then using this pixel position you can split the image into two and have different keyings on them.
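The split itself is simple to picture; here is a minimal NumPy sketch (illustrative only, not an Aximmetry module) of blending two alpha mattes at a given pixel row with a soft transition:

```python
import numpy as np

def blend_split_keying(key_full, key_feet, split_row, feather=20):
    """Combine two alpha mattes: one keyer above the talent's feet,
    another below, with a soft vertical blend at the split row.

    key_full, key_feet: (H, W) float mattes in [0, 1] from two keyers
    split_row: pixel row where the feet roughly start (e.g. found by
               a bounds check on a hard-keyed talent matte)
    """
    h, _ = key_full.shape
    rows = np.arange(h, dtype=np.float32)[:, None]
    # 0 above the split, 1 below, ramping linearly across `feather` rows
    t = np.clip((rows - split_row) / max(feather, 1) + 0.5, 0.0, 1.0)
    return key_full * (1.0 - t) + key_feet * t
```

The feather width controls how abrupt the change between the two keyings looks around the feet.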

We hope these tips help you.

Warmest regards,




 
Nestruction Studios

Thanks Eifert for the amazing guidance! 


Have been experimenting with the 3D masks and the potential is crazy. However there's still something I haven't yet figured out. 


How can I apply the lens distortion to the scene node camera? I've managed to match 3D objects with real-world objects pretty accurately, but without applying lens distortion to Aximmetry's 3D scene, the worlds only align well in the center of the frame. 

Also, the "Edge expand" on the scene control board affects the Aximmetry 3D scene weirdly, but is that perhaps an issue with the aspect ratio changing, which could be solved by making sure it stays in sync? Could maybe figure that out next time. 

Thanks! 


Emil

 
Eifert@Aximmetry

Hi Emil,

The lens distortion should be already applied inside the STUDIO MASK compound by the Lens Distorter modules:
Note that they are only active if you turn on Lens Distortion in the SCENE panel.

When using the Edge Expand, make sure that the Out Size pin is connected:

This needs to be connected because it tells Unreal to render more pixels. The additional pixels will be bent (lens-distorted) into the final image instead of blackness.
The cameras inside the STUDIO MASK compound will also render more pixels because of the Edge Expand.

More on Edge Expand here: https://aximmetry.com/learn/virtual-production-workflow/preparation-of-the-production-environment-phase-i/green-screen-production/tracked-camera-workflow/scene-control-panel/#edge-expand 
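A tiny NumPy sketch of why the extra pixels matter (using a simplified one-coefficient radial model, not Aximmetry's actual distorter): the image is rendered with a guard band of extra pixels on every side, the distortion pulls samples from that band, and the result is cropped back to the output size.

```python
import numpy as np

def distort_with_guard_band(img, k1, expand=32):
    """Apply a simple radial (k1-only) distortion to an image rendered
    with `expand` extra pixels on every side, then return the cropped,
    distorted center. Without the guard band, samples pulled from
    outside the frame would come back as blackness.
    """
    h_out, w_out = img.shape[0] - 2 * expand, img.shape[1] - 2 * expand
    cy, cx = img.shape[0] / 2.0, img.shape[1] / 2.0
    yy, xx = np.mgrid[0:h_out, 0:w_out].astype(np.float64)
    # Normalized coordinates of output pixels around the optical center
    nx = (xx + expand - cx) / (w_out / 2.0)
    ny = (yy + expand - cy) / (h_out / 2.0)
    scale = 1.0 + k1 * (nx * nx + ny * ny)   # radial distortion factor
    sx = np.clip(nx * scale * (w_out / 2.0) + cx, 0, img.shape[1] - 1)
    sy = np.clip(ny * scale * (h_out / 2.0) + cy, 0, img.shape[0] - 1)
    # Nearest-neighbor sampling from the guard-banded source image
    return img[sy.round().astype(int), sx.round().astype(int)]
```

With k1 = 0 this degenerates to a plain center crop; with a nonzero k1, pixels near the edges sample from inside the guard band, which is exactly the data Edge Expand asks the renderer to produce.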

Warmest regards,

 
Nestruction Studios

Hi Eifert, 


Sorry for not being clearer; it was some time ago we discussed this topic, so it would have been appropriate of me to mention that it is not the UE scene I have this lens distortion issue with, but rather a separate, simple 3D scene inside Aximmetry. The idea was to import a 3D object into Aximmetry, or use its own 3D shapes in Composer, separately from the UE scene and also outside of the tracked cam compound. 


Above is the example you posted, so pretty much like this (although I'm not using billboards):



I have managed to match the position of a 3D object in this kind of Aximmetry 3D scene with a real-world object in my studio, but it only matches in the center of the frame, because this setup apparently does not capture Aximmetry's 3D object with the lens distortion. 


But what you posted above made me realize there is this Lens Distorter module for exactly this! Who could have guessed :D And it has this tasty video input pin! So I'd of course think I could just insert a Lens Distorter module after the camera node that outputs my 3D mask as video out of my Aximmetry 3D scene, and just drive the relevant data into the module. Is it that simple? I won't have a chance to try it out for a few days... 


Thanks once again :) 


Emil

 
Eifert@Aximmetry

Hi Emil,

Yes, it is almost that simple:)
Also, this explains why you had a problem with the Edge Expand.

Getting the Lens Data is a bit tricky; you need to get it from the Record Data collection pin.
You also need Guard Band settings and a cropper because of the Edge Expand:

Warmest regards,