Hi Aximmetry Community,
I’m working on a virtual production project using Aximmetry DE and an Unreal Engine 3D set called “CORPORATE_STAGE-ED1,” cooked for Aximmetry. I’m filming a talent on a green screen with three real cameras, and I’m trying to set up a workflow with my Blackmagic ISO 8HD switcher. I’m stuck and would really appreciate your insights on how to achieve my goal!
My Setup:
• I have an Unreal Engine project (“CORPORATE_STAGE-ED1”) with three virtual cameras in Aximmetry: Camera_Virtual_1, Camera_Virtual_2, and Camera_Virtual_3, matching the angles of my three real cameras.
• I’m capturing the talent on a green screen with three real cameras, connected to Aximmetry via SDI inputs (using a DeckLink 8K Pro Mini card).
• In Aximmetry, I’m using the “MixedCam_Unreal_3+3_Cam” compound and the “CAMERAS” interface with the “SELECT CAMERA” switcher to toggle between V CAM 1, V CAM 2, and V CAM 3.
The Issue:
The “SELECT CAMERA” switcher only renders one virtual camera at a time (e.g., when I select V CAM 1, I see the texture in INPUT %N=1, but INPUT %N=2 and %N=3 are empty). I need to render all three virtual cameras (Camera_Virtual_1, Camera_Virtual_2, Camera_Virtual_3) simultaneously to achieve one of these two workflows:
Option 1: Green Screen + Backgrounds to Blackmagic ISO 8HD for Keying
• Send three separate SDI signals of the talent on green screen (not keyed) for each camera (Camera_Virtual_1, Camera_Virtual_2, Camera_Virtual_3).
• Send three separate SDI signals of the 3D backgrounds from Unreal Engine, each from the respective virtual camera angles.
• These signals would go to the Blackmagic ISO 8HD (e.g., green screen to inputs 1-3, backgrounds to inputs 4-6), where I’d perform the keying in the switcher.
Option 2: Composited Signals to Blackmagic ISO 8HD for Switching
• Send three composited SDI signals to the Blackmagic ISO 8HD, with the talent already keyed onto the 3D backgrounds:
• Camera_Virtual_1: Talent keyed on Camera_Virtual_1’s 3D background.
• Camera_Virtual_2: Talent keyed on Camera_Virtual_2’s 3D background.
• Camera_Virtual_3: Talent keyed on Camera_Virtual_3’s 3D background.
• These signals would go to the Blackmagic ISO 8HD (e.g., inputs 1-3), and I’d just switch between them without keying in the switcher.
My Questions:
1. How can I modify my Aximmetry project (or the “MixedCam_Unreal_3+3_Cam” compound) to render all three virtual cameras simultaneously and output their signals for Option 1 (green screen + backgrounds) or Option 2 (composited signals)?
2. For Option 1, how do I ensure the green screen signals and corresponding 3D backgrounds are sent as separate SDI outputs to the Blackmagic ISO 8HD?
3. For Option 2, how can I perform the keying in Aximmetry for all three camera angles at once and send the composited signals as separate SDI outputs?
4. Are there any specific settings or hardware considerations for this setup with my DeckLink 8K Pro Mini card?
I’d really appreciate any step-by-step advice, compound adjustments, or references to documentation that could help me achieve either workflow. I can share screenshots or more details if needed. Thanks in advance for your help!
Cheers,
All of these workflows require a multi-machine setup where each camera has its own workstation, GPU, capture card, and Aximmetry license. There is nothing Aximmetry can do about this; it's an inherent limitation of UE, since a single Unreal Engine instance only renders one camera view at a time.
Alternatively, you can do the switching and keying in Aximmetry itself and use the ATEM ISO HD8 only as a remote controller, CCU, recorder, talkback interface, streaming encoder, and audio embedder. These are all functions that could theoretically be implemented within Aximmetry, but they are much easier to handle with a piece of dedicated external hardware. You won't get transitions between cameras that way (hard cuts only), but you can still have them for DSK graphics, stingers, and full-screen media playback.
For live operation, that is all you can get from a single-workstation setup. For live-to-tape production there is a tracking recorder in Aximmetry that lets you re-render, in post, the signals of the cameras that were not switched to live, but it is currently only available for tracked cameras. For that to work on V CAMs you would need to either use only T CAM inputs, "faking" the physical camera position with the base cam transformation and using the (limited) VR path options on the T CAM controls, or roll your own V CAM tracking recorder in the Aximmetry flow.
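In principle, a homemade V CAM "tracking recorder" boils down to logging a timecode plus the live camera index and the virtual camera's transform every frame, then replaying that log as tracking data for a post re-render. Aximmetry's flow is node-based, so the Python sketch below is only illustrative pseudologic; the class and field names are hypothetical, not an Aximmetry API.

```python
import csv
from dataclasses import dataclass

@dataclass
class CamSample:
    frame: int      # frame number, standing in for timecode
    cam_index: int  # which V CAM was switched to live (1-3)
    pos: tuple      # (x, y, z) virtual camera position
    rot: tuple      # (pan, tilt, roll) in degrees
    fov: float      # field of view in degrees

class TrackingRecorder:
    """Log one sample per rendered frame; replay the log later
    to drive a per-camera re-render in post."""

    def __init__(self):
        self.samples = []

    def record(self, sample: CamSample):
        self.samples.append(sample)

    def save(self, path):
        # One CSV row per frame: frame, cam, x, y, z, pan, tilt, roll, fov
        with open(path, "w", newline="") as f:
            w = csv.writer(f)
            for s in self.samples:
                w.writerow([s.frame, s.cam_index, *s.pos, *s.rot, s.fov])

    @staticmethod
    def load(path):
        rec = TrackingRecorder()
        with open(path, newline="") as f:
            for row in csv.reader(f):
                vals = [float(v) for v in row]
                rec.record(CamSample(int(vals[0]), int(vals[1]),
                                     tuple(vals[2:5]), tuple(vals[5:8]),
                                     vals[8]))
        return rec

    def sample_at(self, frame):
        """Return the recorded sample for a given frame of the re-render."""
        return self.samples[frame]
```

The essential point is that everything needed for the re-render (camera pose, FOV, and which camera was live for the program cut) has to be captured frame-accurately during the live session; anything not logged is lost for post.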
We are currently exploring the second option, as using T CAM inputs for V CAMs comes with a few other issues. You only get six VR camera paths per camera that way (and you can't use the sequencer to expand that, either), and setting up a clean plate for this (static) camera would probably require us to rig up some sort of impromptu tracking device just to capture the clean plate correctly. Plus, it won't record VR paths anyway, so virtual camera moves would need to go directly from a sequencer into the tracking data input, which makes programming them really tedious.
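To make the tedium concrete: driving the tracking data input from a sequencer means hand-keyframing every camera move and interpolating between keyframes yourself. The sketch below is a generic illustration of that idea in Python (names are hypothetical; this is not how Aximmetry's sequencer is actually programmed), showing linear interpolation of a single pan channel.

```python
def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def sample_keyframes(keys, frame):
    """keys: list of (frame, value) pairs sorted by frame.
    Returns the interpolated value at `frame`, clamping
    before the first and after the last keyframe."""
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return lerp(v0, v1, t)

# Example: pan from 0 to 90 degrees over frames 0-100, then hold to 150.
pan_keys = [(0, 0.0), (100, 90.0), (150, 90.0)]
```

Every channel of every move (pan, tilt, position, FOV) needs its own keyframe track like this, which is why programming camera moves this way gets tedious fast.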