Hiya. I'm wondering if anyone has approached automating the rendering of clips for which you've recorded the tracking data and multiple camera angles.
We've done a little of this, but we still had an operator comping each clip and rendering them one by one, which took some time.
I've actually set up a simple render queue before for when we were working with post compositing (rather than live). I'm sure I'd do a better job now but there's a bit of added complexity when working with multiple camera angles and key settings.
I'll probably automate the renaming and reloading of clips, then integrate some kind of preset-selection system (key, matte, CC, etc.) based on the camera angle, and finally some way of queuing these clips to be played out and recorded to a HyperDeck.
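For what it's worth, the queue/preset-selection part of that plan could be sketched roughly like this. Everything here is hypothetical (the angle names, preset names, and `RenderJob` structure are all made up for illustration) — the real key/matte/CC settings would live inside the compositing tool, with a script like this only deciding which preset each clip gets and in what order the clips play out:

```python
from dataclasses import dataclass
from queue import Queue

# Hypothetical mapping from camera angle to a named preset
# (key, matte, CC settings bundled under one name).
ANGLE_PRESETS = {
    "cam_a": "preset_wide",
    "cam_b": "preset_close",
    "cam_c": "preset_top",
}

@dataclass
class RenderJob:
    clip_path: str      # recorded clip to reload
    camera_angle: str   # which angle it was shot from
    preset: str         # preset chosen for that angle

def build_render_queue(clips):
    """Build a FIFO queue of jobs from (clip_path, angle) pairs."""
    jobs = Queue()
    for clip_path, angle in clips:
        preset = ANGLE_PRESETS.get(angle, "preset_default")
        jobs.put(RenderJob(clip_path, angle, preset))
    return jobs

def drain(jobs):
    """Placeholder for the playout/record step."""
    processed = []
    while not jobs.empty():
        job = jobs.get()
        # Here you'd apply the preset, trigger playout,
        # and start/stop the recorder for this clip.
        processed.append(job)
    return processed
```

The interesting design question is where the angle-to-preset mapping lives — a table like the above is easy to edit per production, whereas baking it into each clip's metadata would make clips self-describing.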
I'm gathering requirements for our project now but thought it was worth checking here first in case anyone's tackled it yet!
Hi,
Control Board Presets might be able to help you in your endeavor. You can learn more about Presets here: https://aximmetry.com/learn/virtual-production-workflow/preparation-of-the-production-environment-phase-i/scripting-in-aximmetry/flow-editor/special-compound-control-board/#presets
Additionally, why use a HyperDeck to record? If you use Aximmetry to record the final composition, you can render at a quality and resolution that might otherwise overwhelm your computer's performance in real time. More information on how to render in non-real-time with Aximmetry can be found here: https://aximmetry.com/learn/virtual-production-workflow/preparation-of-the-production-environment-phase-i/setting-up-inputs-outputs-for-virtual-production/video/recording/how-to-record-camera-tracking-data/#offline-rendering
Warmest regards,