No virtual camera recording button - Recording in the mixed camera compound

 

I am confused about starting a recording in the MixedCam compound. The record button can only be found in the TRK inputs control board and only records the tracked camera(s). 

Furthermore, why does the preview monitor output in the CAMERAS control board only mention three cameras (instead of VCAM 1-3 & TCAM 4-6)? Does this offer any benefits?

For example, I would like to first do a virtual camera move and, when the position of the virtual camera matches my tracked camera, transition into selfie camera mode.

   RikvdReijen

 
Stefan Reck
I'm kind of in the same position here - recording and playing back tracked cameras works fine, but according to the Aximmetry documentation you can only record the moves of virtual cameras as .FBX and not play that back in Aximmetry itself for re-rendering. But maybe that has changed?
 
Eifert@Aximmetry

Hi,

The record button on the TRK inputs is designed to capture raw tracking data.
As @Stefan Reck correctly mentioned, you can currently use the Record_3-Audio.xcomp compound to record the final position of the camera into an FBX file. This is helpful for post-processing work in other software. More on this here: https://aximmetry.com/learn/virtual-production-workflow/setting-up-inputs-outputs-for-virtual-production/video/recording/how-to-record-camera-tracking-data/#final-composite-recording-1
It seems you are looking for a solution that falls between these two options, perhaps something that could replay every action performed in Aximmetry, or simply record transformations (any camera's position) into a Sequence that the Sequence Editor could use. We have a similar request noted on our internal request list, and it is being considered for future releases.


Regarding the Preview Monitor Output, it can be somewhat misleading when dealing with mixed cameras. Currently, if you select a V Cam, it can display V CAM 1, 2, and 3; if you select a T Cam, it can display T Cam 1, 2, and 3:

Unfortunately, it cannot display both tracked and virtual cameras simultaneously. If you would like, I can add this to our request list as well.


"For example I would like to first do a virtual camera move and when the position of the virtual camera matches my tracked camera transition into selfie camera mode."

For this, a tracked camera could be enough; you don't need a mixed camera. Just use the tracked camera's virtual camera movements. More information about a tracked camera's virtual camera movements can be found here: https://aximmetry.com/learn/virtual-production-workflow/green-screen-production/tracked-camera-workflow/cameras-control-board-of-tracked-camera-compounds/#camera-x-vr-path

Warmest regards,

 
Stefan Reck

Hi Eifert, 

So what are my options in the meantime for recording and replaying the data from the camera sequencer that controls my V CAM 1 path when re-rendering it for post?

- Take the camera sequencer output directly and record it as tracking data (after converting it into the correct format): that works, but I end up with FBX files that are useless within Aximmetry. Or is there a way to convert them to xdata?

- Somehow record the data from the camera sequencer output and play it back internally? Does Aximmetry have something like a timecode synchronized universal data recorder?

- Use a tracked camera input instead. But does this record the VR paths the same way as the (in this case static) tracking data? And what about the camera sequencer? I need it because I can't use smoothing with camera paths; it won't reset the camera correctly to its default angle after a shot otherwise...

- Re-render in UE directly... But I'd like to avoid this if at all possible. It adds an unnecessary layer of complexity, and the keying quality is noticeably worse than in Aximmetry with virtuals off.


 
Eifert@Aximmetry

Hi Stefan,

The tracked camera (xdata) does not record VR paths, so that workaround will not work.

If you have only one sequence in the camera sequencer, you could simply play it back. However, you probably have multiple V Cam movements triggered at different times during your production.

One solution is to record in FBX format using the Record_3-Audio.xcomp compound. Since Aximmetry cannot directly use FBX animations but can handle DAE animations, you will need to convert the FBX recording into the DAE format. You can accomplish this conversion using Blender, which is a free modeling software. Once converted, open the DAE file in Aximmetry. Connect the sequencer, which runs the animation within the DAE model, to your logic.
Note that this process will record only the final transformation. For example, if you switch camera inputs or change the Billboard during production, those changes will not be recorded and replayed.
To capture such actions, you can record them in a JSON file using Aximmetry's Flow Editor. You might even consider saving the final transformation in this manner, potentially eliminating the need to save it as an FBX file.
Some of the details along with many other things are discussed in this thread: https://my.aximmetry.com/post/2753-replaying-a-real-time-session-with-high-
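If you want to script the Blender step of that conversion, a minimal sketch looks like this (assuming Blender's bundled Python API; the file paths are placeholders, and you would run it with "blender --background --python fbx_to_dae.py"):

import bpy

# Start from an empty scene so only the imported camera animation gets exported.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Import the FBX recorded by the Record_3-Audio.xcomp compound.
bpy.ops.import_scene.fbx(filepath="C:/Recordings/take_001.fbx")

# Export as Collada (DAE), which Aximmetry can play back through a sequencer.
bpy.ops.wm.collada_export(filepath="C:/Recordings/take_001.dae")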

Warmest regards,

 
Stefan Reck

Hi Eifert,

While the solution recommended here goes way beyond what I need (I don't need to record the camera switching, and I don't need to do anything about the tracked cameras either, as they now come with their own working recording and playback function...), it goes in the right direction. Can I use the JSON "recorder" to record and play back the output data (position, zoom and focus) of the camera sequencer along with timecode? What could that look like?

 
Eifert@Aximmetry

Hi Stefan,

You can implement the following logic to save position, zoom, and focus into a JSON file according to the Master Timecode and then play it back with the Master Timecode:
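In plain code terms, the recording and playback sides of that logic behave roughly like this (a Python sketch for illustration only; the field names are made up, and the real thing is built from Flow Editor modules):

import json

# --- Recording: one entry per rendered frame, keyed by the timecode value (as text).
recorded = {}

def record_frame(timecode_text, cam_transform, zoom, focus):
    recorded[timecode_text] = {
        "transform": cam_transform,   # e.g. the camera transformation as 16 floats
        "zoom": zoom,
        "focus": focus,
    }

def save_take(path):
    with open(path, "w") as f:
        json.dump(recorded, f)

# --- Playback: look up the entry that matches the current timecode.
def play_frame(take, timecode_text):
    return take.get(timecode_text)    # None if there is no entry for that frame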

Warmest regards,

 
Stefan Reck

OK, that looks promising. I don't want it to play back to the master timecode though, as this is a free-running time-of-day TC in our case, which makes organizing takes in post a lot easier. So for the playback side I need to take the timecode from the raw video recording generated alongside the JSON "tracking data" in a normal video recording module instead, right?

I assume that I can then feed the output of the collectors in the replay logic straight back into the sequencer pins of the 3+3 mixed cam module... What about the timecode though? Will it pass along frame-accurate with the video and end up correctly in a recorder at the PGM output? Or do I have to route and shift it manually?


 
Eifert@Aximmetry

Hi,

Yes, you should use the timecode from the video recording. The Video Player modules have a timecode output pin that you can use. However, it might be best to simply enable the Master Timecode in the module and follow the above logic with the Master Timecode:



Yes, it is probably best if you connect the recorded camera position back to the sequencer pins in the virtual camera compounds.

Regarding your question about shifting the timecode, I am unclear as to why that would be needed. You can set everything to record the same timecode and then play it back with the same timecode seamlessly.
The Video Player modules also have a Timecode Sync pin and a Ref Timecode input pin that you can use to sync multiple videos with each other.

Warmest regards,


 
Stefan Reck

So I've set up and tested your recording logic in the studio, and it generates a JSON file with the correct cam transformations like it should. However, I can't play it back. On further examination I found that this is a timecode issue: peeking at the master timecode shows me two lines, a frame number (?) in the first line and the actual master timecode in the second line. The Integer to Text module strips off the actual timecode, leaving only the frame number. This is what gets recorded in the JSON file, and it does not match what I later get by also feeding the camera signal player's timecode output through an Integer to Text module. The actual timecode of the video file is correct, but the frame number is not. So no number from the player "timecode" ever matches the data in the JSON file, and obviously nothing gets played back.

What am I missing here? Shouldn't the system record an integer-to-text of the actual timecode instead of the frame number?

 
Eifert@Aximmetry

Hi Stefan

The second line when peeking at the Master Timecode pin represents the hexadecimal version of the same integer number. All integer pins display these two lines. The Integer to Text module doesn't alter the actual value; the difference is that text pins do not show the hexadecimal line when peeking.

Could it be that the video was recorded at a frame rate different from the project's rendering frame rate? You should always record live production in Realtime frame rate. Only in post-production (offline rendering) can you change the Frame Rate of the video recorder. You can find more information on this topic here: https://aximmetry.com/learn/virtual-production-workflow/setting-up-inputs-outputs-for-virtual-production/video/recording/how-to-record-camera-tracking-data/#offline-rendering
Note that the Video Player has a Frame Rate pin so you can easily change the recorded video's frame rate.

The Timecode consists of seconds and frames within a second. You might consider using the Timecode to Text module instead of the Integer to Text module to see this more clearly. Furthermore, you can use this to record in the JSON file, although it will make the file very slightly larger.
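As a plain-numbers illustration of that split (the 50 fps value is only an example; it must match your project's rendering frame rate):

FPS = 50  # example frame rate; use your project's rendering frame rate

def frames_to_timecode(frame_count, fps=FPS):
    # Breaks an integer frame count into hours, minutes, seconds and frames.
    frames = frame_count % fps
    total_seconds = frame_count // fps
    return "{:02d}:{:02d}:{:02d}:{:02d}".format(
        total_seconds // 3600, (total_seconds // 60) % 60, total_seconds % 60, frames)

# frames_to_timecode(180125) -> "01:00:02:25" at 50 fps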

Note that if you record at a higher frame rate, you could interpolate the frames down to a lower frame rate. However, it's best to ensure that everything runs at the same frame rate.

Warmest regards,

 
Stefan Reck

So I did some more testing, and switching to the Timecode to Text format did not fix the issue either. I get the impression, though, that the Hold Collection module in the recording section does not flush out old text lines when I stop a recording and then start a new one - how is this supposed to work?

Also, there seems to be a general issue with the master timecode. Turning it on in a Video Recorder module will not work; I actually have to wire the master timecode from the system params into it to get it to record correctly.

 
Eifert@Aximmetry

Hi,

The Hold Collection is there to keep your tracking data in the Flow Editor even after you stop recording. Depending on your setup, you might not even need it. In the logic above, the data must pass through the Hold Collection during recording to save the tracking data. To do so, ensure the Pass If pin on the Hold Collection is activated, and the Pass Periodically option is deactivated.
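For what it's worth, the Hold Collection's role in that logic can be pictured roughly like this (a plain-Python approximation, not Aximmetry internals):

class HoldCollection:
    # Rough model: items flow through and are kept only while Pass If is active,
    # and what was collected stays available after recording stops.
    def __init__(self):
        self.items = []
        self.pass_if = False          # corresponds to the Pass If pin

    def push(self, item):
        if self.pass_if:              # data only passes through during recording
            self.items.append(item)

    def export(self):
        return list(self.items)       # still readable after Pass If is turned off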

When you enable the Timecode Master, the recorder module should be connected after that module. Otherwise, if the recorder is not connected to that logic, it may execute before the other module sets the master timecode.
To resolve this, you can wire the master timecode from the system parameters as you did, or use a Force Execution module with high priority after the module that sets the Timecode Master.

Warmest regards,

 
Stefan Reck

So I got this to record and play back, thanks for that. I'm currently using direct timecode wiring from my reference input instead of the master timecode function, which does not seem to work for me, but that's OK for now.

There were a few hiccups; for example, I really need to watch the order commands are given in. First I need to start collection, then start the video recorder. Next up, export the JSON file and *then* turn off collection and recording. Then the module that loads the JSON file for playback needs to have its "open" pin cycled, otherwise it won't load the correct file.

Now to make it a bit more usable. My goal would be to have this behave the same way as the tracked camera recorder. Press a button on a panel to start recording, with auto numbering for both the video and the JSON file. Then click on a second button, select the file (preferably with the JSON file with the same name selected automatically as well) and play it back...

This would also need some switching logic for changing the inputs for V CAM1 on the camera compound, so let's start here.

 
Eifert@Aximmetry

Hi,

You need something like this:

This setup records a video along with the tracking data, using the video's timecode during playback. The video can help in identifying the recordings even if you don't need the video.
The naming of the files is managed by the Video Recorder module. Once the recording is stopped, the file is stored in the Pin Collector’s Video File pin, from which the Playback logic retrieves the file locations.
The Pin Collector can be added as a panel on any Control Board, similar to the recording panels in tracked camera compounds.
More about Pin Collector here: https://aximmetry.com/learn/virtual-production-workflow/scripting-in-aximmetry/flow-editor/special-compound-pin-collector/
More about Control Board here: https://aximmetry.com/learn/virtual-production-workflow/scripting-in-aximmetry/flow-editor/special-compound-control-board/

Note that I had to move the Force Execution after the Hold Collection module, compared to my previous image.


I am using the SEQ paths for playback.
If you would like to automatically switch to SEQ when playback is activated and revert when it stops, consider the following:

This solution switches to the External Control Mode option and then back to normal camera mode: https://aximmetry.com/learn/virtual-production-workflow/green-screen-production/virtual-camera-workflow/cameras-control-board-of-virtual-camera-compounds/#external-control-mode

Warmest regards,

 
Stefan Reck

That looks nice, thank you. In my case the video recorder would be fed with the raw camera signal of V CAM 1 so I have that available for playback. Putting the name of the files I just recorded into the playback logic is a nice touch, but it would actually be more useful if I could make it automatically load the matching JSON file whenever I manually select a video file.

On another note, I also need the Camera Sequencer active during live recording; the 16 "normal" camera paths are pretty much unusable for me at the moment, as I can't use automatic smoothing on them (see my other post). Also, the JSON "recording" always needs to reflect what happens to V CAM 1, regardless of which other camera is selected in the meantime. So I actually need to switch the Seq Cam Transf 1, Seq Zoom Factor 1 and Seq Focus Distance 1 pins between the output of the sequencer and the collection transformation whenever I play a JSON file back.

 
Stefan Reck
So what would be the best module to switch the input signals, both video and tracking data? 
 
Eifert@Aximmetry

Hi,

The logic uses any video file you define for playback in the Video File pin of the Pin Collector, even if you have defined it manually.

I used "Tracking" as a suffix for the JSON file, you are free to use any other label as long as you update it consistently in both relevant locations:

To switch between the input and the recording, you can use the If or Switch modules. Such a module is available for every data (pin) type, except for certain types that can be directly converted to a vector data type; in those cases you can connect the pin directly to the If or Switch module, as with trigger pins.

Warmest regards,

 
Stefan Reck

OK, so with a few If modules, automatic switching seems to work fine as well. Thanks for that :-)

One thing I can't figure out is how to get the video file box in the control board to interface with the file explorer. I have to manually type the path there; it does not have the three dots at the end of the line like the Video Recorder and Player modules do. I can't drag and drop files in there either...

 
Eifert@Aximmetry

Hi,

If you don't see the three dots, it means that the Pin Collector's pin is not directly connected with a pin of a module that also has three dots. For example, if you are using an If Text module in between:

If the direct connection is not possible, you can add a Dash File module, provided that the Allowed Types include "Video":

Warmest regards,

 
Stefan Reck

That's cool, we're *almost* there...

Connecting a Dash File module set to video to the Pin Collector that is linked to the control board gives me access to the file browser, which is fine for selecting a file for playback. But I can't actually set a base file name for new recordings there, because the name and path for the recording are set directly through the #Record pin on the Pin Collector. I tried to pass the file name directly into the recorder, but that did not work.

So setting the recording path and base file name still needs to be done manually. It's doable, but it would be a lot easier to use if I could also use the file browser for this. 

 
Eifert@Aximmetry

Hi,

I am not exactly sure how you want to automate this, but here are some options:

You can create the following logic with the Dash File module (and set Dash File's Allowed Types pin to Video):

The above Video Recorder will overwrite any video that you dragged and dropped into the Pin Collector on the Control Board:

Note that I am using the Exact Path pin of the Video Recorder instead of the Output Folder and Output File pins, which are an alternative way to define the save location. With the Exact Path pin, the Auto Numbering option doesn't work.

If you prefer to use Auto Numbering to avoid overwriting recordings, you can connect the Path Split module to the Output Folder and Output File pins. Alternatively, you could develop your own auto-numbering logic using the Path Split along with other modules.
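The auto-numbering idea itself is simple; as a plain-Python sketch (in Aximmetry you would build the equivalent from Path Split, text and counter modules):

from pathlib import Path

def next_numbered_path(folder, base_name, ext=".mov"):
    # Returns e.g. D:/Takes/interview_003.mov, picking the first unused index.
    n = 1
    while True:
        candidate = Path(folder) / "{}_{:03d}{}".format(base_name, n, ext)
        if not candidate.exists():
            return str(candidate)
        n += 1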

Another option is to just directly connect the Output Folder and Output File pins to a Pin Collector. You can then set their values when selecting the Pin Collector on the Control Board.

Warmest regards,

 
Stefan Reck

Thanks, that got me on the right track. I ended up making two different pin collectors for recording and playback, just like the control board for the tracked cameras has. This also allowed me to put a few controls for things like bitrate onto the recording control as well, which is nice.

Now for the finishing touches:

- I'd like the play button to reset itself when a video file has ended. There is an "ended" pin on the player, but where does it need to go to do this?

Alternatively, I also thought about adding a stop button to both the play and record panels, as the default button style makes it really hard to tell whether they are active or not. In this case I would need something like a flip-flop module, but I can't find one...

- How can I get feedback about the state of the player and recorder out to bitfocus companion?


 
Eifert@Aximmetry

Hi,

You can simply reconnect it to the Pin Collector:


If you want to use two buttons to perform a flip-flop action, you can use the Bitoggle module.
Note that you will need to use the trigger data type for the buttons in this case:
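In plain logic terms, the two-button flip-flop is just this (a Python stand-in for what the Bitoggle setup does):

class FlipFlop:
    def __init__(self):
        self.active = False       # drives e.g. the "recording" state

    def on_start_trigger(self):   # first button (trigger type)
        self.active = True

    def on_stop_trigger(self):    # second button (trigger type)
        self.active = False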



I am not certain that Bitfocus Companion supports feedback. Aximmetry can also receive commands through OSC from Bitfocus and not just through the Aximmetry connection type. But I don't believe there's a way to send OSC to Bitfocus.
On the other hand, if you're using Stream Deck or Loupedeck, you should use Aximmetry’s official plugin for these devices to receive feedback:
Stream Deck: https://aximmetry.com/learn/virtual-production-workflow/setting-up-inputs-outputs-for-virtual-production/external-controllers/using-elgato-stream-deck-to-control-a-scene/
Loupedeck: https://aximmetry.com/learn/virtual-production-workflow/setting-up-inputs-outputs-for-virtual-production/external-controllers/using-loupedeck-consoles-razer-stream-controller-to-control-a-scene/

Warmest regards,

 
Stefan Reck

Thanks, I'll have a look into that...

On the topic of remote control I would really like to see more cooperation between system providers/manufacturers like yourself and the folks over at Bitfocus. Companion has become the de facto industry standard control environment in streaming studios and event production because it provides a convenient, completely vendor agnostic way of sending commands and receiving feedback. Anything that the Companion community (it's open source after all) has been able to figure out the control protocol for can be integrated. You don't even need a stream deck to do this - our camera switching and recording controls for example are triggered from the hardware buttons on a Blackmagic ATEM HD8 ISO video switcher. The native stream deck software on the other hand is severely limited by the willingness (or lack thereof) of the actual equipment manufacturers and software companies to write plugins for Elgato.

 
Eifert@Aximmetry

Hi Stefan,

I've looked into Companion, and while you cannot trigger the feature named "feedback" in Companion, you can trigger other actions from Aximmetry. To achieve this, you can use various connections, but probably the best is to use the HTTP option. The available commands are listed in the Settings tab:

In Aximmetry, you simply need an HTTP Request module with POST turned on:

Note that if you change a variable in Companion with such a command, you can likely create various functions and events in Companion that will respond to it.
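As an outside-of-Aximmetry illustration of such a request (the URL is a placeholder; the actual HTTP commands and port are listed in Companion's Settings tab and depend on your Companion version):

import requests

# Placeholder example: set a Companion custom variable named "aximmetry_state".
# Check Companion's Settings tab for the exact path and port exposed by your version.
requests.post(
    "http://127.0.0.1:8000/api/custom-variable/aximmetry_state/value",
    data="recording",
    timeout=1.0,
)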

I've added this to our request list for better integration with Companion, but note that we didn't develop the Companion connection (plugin) for Aximmetry.

Warmest regards,
