Hello! How does Aximmetry determine the bit depth in a workflow from input to output? For example, if I set the output on an SDI port to 10-bit, but the input live camera capture is set to Auto (which defaults to 8-bit), what is the net result of this configuration? Additionally, what happens if I set the input/capture to 10-bit, but the render output to 8-bit?
I know that ideally these should all match, but I found that if I set all ports to 8-bit, I don't have a problem with GPU and CPU overloading, and my frame rate stays steady at 24 fps. If I set either of these to 10-bit, my GPU and CPU get overloaded. From my reading/research, for keying I would want 10-bit to get the best possible key. I'm using SDI ports for both input and output.
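For context on why 10-bit matters for keying, here is a minimal sketch of generic bit-depth math. This is an illustration only, not a statement of how Aximmetry converts internally (that internal behavior is exactly what I'm asking about); the shift/replication conversions shown are just the common generic approach.

```python
# Illustration only: generic 8-bit vs 10-bit quantization math.
# NOT a description of Aximmetry's internal pipeline.

def to_8bit(v10: int) -> int:
    """Downconvert a 10-bit sample (0-1023) to 8-bit by dropping the 2 LSBs."""
    return v10 >> 2

def to_10bit(v8: int) -> int:
    """Upconvert an 8-bit sample (0-255) to 10-bit by bit replication."""
    return (v8 << 2) | (v8 >> 6)

# 8-bit gives 256 code values per channel; 10-bit gives 1024,
# i.e. 4x finer gradation for a chroma keyer to discriminate with.
print(2 ** 8, 2 ** 10)  # 256 1024

# A round trip through 8-bit loses the 2 low bits of precision:
print(to_8bit(515), to_10bit(to_8bit(515)))  # 128 514
```

The point being: once a sample has passed through an 8-bit stage anywhere in the chain, the extra precision can't be recovered by setting a later stage to 10-bit.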
In the end I probably need a new GPU, but I want to know the internal specifics so that I can understand and troubleshoot with some confidence, and justify the expense of a new GPU, and maybe a new CPU/computer as well.
Thanks!