Scene object placer with Aruco Marker

 

Hi Eifert (and all),

I've been recently testing the marker detector (which works pretty well for camera placement) in order to place virtual objects in my Unreal scene (with Broadcast DE).

What I thought would be an easy setup is proving more complicated than I anticipated.

From my understanding, the marker detector gives the relative location/orientation of the marker center from the point of view of the camera looking at it.

So, I thought:

Camera Origin transform + Aruco Marker transform relative to camera = Marker world transform.
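In matrix terms, that composition is just a multiplication of 4x4 transforms. A minimal NumPy sketch of the idea (the poses are made-up example values, not anything Aximmetry outputs):

```python
import numpy as np

def transform(translation, yaw_deg=0.0):
    """Build a 4x4 world transform from a translation and a yaw rotation.
    Illustrative only; Aximmetry handles this internally."""
    t = np.radians(yaw_deg)
    m = np.eye(4)
    m[:3, :3] = [[np.cos(t), 0.0, np.sin(t)],
                 [0.0, 1.0, 0.0],
                 [-np.sin(t), 0.0, np.cos(t)]]
    m[:3, 3] = translation
    return m

# Camera placed 2 m behind the origin, marker seen 1 m in front of the camera.
cam_world = transform([0.0, 1.5, -2.0])
marker_rel_cam = transform([0.0, -0.5, 1.0])

# Marker world transform = camera world transform composed with
# the marker's camera-relative transform.
marker_world = cam_world @ marker_rel_cam
print(marker_world[:3, 3])  # marker world position: [0, 1, -1]
```

The key point is that "adding" transforms is matrix multiplication, so the order matters: the camera-to-world transform goes on the left.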

In the flow editor, I proceeded as follows (simplified):

(screenshot of the flow editor setup attached)

The trouble is that I get inconsistent results depending on the "Cam transform values", "origins", "scene node transform binding types", "Add Transf Type", etc.

For instance, using the same marker both as the scene center (Detect Origin in the TRK inputs) and as the object to place should return a zero transform (or near zero, accounting for rounding errors), which doesn't seem to be the case here.
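For what it's worth, that zero-check is just the identity inv(M) · M = I: if the same marker pose M defines both the scene origin and the object, the object expressed in the origin's frame must come out with zero translation and no rotation. A tiny sketch with made-up numbers:

```python
import numpy as np

# Hypothetical poses: camera in the world, marker as seen by the camera.
cam_world = np.eye(4); cam_world[:3, 3] = [1.0, 1.6, -3.0]
marker_rel_cam = np.eye(4); marker_rel_cam[:3, 3] = [0.2, -0.4, 2.5]

marker_world = cam_world @ marker_rel_cam

# Same marker used as both origin and object: its pose relative to
# the origin it defines must be the identity (zero offset, no rotation).
object_in_scene_frame = np.linalg.inv(marker_world) @ marker_world
print(np.allclose(object_in_scene_frame, np.eye(4)))  # -> True
```

If the flow graph is composing the transforms correctly, this is the result the setup described above should reproduce.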

So, my two questions:

- Is there already a working compound that does exactly this?

- If not, what am I doing wrong?

Thanks for your help.

   ericmarodon

 
ericmarodon

Has no one else tried to use the markers to place objects in the scene?

 
Eifert@Aximmetry

Hi,

I think the only thing you might be missing here is the distinction between the tracked camera's transformation and the virtual camera's transformation. They are connected to the Unreal module in two separate ways.
What you connected in your image is the virtual camera's transformation:
(screenshot from the original post attached)

However, the tracked camera's transformation is located inside the Control Data collection pin rather than on the Cam Transform pin.
To obtain the tracked camera's transformation from the collection, use a Collection Transformation module and set its Key to Cam Transform:

This setup exists because, historically, tracked cameras and virtual cameras were two separate compounds, and they connect to the Unreal module through these two distinct routes.

Warmest regards,

 
ericmarodon

Thanks a lot Eifert, I'll try that ASAP.

Another quick question, regarding the Scene Node: there's a "Binding Type" (None, Look At, Look At Horizon, Look At Plane...) and an "Add Transf Type" (None, Local, World).

I believe I should choose "World" for the "Add Transf Type" in my case, since I'm looking for world coordinates in the end.

But what is the Binding Type about? It does seem to make a difference when switching from one option to another.

Thanks a lot,

Eric

 
Eifert@Aximmetry

Hi,

Firstly, the Scene Node module is designed to control objects within Aximmetry-rendered scenes; for example, it even shows gizmos in the preview panel when selected. Instead of using the Scene Node, you could consider the much simpler Transformation Concat module to combine two transformations.

Using the Binding Type of the Scene Node, you can link two objects together so that, for example, one always faces toward the other. There are some examples of it here: https://my.aximmetry.com/post/144-target-camera-type
Of course, you could also use this for the transformation data you send to Unreal Engine. There is an example of that in this discussion: https://my.aximmetry.com/post/3695-look-at-camera-3d-mesh

Warmest regards,

 
Eifert@Aximmetry

Hi,

Also, you can simplify the logic by connecting the final camera transformation directly to the Marker Detector. This eliminates the need for the Scene node or Transformation Concat module. Like this:

The reason for this is that the Marker Detector provides a transformation relative to its Cam Transform input pin.
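To make that concrete: the detector effectively composes whatever is wired into its Cam Transform pin with the marker's camera-relative pose, so feeding it the final camera transform yields the marker's world pose directly, with no separate concat step. A toy model of that behavior (my own sketch based on the description above, not Aximmetry code):

```python
import numpy as np

def detector_output(cam_transform_input, marker_rel_to_camera):
    """Toy model of the Marker Detector: its output is the marker pose
    composed onto whatever is wired into its Cam Transform pin.
    (An assumption for illustration, per the behavior described above.)"""
    return cam_transform_input @ marker_rel_to_camera

# Hypothetical poses.
cam_world = np.eye(4); cam_world[:3, 3] = [0.0, 1.5, -2.0]
marker_rel = np.eye(4); marker_rel[:3, 3] = [0.3, 0.0, 1.2]

# Route A: identity on the pin, then concatenate yourself
# (the Scene Node / Transformation Concat approach).
route_a = cam_world @ detector_output(np.eye(4), marker_rel)

# Route B: wire the final camera transform straight into the pin.
route_b = detector_output(cam_world, marker_rel)

print(np.allclose(route_a, route_b))  # -> True: both give the world pose
```

Route B simply moves the same multiplication inside the detector, which is why the extra module becomes unnecessary.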

Warmest regards,

