Chat plugin: Integrating AI chat models into Aximmetry

Dear Aximmetry Team,


I am writing to formally request the development of a new feature and plugin for Aximmetry that integrates AI models directly into the virtual production software to enhance the production workflow. I understand that some open-source models already work with Unreal Engine, which makes Unreal easier to use.


I believe the integration of AI could significantly aid users through the following three core functions:


1. Replica and Blueprint Generation: The AI should be able to study an image and build a digital replica, including a blueprint, based on a simple user prompt.

2. Guided Process with On-Screen Display: To assist users when they are stuck, the plugin should provide a step-by-step guide via an on-screen display to navigate complex tasks.

3. Studio Assist: This feature would allow the AI to fully operate the production based on guided time frames, automatically switching views and sources. This would be particularly beneficial for solo producers who lack the additional staff to manage multiple inputs simultaneously.


Regarding the business model, I suggest offering this as a subscription service with monthly or annual tiers for Studio users, while perhaps keeping it free for Broadcast users, or free for everyone to encourage new user adoption.


Thank you for considering this request. I believe these features would greatly add to the value and accessibility of Aximmetry for all types of creators.


   Digiafrik

Comments

Eifert@Aximmetry

Hi,

I am glad to say that Aximmetry is embracing AI as a tool to enhance, not replace, production workflows.

We have just released Aximmetry 2026.2.0 BETA, which introduces new compounds for accessing OpenAI’s text generation, image generation, image editing, and video generation capabilities. More information about this update is available here: https://aximmetry.com/learn/virtual-production-workflow/which-aximmetry-is-right-for-you/software-version-history/#%E2%96%BA-2026-2-0 
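For readers curious what such a compound does under the hood: the image-generation compounds ultimately call OpenAI's REST API. As an illustration only (the model name, prompt, and endpoint details are a sketch of the public OpenAI Images API, not Aximmetry's internal implementation), the request body looks roughly like this:

```python
import json

# Sketch of a payload for OpenAI's POST /v1/images/generations endpoint.
# Model name and prompt are placeholders; inside Aximmetry, the compound
# builds and sends this for you based on its input pins.
payload = {
    "model": "gpt-image-1",
    "prompt": "A virtual production studio with an LED wall",
    "size": "1024x1024",
    "n": 1,  # number of images to generate
}
body = json.dumps(payload)  # serialized JSON sent over HTTPS with an API key
```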

This will be followed by MCP support in version 2026.3.0, which we will be showcasing with video demonstrations at NAB Show 2026 in April.

We are introducing native MCP (Model Context Protocol) support, making Aximmetry the first virtual production platform to adopt this framework.
The integration allows production teams to bring their preferred AI agent, connecting it to Aximmetry through a structured tool interface that provides controlled access to runtime operations, compound inspection, validation workflows, and optional GUI assistance, all within clear operational boundaries.

For studio technicians, MCP introduces intelligent assistance for managing complex productions in Aximmetry. AI agents can inspect modules and pin values, trigger predefined actions, adjust parameters, assist with show setup, and help operators troubleshoot compounds. Because MCP exposes structured tool definitions and policy controls, AI systems operate within defined constraints, reducing the risks associated with ad hoc scripting or unsecured automation.
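To make "structured tool definitions and policy controls" concrete: the Model Context Protocol describes each tool with a name, a description, and a JSON Schema for its input. The tool below is purely hypothetical (`set_pin_value` and its parameters are invented for illustration, not Aximmetry's actual MCP surface), but it shows the shape of such a definition and how a required-argument check gives a minimal policy gate:

```python
# Hypothetical MCP tool definition following the protocol's tool shape
# (name, description, JSON Schema input). The tool name and parameters
# are illustrative only, not Aximmetry's real interface.
SET_PIN_VALUE_TOOL = {
    "name": "set_pin_value",
    "description": "Set the value of a pin on a compound, within policy limits.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "compound_path": {"type": "string", "description": "Path to the compound"},
            "pin_name": {"type": "string", "description": "Pin to modify"},
            "value": {"description": "New value for the pin"},
        },
        "required": ["compound_path", "pin_name", "value"],
    },
}

def validate_call(args: dict) -> bool:
    """Minimal policy check: reject tool calls missing required arguments."""
    required = SET_PIN_VALUE_TOOL["inputSchema"]["required"]
    return all(key in args for key in required)
```

Because the agent can only act through tools declared this way, anything not exposed as a tool simply cannot be done, which is the "defined constraints" advantage over ad hoc scripting.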

For technical directors and production leaders, the integration delivers operational benefits through more repeatable workflows and faster access to system knowledge. Teams can use AI assistance to reduce setup friction, improve consistency, and support operators during time-critical production scenarios.

As for your exact suggestions:

  1. If you expect AI to fully create a scene with traditional 3D models based on a single image, current models are probably not advanced enough to do that yet. Also, note that MCP support is being developed for Aximmetry, not for Unreal, at least initially.

  2. One of MCP’s main advantages is that the AI does not need to ask you to set every parameter yourself. It will be able to change and set almost anything on its own, which makes an on-screen display largely unnecessary.

  3. You can most likely already build these kinds of workflows using the new AI compounds available in version 2026.2.0 BETA. From a practical perspective, a model like ChatGPT 5.4 Nano may be a good option because it provides fast and cost-effective image understanding.

    I would also be happy to help with setups of this kind. We are still discovering all the use cases and the most effective ways to work with these new compounds. If you run into any issues, we can share the relevant documentation as well.
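The "Studio Assist" idea from the original request reduces, at its core, to cue-based switching: a timeline of cues and a rule for which source is active at any moment. Outside Aximmetry (where this would be wired up with compounds rather than code), the scheduling logic can be sketched as follows; the cue times and source names are invented:

```python
import bisect

# Illustrative cue list: (time in seconds, source name), sorted by time.
# Names and timings are placeholders for a real show rundown.
CUES = [(0.0, "CAM_1"), (12.0, "CAM_2"), (30.0, "SCREEN_SHARE"), (55.0, "CAM_1")]

def active_source(t: float) -> str:
    """Return the source scheduled at time t: the last cue at or before t."""
    times = [cue_time for cue_time, _ in CUES]
    i = bisect.bisect_right(times, t) - 1
    return CUES[max(i, 0)][1]  # clamp so times before the first cue use cue 0

print(active_source(5.0))   # CAM_1
print(active_source(31.5))  # SCREEN_SHARE
```

An AI compound could generate or adjust such a cue list from a prompt, while the deterministic lookup above keeps the actual switching predictable during the show.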

One more note: using open-source models locally on your computer is unlikely to be a practical solution for quite a while. Many of these models cannot run efficiently on consumer-grade graphics cards, and smaller versions are often much less capable than the original models. They may also consume significant GPU resources that would otherwise be needed for rendering, while still delivering relatively slow performance.

Warmest regards,