Hello All,
We’re currently facing performance challenges with 3D asset processing in our OTMM environment and would appreciate your expert guidance.
Recently, our customer has started uploading large and complex .obj 3D files in bulk. Individual files range from roughly 100 MB to 1.67 GB, with some as large as 3 GB. During the transformation phase, the system becomes unresponsive, and Blender consumes most of the available memory, significantly impacting ingestion operations.
To address this, we provisioned a dedicated physical server equipped with a graphics card and deployed it as the dmts-image node.
Current Server Specs:
- OS: Windows Server 2019
- CPU: Intel Core i9-10900X @ 3.70 GHz
- RAM: 32 GB
- GPU: NVIDIA GeForce RTX 3080 Ti
- Blender Settings: CUDA and OptiX enabled, GPU Compute mode selected (see the sketch after this list)
- Windows Graphics Settings: Blender set to High Performance
- OTMM version: 21.4.10, Blender version: 2.93.3
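A minimal sketch of how GPU compute is enabled for Cycles in a headless Blender run, to show what we mean by the settings above (not our exact script; enable_gpu.py is just a placeholder name):

# enable_gpu.py - minimal sketch for enabling GPU compute in Cycles (Blender 2.9x)
# Run headless, e.g.: blender --background --python enable_gpu.py
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # or "CUDA"
prefs.get_devices()                   # refresh the detected device list

for device in prefs.devices:
    device.use = True                 # enable every detected device for Cycles
    print("Enabled:", device.name, device.type)

bpy.context.scene.cycles.device = "GPU"   # render the scene on the GPU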
Issue Observed:
Even with this setup:
- Blender consumes more than 29 GB of RAM, pushing total memory usage to 99%
- Meanwhile, the CPU and GPU remain mostly idle:
  - GPU usage: < 5%
  - CPU usage: < 20%
This results in system instability and blocks other ingestion processes from executing smoothly.
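If it helps, the per-process usage we are describing can be reproduced with a small monitoring script like the one below. This is only an illustrative sketch and assumes the third-party psutil package is installed:

# monitor_blender.py - sketch: log memory and CPU use of running Blender processes
# Assumes: pip install psutil
import time
import psutil

while True:
    total_gb = 0.0
    for proc in psutil.process_iter(["name", "memory_info", "cpu_percent"]):
        name = proc.info["name"] or ""
        if "blender" in name.lower():
            rss_gb = proc.info["memory_info"].rss / (1024 ** 3)
            total_gb += rss_gb
            print(f"PID {proc.pid}: {rss_gb:.1f} GB RSS, CPU {proc.info['cpu_percent']:.0f}%")
    print(f"Total Blender RSS: {total_gb:.1f} GB")
    time.sleep(5)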
Request:
We need assistance in tuning Blender and the DMTS image service so that:
- GPU resources are better utilized
- RAM consumption is optimized
- Overall performance of transformation and ingestion improves
Has anyone faced a similar scenario, or could you suggest configuration optimizations for Blender to offload more work to the GPU and reduce memory strain?
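In case it is useful for framing the question, a script like the following can be used to confirm which compute devices Blender actually detects on the dmts-image node (again a minimal sketch; list_devices.py is a placeholder name):

# list_devices.py - sketch: print the compute devices Cycles reports
# Run headless, e.g.: blender --background --python list_devices.py
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.get_devices()  # refresh the detected device list

print("compute_device_type:", prefs.compute_device_type)
print("scene render device:", bpy.context.scene.cycles.device)
for device in prefs.devices:
    print(f"{device.name}  type={device.type}  enabled={device.use}")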
Thanks,
jayaram