Detecting objects in 3D space from monocular input is crucial for applications ranging from robotics to scene understanding. Despite strong performance in indoor and autonomous-driving domains, existing monocular 3D detection models struggle with in-the-wild images due to the scarcity of in-the-wild 3D datasets and the difficulty of 3D annotation. We introduce LabelAny3D, an analysis-by-synthesis framework that reconstructs holistic 3D scenes from 2D images to efficiently produce high-quality 3D bounding-box annotations. Built on this pipeline, we present COCO3D, a new benchmark for open-vocabulary monocular 3D detection, derived from the MS-COCO dataset and covering a wide range of object categories absent from existing 3D datasets. Experiments show that annotations generated by LabelAny3D improve monocular 3D detection performance across multiple benchmarks and surpass prior auto-labeling approaches in quality. These results demonstrate the promise of foundation-model-driven annotation for scaling up 3D recognition in realistic, open-world settings.
(a) Given an input image, we first extract high-resolution object crops. (b) A holistic 3D scene is then built using robust depth estimation, 3D object reconstruction, and 2D-3D alignment. (c) Finally, 3D labels are extracted directly from the reconstructed 3D scene.
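To make the data flow of stages (a)-(c) concrete, here is a minimal Python sketch of such an auto-labeling pipeline. Every function and class name below (detect_and_crop, estimate_depth, reconstruct_object, align_2d_3d, extract_boxes, Box3D) is a hypothetical placeholder chosen for illustration; none of them are from the LabelAny3D codebase, and each stage would be backed by an off-the-shelf foundation model in practice.

```python
# Hypothetical sketch of the three-stage labeling pipeline described above.
# All names are placeholders, not the authors' actual API.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class Box3D:
    category: str
    center: tuple      # (x, y, z) in camera coordinates
    dimensions: tuple  # (width, height, length)
    yaw: float         # rotation about the vertical axis


def detect_and_crop(image: Any) -> List[Any]:
    """(a) Extract high-resolution object crops with a 2D detector/segmenter."""
    raise NotImplementedError("plug in a 2D detection / segmentation model")


def estimate_depth(image: Any) -> Any:
    """(b) Robust monocular depth estimation for the full image."""
    raise NotImplementedError("plug in a depth estimation model")


def reconstruct_object(crop: Any) -> Any:
    """(b) Per-object 3D reconstruction from a single crop."""
    raise NotImplementedError("plug in an image-to-3D reconstruction model")


def align_2d_3d(image: Any, depth: Any, crops: List[Any], objects_3d: List[Any]) -> Any:
    """(b) Place each reconstructed object consistently in the depth-derived scene."""
    raise NotImplementedError("plug in the 2D-3D alignment step")


def extract_boxes(scene: Any) -> List[Box3D]:
    """(c) Read oriented 3D bounding boxes off the reconstructed scene."""
    raise NotImplementedError("derive tight boxes from the placed objects")


def label_image(image: Any) -> List[Box3D]:
    """Compose stages (a)-(c) into one auto-labeling call for a single image."""
    crops = detect_and_crop(image)                        # (a) object crops
    depth = estimate_depth(image)                         # (b) scene depth
    objects_3d = [reconstruct_object(c) for c in crops]   # (b) per-object 3D
    scene = align_2d_3d(image, depth, crops, objects_3d)  # (b) holistic scene
    return extract_boxes(scene)                           # (c) 3D labels
```

The point of the sketch is the composition: 2D crops and full-image depth are produced independently, per-object reconstructions are lifted from the crops, and the alignment step is what ties them into a single metric scene from which boxes can be read off.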
Performance comparison of models trained on COCO3D annotations across multiple benchmarks. Our LabelAny3D-generated annotations enable strong performance on diverse 3D detection tasks.
@inproceedings{yao2025labelany3d,
title={LabelAny3D: Label Any Object 3D in the Wild},
author={Jin Yao and Radowan Mahmud Redoy and Sebastian Elbaum and Matthew B. Dwyer and Zezhou Cheng},
booktitle={Neural Information Processing Systems (NeurIPS)},
year={2025}
}