Publication Details
Cheap Rendering vs. Costly Annotation: Rendered Omnidirectional Dataset of Vehicles
Keywords: Realistic Rendering, Dataset of Vehicles, Omnidirectional Views, Computer Vision, Object Detection
Abstract: Vehicle detection in traffic surveillance requires large, high-quality training datasets to achieve competitive detection rates. We present an approach to the automatic synthesis of custom datasets that simulates the major sources of variation: viewpoint, camera parameters, sunlight, surrounding environment, etc. Our goal is to create a competitive vehicle detector that "has not seen a real car before." We use Blender as the modeling and rendering engine. We created a suitable scene graph, accompanied by a set of scripts, that allows simple configuration of the synthesized dataset. The generator can also store a rich set of metadata that serves as annotations of the synthesized images. We synthesized several experimental datasets and evaluated their statistical properties in comparison to real-life datasets. Most importantly, we trained a detector on the synthetic data; its detection performance is comparable to that of a detector trained on a state-of-the-art real-life dataset. Synthesizing a dataset of 10,000 images takes only several hours, which is far more efficient than manual annotation and avoids the possibility of human error in the annotations.
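The abstract describes driving Blender through scripts to render vehicles from many viewpoints and to emit annotations alongside the images. The following is a minimal illustrative sketch of that kind of Blender Python (bpy) script, not the authors' actual generator: the object names ("Car", "Camera"), output paths, orbit radius and height, azimuth step, and metadata fields are all assumptions made for the example.

    import bpy
    import json
    import math
    from mathutils import Vector

    scene = bpy.context.scene
    car = bpy.data.objects["Car"]      # assumed name of the vehicle model
    cam = bpy.data.objects["Camera"]   # assumed name of the scene camera

    for i, azimuth in enumerate(range(0, 360, 30)):  # omnidirectional viewpoints
        # Place the camera on a circle around the vehicle, slightly elevated
        # (radius and height are illustrative parameters).
        radius, height = 8.0, 2.0
        a = math.radians(azimuth)
        cam.location = Vector((radius * math.cos(a), radius * math.sin(a), height))

        # Aim the camera at the vehicle.
        direction = car.location - cam.location
        cam.rotation_euler = direction.to_track_quat('-Z', 'Y').to_euler()

        # Render the frame to a file next to the .blend file.
        scene.render.filepath = "//render_%04d.png" % i
        bpy.ops.render.render(write_still=True)

        # Store the rendering parameters as machine-generated annotations,
        # sidestepping manual labeling and its potential for human error.
        meta = {
            "image": "render_%04d.png" % i,
            "azimuth_deg": azimuth,
            "camera_location": list(cam.location),
            "vehicle": car.name,
        }
        with open(bpy.path.abspath("//render_%04d.json" % i), "w") as f:
            json.dump(meta, f, indent=2)

The paper's generator additionally varies camera parameters, sunlight, and the surrounding environment; in a sketch like this, those would become further loop dimensions alongside the azimuth.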
@inproceedings{BUT111601,
  author="Peter {Šlosár} and Roman {Juránek} and Adam {Herout}",
  title="Cheap Rendering vs. Costly Annotation: Rendered Omnidirectional Dataset of Vehicles",
  booktitle="Proceedings of Spring Conference on Computer Graphics",
  year="2014",
  pages="105--112",
  publisher="Comenius University in Bratislava",
  address="Smolenice",
  doi="10.1145/2643188.2643191",
  isbn="978-80-223-3601-7",
  url="http://medusa.fit.vutbr.cz/SynthCars/"
}