GraspNet: A Large-Scale Clustered and Densely Annotated Dataset for
Object Grasping
H. Fang, C. Wang, M. Gou, and C. Lu. (2019). arXiv:1912.13470. Comment: Report for our recent work.
Abstract
Object grasping is critical for many applications and is also a challenging
computer vision problem. However, for cluttered scenes, current research
suffers from insufficient training data and a lack of evaluation benchmarks.
In this work, we contribute a large-scale grasp pose detection dataset with a
unified evaluation system. Our dataset contains 87,040 RGBD images with over
370 million grasp poses. Meanwhile, our evaluation system directly reports
whether a grasp is successful by analytic computation, so it can evaluate any
kind of grasp pose without exhaustively labeling ground-truth poses. We
conduct extensive experiments to show that our dataset and evaluation system
align well with real-world experiments. Our dataset, source code and models
will be made publicly available.
@misc{fang2019graspnet,
abstract = {Object grasping is critical for many applications and is also a
challenging computer vision problem. However, for cluttered scenes, current
research suffers from insufficient training data and a lack of evaluation
benchmarks. In this work, we contribute a large-scale grasp pose detection
dataset with a unified evaluation system. Our dataset contains 87,040 RGBD
images with over 370 million grasp poses. Meanwhile, our evaluation system
directly reports whether a grasp is successful by analytic computation, so it
can evaluate any kind of grasp pose without exhaustively labeling ground-truth
poses. We conduct extensive experiments to show that our dataset and
evaluation system align well with real-world experiments. Our dataset, source
code and models will be made publicly available.},
author = {Fang, Hao-Shu and Wang, Chenxi and Gou, Minghao and Lu, Cewu},
keywords = {2019 dataset grasp},
note = {arXiv:1912.13470. Comment: Report for our recent work},
title = {GraspNet: A Large-Scale Clustered and Densely Annotated Dataset for
Object Grasping},
url = {http://arxiv.org/abs/1912.13470},
year = 2019
}