Univ.-Prof. Dr.-Ing. Horst-Michael Groß
Head of Group
E-Mail: fg-nikr@tu-ilmenau.de
Phone: +49 3677 692858
Address:
Technische Universität Ilmenau
Fakultät für Informatik und Automatisierung
Fachgebiet Neuroinformatik und Kognitive Robotik
Postfach 10 05 65
98684 Ilmenau
Visiting address:
Helmholtzplatz 5 (Zusebau)
Raum 3060
98693 Ilmenau
The dataset aims to provide a basis for training machine learning models in various domains in the context of robotic grasping of objects presented by a human.
To this end, the dataset consists of several data fields captured synchronously, as well as automatically generated label information.
The database has two parts. The first is a dataset of background images, used for augmentation of the hand-object scenes, consisting of point clouds and RGB, thermal, and depth images.
The second is the hand-object data. Here, a green screen is used when recording objects with the holding hand, which allows automatic segmentation and augmentation with the backgrounds.
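The green-screen-based segmentation mentioned above can be sketched as a simple chroma-key mask. The dominance factor and the plain green-channel test below are illustrative assumptions; the dataset's actual keying parameters are not documented here.

```python
def green_screen_mask(rgb_image, dominance=1.3):
    """Return a boolean background mask for a green-screen image.

    Sketch only: a pixel is treated as background when its green
    channel dominates red and blue by the given factor. The actual
    pipeline's chroma-keying parameters are an assumption here.
    `rgb_image` is a list of rows of (r, g, b) tuples in [0, 255].
    """
    mask = []
    for row in rgb_image:
        mask.append([
            g > dominance * max(r, b, 1)  # background if green dominates
            for (r, g, b) in row
        ])
    return mask
```

The foreground (hand and object) is then whatever the mask excludes, and background pixels can be replaced with pixels from the background dataset of part A.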
A) Backgrounds
There are separate folders for the individual samples, each containing the following files:
B) Hand/Objects
For each object there is a folder containing many samples of the object in several positions.
For each position there is a reference image of the object without the human hand, followed by a number of images showing a human hand holding the object still in its fixed position.
For each sample the following data fields have been recorded:
The cameras have been calibrated using the Azure Kinect as a reference (this device provides its own intrinsic and extrinsic parameters).
The intrinsics of the Astra Orbbec S camera and the thermal camera have been estimated using a checkerboard.
Unfortunately, the resulting focal lengths are not correct, so the resulting point clouds do not match.
A manual adjustment has been made so that the point clouds match each other visually.
The extrinsic positions of the cameras have been measured, and the rotations were adjusted manually to bring the point clouds into alignment.
A remaining problem is the Astra Orbbec camera, which has a known bug: its depth data drifts over time due to thermal effects in the device.
A workaround is a manual correction of the depth data, which multiplies the depth values by a factor that changes with the x position in the image.
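The described workaround amounts to a per-column rescaling of the depth image. A minimal sketch follows; the linear interpolation between two factors and the example factor values are assumptions for illustration, not the dataset's actual correction curve.

```python
def correct_depth_drift(depth, factor_left=1.00, factor_right=1.02):
    """Scale each depth value by a factor that varies with the x position.

    The linear factor profile and the example coefficients are
    assumptions for illustration; the real correction depends on the
    device. `depth` is a list of rows, each a list of depth values.
    """
    if not depth or not depth[0]:
        return depth
    width = len(depth[0])
    corrected = []
    for row in depth:
        new_row = []
        for x, d in enumerate(row):
            # Interpolate the correction factor across the image width.
            t = x / (width - 1) if width > 1 else 0.0
            factor = factor_left + t * (factor_right - factor_left)
            new_row.append(d * factor)
        corrected.append(new_row)
    return corrected
```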
The setup consists of a green screen background, a mounting pole also covered by a green screen, and the cameras.
Furthermore, there is a replaceable ArUco marker board that defines the reference position of the mounting pole.
That marker board is used to define the 3D position of a region of interest, from which a cube is cut out of the point cloud.
The mounting pole is also removed from the point cloud by means of that board position.
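Cutting the region-of-interest cube can be sketched as a simple axis-aligned point filter. This is a simplification: the real pipeline derives the cube's position from the detected ArUco board pose and works in that board's coordinate frame.

```python
def crop_roi_cube(points, center, edge_length):
    """Keep only points inside an axis-aligned cube around `center`.

    Simplified sketch: in the actual pipeline `center` comes from the
    ArUco board pose rather than being given directly.
    `points` is a list of (x, y, z) tuples in metres.
    """
    half = edge_length / 2.0
    cx, cy, cz = center
    return [
        (x, y, z) for (x, y, z) in points
        if abs(x - cx) <= half and abs(y - cy) <= half and abs(z - cz) <= half
    ]
```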
After the mounting pole position is defined, the 3D mesh model is fitted into the remaining point cloud using the ICP algorithm.
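The ICP fitting step can be illustrated with a toy point-to-point ICP using brute-force nearest neighbours and the Kabsch algorithm. This is a didactic sketch, not the implementation used for the dataset, and it samples the mesh as a plain point set.

```python
import numpy as np

def icp_fit(source, target, iterations=20):
    """Minimal point-to-point ICP sketch (brute-force correspondences).

    Toy illustration of the fitting step, not the dataset's actual
    implementation. `source` is the mesh model sampled as points,
    `target` the cropped scan; both are (N, 3) arrays. Returns the
    transformed source points and the accumulated (R, t) estimate.
    """
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_total = np.eye(3)
    t_total = np.zeros(3)
    for _ in range(iterations):
        # Nearest neighbour in target for every source point.
        d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=2)
        matched = tgt[d2.argmin(axis=1)]
        # Kabsch: optimal rigid transform from src onto matched points.
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total = R @ R_total
        t_total = R @ t_total + t
    return src, R_total, t_total
```

Production pipelines would use a k-d tree for correspondences and an outlier-robust variant; the brute-force version above is only meant to show the principle.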
Then all markers are removed and the green screen is placed to cover the background.
Now a reference sample is taken containing the object only.
Afterwards, the holding hand can reach into the scene and the point cloud is segmented automatically into hand and object by means of the point distance to the mesh object.
The parameters of that labeling procedure are contained in the properties file.
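The distance-based hand/object labeling can be sketched as follows, with the mesh approximated by a sampled point set. The 5 mm threshold below is an illustrative assumption; as stated above, the actual parameters are stored in each sample's properties file.

```python
def label_points(scene_points, mesh_points, max_object_dist=0.005):
    """Label scene points as 'object' or 'hand' by distance to the mesh.

    Simplified sketch: the fitted mesh is approximated by a sampled
    point set, and the 5 mm threshold is an assumption; the real
    thresholds come from the sample's properties file.
    """
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    limit2 = max_object_dist ** 2
    labels = []
    for p in scene_points:
        # A point close enough to the mesh surface belongs to the object.
        nearest2 = min(dist2(p, m) for m in mesh_points)
        labels.append("object" if nearest2 <= limit2 else "hand")
    return labels
```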
The point cloud of the Azure Kinect is not suitable for this labeling: its blurry depth image leads to false 3D points at object borders that cannot be distinguished from the hand or the background.
Therefore, we use the Astra Orbbec S point cloud for labeling, while the ArUco marker board is detected in the Azure Kinect full-HD image.
The RGB image of the Astra Orbbec is too low-resolution for robust marker detection.
The metadata for each sample contains a <split> tag that assigns the sample to one of four subsets: train, validation, test, or testUnseenObject.
The split has been done such that the subsets do not contain objects in the same pose.
Additionally, the testUnseenObject subset contains object instances that appear in neither the training nor the validation set.
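Selecting samples by their <split> tag might look like the following sketch. Only the <split> tag itself is documented above; the surrounding metadata structure (a <sample> root element) is a hypothetical assumption for illustration.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def group_samples_by_split(metadata_xml_strings):
    """Group sample metadata records by the value of their <split> tag.

    The <sample> root element is a hypothetical assumption; only the
    <split> tag is documented for the dataset.
    """
    groups = defaultdict(list)
    for xml_str in metadata_xml_strings:
        root = ET.fromstring(xml_str)
        split = root.findtext("split")  # e.g. "train" or "testUnseenObject"
        groups[split].append(root)
    return groups
```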
If you consider using the data sets on this page, please reference the following:
Stephan, B.; Köhler, M.; Müller, S.; Zhang, Y.; Gross, H.-M.; Notni, G.
OHO: A Multi-Modal, Multi-Purpose Dataset for Human-Robot Object Hand-Over.
Sensors 2023, 23(18), 7807. https://doi.org/10.3390/s23187807
Data set website: www.tu-ilmenau.de/neurob/data-sets-code/oho-dataset
To get access via FTP, please send the completed form by email to nikr-datasets-request@tu-ilmenau.de (for research purposes only).