Getting started with ItemPick¶
This tutorial shows how to set up the rc_visard for the ItemPick module and guides the reader through computing their first grasps.
Before we start¶
In order to go through this tutorial, the following prerequisites should be met:
The rc_visard is properly configured:
- The rc_visard is running the latest firmware (version 22.04) and the rc_visard’s license includes the ItemPick module. This can be verified in the Web GUI.
One or more workpieces are in the field of view of the camera. They should meet the following requirements:
- The workpiece surface, shape and weight are suitable for picking by suction (i.e. the vacuum generated by the gripper system together with the size of the suction cup generate a suction force large enough to lift and move the workpiece).
- The workpiece surface appears in the depth image. This can be verified by placing the workpiece in front of the rc_visard and checking the Web GUI’s Depth image page. If the workpiece presents holes or low confidence regions (i.e. average gray pixels in the Confidence image), one can follow the Tuning of image parameters tutorial.
- The workpiece has at least the minimum dimensions required by the ItemPick module. This threshold is set to 300 pixels in the depth image, which approximately corresponds to an object of size 0.04 m x 0.04 m at a distance of 1.2 m from the sensor.
Setting up the scene¶
We recommend mounting the rc_visard on a tripod or a static support, as explained in the rc_visard’s Mechanical interface description.
Alternatively, the rc_visard can be mounted on the end-effector of a robot, but the integration with the robot is not covered in this tutorial (therefore, no hand-eye calibration is required to complete the tutorial).
The mounting should be stable such that the sensor remains static while ItemPick acquires data.
The workpieces to be grasped should be placed in the field of view of the rc_visard. The optimal distance to the objects depends on the rc_visard model, as shown in the table below.
|                  | Minimum distance | Maximum distance |
|------------------|------------------|------------------|
| rc_visard 160m/c | 0.5 m            | 1.3 m            |
| rc_visard 65m/c  | 0.2 m            | 1.0 m            |
Since the depth accuracy declines quadratically with an object’s distance from the sensor, the rc_visard should be placed as close to the objects as possible.
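The quadratic decline of depth accuracy with distance can be sketched numerically. In the following minimal sketch, the baseline, focal length, and disparity-error values are illustrative assumptions, not rc_visard specifications:

```python
def depth_error(z, baseline=0.16, focal_px=1000.0, disparity_err_px=0.25):
    """Approximate stereo depth error at distance z (in metres).

    Uses the standard stereo model err ~ z^2 * d_err / (f * b), so
    doubling the distance roughly quadruples the depth error.
    All parameter defaults are illustrative assumptions.
    """
    return z**2 * disparity_err_px / (focal_px * baseline)

# Error grows quadratically: doubling z quadruples the error.
for z in (0.5, 1.0, 2.0):
    print(f"{z:.1f} m -> {depth_error(z) * 1000:.2f} mm")
```

Whatever the exact sensor parameters, the quadratic term dominates, which is why moving the sensor closer to the objects pays off disproportionately.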
Configuring image parameters¶
Once the scene has been set up, we recommend checking the Web GUI’s Depth image page to verify that the images are well-exposed and the depth image is dense, with no holes on the workpieces. The tutorial Tuning of image parameters covers all required steps to get the best quality for stereo and depth images. The depth image density can also be increased by adding the rc_randomdot projector to the rc_visard.
ItemPick provides its best results when the depth image quality is set to High or Full. The Static mode might be beneficial in static scenes, but it increases the data acquisition time.
Computing the first grasps¶
The compute_grasps service of the ItemPick module triggers the computation of suction grasps on the objects in the scene.
This section shows how to place a compute_grasps request to ItemPick using the rc_visard’s REST-API interface. This can be done in the Swagger UI, on the command line or in scripts using curl, or programmatically using a client library (e.g. from the robot controller). In this tutorial we focus on the first two options.

The compute_grasps service can also be triggered in the Try Out section of the Web GUI’s ItemPick page. The required arguments are pre-set with default values, so the service can be called directly by hitting the Detect Grasp Points button.
In the examples below, the service is triggered with only the minimum set of required arguments. For this reason, the results might be sub-optimal (e.g. multiple grasps on a single workpiece, or grasps on unwanted objects).
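As a sketch of the programmatic option, the request body can be assembled in a short Python script. The device address is a placeholder, and the argument names (pose_frame, suction_surface_length, suction_surface_width) are assumptions for illustration; the authoritative argument list is shown in the Swagger UI:

```python
import json

# Placeholder address -- replace with your rc_visard's IP or hostname.
HOST = "192.168.0.100"
URL = f"http://{HOST}/api/v1/nodes/rc_itempick/services/compute_grasps"

# Minimal argument set, wrapped in an "args" object as the REST-API expects.
# The parameter names below are assumptions for illustration.
request_body = {
    "args": {
        "pose_frame": "camera",          # report grasps in the camera frame
        "suction_surface_length": 0.05,  # suction-cup footprint in metres
        "suction_surface_width": 0.03,
    }
}

# With network access to the device, the service is triggered via HTTP PUT,
# e.g. with the third-party "requests" library:
#   import requests
#   response = requests.put(URL, json=request_body).json()
#   for grasp in response["response"]["grasps"]:
#       print(grasp["pose"])

print(json.dumps(request_body, indent=2))
```

The equivalent curl call sends the same JSON body with `curl -X PUT` to the URL above.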
The images in the top section of the Web GUI’s ItemPick page show the results of the last successful request to ItemPick.