Overview

This document illustrates a few image processing pipelines using the vision module from the niryo_edu package. This module is based on OpenCV, and its functions are detailed in the Functions documentation.

The PyNiryo package comes with the vision module, which contains image processing functions including thresholding, blob detection, …
You can import it using from pyniryo import vision.

Note

You can also import only the functions you need: from pyniryo.vision import uncompress_image.

Play with Robot video stream

Get & display image from stream

Ned can share its video stream through TCP. Since sending raw images would lead to heavy packets that could saturate the network, it sends compressed images instead. You can access them through the robot’s get_img_compressed() function. Once your image is received, you first need to uncompress it via uncompress_image(); you can then display it with show_img_and_wait_close().

from pyniryo import NiryoRobot
from pyniryo.vision import uncompress_image, show_img_and_wait_close

# Connecting to robot
robot = NiryoRobot('<robot_ip_address>')

# Getting image
img_compressed = robot.get_img_compressed()
# Uncompressing image
img = uncompress_image(img_compressed)

# Displaying
show_img_and_wait_close("img_stream", img)

Note

show_img_and_wait_close() will wait for the user to press either the Q or Esc key before closing the window.

Undistort and display video stream

Note

Since version 5.8.0 of the robot, you no longer need to undistort the image beforehand.

In this section, we are going to display the raw video stream & the undistorted video stream.

As Ned’s camera passes raw images to the robot, these images are distorted by the camera lens. In order to undistort them, we need to use Ned’s camera intrinsics.

To undistort the raw image, we use undistort_image(), which needs to be called with the parameters given by Ned through get_camera_intrinsics().

Once we have both the raw & undistorted images, we can concatenate them in order to display them at once with concat_imgs(). Finally, we display the result with show_img().

from pyniryo import NiryoRobot, PoseObject
from pyniryo.vision import uncompress_image, undistort_image, concat_imgs, show_img

observation_pose = PoseObject(0.18, -0.01, 0.35, 0.0, 1.56, 0.0)

# Connecting to robot
robot = NiryoRobot('<robot_ip_address>')
robot.calibrate_auto()

# Getting calibration params
mtx, dist = robot.get_camera_intrinsics()
# Moving to observation pose
robot.move(observation_pose)

while "User has not pressed Escape nor Q":
    # Getting image
    img_compressed = robot.get_img_compressed()
    # Uncompressing image
    img_raw = uncompress_image(img_compressed)
    # Undistorting
    img_undistort = undistort_image(img_raw, mtx, dist)

    # - Display
    # Concatenating raw image and undistorted image
    concat_ims = concat_imgs((img_raw, img_undistort))

    # Showing images
    key = show_img("Images raw & undistorted", concat_ims, wait_ms=30)
    if key in [27, ord("q")]:  # Breaks the loop if the user presses Escape or Q
        break

Note

To learn more about camera distortion/undistortion, see the OpenCV documentation about Camera Calibration.
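
For reference, here is a minimal sketch of what such an undistortion amounts to in raw OpenCV; undistort_image() presumably wraps something similar. The camera matrix and distortion coefficients below are dummy values standing in for the ones returned by get_camera_intrinsics():

import cv2
import numpy as np

# Dummy frame and intrinsics, standing in for a real image and
# the (mtx, dist) pair returned by robot.get_camera_intrinsics()
img_raw = np.zeros((480, 640, 3), dtype=np.uint8)
mtx = np.array([[600.0, 0.0, 320.0],
                [0.0, 600.0, 240.0],
                [0.0, 0.0, 1.0]])
dist = np.array([0.1, -0.25, 0.0, 0.0, 0.0])

# cv2.undistort() remaps the image so that the lens distortion is compensated
img_undistort = cv2.undistort(img_raw, mtx, dist)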

Pure image processing functions

In order to illustrate the functions, we are going to use the following image.

Image illustration

Attention

In this section, it is assumed that:

  • You have imported pyniryo.vision

  • The variable img_test contains the image on which the image processing is applied

Color thresholding

Color thresholding is very useful for detecting objects with a uniform color. The function implementing this operation is threshold_hsv().

The following code uses parameters from the ColorHSV enum to threshold Red features & hand-made parameters to extract Blue:

from pyniryo.vision import threshold_hsv, ColorHSV, show_img, show_img_and_wait_close

img_threshold_red = threshold_hsv(img_test, *ColorHSV.RED.value)

blue_min_hsv = [90, 85, 70]
blue_max_hsv = [125, 255, 255]

img_threshold_blue = threshold_hsv(img_test,
                                   list_min_hsv=blue_min_hsv,
                                   list_max_hsv=blue_max_hsv,
                                   reverse_hue=False)

show_img("img_threshold_red", img_threshold_red)

show_img_and_wait_close("img_threshold_blue", img_threshold_blue)
Images result

  • Blue → Image Threshold Blue

  • Red → Image Threshold Red
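
For the curious, this kind of HSV thresholding can be reproduced with plain OpenCV, as sketched below. Red is a special case: its hue wraps around the 0/180 boundary of OpenCV’s HSV space, which is presumably what the reverse_hue parameter handles. The bounds used here are illustrative, not the exact ColorHSV values:

import cv2
import numpy as np

def threshold_hsv_opencv(img_bgr, hsv_min, hsv_max):
    # Convert to HSV, then keep only the pixels inside [hsv_min, hsv_max]
    img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(img_hsv, np.array(hsv_min), np.array(hsv_max))

img_bgr = np.zeros((240, 320, 3), dtype=np.uint8)  # dummy frame

# Blue: a single hue interval is enough
mask_blue = threshold_hsv_opencv(img_bgr, [90, 85, 70], [125, 255, 255])

# Red: the hue wraps around 0/180, so combine both ends of the hue axis
mask_red_low = threshold_hsv_opencv(img_bgr, [0, 85, 70], [10, 255, 255])
mask_red_high = threshold_hsv_opencv(img_bgr, [170, 85, 70], [180, 255, 255])
mask_red = cv2.bitwise_or(mask_red_low, mask_red_high)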

Morphological transformations

Morphological transformations are simple operations based on the image shape. They are normally performed on binary images and need two inputs: the original image, and a structuring element (or kernel) which decides the nature of the operation. Two basic morphological operators are Erosion and Dilation.

Variant forms such as Opening and Closing also come into play. Learn more on the Wikipedia page.

The function implementing these operations is morphological_transformations(). It uses MorphoType and KernelType to determine which operation should be applied to the image.

The following code shows how to perform a Closing & an Erosion:

from pyniryo.vision import (threshold_hsv,
                            ColorHSV,
                            morphological_transformations,
                            MorphoType,
                            KernelType,
                            show_img,
                            show_img_and_wait_close)

img_threshold = threshold_hsv(img_test, *ColorHSV.ANY.value)

img_close = morphological_transformations(img_threshold,
                                          morpho_type=MorphoType.CLOSE,
                                          kernel_shape=(11, 11),
                                          kernel_type=KernelType.ELLIPSE)

img_erode = morphological_transformations(img_threshold,
                                          morpho_type=MorphoType.ERODE,
                                          kernel_shape=(9, 9),
                                          kernel_type=KernelType.RECT)

show_img("img_threshold", img_threshold)
show_img("img_erode", img_erode)
show_img_and_wait_close("img_close", img_close)
Images result

  • None → Image Threshold Any

  • Erode → Image Erode

  • Close → Image Close
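
For comparison, the same Closing & Erosion expressed directly in OpenCV (a sketch; the kernel shapes mirror the KernelType values used above):

import cv2
import numpy as np

mask = np.zeros((240, 320), dtype=np.uint8)  # dummy binary image

# Closing with an elliptic 11x11 kernel: dilation followed by erosion,
# which fills small holes inside the blobs
kernel_ellipse = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
mask_close = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel_ellipse)

# Erosion with a rectangular 9x9 kernel: shrinks blobs and removes noise
kernel_rect = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
mask_erode = cv2.erode(mask, kernel_rect)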

Contours finder

A contour can be explained simply as a curve joining all the continuous points along a boundary which have the same color or intensity. Contours are a useful tool for shape analysis and object detection and recognition. See more in the OpenCV documentation.

The function implementing this operation is biggest_contours_finder(), which takes a black & white image and extracts the biggest contours (in terms of area) from it.

The code to extract and draw the 3 biggest contours from an image is the following:

from pyniryo.vision import (threshold_hsv,
                            ColorHSV,
                            morphological_transformations,
                            MorphoType,
                            KernelType,
                            biggest_contours_finder,
                            draw_contours,
                            show_img,
                            show_img_and_wait_close)

img_threshold = threshold_hsv(img_test, *ColorHSV.ANY.value)
img_threshold = morphological_transformations(img_threshold,
                                              morpho_type=MorphoType.OPEN,
                                              kernel_shape=(11, 11),
                                              kernel_type=KernelType.ELLIPSE)

cnts = biggest_contours_finder(img_threshold, 3)

img_contours = draw_contours(img_threshold, cnts)

show_img("init", img_threshold)
show_img_and_wait_close("img with contours", img_contours)
Images result

  • Thresh + Opening → Image Threshold Any & Open

  • 3 contours → 3 contours
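
A plausible raw-OpenCV equivalent of biggest_contours_finder() is to detect all external contours and keep the largest ones by area (a sketch of the idea, not the package’s actual implementation):

import cv2
import numpy as np

mask = np.zeros((240, 320), dtype=np.uint8)  # dummy binary image
cv2.circle(mask, (160, 120), 40, 255, -1)    # draw a blob so a contour exists

# Find the external contours and keep the 3 biggest by area
contours, _hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
biggest = sorted(contours, key=cv2.contourArea, reverse=True)[:3]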

Find object center position

In order to pick an object, we need to find a pose from which the end effector can grasp it. The following method uses the contours found in the previous section and computes their barycenter and orientation via the functions get_contour_barycenter() & get_contour_angle().

from pyniryo import vision

img_threshold = vision.threshold_hsv(img_test, *vision.ColorHSV.ANY.value)
img_threshold = vision.morphological_transformations(img_threshold,
                                                     morpho_type=vision.MorphoType.OPEN,
                                                     kernel_shape=(11, 11),
                                                     kernel_type=vision.KernelType.ELLIPSE)

cnt = vision.biggest_contour_finder(img_threshold)

cnt_barycenter = vision.get_contour_barycenter(cnt)
cx, cy = cnt_barycenter
cnt_angle = vision.get_contour_angle(cnt)

img_debug = vision.draw_contours(img_threshold, [cnt])
img_debug = vision.draw_barycenter(img_debug, cx, cy)
img_debug = vision.draw_angle(img_debug, cx, cy, cnt_angle)
vision.show_img("Image with contours, barycenter and angle", img_debug, wait_ms=30)
Images result

  • Thresh + Opening → Image Threshold Any & Open

  • Barycenter + Angle → Barycenter + Angle

Note

The drawn vector is normal to the contour’s length because we want Ned to catch the object by its width rather than its length. Indeed, this leads to fewer cases where the gripper cannot open wide enough.
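
In raw OpenCV terms, the barycenter is typically computed from the contour’s image moments and the orientation from a fitted rotated rectangle. The sketch below illustrates the idea, including the 90° shift mentioned in the note so the grasp axis crosses the object’s width; this is an illustration, not PyNiryo’s exact implementation:

import math

import cv2
import numpy as np

# Build a dummy elongated blob so that a contour exists
mask = np.zeros((240, 320), dtype=np.uint8)
cv2.ellipse(mask, (160, 120), (60, 25), 30, 0, 360, 255, -1)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)

# Barycenter from the image moments
moments = cv2.moments(contour)
cx = int(moments["m10"] / moments["m00"])
cy = int(moments["m01"] / moments["m00"])

# Orientation of the fitted rotated rectangle, shifted by 90 degrees
# so that the grasp happens across the object's width, not its length
(_center, (w, h), angle_deg) = cv2.minAreaRect(contour)
if w < h:
    angle_deg += 90
grasp_angle = math.radians(angle_deg)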

Markers extraction

As image processing happens in a workspace, it is important to extract the workspace beforehand! To do so, you can use the function extract_img_workspace().

from pyniryo.vision import extract_img_workspace, show_img, show_img_and_wait_close

img_workspace = extract_img_workspace(img_test, workspace_ratio=1.0)
show_img("init", img_test)
show_img_and_wait_close("img_workspace", img_workspace)
Images result

  • Original → Original Image

  • Extracted → Extracted Image
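
Once the workspace image is extracted, a pixel position inside it can be converted into coordinates relative to the workspace via relative_pos_from_pixels(), which the final pick & place example relies on. A minimal sketch, reusing img_workspace from above (the pixel values are arbitrary):

from pyniryo.vision import relative_pos_from_pixels

# Convert a pixel position of the extracted workspace image into
# relative workspace coordinates, later usable by get_target_pose_from_rel()
px, py = 100, 150  # arbitrary pixel inside img_workspace
x_rel, y_rel = relative_pos_from_pixels(img_workspace, px, py)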

Debug mode

If Ned’s vision functions are failing, you can use the debug functions debug_threshold_color() & debug_markers() to display what the robot sees.

You can use them as follows:

from pyniryo.vision import (debug_markers,
                            debug_threshold_color,
                            show_img,
                            show_img_and_wait_close,
                            ColorHSV)

debug_color = debug_threshold_color(img_test, ColorHSV.RED)
_status, debug_markers_im = debug_markers(img_test, workspace_ratio=1.0)

show_img("init", img_test)
show_img("debug_color", debug_color)
show_img_and_wait_close("debug_markers", debug_markers_im)
Images result

  • Original → Original Image

  • Debug red → Debug Red Image

  • Debug Markers → Debug Markers Image

Do your own image processing!

Now that you are a master of image processing, let’s look at some full examples.

Display video stream with extracted workspace

As it stands, the following code displays the video stream and the extracted workspace image. You can add your own image processing functions to apply additional processing.

from pyniryo import NiryoRobot, PoseObject
from pyniryo import vision

# Connecting to robot
robot = NiryoRobot('<robot_ip_address>')
robot.calibrate_auto()

# Moving to observation pose
observation_pose = PoseObject(0.2, 0.0, 0.3, 0.0, 1.56, 0.0)
robot.move(observation_pose)

while "User has not pressed Escape nor Q":
    # Getting image
    img_compressed = robot.get_img_compressed()
    # Uncompressing image
    img = vision.uncompress_image(img_compressed)
    # Trying to find markers
    workspace_found, res_img_markers = vision.debug_markers(img)
    # Trying to extract workspace if possible
    if workspace_found:
        img_workspace = vision.extract_img_workspace(img, workspace_ratio=1.0)
    else:
        img_workspace = None

    ...

    # - Display
    # Concatenating extracted workspace with markers annotation
    if img_workspace is not None:
        resized_img_workspace = vision.resize_img(img_workspace, height=res_img_markers.shape[0])
        img = vision.concat_imgs((img, res_img_markers, resized_img_workspace))
    # Showing images
    key = vision.show_img("Image & Markers", img)
    if key in [27, ord("q")]:  # Breaks the loop if the user presses Escape or Q
        break

Vision pick via your image processing pipeline

You may want to send coordinates to Ned in order to pick the object of your choice! To do that, use the function get_target_pose_from_rel(), which converts a pose relative to the workspace into a pose in the robot’s world!
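
Before diving into the full example, here is a minimal sketch of the conversion itself. The relative coordinates (presumably normalized within the workspace) and the yaw below are placeholder values, not values from the example:

from pyniryo import NiryoRobot

robot = NiryoRobot('<robot_ip_address>')

# Hypothetical values: object at the center of "workspace_1", rotated by 0.5 rad.
# height_offset (in m) can lift the target above the workspace plane.
obj_pose = robot.get_target_pose_from_rel("workspace_1",
                                          height_offset=0.0,
                                          x_rel=0.5,
                                          y_rel=0.5,
                                          yaw_rel=0.5)
robot.pick(obj_pose)

The full example below builds such a relative pose from the image processing pipeline: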

import cv2

from pyniryo import NiryoRobot, PoseObject, ObjectColor, ObjectShape, vision

# -- MUST Change these variables
simulation_mode = True
if simulation_mode:
    robot_ip_address, workspace_name = "127.0.0.1", "gazebo_1"
else:
    robot_ip_address, workspace_name = '<robot_ip_address>', "workspace_1"

# -- Can Change these variables
grid_dimension = (3, 3)  # conditioning grid dimension
vision_process_on_robot = False  # whether the image processing happens on the robot
display_stream = True  # only used if the vision process runs on the computer

# -- Should Change these variables
# The pose from where the image processing happens
observation_pose = PoseObject(0.17, -0.0, 0.35, 0.0, 1.56, 0.0)

# Center of the conditioning area
center_conditioning_pose = PoseObject(0.01, -0.25, 0.13, 3.14, -0.01, -1.52)

# -- MAIN PROGRAM


def process(niryo_robot):
    """Run the vision pick & place loop.

    :type niryo_robot: NiryoRobot
    :rtype: None
    """
    # Initializing variables
    obj_pose = None
    try_without_success = 0
    count = 0

    # Loop
    while try_without_success < 5:
        # Moving to observation pose
        niryo_robot.move(observation_pose)

        if vision_process_on_robot:
            ret = niryo_robot.get_target_pose_from_cam(workspace_name,
                                                       height_offset=0.0,
                                                       shape=ObjectShape.ANY,
                                                       color=ObjectColor.ANY)
            obj_found, obj_pose, shape, color = ret

        else:  # Home made image processing
            img_compressed = niryo_robot.get_img_compressed()
            img = vision.uncompress_image(img_compressed)
            # Extracting the working area
            im_work = vision.extract_img_workspace(img, workspace_ratio=1.0)
            if im_work is None:
                print("Unable to find markers")
                try_without_success += 1
                if display_stream:
                    cv2.imshow("Last image seen", img)
                    cv2.waitKey(25)
                continue

            # Applying threshold on ObjectColor
            color_hsv_setting = vision.ColorHSV.ANY.value
            img_thresh = vision.threshold_hsv(im_work, *color_hsv_setting)

            if display_stream:
                vision.show_img("Last image seen", img, wait_ms=100)
                vision.show_img("Image thresh", img_thresh, wait_ms=100)
            # Getting biggest contour/blob from threshold image
            contour = vision.biggest_contour_finder(img_thresh)
            if contour is None or len(contour) == 0:
                print("No blob found")
                obj_found = False
            else:
                img_thresh_rgb_w_contour = vision.draw_contours(img_thresh, [contour])

                # Getting contour/blob center and angle
                cx, cy = vision.get_contour_barycenter(contour)
                img_thresh_rgb_w_contour = vision.draw_barycenter(img_thresh_rgb_w_contour, cx, cy)
                cx_rel, cy_rel = vision.relative_pos_from_pixels(im_work, cx, cy)

                angle = vision.get_contour_angle(contour)
                img_thresh_rgb_w_contour = vision.draw_angle(img_thresh_rgb_w_contour, cx, cy, angle)

                vision.show_img("Image thresh", img_thresh_rgb_w_contour, wait_ms=30)

                # Getting object world pose from relative pose
                obj_pose = niryo_robot.get_target_pose_from_rel(workspace_name,
                                                                height_offset=0.0,
                                                                x_rel=cx_rel,
                                                                y_rel=cy_rel,
                                                                yaw_rel=angle)
                obj_found = True
        if not obj_found:
            try_without_success += 1
            continue
        # Everything is good, so we go to the object
        niryo_robot.pick(obj_pose)

        # Computing the new place pose on the conditioning grid
        offset_x = count % grid_dimension[0] - grid_dimension[0] // 2
        offset_y = (count // grid_dimension[0]) % grid_dimension[1] - grid_dimension[1] // 2
        offset_z = count // (grid_dimension[0] * grid_dimension[1])
        place_pose = center_conditioning_pose.copy_with_offsets(0.05 * offset_x, 0.05 * offset_y, 0.025 * offset_z)

        # Placing
        niryo_robot.place(place_pose)

        try_without_success = 0
        count += 1


if __name__ == '__main__':
    # Connect to robot
    robot = NiryoRobot(robot_ip_address)
    # Changing tool
    robot.update_tool()
    # Calibrate robot if robot needs calibration
    robot.calibrate_auto()
    # Launching main process
    process(robot)
    # Ending
    robot.go_to_sleep()
    # Releasing connection
    robot.close_connection()