Vision
This page presents the Vision command functions, enums, Niryo topics and namedtuples available with the Vision API.
Vision - Command functions
This section references all existing functions to control your robot arm, which include:
Getting camera image
Detecting objects
Managing workspaces
All functions to control the robot are accessible via an instance of the NiryoRobot class
robot = NiryoRobot(<robot_ip_address>)
robot.vision.vision_pick("workspace_1", 0.0, ObjectShape.ANY, ObjectColor.ANY)
robot.vision.detect_object("workspace_1", ObjectShape.ANY, ObjectColor.ANY)
...
See examples in the Examples section
List of functions subsections:
- class pyniryo2.vision.vision.Vision(client, arm=None, tool=None)
Vision robot functions
Example:
ros_instance = NiryoRos("10.10.10.10")  # Hotspot
vision_interface = Vision(ros_instance)
- Parameters:
client (NiryoRos) – Niryo ROS client
Camera functions
- property Vision.get_img_compressed
Get image from video stream in a compressed format. Use uncompress_image from the vision package to uncompress it.
Examples:
import pyniryo

img_compressed = vision.get_img_compressed()
camera_info = vision.get_camera_intrinsics()
img = pyniryo.uncompress_image(img_compressed)
img = pyniryo.undistort_image(img, camera_info.intrinsics, camera_info.distortion)
- Returns:
string containing a JPEG compressed image
- Return type:
str
- property Vision.get_camera_intrinsics
Get calibration object: camera intrinsics and distortion coefficients. The topic returns a namedtuple(intrinsics: list[list[float]], distortion: list[list[float]])
Examples:
vision.get_camera_intrinsics()
vision.get_camera_intrinsics.value

def camera_info_callback(camera_info):
    print(camera_info.intrinsics)
    print(camera_info.distortion)

vision.get_camera_intrinsics.subscribe(camera_info_callback)
vision.get_camera_intrinsics.unsubscribe()
- Return type:
CameraInfo
- property Vision.get_image_parameters
Returns the NiryoTopic giving the last stream image parameters: brightness factor, contrast factor and saturation factor. The topic returns a namedtuple(brightness_factor: float, contrast_factor: float, saturation_factor: float)
Brightness factor: How much to adjust the brightness. 0.5 will give a darkened image, 1 will give the original image while 2 will enhance the brightness by a factor of 2.
Contrast factor: A factor of 1 gives the original image. Moving the factor towards 0 makes the image greyer, while a factor greater than 1 increases the contrast.
Saturation factor: 0 will give a black and white image, 1 will give the original image while 2 will enhance the saturation by a factor of 2.
Examples:
vision.get_image_parameters()
vision.get_image_parameters.value

def image_parameters_callback(image_parameters):
    print(image_parameters.brightness_factor)
    print(image_parameters.contrast_factor)
    print(image_parameters.saturation_factor)

vision.get_image_parameters.subscribe(image_parameters_callback)
vision.get_image_parameters.unsubscribe()
- Returns:
ImageParameters namedtuple containing the brightness factor, contrast factor and saturation factor.
- Return type:
ImageParameters
- Vision.set_brightness(brightness_factor)
Modify image brightness
- Parameters:
brightness_factor (float) – How much to adjust the brightness. 0.5 will give a darkened image, 1 will give the original image while 2 will enhance the brightness by a factor of 2.
- Return type:
None
- Vision.set_contrast(contrast_factor)
Modify image contrast
- Parameters:
contrast_factor (float) – A factor of 1 gives the original image. Moving the factor towards 0 makes the image greyer, while a factor greater than 1 increases the contrast.
- Return type:
None
- Vision.set_saturation(saturation_factor)
Modify image saturation
- Parameters:
saturation_factor (float) – How much to adjust the saturation. 0 will give a black and white image, 1 will give the original image while 2 will enhance the saturation by a factor of 2.
- Return type:
None
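As a purely illustrative sketch of what these factors mean (this is not the robot's actual on-board pipeline, and the helper name is hypothetical), a brightness factor simply scales pixel intensities, clamped to the valid 8-bit range:

```python
def apply_brightness(pixels, factor):
    """Scale 8-bit pixel intensities by `factor`, clamping to the 0-255 range.

    Illustration only: the robot applies the factor itself, inside its
    video-stream pipeline; this mirrors the documented semantics
    (0.5 darkens, 1 keeps the original, 2 doubles the brightness).
    """
    return [min(255, max(0, round(p * factor))) for p in pixels]

print(apply_brightness([0, 100, 200], 0.5))  # darkened: [0, 50, 100]
print(apply_brightness([0, 100, 200], 2.0))  # brightened and clamped: [0, 200, 255]
```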
Detection functions
- Vision.get_target_pose_from_cam(workspace_name, height_offset=0.0, shape=ObjectShape.ANY, color=ObjectColor.ANY)
First detects the specified object using the camera, then returns the robot pose at which the object can be picked with the current tool
- Parameters:
workspace_name (str) – name of the workspace
height_offset (float) – offset between the workspace and the target height
shape (ObjectShape) – shape of the target
color (ObjectColor) – color of the target
- Returns:
object_found, object_pose, object_shape, object_color
- Return type:
(bool, PoseObject, ObjectShape, ObjectColor)
- Vision.vision_pick(workspace_name, height_offset=0.0, shape=ObjectShape.ANY, color=ObjectColor.ANY)
Picks the specified object from the workspace. This function has multiple phases:
1. detect the object using the camera
2. prepare the current tool for picking
3. approach the object
4. move down to the correct picking pose
5. actuate the current tool
6. lift the object

Example:
robot = NiryoRobot(ip_address="x.x.x.x")
robot.arm.calibrate_auto()
robot.arm.move_pose(<observation_pose>)
obj_found, shape_ret, color_ret = robot.vision.vision_pick(<workspace_name>,
                                                           height_offset=0.0,
                                                           shape=ObjectShape.ANY,
                                                           color=ObjectColor.ANY)
- Parameters:
workspace_name (str) – name of the workspace
height_offset (float) – offset between the workspace and the target height
shape (ObjectShape) – shape of the target
color (ObjectColor) – color of the target
- Returns:
object_found, object_shape, object_color
- Return type:
(bool, ObjectShape, ObjectColor)
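Because a single detection can miss under changing lighting, it is common to wrap vision_pick in a small retry loop. A minimal sketch (the helper name is hypothetical; `vision` is any object exposing the vision_pick method described above):

```python
def vision_pick_with_retries(vision, workspace_name, max_attempts=3, **pick_kwargs):
    """Call vision_pick up to max_attempts times.

    Returns (shape, color) of the picked object on the first success,
    or None if every attempt reports object_found == False.
    """
    for _ in range(max_attempts):
        object_found, shape, color = vision.vision_pick(workspace_name, **pick_kwargs)
        if object_found:
            return shape, color
    return None
```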
- Vision.move_to_object(workspace_name, height_offset, shape, color)
Same as get_target_pose_from_cam but directly moves to this position
- Parameters:
workspace_name (str) – name of the workspace
height_offset (float) – offset between the workspace and the target height
shape (ObjectShape) – shape of the target
color (ObjectColor) – color of the target
- Returns:
object_found, object_shape, object_color
- Return type:
(bool, ObjectShape, ObjectColor)
- Vision.detect_object(workspace_name, shape=ObjectShape.ANY, color=ObjectColor.ANY)
Detect object in workspace and return its pose and characteristics
- Parameters:
workspace_name (str) – name of the workspace
shape (ObjectShape) – shape of the target
color (ObjectColor) – color of the target
- Returns:
object_found, object_pose, object_shape, object_color
- Return type:
(bool, PoseObject, str, str)
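Since detect_object filters on a single shape/color pair per call, scanning for several candidate colors takes one call each. A hedged sketch of such a scan (the helper name is hypothetical; `vision` is any object exposing the detect_object method described above):

```python
def detect_first_color(vision, workspace_name, colors):
    """Try each candidate color in order until detect_object reports a match.

    Returns (pose, shape, color) for the first detection, or None if no
    candidate color is found in the workspace.
    """
    for color in colors:
        object_found, pose, shape, obj_color = vision.detect_object(workspace_name,
                                                                    color=color)
        if object_found:
            return pose, shape, obj_color
    return None
```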
Workspace functions
- Vision.get_target_pose_from_rel(workspace_name, height_offset, x_rel, y_rel, yaw_rel)
Given a pose (x_rel, y_rel, yaw_rel) relative to a workspace, this function returns the robot pose in which the current tool will be able to pick an object at this pose.
The height_offset argument (in m) defines how high the tool will hover over the workspace. If height_offset = 0, the tool will nearly touch the workspace.
- Parameters:
workspace_name (str) – name of the workspace
height_offset (float) – offset between the workspace and the target height
x_rel (float) – x relative pose (between 0 and 1)
y_rel (float) – y relative pose (between 0 and 1)
yaw_rel (float) – Angle in radians
- Returns:
target_pose
- Return type:
PoseObject
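The core idea of a relative pose can be sketched with plain arithmetic. This is a simplified, hypothetical model for an axis-aligned workspace (the real function also accounts for the workspace's orientation, calibration and the current tool):

```python
def rel_to_workspace_xy(x_rel, y_rel, origin_xy, width_m, height_m):
    """Map a relative pose (x_rel, y_rel in [0, 1]) to absolute x/y coordinates.

    origin_xy is the workspace origin corner in robot coordinates (metres).
    (0, 0) maps to the origin corner, (1, 1) to the opposite corner.
    """
    return (origin_xy[0] + x_rel * width_m, origin_xy[1] + y_rel * height_m)

# Centre of a hypothetical 10 cm x 10 cm workspace whose origin is at (0.20, 0.00)
x, y = rel_to_workspace_xy(0.5, 0.5, (0.20, 0.00), 0.10, 0.10)
```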
- Vision.save_workspace_from_robot_poses(workspace_name, pose_origin, pose_2, pose_3, pose_4)
Save a workspace by giving the poses at which the robot points at its 4 corners with the calibration tip. Corners must be given in the right order; the markers' poses will be deduced from these robot poses.
Poses should be either a list [x, y, z, roll, pitch, yaw] or a PoseObject.
- Parameters:
workspace_name (str) – workspace name
pose_origin (Union[list[float], PoseObject])
pose_2 (Union[list[float], PoseObject])
pose_3 (Union[list[float], PoseObject])
pose_4 (Union[list[float], PoseObject])
- Return type:
None
- Vision.save_workspace_from_points(workspace_name, point_origin, point_2, point_3, point_4)
Save a workspace by giving the points of the workspace's 4 corners. Points are written as [x, y, z]. Corners must be given in the right order.
- Parameters:
workspace_name (str) – workspace name
point_origin (list[float])
point_2 (list[float])
point_3 (list[float])
point_4 (list[float])
- Return type:
None
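Since the corner order matters, a quick geometric sanity check before saving can catch swapped corners: for a rectangular workspace, opposite sides should have roughly equal lengths. A minimal sketch (the helper name is hypothetical; points are [x, y, z] in metres, as expected by save_workspace_from_points):

```python
import math

def corner_side_lengths(point_origin, point_2, point_3, point_4):
    """Distances between consecutive corners, in the order they are given.

    Returns [d(origin, p2), d(p2, p3), d(p3, p4), d(p4, origin)].
    """
    corners = [point_origin, point_2, point_3, point_4]
    return [math.dist(corners[i], corners[(i + 1) % 4]) for i in range(4)]

# A hypothetical flat 10 cm x 10 cm workspace
sides = corner_side_lengths([0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                            [0.1, 0.1, 0.0], [0.0, 0.1, 0.0])
```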
- Vision.delete_workspace(workspace_name)
Delete workspace from robot’s memory
- Parameters:
workspace_name (str) – name of the saved workspace
- Return type:
None
- Vision.get_workspace_ratio(workspace_name)
Get workspace ratio from robot’s memory
- Parameters:
workspace_name (str) – workspace name
- Return type:
float
- Vision.get_workspace_list()
Get the list of workspace names stored in the robot's memory
- Return type:
list[str]
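get_workspace_list pairs naturally with delete_workspace for cleanup: checking membership first avoids deleting a workspace that does not exist. A minimal sketch (the helper name is hypothetical; `vision` is any object exposing the two methods described above):

```python
def delete_workspace_if_exists(vision, workspace_name):
    """Delete a workspace only if it is actually stored in the robot's memory.

    Returns True if a deletion happened, False otherwise.
    """
    if workspace_name in vision.get_workspace_list():
        vision.delete_workspace(workspace_name)
        return True
    return False
```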
Vision - Niryo Topics
The use of these functions is explained in the NiryoTopic section. They allow the acquisition of data in real time, either through callbacks or through direct calls.
| Name | Function | Return type |
|---|---|---|
| get_img_compressed | Last compressed image from the video stream | str |
| get_camera_intrinsics | Camera intrinsics and distortion coefficients | CameraInfo |
| get_image_parameters | Brightness, contrast and saturation factors of the stream | ImageParameters |
Vision - Enums
List of enums:
- ObjectColor
- ObjectShape
- ManageWorkspace
- class pyniryo2.vision.enums.ObjectColor(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Enumeration of Colors available for image processing
- RED = 'RED'
- BLUE = 'BLUE'
- GREEN = 'GREEN'
- ANY = 'ANY'
- class pyniryo2.vision.enums.ObjectShape(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Enumeration of Shapes available for image processing
- SQUARE = 'SQUARE'
- CIRCLE = 'CIRCLE'
- ANY = 'ANY'
- class pyniryo2.vision.enums.ManageWorkspace(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)
Enumeration of actions available for workspaces management
- SAVE = 1
- SAVE_WITH_POINTS = 2
- DELETE = -1
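ObjectColor and ObjectShape members carry their name as a string value, which is convenient for logging and serialization. A local re-declaration of ObjectColor (for illustration only, mirroring the documented values; real code should import the enum from pyniryo2) shows how the members behave:

```python
from enum import Enum

# Local mirror of pyniryo2.vision.enums.ObjectColor, re-declared here only
# to illustrate the documented values.
class ObjectColor(Enum):
    RED = 'RED'
    BLUE = 'BLUE'
    GREEN = 'GREEN'
    ANY = 'ANY'

print(ObjectColor.ANY.value)  # 'ANY'
# A member can be looked up from its string value, e.g. when parsing a config
print(ObjectColor('RED'))
```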
Vision - Namedtuple
- class pyniryo2.vision.objects.CameraInfo(intrinsics, distortion)
Create new instance of CameraInfo(intrinsics, distortion)
- distortion
Alias for field number 1
- intrinsics
Alias for field number 0
- class pyniryo2.vision.objects.ImageParameters(brightness_factor, contrast_factor, saturation_factor)
Create new instance of ImageParameters(brightness_factor, contrast_factor, saturation_factor)
- brightness_factor
Alias for field number 0
- contrast_factor
Alias for field number 1
- saturation_factor
Alias for field number 2
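Both classes are plain namedtuples, so fields can be read by name or by index ("field number"). A sketch using local mirrors declared with collections.namedtuple (for illustration only; the real classes live in pyniryo2.vision.objects, and the numeric values below are made up):

```python
from collections import namedtuple

# Local mirrors of the documented namedtuples, for illustration only.
CameraInfo = namedtuple('CameraInfo', ['intrinsics', 'distortion'])
ImageParameters = namedtuple('ImageParameters',
                             ['brightness_factor', 'contrast_factor',
                              'saturation_factor'])

info = CameraInfo(intrinsics=[[900.0, 0.0, 320.0]],
                  distortion=[[0.1, -0.05, 0.0, 0.0, 0.0]])
print(info.intrinsics[0][0])        # access by field name: 900.0
print(info[1] is info.distortion)   # field number 1 aliases .distortion: True

params = ImageParameters(1.0, 1.0, 1.0)
print(params.brightness_factor)     # 1.0
```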