anki_vector.vision

Utility methods for Vector’s vision.

Vector can detect various types of objects through his camera feed.

The VisionComponent class defined in this module is made available as anki_vector.robot.Robot.vision and can be used to enable/disable vision processing on the robot.

Classes

VisionComponent(robot) – VisionComponent exposes controls for the robot’s internal image processing.
class anki_vector.vision.VisionComponent(robot)

VisionComponent exposes controls for the robot’s internal image processing.

The anki_vector.robot.Robot or anki_vector.robot.AsyncRobot instance owns this vision component.

Parameters: robot – A reference to the owner Robot object.
close()

Close all the running vision modes and wait for a response.
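For example, a minimal sketch that shuts the vision modes down explicitly (the connection teardown at the end of the with block will normally do this for you):

import anki_vector

with anki_vector.Robot() as robot:
    robot.vision.enable_face_detection()
    # ... react to camera images and face events here ...
    # Explicitly stop all running vision modes before the connection closes.
    robot.vision.close()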

enable_custom_object_detection(detect_custom_objects=True)

Enable custom object detection on the robot’s camera.

Parameters: detect_custom_objects (bool) – Specify whether we want the robot to detect custom objects.
import anki_vector
with anki_vector.Robot() as robot:
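    # Turn on detection of custom objects that have been defined through the SDK.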
    robot.vision.enable_custom_object_detection()
enable_display_camera_feed_on_face(display_camera_feed_on_face=True)

Display the robot’s camera feed on its face along with any detections (if enabled).

Parameters: display_camera_feed_on_face (bool) – Specify whether we want to display the robot’s camera feed on its face.
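For example, a short sketch in the same style as the snippet above:

import anki_vector

with anki_vector.Robot() as robot:
    # Mirror the camera feed, plus any enabled detections, on Vector's face.
    robot.vision.enable_display_camera_feed_on_face()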
enable_face_detection(detect_faces=True, estimate_expression=False)

Enable face detection on the robot’s camera.

Parameters:
  • detect_faces (bool) – Specify whether we want the robot to detect faces.
  • detect_smile – Specify whether we want the robot to detect smiles in detected faces.
  • estimate_expression (bool) – Specify whether we want the robot to estimate what expression detected faces are showing.
  • detect_blink – Specify whether we want the robot to detect how much detected faces are blinking.
  • detect_gaze – Specify whether we want the robot to detect where detected faces are looking.
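For example, a minimal sketch that turns on face detection together with expression estimation (both keyword arguments come from the signature above):

import anki_vector

with anki_vector.Robot() as robot:
    # Detect faces and also estimate the expression each detected face is showing.
    robot.vision.enable_face_detection(detect_faces=True, estimate_expression=True)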