anki_vector.robot

The main robot class for managing Vector.

Classes

AsyncRobot([serial, ip, config, …]) The AsyncRobot object is just like the Robot object, but allows multiple commands to be executed at the same time.
Robot([serial, ip, config, default_logging, …]) The Robot object is responsible for managing the state and connections to a Vector, and is typically the entry point for running the SDK.
class anki_vector.robot.AsyncRobot(serial=None, ip=None, config=None, default_logging=True, behavior_activation_timeout=10, cache_animation_list=True, enable_face_detection=False, enable_camera_feed=False, enable_audio_feed=False, enable_custom_object_detection=False, enable_nav_map_feed=None, show_viewer=False, show_3d_viewer=False, requires_behavior_control=True)

The AsyncRobot object is just like the Robot object, but allows multiple commands to be executed at the same time. To achieve this, all gRPC function calls also return a concurrent.futures.Future.

There are two ways to get connected:

1. Using with: it works just like opening a file, and will close when the with block’s indentation ends.

import anki_vector
# Create the robot connection
with anki_vector.AsyncRobot() as robot:
    # Run your commands
    robot.anim.play_animation("anim_turn_left_01").result()

2. Using connect() and disconnect() to explicitly open and close the connection: it allows the robot’s connection to continue in the context in which it started.

import anki_vector
# Create a Robot object
robot = anki_vector.AsyncRobot()
# Connect to Vector
robot.connect()
# Run your commands
robot.anim.play_animation("anim_turn_left_01").result()
# Disconnect from Vector
robot.disconnect()
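
Because every call returns a concurrent.futures.Future, several commands can be issued back-to-back and waited on together. A minimal sketch (reusing the animation name from the examples above):

import anki_vector

with anki_vector.AsyncRobot() as robot:
    # Both calls return immediately with futures...
    anim_future = robot.anim.play_animation("anim_turn_left_01")
    battery_future = robot.get_battery_state()
    # ...and only block when their results are requested.
    anim_future.result()
    battery_state = battery_future.result()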
Parameters:
  • serial (Optional[str]) – Vector’s serial number. Used to identify which Vector configuration to load.
  • ip (Optional[str]) – Vector’s IP Address. (optional)
  • config (Optional[dict]) – A custom dict to override values in Vector’s configuration. (optional) Example: {"cert": "/path/to/file.cert", "name": "Vector-XXXX", "guid": "<secret_key>"} where cert is the certificate to identify Vector, name is the name on Vector’s face when his backpack is double-clicked on the charger, and guid is the authorization token that identifies the SDK user. Note: Never share your authentication credentials with anyone.
  • default_logging (bool) – Toggle default logging.
  • behavior_activation_timeout (int) – The time to wait for control of the robot before failing.
  • cache_animation_list (bool) – Get the list of animations available at startup.
  • enable_face_detection (bool) – Turn on face detection.
  • enable_camera_feed (bool) – Turn camera feed on/off.
  • enable_audio_feed (bool) – Turn audio feed on/off.
  • enable_custom_object_detection (bool) – Turn custom object detection on/off.
  • enable_nav_map_feed (Optional[bool]) – Turn navigation map feed on/off.
  • show_viewer (bool) – Render camera feed on/off.
  • show_3d_viewer (bool) – Render the 3D navigation map viewer on/off.
  • requires_behavior_control (bool) – Request control of Vector’s behavior system.
class anki_vector.robot.Robot(serial=None, ip=None, config=None, default_logging=True, behavior_activation_timeout=10, cache_animation_list=True, enable_face_detection=False, enable_camera_feed=False, enable_audio_feed=False, enable_custom_object_detection=False, enable_nav_map_feed=None, show_viewer=False, show_3d_viewer=False, requires_behavior_control=True)

The Robot object is responsible for managing the state and connections to a Vector, and is typically the entry point for running the SDK.

The majority of the robot’s functionality will not work until it is properly connected to Vector. There are two ways to get connected:

1. Using with: it works just like opening a file, and will close when the with block’s indentation ends.

import anki_vector

# Create the robot connection
with anki_vector.Robot() as robot:
    # Run your commands
    robot.anim.play_animation("anim_turn_left_01")

2. Using connect() and disconnect() to explicitly open and close the connection: it allows the robot’s connection to continue in the context in which it started.

import anki_vector

# Create a Robot object
robot = anki_vector.Robot()
# Connect to the Robot
robot.connect()
# Run your commands
robot.anim.play_animation("anim_turn_left_01")
# Disconnect from Vector
robot.disconnect()
Parameters:
  • serial (Optional[str]) – Vector’s serial number. The robot’s serial number (ex. 00e20100) is located on the underside of Vector, or accessible from Vector’s debug screen. Used to identify which Vector configuration to load.
  • ip (Optional[str]) – Vector’s IP address. (optional)
  • config (Optional[dict]) – A custom dict to override values in Vector’s configuration. (optional) Example: {"cert": "/path/to/file.cert", "name": "Vector-XXXX", "guid": "<secret_key>"} where cert is the certificate to identify Vector, name is the name on Vector’s face when his backpack is double-clicked on the charger, and guid is the authorization token that identifies the SDK user. Note: Never share your authentication credentials with anyone.
  • default_logging (bool) – Toggle default logging.
  • behavior_activation_timeout (int) – The time to wait for control of the robot before failing.
  • cache_animation_list (bool) – Get the list of animations available at startup.
  • enable_face_detection (bool) – Turn on face detection.
  • enable_camera_feed (bool) – Turn camera feed on/off.
  • enable_audio_feed (bool) – Turn audio feed on/off.
  • enable_custom_object_detection (bool) – Turn custom object detection on/off.
  • enable_nav_map_feed (Optional[bool]) – Turn navigation map feed on/off.
  • show_viewer (bool) – Render camera feed on/off.
  • show_3d_viewer (bool) – Render the 3D navigation map viewer on/off.
  • requires_behavior_control (bool) – Request control of Vector’s behavior system.
accel

anki_vector.util.Vector3 – The current accelerometer reading (x, y, z)

import anki_vector
with anki_vector.Robot() as robot:
    current_accel = robot.accel
Return type:Vector3
anim

A reference to the AnimationComponent instance.
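
For example, playing a named animation through the component (the same animation used in the connection examples above):

import anki_vector
with anki_vector.Robot() as robot:
    robot.anim.play_animation("anim_turn_left_01")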

Return type:AnimationComponent
audio

The audio instance used to control Vector’s audio feed.

Return type:AudioComponent
behavior

A reference to the BehaviorComponent instance.
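
A short sketch driving Vector off the charger and raising his head through the component (drive_off_charger and set_head_angle are assumed to be available, as in the SDK tutorials):

import anki_vector
from anki_vector.util import degrees

with anki_vector.Robot() as robot:
    # Drive off the charger contacts so Vector is free to move
    robot.behavior.drive_off_charger()
    # Tilt the head up slightly
    robot.behavior.set_head_angle(degrees(25.0))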

Return type:BehaviorComponent
camera

The camera instance used to control Vector’s camera feed.

import anki_vector

with anki_vector.Robot(enable_camera_feed=True) as robot:
    image = robot.camera.latest_image
    image.show()
Return type:CameraComponent
carrying_object_id

The ID of the object currently being carried (-1 if none)

import anki_vector
with anki_vector.Robot() as robot:
    current_carrying_object_id = robot.carrying_object_id
Return type:int
conn

A reference to the Connection instance.

Return type:Connection
connect(timeout=10)

Start the connection to Vector.

import anki_vector

robot = anki_vector.Robot()
robot.connect()
robot.anim.play_animation("anim_turn_left_01")
robot.disconnect()
Parameters:timeout (int) – The time to allow for a connection before an anki_vector.exceptions.VectorTimeoutException is raised.
Return type:None
disconnect()

Close the connection with Vector.

import anki_vector
robot = anki_vector.Robot()
robot.connect()
robot.anim.play_animation("anim_turn_left_01")
robot.disconnect()
Return type:None
enable_audio_feed

The audio feed enabled/disabled

Getter:Returns whether the audio feed is enabled
Setter:Enable/disable the audio feed
import time

import anki_vector

with anki_vector.Robot(enable_audio_feed=True) as robot:
    time.sleep(5)
    robot.enable_audio_feed = False
    time.sleep(5)
Return type:bool
enable_camera_feed

The camera feed enabled/disabled

Getter:Returns whether the camera feed is enabled
Setter:Enable/disable the camera feed
import time

import anki_vector

with anki_vector.Robot(enable_camera_feed=True) as robot:
    time.sleep(5)
    robot.enable_camera_feed = False
    time.sleep(5)
Return type:bool
events

A reference to the EventHandler instance.
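
A sketch of subscribing to an event through the handler, assuming the Events enum and the robot_observed_face event from anki_vector.events; the handler below accepts whatever arguments the dispatcher passes, since the callback signature varies between SDK versions:

import time

import anki_vector
from anki_vector.events import Events

def on_robot_observed_face(*args, **kwargs):
    # Fired whenever Vector reports seeing a face
    print("Vector sees a face")

with anki_vector.Robot(enable_face_detection=True) as robot:
    robot.events.subscribe(on_robot_observed_face, Events.robot_observed_face)
    time.sleep(5)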

Return type:EventHandler
faces

A reference to the FaceComponent instance.

Return type:FaceComponent
force_async

Whether this Robot instance executes commands asynchronously and returns futures (True for AsyncRobot).

Return type:bool
get_battery_state()

Check the current state of the robot and cube batteries.

Vector is considered fully charged above 4.1 volts. At 3.6V, the robot is approaching low charge.

battery_level values are as follows:
  • Low = 1: 3.6V or less. If on charger, 4V or less.
  • Nominal = 2
  • Full = 3: This state can only be achieved when Vector is on the charger.
import anki_vector
with anki_vector.Robot() as robot:
    battery_state = robot.get_battery_state()
    if battery_state:
        print("Robot battery voltage: {0}".format(battery_state.battery_volts))
        print("Robot battery Level: {0}".format(battery_state.battery_level))
        print("Robot battery is charging: {0}".format(battery_state.is_charging))
        print("Robot is on charger platform: {0}".format(battery_state.is_on_charger_platform))
        print("Robot's suggested charger time: {0}".format(battery_state.suggested_charger_sec))
Return type:BatteryStateResponse
get_network_state()

Get the network information for Vector.

import anki_vector
with anki_vector.Robot() as robot:
    network_state = robot.get_network_state()
Return type:NetworkStateResponse
get_version_state()

Get the versioning information for Vector, including Vector’s os_version and engine_build_id.

import anki_vector
with anki_vector.Robot() as robot:
    version_state = robot.get_version_state()
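    # The fields named above are assumed to be plain attributes on the response
    print(version_state.os_version)
    print(version_state.engine_build_id)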
Return type:VersionStateResponse
gyro

The current gyroscope reading (x, y, z)

import anki_vector
with anki_vector.Robot() as robot:
    current_gyro = robot.gyro
Return type:Vector3
head_angle_rad

Vector’s head angle (up/down).

import anki_vector
with anki_vector.Robot() as robot:
    current_head_angle_rad = robot.head_angle_rad
Return type:float
head_tracking_object_id

The ID of the object the head is tracking to (-1 if none)

import anki_vector
with anki_vector.Robot() as robot:
    current_head_tracking_object_id = robot.head_tracking_object_id
Return type:int
last_image_time_stamp

The robot’s timestamp for the last image seen.

import anki_vector
with anki_vector.Robot() as robot:
    current_last_image_time_stamp = robot.last_image_time_stamp
Return type:int
left_wheel_speed_mmps

Vector’s left wheel speed in mm/sec

import anki_vector
with anki_vector.Robot() as robot:
    current_left_wheel_speed_mmps = robot.left_wheel_speed_mmps
Return type:float
lift_height_mm

Height of Vector’s lift from the ground.

import anki_vector
with anki_vector.Robot() as robot:
    current_lift_height_mm = robot.lift_height_mm
Return type:float
localized_to_object_id

The ID of the object that the robot is localized to (-1 if none)

import anki_vector
with anki_vector.Robot() as robot:
    current_localized_to_object_id = robot.localized_to_object_id
Return type:int
motors

A reference to the MotorComponent instance.
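
A brief sketch driving the wheels directly through the component (set_wheel_motors takes left and right wheel speeds in mm/sec, as used in the SDK motor tutorial):

import time

import anki_vector

with anki_vector.Robot() as robot:
    # Drive in a gentle arc for three seconds
    robot.motors.set_wheel_motors(25, 50)
    time.sleep(3)
    # Stop the wheels
    robot.motors.set_wheel_motors(0, 0)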

Return type:MotorComponent
nav_map

A reference to the NavMapComponent instance.
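
A sketch reading the most recent map through the component (latest_nav_map is an assumption here; the navigation map feed must be enabled for data to arrive):

import time

import anki_vector

with anki_vector.Robot(enable_nav_map_feed=True) as robot:
    # Give the feed a moment to deliver map data
    time.sleep(2)
    latest_nav_map = robot.nav_map.latest_nav_map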

Return type:NavMapComponent
photos

A reference to the PhotographComponent instance.

Return type:PhotographComponent
pose

anki_vector.util.Pose – The current pose (position and orientation) of Vector.

import anki_vector
with anki_vector.Robot() as robot:
    current_robot_pose = robot.pose
Return type:Pose
pose_angle_rad

Vector’s pose angle (heading in X-Y plane).

import anki_vector
with anki_vector.Robot() as robot:
    current_pose_angle_rad = robot.pose_angle_rad
Return type:float
pose_pitch_rad

Vector’s pose pitch (angle up/down).

import anki_vector
with anki_vector.Robot() as robot:
    current_pose_pitch_rad = robot.pose_pitch_rad
Return type:float
proximity

Component containing state related to object proximity detection.

import anki_vector
with anki_vector.Robot() as robot:
    proximity_data = robot.proximity.last_valid_sensor_reading
    if proximity_data is not None:
        print(proximity_data.distance)
Return type:ProximityComponent
right_wheel_speed_mmps

Vector’s right wheel speed in mm/sec

import anki_vector
with anki_vector.Robot() as robot:
    current_right_wheel_speed_mmps = robot.right_wheel_speed_mmps
Return type:float
say_text(text, use_vector_voice=True, duration_scalar=1.0)

Make Vector speak text.

import anki_vector
with anki_vector.Robot() as robot:
    robot.say_text("Hello World")
Parameters:
  • text (str) – The words for Vector to say.
  • use_vector_voice (bool) – Whether to use Vector’s robot voice (otherwise, he uses a generic human male voice).
  • duration_scalar (float) – Adjust the relative duration of the generated text to speech audio.
Return type:SayTextResponse
Returns:An object that provides the status and utterance state.

screen

A reference to the ScreenComponent instance.

Return type:ScreenComponent
status

A property that exposes various status properties of the robot.

This status provides a simple mechanism to, for example, detect if any of Vector’s motors are moving, determine if Vector is being held, or if he is on the charger. The full list is available in the RobotStatus class documentation.

import anki_vector
with anki_vector.Robot() as robot:
    if robot.status.is_being_held:
        print("Vector is being held!")
    else:
        print("Vector is not being held.")
Return type:RobotStatus
touch

Component containing state related to touch detection.

import anki_vector
with anki_vector.Robot() as robot:
    print('Robot is being touched: {0}'.format(robot.touch.last_sensor_reading.is_being_touched))
Return type:TouchComponent
viewer

The viewer instance used to render Vector’s camera feed.

import time

import anki_vector

with anki_vector.Robot(show_viewer=True) as robot:
    # Render video for 5 seconds
    robot.viewer.show_video()
    time.sleep(5)

    # Disable video render and camera feed
    robot.viewer.stop_video()
Return type:ViewerComponent
viewer_3d

The 3D viewer instance used to render Vector’s navigation map.

import time

import anki_vector

with anki_vector.Robot(show_3d_viewer=True, enable_nav_map_feed=True) as robot:
    # Render 3D view of navigation map for 5 seconds
    time.sleep(5)
Return type:Viewer3DComponent
vision

Component containing functionality related to vision based object detection.

import anki_vector
with anki_vector.Robot() as robot:
    robot.vision.enable_custom_object_detection()
Return type:VisionComponent
world

A reference to the World instance, or None if the WorldComponent is not yet initialized.
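
A sketch connecting to Vector’s light cube through the World component (connect_cube and connected_light_cube are assumed, as in the SDK cube documentation):

import anki_vector

with anki_vector.Robot() as robot:
    # Establish a connection to the light cube, if one is available
    robot.world.connect_cube()
    if robot.world.connected_light_cube:
        print(robot.world.connected_light_cube)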

Return type:World