
Intro to Vision Tracking

Vision tracking is an incredibly useful part of robotics: it lets our robots see the world around them. Often called computer vision outside of FIRST Robotics, vision tracking can help the robot autonomously complete tasks, or help the drivers of the robot carry out their tasks faster and more accurately. Vision tracking is done by adding a camera to the robot, and then processing the images or video feed from the camera using code. Most years, the FRC competition field has reflective markers (known as "vision targets") that are specifically designed to be easy to vision track. The actual processing and analyzing of images can be done directly on the robot's main computer (the roboRIO), on the laptop used to drive the robot, or on a separate computer on the robot (known as a coprocessor). Common uses of vision tracking include:

  • Detecting and automatically aligning the robot to pick up and place game pieces at designated locations on the field. Examples include shooting fuel into the hoppers in the 2017 game, placing power cubes into the exchange in 2018, and picking up hatch panels from the loading station in 2019.

  • Tracking individual game pieces for acquisition, such as the power cubes stacked into pyramids in 2018.

  • Determining the robot's position for autonomous driving.

Vision tracking can be used during the autonomous period of the game to assist the robot with localization (figuring out where it is) and with aligning to game elements, or to completely automate an aspect of the game during the driver-operated period, such as acquiring or placing specific game pieces. It can also assist the driver in placing or acquiring specific game pieces by controlling only some aspects of the robot, such as steering the robot, moving an elevator/arm to the right height, or keeping a turret pointed in the right direction.
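To make the driver-assist idea concrete, here is a minimal sketch of vision-assisted steering, assuming the vision system reports the target's horizontal offset in degrees. All names and gains here are illustrative, not from any particular library:

```python
# Minimal sketch of vision-assisted steering: turn the robot toward a
# target with a proportional controller. Assumes the vision system
# reports tx, the target's horizontal offset in degrees
# (negative = target is to the left). Names and gains are illustrative.

KP_STEER = 0.03      # proportional gain; tune on the real robot
DEADBAND_DEG = 1.0   # within this offset, consider the robot aligned
MAX_TURN = 0.5       # cap the motor output for safety

def steering_correction(tx_degrees: float) -> float:
    """Return a turn command in [-MAX_TURN, MAX_TURN] from the offset."""
    if abs(tx_degrees) < DEADBAND_DEG:
        return 0.0
    turn = KP_STEER * tx_degrees
    return max(-MAX_TURN, min(MAX_TURN, turn))

# Example: target 10 degrees to the right -> gentle right turn
print(steering_correction(10.0))  # 0.3
print(steering_correction(0.5))   # 0.0 (inside the deadband)
```

The same correction could just as easily drive a turret motor or be blended with the driver's joystick input.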

Hardware

Vision tracking processing can happen in a few different places, each with its own benefits and downsides.

  • Directly on the roboRIO. This is cheap, as no additional computers need to be purchased, but the roboRIO isn't exactly the fastest computer out there, and more complicated vision processing can strain its processing power.

  • On the drive laptop used to control the robot. Like processing on the roboRIO, this is cheap, but laptops are generally quite a bit faster than the roboRIO, so this allows more complicated processing. However, sending images to the laptop over the robot's WiFi, processing them, and sending the results back to the robot can be quite slow, and it takes up bandwidth that is also needed for other communications with the robot (a sketch of sending results back to the robot follows this list).

  • On a coprocessing device. This could be a Raspberry Pi, or a camera and computer rolled into one, such as the JeVois or the Limelight. This is often more expensive than the other methods (a Raspberry Pi costs around $35, the JeVois is $50, and Limelights are $400 each) but is faster and allows more complex vision processing.
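Whichever device does the processing, the results typically travel back to the robot over NetworkTables, FRC's shared key-value networking system. Below is a minimal sketch of a coprocessor publishing its results using the pynetworktables package; the server address, table name, and key names are examples, not a standard:

```python
# Sketch: a coprocessor publishing vision results to the roboRIO over
# NetworkTables (pip install pynetworktables). The address and the
# table/key names below are examples only.
from networktables import NetworkTables

# 10.TE.AM.2 is the conventional roboRIO address, with TE.AM replaced
# by your team number (e.g. team 1234 -> 10.12.34.2).
NetworkTables.initialize(server="10.12.34.2")
vision = NetworkTables.getTable("vision")

def publish_target(found: bool, tx_degrees: float) -> None:
    """Send whether a target was seen and its horizontal offset."""
    vision.putBoolean("target_found", found)
    vision.putNumber("tx", tx_degrees)

publish_target(True, -4.2)  # e.g. target 4.2 degrees left of center
```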

Coprocessors

The three main coprocessors used on FRC robots are the Raspberry Pi, the JeVois, and the Limelight.

Limelight

Limelights are the easiest to set up and use, come with built-in lights to illuminate reflective targets, and have quite powerful hardware, but they are also by far the most expensive and don't allow vision processing as advanced as the other options.
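The Limelight reports its results over NetworkTables in a table named limelight, with keys such as tv (whether a target is visible) and tx/ty (horizontal/vertical offsets in degrees), as described in the Limelight documentation. Here is a short sketch of reading those values from Python robot code, again assuming pynetworktables:

```python
# Sketch: reading Limelight results on the robot side. The Limelight
# publishes to a NetworkTables table named "limelight"; tv is 1.0 when
# a target is visible, tx/ty are offsets in degrees from the crosshair.
from networktables import NetworkTables

NetworkTables.initialize()  # running on the roboRIO, which hosts NT
limelight = NetworkTables.getTable("limelight")

def get_target_offset():
    """Return (found, tx, ty) from the Limelight's latest frame."""
    found = limelight.getNumber("tv", 0.0) >= 1.0
    tx = limelight.getNumber("tx", 0.0)
    ty = limelight.getNumber("ty", 0.0)
    return found, tx, ty
```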

Raspberry Pi

Raspberry Pis need to be wired into a power source, don't come with a camera (unlike the Limelight and JeVois), and have the slowest hardware. However, they are also the cheapest and have the most documentation and users, meaning many problems you may run into already have an answer posted somewhere online.

JeVois

The JeVois is quite powerful, allows very complex custom vision software, runs both data and power over a single USB cable, and is much smaller than either the Limelight or the Raspberry Pi; however, it may need more setup than a Limelight.

Software

Vision tracking code can be written in several different ways. Code written to run directly on the roboRIO, the Limelight, or the drive laptop can use a drag-and-drop vision pipeline building tool called GRIP. Code written on the laptop, roboRIO, JeVois, or Raspberry Pi can also use an open-source computer vision library known as OpenCV. OpenCV code can be written in C++ or Python, and can range from very simple color detection to extremely complex 3D shape transforms and pose estimation, using tools such as SURF and SIFT. The JeVois also supports machine learning tools such as DNNs, CNNs, etc. via the TensorFlow library/toolkit. Similarly to GRIP and the Limelight, the JeVois can be programmed quickly using a tool known as JeVois Inventor, which allows quick tuning to pick out specific colors, compensate for blurriness and lighting, find specific shapes, and more.
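To give a taste of what simple OpenCV color detection looks like, the sketch below thresholds a camera frame in HSV space to find a bright-green blob (roughly what retroreflective tape lit by a green LED ring looks like) and returns its pixel center. The HSV bounds are illustrative and would need tuning for a real camera and lighting:

```python
# Minimal OpenCV (v4) color-detection sketch: find the largest
# bright-green blob in a frame. HSV bounds are illustrative; tune them
# for your own camera and lighting.
import cv2
import numpy as np

def find_target_center(frame_bgr):
    """Return the (x, y) pixel center of the largest green blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Bright, saturated green, like an LED-lit reflective target
    mask = cv2.inRange(hsv, np.array([50, 100, 100]),
                       np.array([90, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)  # first attached camera
ok, frame = cap.read()
if ok:
    print(find_target_center(frame))
cap.release()
```

From there, the blob's pixel position can be converted into an angle and fed into steering or turret code like the sketch earlier in this post.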


