Chalkydri is a blazingly fast vision system for FRC written in Rust.
In development
Chalkydri isn't quite ready to use yet.
New here?
Check out our Getting Started guide!
About
Chalkydri was created and is maintained by FRC Team 4533 (Phoenix).
We're trying to make vision less of a black box, so that FRC teams of any size can use it, and to make it easier to use effectively with less hassle.
Chalkydri has been built from the ground up for performance on popular FRC vision hardware and uses less CPU and memory than existing solutions.
Credits
-
  - Main inspiration for the design of Chalkydri
- Lincoln (Student, 4533)
  - Phoenix vision code lead
- Chloe (Student, 4533)
  - Implemented pose estimation
- Eda (Student, 4533)
  - Implemented AprilTags edge checking algorithm
- Drake (Student, 4533)
  - Past vision code lead
This is a generic guide. You probably want a guide specific to whatever device you're using:
- (Recommended) Raspberry Pi 3/4/5
If your device isn't listed here, it may still work, but there's no guarantee. If we don't provide binaries for your target architecture, see our instructions for building from source.
Before we start, start downloading our latest release; it might take a little while on slow connections.
Flashing your microSD card
We recommend Etcher, since it's easy to use and cross-platform.
It will figure everything out for you and flash Chalkydri.
That's it! Now you can learn how to use Chalkydri.
Chalkydri system design
Chalkydri has a somewhat complicated design, as most vision systems do.
Once the robot is powered on, each Chalkydri device will:
- Boot up
- Attempt to connect to the roboRIO's NetworkTables server
- Initialize camera(s)
- Start subsystems
Chalkydri waits until it has successfully connected to the NetworkTables server before it actually starts running. It then negotiates with the roboRIO and starts processing frames.
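As an illustration of that flow, here's a minimal sketch of the startup loop. The names (`NtClient`, `connect_nt`, `init_cameras`) and the example address are placeholders rather than Chalkydri's real internals; 5810 is NT4's default port.

```rust
// Minimal sketch of the boot sequence described above. All names here
// (NtClient, Camera, connect_nt, init_cameras) are illustrative stand-ins,
// not Chalkydri's actual API.
use std::{thread, time::Duration};

struct NtClient;
struct Camera;

// Placeholder: a real client would open an NT4 connection to the roboRIO here.
fn connect_nt(addr: &str) -> Result<NtClient, ()> {
    let _ = addr;
    Ok(NtClient)
}

// Placeholder: enumerate and configure attached cameras.
fn init_cameras() -> Vec<Camera> {
    Vec::new()
}

fn main() {
    // Keep retrying until the roboRIO's NetworkTables server is reachable;
    // nothing else starts until this succeeds.
    let _nt = loop {
        match connect_nt("10.45.33.2:5810") {
            Ok(client) => break client,
            Err(_) => thread::sleep(Duration::from_secs(1)),
        }
    };

    // Only once NT is up do we bring up cameras and start the subsystems
    // that pull and process frames.
    let _cameras = init_cameras();
}
```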
NetworkTables API
Chalkydri/
  Robot/
    Position/
      X
      Y
    Rotation
  Devices/
    1/
    2/
    3/
    ...
Chalkydri/Robot/
Topics relevant to the robot as a whole
- Chalkydri/Robot/Position/X (64-bit float) The robot's current X coordinate on the field
- Chalkydri/Robot/Position/Y (64-bit float) The robot's current Y coordinate on the field
- Chalkydri/Robot/Rotation (64-bit float) The robot's current rotation
Chalkydri/Devices/
Each device's device-specific topics are grouped under Chalkydri/Devices/{device id}/
- Chalkydri/Devices/{device id}/Version (string) The device's Chalkydri version
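As a small illustration (not part of Chalkydri's codebase), a client might spell out these topic paths like this; the constants and helper function are hypothetical.

```rust
// Hypothetical helpers mirroring the topic layout above; these names are
// made up for illustration and aren't part of Chalkydri.
const ROBOT_POSITION_X: &str = "Chalkydri/Robot/Position/X";
const ROBOT_POSITION_Y: &str = "Chalkydri/Robot/Position/Y";
const ROBOT_ROTATION: &str = "Chalkydri/Robot/Rotation";

/// Build a device-specific topic, e.g. device_topic(2, "Version")
/// -> "Chalkydri/Devices/2/Version".
fn device_topic(device_id: u32, name: &str) -> String {
    format!("Chalkydri/Devices/{device_id}/{name}")
}

fn main() {
    assert_eq!(device_topic(2, "Version"), "Chalkydri/Devices/2/Version");
    println!("{ROBOT_POSITION_X}, {ROBOT_POSITION_Y}, {ROBOT_ROTATION}");
}
```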
Maintenance guide
This is for people working on Chalkydri itself
If you are a Chalkydri user, this isn't what you want.
As FRC projects go, Chalkydri is fairly advanced, which makes it harder to just pick up and work on.
This is an incomplete guide to maintaining it, how the internals work, and the best learning resources.
The Chalkydri project has several subprojects, mostly libraries we needed to build it.
MiniNT
Our NetworkTables client implementation
The existing Rust implementation was overly complex and pulled in too many extra dependencies. We tried a fork, but ended up just implementing NT4 from scratch.
TFLedge
Our bindings to TensorFlow Lite and libedgetpu
Here are all the awesome learning resources I've found so far:
Getting started
- (Linux) Command Line for Beginners
- (Linux) The Linux Command Handbook
- (Rust) The Book
- (Rust) Rust by Example (interactive, web)
- (Rust) Rustlings (interactive, native)
Getting better
- (Rust) The rustdoc Book
- (Rust) The Performance Book
- mdBook Docs
Getting deeper
- The Rustonomicon (unsafe Rust code)
Chalkydri is built on and for Linux, so using other operating systems may be a little more difficult.
Dev containers
Dev containers streamline the development process by sharing a preconfigured environment, with everything we need, inside a container. If you're using VS Code, you can follow Microsoft's guide to get everything set up on Linux, Windows, or macOS.
Manually
WIP
Raspberry Pi
The Pi 3, 4, and 5 are supported.
Pi cameras
libcamera is required to interact with the v3 camera module.
Coral Edge TPU
We made our own bindings to TFLite and libedgetpu.
Releases should be done with CI.
To release a new version, simply create a release on GitHub. CI will test and build the main branch.
AprilTags
We're using our own custom AprilTag library, chalkydri-apriltags, built from the ground up.
We'll refer to it as "CAT".
Why?
We have a few reasons for writing our own, rather than using what's already out there. The reference C library:
- is very resource intensive
- uses a lot of advanced math which isn't covered in high school classes, making it harder for students to understand how it works
- has memory leaks (needs a citation)
Design overview
CAT is based on existing algorithms, but with some tweaks specific to our use case.
- Get a frame
- Grayscale & threshold
- Detect corners
- Edge checking
- Decode tags
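To show how those steps chain together, here's a bird's-eye sketch; every name in it is a placeholder for illustration, not CAT's real API.

```rust
// Bird's-eye sketch of the pipeline above; all names are placeholders.
struct Frame;     // raw RGB frame from the camera
struct Detection; // decoded tag id + corner positions, in a real implementation

fn grayscale_threshold(_frame: &Frame) -> Vec<bool> { Vec::new() }
fn detect_corners(_bw: &[bool]) -> Vec<(usize, usize)> { Vec::new() }
fn check_edges(_bw: &[bool], _corners: &[(usize, usize)]) -> Vec<[(usize, usize); 4]> { Vec::new() }
fn decode_tags(_bw: &[bool], _quads: &[[(usize, usize); 4]]) -> Vec<Detection> { Vec::new() }

fn process_frame(frame: &Frame) -> Vec<Detection> {
    let bw = grayscale_threshold(frame);    // grayscale + threshold in one pass
    let corners = detect_corners(&bw);      // FAST corner candidates
    let quads = check_edges(&bw, &corners); // keep corner sets joined by real edges
    decode_tags(&bw, &quads)                // read out tag ids
}

fn main() {
    let detections = process_frame(&Frame);
    println!("{} tags detected", detections.len());
}
```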
Grayscale & threshold
Converting RGB to grayscale and thresholding are combined into one step for performance.
I can't find the original reference I used for grayscale values.
We need to implement "iterative tri-class adaptive thresholding" based on Otsu's method.
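As a minimal sketch (not CAT's actual code), here's what the fused pass might look like with the common BT.601 luma weights and a fixed global threshold; the adaptive, Otsu-based thresholding mentioned above is left out.

```rust
// Sketch of fusing grayscale conversion and thresholding into one pass over
// a packed RGB buffer. The BT.601 luma weights and the fixed threshold are
// assumptions for this example; CAT's planned thresholding is adaptive.
fn grayscale_threshold(rgb: &[u8], threshold: u8) -> Vec<bool> {
    rgb.chunks_exact(3)
        .map(|px| {
            // Integer approximation of 0.299 R + 0.587 G + 0.114 B
            // (77 + 150 + 29 = 256, so the shift by 8 normalizes it).
            let luma = (77 * px[0] as u32 + 150 * px[1] as u32 + 29 * px[2] as u32) >> 8;
            luma as u8 > threshold
        })
        .collect()
}

fn main() {
    // One white pixel followed by one dark pixel.
    let frame = [255u8, 255, 255, 10, 10, 10];
    assert_eq!(grayscale_threshold(&frame, 128), vec![true, false]);
}
```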
Corner detection
Corner detection is done with the aptly named FAST algorithm. Besides speed, it has another advantage: it's very simple and easy to understand.
|    |    | 16 | 1 | 2 |   |   |
|----|----|----|---|---|---|---|
|    | 15 |    |   |   | 3 |   |
| 14 |    |    |   |   |   | 4 |
| 13 |    |    | p |   |   | 5 |
| 12 |    |    |   |   |   | 6 |
|    | 11 |    |   |   | 7 |   |
|    |    | 10 | 9 | 8 |   |   |
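The diagram above shows the 16-pixel Bresenham circle FAST examines around a candidate pixel p. Here's a simplified sketch of the segment test over that circle; the contiguity count (9 of 16) and the brightness threshold are assumptions for this example, not CAT's actual parameters.

```rust
// Simplified FAST segment test (illustrative, not CAT's exact code): pixel p
// is a corner if at least N contiguous circle pixels are all brighter than
// p + T or all darker than p - T. N = 9 and T = 20 are assumed values.
const CIRCLE: [(i32, i32); 16] = [
    (0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
    (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3),
];

// `x` and `y` must be at least 3 pixels away from the image border.
fn is_corner(gray: &[u8], width: usize, x: usize, y: usize) -> bool {
    const N: usize = 9;
    const T: i16 = 20;
    let p = gray[y * width + x] as i16;

    // Classify each circle pixel: brighter (+1), darker (-1), or similar (0).
    let classes: Vec<i8> = CIRCLE
        .iter()
        .map(|&(dx, dy)| {
            let px = (x as i32 + dx) as usize;
            let py = (y as i32 + dy) as usize;
            let v = gray[py * width + px] as i16;
            if v > p + T {
                1
            } else if v < p - T {
                -1
            } else {
                0
            }
        })
        .collect();

    // Look for a run of N equal, non-zero classes, wrapping around the circle.
    (0..16).any(|start| {
        let c = classes[start];
        c != 0 && (1..N).all(|i| classes[(start + i) % 16] == c)
    })
}

fn main() {
    // An isolated bright pixel on a dark background passes the segment test.
    let mut img = vec![10u8; 7 * 7];
    img[3 * 7 + 3] = 200;
    assert!(is_corner(&img, 7, 3, 3));
}
```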
Edge checking
"Edge checking" reuses some of our corner detection algorithm.
We simply check a few points along the path of each imaginary line between two corners.
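A rough sketch of that check, assuming a thresholded image where `true` means above threshold and an arbitrary sample count; this is illustrative, not CAT's actual code.

```rust
// Rough sketch of the edge check: sample a few evenly spaced points along the
// line between two candidate corners and require that all of them land on
// dark (below-threshold) pixels in the binary image. The sample count is an
// assumption for this example.
fn edge_between(
    thresh: &[bool],
    width: usize,
    a: (usize, usize),
    b: (usize, usize),
) -> bool {
    const SAMPLES: usize = 5;

    (1..=SAMPLES).all(|i| {
        let t = i as f32 / (SAMPLES + 1) as f32;
        // Linear interpolation between corner a and corner b.
        let x = (a.0 as f32 + t * (b.0 as f32 - a.0 as f32)).round() as usize;
        let y = (a.1 as f32 + t * (b.1 as f32 - a.1 as f32)).round() as usize;
        !thresh[y * width + x]
    })
}

fn main() {
    // A 5x5 binary image where the whole top row is dark: the two top corners
    // are joined by an edge, but a top corner and a bright bottom-right pixel
    // are not.
    let mut thresh = vec![true; 5 * 5];
    for x in 0..5 {
        thresh[x] = false; // top row is dark
    }
    assert!(edge_between(&thresh, 5, (0, 0), (4, 0)));
    assert!(!edge_between(&thresh, 5, (0, 0), (4, 4)));
}
```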
Decode tags
Decoding tags is done pretty much the same way the C library does it.
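For a sense of what that involves, here's a heavily simplified sketch: sample the tag's interior into an n x n grid of bits, pack it into an integer, and look it up in the tag family's code table under all four rotations. The real decoder also samples the grid through a homography and tolerates a few bit errors; both are omitted here.

```rust
// Simplified sketch of the decode step (illustrative only).
fn bits_to_code(bits: &[bool]) -> u64 {
    bits.iter().fold(0u64, |acc, &b| (acc << 1) | b as u64)
}

// Rotate an n x n bit grid a quarter turn clockwise.
fn rotate90(bits: &[bool], n: usize) -> Vec<bool> {
    let mut out = vec![false; n * n];
    for y in 0..n {
        for x in 0..n {
            out[x * n + (n - 1 - y)] = bits[y * n + x];
        }
    }
    out
}

// Returns the tag id (the code's index in the family table) if any rotation
// of the sampled bits matches exactly.
fn decode(bits: &[bool], n: usize, family_codes: &[u64]) -> Option<usize> {
    let mut cur = bits.to_vec();
    for _ in 0..4 {
        let code = bits_to_code(&cur);
        if let Some(id) = family_codes.iter().position(|&c| c == code) {
            return Some(id);
        }
        cur = rotate90(&cur, n);
    }
    None
}

fn main() {
    // Toy 2x2 "family" with a single code (not a real AprilTag family).
    let family = [0b1010u64];
    let bits = [true, false, true, false];
    assert_eq!(decode(&bits, 2, &family), Some(0));
}
```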