This project aims to design a two-stage Indoor Positioning System (demonstrated with a moving robot) that combines two methods: “Moving Distance Tracking” and “Optical Recognition”. The first stage, “Moving Distance Tracking”, uses motor encoders and trilateration to estimate the distance travelled by the robot and hence compute its current location coordinates. This approach is fast and easy to implement, but the positioning error accumulates as the moving distance grows and can eventually become too large. To reset this error, a second stage, “Optical Recognition”, is added to the system. This stage runs while the robot is stationary. It uses OpenCV (an open-source computer-vision library) to measure the angles between “beacons” placed at known locations in the area, and triangulation to compute the current location coordinates. For a test area of 4.2 m × 3.6 m, this second stage achieves an accuracy of about ±10 cm.
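The triangulation step of the second stage can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes the robot can recover global bearings to two beacons at known coordinates (the `triangulate` helper below is hypothetical), and it intersects the two bearing rays to solve for the robot position.

```python
import math

def triangulate(b1, b2, theta1, theta2):
    """Estimate the robot position from bearings theta1, theta2 (radians,
    global frame) toward beacons at known coordinates b1, b2."""
    c1, s1 = math.cos(theta1), math.sin(theta1)
    c2, s2 = math.cos(theta2), math.sin(theta2)
    # The robot P satisfies b1 = P + r1*(c1, s1) and b2 = P + r2*(c2, s2).
    # Subtracting gives a 2x2 linear system in the ray lengths r1, r2:
    #   r1*c1 - r2*c2 = b1.x - b2.x
    #   r1*s1 - r2*s2 = b1.y - b2.y
    dx, dy = b1[0] - b2[0], b1[1] - b2[1]
    det = -c1 * s2 + c2 * s1
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; beacons give no position fix")
    r1 = (-dx * s2 + c2 * dy) / det  # Cramer's rule for r1
    # The robot lies r1 metres behind beacon 1 along its bearing ray.
    return (b1[0] - r1 * c1, b1[1] - r1 * s1)
```

With more than two beacons the extra bearings would normally be used in a least-squares fit, which also averages out angle-measurement noise.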

Since optical recognition takes a long time to process, the second stage is not suitable while the robot is in motion. Therefore, the faster “Moving Distance Tracking” is used while the robot moves, and the much slower “Optical Recognition” is used to reset the accumulated error while the robot is stationary.
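Between optical fixes, the first stage only needs an incremental pose update from new encoder ticks. A minimal sketch, assuming straight-line motion along a known heading; the wheel diameter and encoder resolution below are illustrative values, not the project's hardware figures:

```python
import math

WHEEL_DIAM_M = 0.065   # assumed wheel diameter (hypothetical value)
TICKS_PER_REV = 360    # assumed encoder resolution (hypothetical value)
M_PER_TICK = math.pi * WHEEL_DIAM_M / TICKS_PER_REV

def dead_reckon(x, y, heading, ticks):
    """Advance the estimated position by the distance implied by new
    encoder ticks, assuming the robot drove straight along `heading`."""
    d = ticks * M_PER_TICK
    return x + d * math.cos(heading), y + d * math.sin(heading)
```

Each update inherits the error of the previous one, which is why the estimate drifts over distance and needs the stationary optical fix to reset it.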

This two-stage “Indoor Positioning System” can be used for autonomous robot navigation.
