Obstacle avoidance with a Pushbot robot based on eDVS vision and a simple neuromorphic controller using ROLLS chip
This project aims to develop a simple closed-loop controller for the Pushbot to enable obstacle avoidance based on DVS vision only. The solution is based on a simple heuristic: the robot turns away from the direction (left or right) from which more events arrive in the lower half of the DVS pixel array. In the first version of the controller, demonstrated in the first week, we created a histogram of the DVS events for each column of the DVS array, collecting events from the lower half of the array. The preprocessing of the sensory stream included dropping 80% of the events, which improved the signal-to-noise ratio. The histogram bins from the left and right halves voted for increasing the speed of the right and left wheels, respectively. This led to fairly robust obstacle avoidance. If the robot faced a large object head-on, a "freeing" maneuver was triggered.
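The heuristic above can be sketched in a few lines of Python. This is an illustrative sketch only: the event format, the 128x128 array size, and the speed constants are assumptions, not the values used on the robot, and the left-events-to-right-wheel mapping follows the description in the text.

```python
# Sketch of the histogram-based avoidance heuristic (constants are assumptions).
import random

WIDTH, HEIGHT = 128, 128      # assumed eDVS pixel array size
KEEP_FRACTION = 0.2           # drop ~80% of events to improve SNR
BASE_SPEED = 0.5              # nominal forward speed (arbitrary units)
GAIN = 0.01                   # histogram-count-to-speed gain (arbitrary)

def wheel_speeds(events, rng=random.random):
    """Map a batch of DVS events (x, y) to (left, right) wheel speeds.

    Only events from the lower half of the array are counted; bins in the
    left half vote for speeding up the right wheel and vice versa, as
    described in the text.
    """
    histogram = [0] * WIDTH
    for x, y in events:
        if rng() > KEEP_FRACTION:      # random subsampling of the stream
            continue
        if y >= HEIGHT // 2:           # lower half only (y grows downward)
            histogram[x] += 1
    left_count = sum(histogram[:WIDTH // 2])
    right_count = sum(histogram[WIDTH // 2:])
    left_speed = BASE_SPEED + GAIN * right_count
    right_speed = BASE_SPEED + GAIN * left_count
    return left_speed, right_speed
```

Passing a deterministic `rng` (e.g. `lambda: 0.0` to keep every event) makes the function easy to test off-robot against recorded event batches.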
Here's a snapshot of the video showing the Pushbot driving in the lab, avoiding obstacles:
Pushbot driving in the lab avoiding obstacles
The following figures show the histograms that we used for the first version of the obstacle avoidance system, for an object on the left and right side of the image, as well as objects on both sides and an object in the upper part of the image only:
In the second week, we set up two populations of spiking neurons on the ROLLS chip, representing turning left and turning right, respectively. We created interfaces to stimulate these populations with events from the eDVS and tuned the populations to be activated by events in the right and left half of the image, respectively. After many hours of setting biases and tuning the coupling parameters, we arrived at a reliably working neuromorphic obstacle-avoidance controller for the Pushbot.
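As a rough software analogue of this setup, the sketch below drives two populations of leaky integrate-and-fire neurons with event counts from the two image halves and uses their spike-count difference as a steering command. The LIF dynamics, gains, and population size are illustrative assumptions; the real controller runs on the ROLLS chip's analog neurons with hand-tuned biases.

```python
# Software analogue of the two-population ROLLS steering controller.
# All constants (tau, threshold, gain, population size) are assumptions.
WIDTH = 128  # assumed eDVS array width

class LIFPopulation:
    """A small population of identical leaky integrate-and-fire neurons."""

    def __init__(self, n, tau=20.0, threshold=1.0):
        self.tau = tau
        self.threshold = threshold
        self.v = [0.0] * n

    def step(self, input_current, dt=1.0):
        """Advance one timestep; return the number of spikes emitted."""
        spikes = 0
        for i in range(len(self.v)):
            self.v[i] += dt * (-self.v[i] / self.tau + input_current)
            if self.v[i] >= self.threshold:
                self.v[i] = 0.0          # reset after spike
                spikes += 1
        return spikes

def steering_command(events, left_pop, right_pop):
    """Events in the right image half stimulate the 'turn left' population
    and vice versa, so the robot turns away from the obstacle.
    Returns a signed command: > 0 means turn left, < 0 means turn right."""
    left_half = sum(1 for x, y in events if x < WIDTH // 2)
    right_half = sum(1 for x, y in events if x >= WIDTH // 2)
    # event counts act as input currents; 0.05 is an arbitrary gain
    left_spikes = left_pop.step(0.05 * right_half)
    right_spikes = right_pop.step(0.05 * left_half)
    return left_spikes - right_spikes
```

On the chip, the analogous tuning was done by adjusting neuron and synapse biases rather than the explicit gain used here.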
This figure shows output from the ROLLS chip, with spikes from neurons of the "left" population colored blue and neurons of the "right" population colored red. One can see the transition from the "left" to the "right" state as an object moves from left to right in front of the robot's DVS:
ROLLS output
The overall goal is to implement on the robot a sequence learning algorithm already implemented on the ROLLS chip in the past. Its purpose is to demonstrate that the robotic agent is capable not only of fulfilling reactive tasks but also of learning online.
Here we present the output of our neural network, developed by Raphaela Kreiser, which successfully learns different sequences (ABA, AAC).
Sequence learning of ABA Sequence learning of AAC
Looming detector
The goal of this project was to realize Claire Rind's looming detector as a neural network and implement it in hardware. The detector was successfully implemented in the pyNN spiking neural network simulator; the next step is porting it onto SpiNNaker hardware as well as the cxQuad chip. We have collected eDVS data with a looming stimulus approaching the robot and will use these data to fine-tune and test the detector. A sketch of the developed architecture is shown in the following diagram:
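The principle behind Rind's (LGMD-style) detector can be illustrated with a much-simplified 1-D sketch: each input cell excites the output unit directly, while inhibiting it through its neighbours with one frame of delay, so rapidly expanding edges (looming) outrun the inhibition while static or slowly moving ones do not. All weights and sizes below are illustrative choices, not parameters of the pyNN network.

```python
# Simplified 1-D sketch of an LGMD-style looming detector.
# Constants are illustrative; the actual model is the pyNN network above.
N = 16            # number of input cells (e.g. image columns)
W_EXC = 1.0       # direct excitatory weight
W_INH = 0.4       # delayed lateral inhibitory weight
THRESHOLD = 3.0   # firing threshold of the looming-detector unit

def lgmd_step(events_now, events_prev):
    """One update of the detector unit.

    events_now / events_prev: per-cell event counts for the current and
    previous frame. Excitation comes from the current frame; inhibition
    comes from each cell's neighbours in the previous frame (delayed
    lateral inhibition). Returns (activation, spike).
    """
    activation = 0.0
    for i in range(N):
        activation += W_EXC * events_now[i]
        for j in (i - 1, i + 1):          # delayed neighbour inhibition
            if 0 <= j < N:
                activation -= W_INH * events_prev[j]
    return activation, activation > THRESHOLD
```

A newly appearing (expanding) edge excites the unit before any inhibition has built up, whereas a persistent edge is cancelled by the delayed inhibition from its own neighbourhood.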
The list of all our planned projects looked like this:
Project list
And here are some sessions that we've held:
- sEMD
- Looming Detector
- ROLLS
- Parallella
- Collect data from eDVS
- Keyboard controller