
Self-driving neuromorphic agent

Animals, when moving through the environment, actively alter their visual perception, thus enriching the quality and quantity of information about certain objects and the entire scene. This information is used to recognize and to avoid objects. In order to avoid collisions and navigate through the environment, the basic motor commands have to be associated with the visual input that precedes the required motion changes. In this project, we will reproduce this sensory-motor learning with a neuromorphic robotic setup. The visual features that we will use are direction and strength of motion, i.e. optic flow, which is measured by a spiking Elementary Motion Detector. We will use an omni-directional wheeled robotic platform (Omnibot), which is equipped with a DVS, a Parallella computer to interact with the robot and the DVS, as well as the ROLLS neuromorphic chip to build a neuronal architecture for sensory processing and sensory-motor learning.


Timetable

Day Time Location
Tue, 26.04.2016 16:00 - 17:00 Disco
Thu, 28.04.2016 16:00 - 17:00 Disco

Animals, when moving through the environment, actively alter their visual perception, thus enriching the quality and quantity of information about certain objects and the entire scene.
This information is used to recognize and to avoid objects. Processing of sensory information, e.g. visual stimuli, in the nervous system is thought to be event-driven; we therefore use the event-driven Dynamic Vision Sensor (DVS).
When an agent, whether robotic or biological, moves through its environment, it has a basic set of motor commands it can execute, e.g. moving forward, stopping, turning left or right. In order to avoid collisions and navigate through the environment, these basic motor commands have to be linked to the feature activity that results from the sensory input. The visual features we are going to use are direction and strength of motion, i.e. optic flow, which is measured by the spiking Elementary Motion Detector (sEMD, Figure sEMD).

Figure sEMD: Spiking Elementary Motion Detector (sEMD)
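The sEMD correlates events from two neighbouring DVS pixels: the first event facilitates the detector, and a subsequent event at the neighbouring pixel triggers a response whose strength encodes the time-of-travel, i.e. the local speed, while the pixel order encodes the direction. The following is a minimal Python sketch of this facilitate-and-trigger principle for a single pixel pair; the function name and the time constant are illustrative, not taken from the project code.

    import math

    def semd_response(t_facilitate, t_trigger, tau_fac=0.05):
        """Response of one left-to-right detector to an event pair.

        An event at the facilitating pixel opens an exponentially decaying
        window; an event at the neighbouring trigger pixel then produces a
        response whose strength encodes the time-of-travel (speed).
        """
        dt = t_trigger - t_facilitate
        if dt <= 0:
            return None  # wrong temporal order for this direction
        return math.exp(-dt / tau_fac)  # short delay (fast motion) -> strong response

    # A fast edge (4 ms travel time) yields a stronger response than a slow one (40 ms):
    print(semd_response(0.000, 0.004), semd_response(0.000, 0.040))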
In order to associate given feature map activity with the motor commands to be executed, we use an intermediate layer of randomly connected neurons (RCNs) [Barak et al. 2013].

Figure Architecture: Network architecture

We are using an omni-directional, wheeled robotic platform (Omnibot), which is equipped with a DVS, a Parallella computer to interact with the robot and the DVS, as well as a ROLLS neuromorphic chip to process the incoming events.

Figure: The Omnibot platform
The goals of this workgroup are:
    - build up the feature maps (Figure sEMD),
    - set up the network (Figure Architecture),
    - remotely control the robot and record the visual input and the executed motor commands,
    - train the network using the recorded feature map activity and the respective motor commands.

The ultimate goal is that the robot navigates autonomously through the environment without collision.

Attachments: Barak et al. 2013 (2.1 MB), Babadi and Sompolinsky 2014 (2.8 MB)

Workpackages:

 

1) Robot interface with keyboard presses

The first step was to write a robot interface in C++, which consists of a driver for the eDVS camera as well as a keyboard control module to enable manual driving and recording of data (eDVS events and motor signals) while driving. This interface was intended to work with both available robots (OmniBot and PushBot), which use different communication protocols (serial USB and TCP, respectively). Therefore, an abstract class Robot has been implemented, from which the robot-specific subclasses with the respective functionality inherit. Due to the different kinematics of the two platforms, two robot-specific log modules have been implemented, which write the data of the motor controllers, as well as the eDVS events, to log files for later processing within the implemented network (see section 3).
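The C++ sources are not reproduced here; the following Python sketch only illustrates the class structure described above, i.e. an abstract Robot base class with platform-specific subclasses. All names and method signatures are hypothetical.

    from abc import ABC, abstractmethod

    class Robot(ABC):
        """Common interface: keyboard handling is shared, while transport
        and kinematics are platform-specific."""

        @abstractmethod
        def connect(self):
            """Open the platform-specific channel."""

        @abstractmethod
        def set_velocity(self, forward, rotation):
            """Map a generic motion command onto the platform kinematics."""

        def on_key(self, key):
            # Shared keyboard handling: translate a key press into a motion
            # command and hand it to the platform-specific implementation.
            commands = {"w": (1.0, 0.0), "s": (0.0, 0.0),
                        "a": (0.0, 1.0), "d": (0.0, -1.0)}
            if key in commands:
                self.set_velocity(*commands[key])

    class OmniBot(Robot):
        def connect(self):
            pass  # open the serial USB connection
        def set_velocity(self, forward, rotation):
            pass  # omni-directional wheel kinematics

    class PushBot(Robot):
        def connect(self):
            pass  # open the TCP connection
        def set_velocity(self, forward, rotation):
            pass  # differential-drive kinematics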

2) Calibration of DVS and keyboard events

In order to record data for training the model, we wanted to log both the visual events coming from the DVS and the keyboard entries pressed by the user. As the computer and the DVS have different clocks, the timestamps of the events were not synchronized. A simple procedure for calibrating the timestamps was developed: it consists of presenting to the DVS a blinking frame (black and white squares) with a known timing of change. The received DVS events, after filtering (frequency extraction in a given neighborhood), are then aligned to the computer time reference for logging.
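A minimal Python sketch of this alignment step, assuming that each change of the blinking frame produces a dense burst of DVS events and that the blink times on the computer clock are logged; the names, bin size, and threshold are illustrative, not from the project code.

    import numpy as np

    def detect_blink_onsets(event_ts, bin_s=0.005, thresh=200):
        """Find blink onsets on the DVS clock by thresholding the event
        rate per time bin (event_ts: sorted timestamps in seconds)."""
        edges = np.arange(event_ts[0], event_ts[-1], bin_s)
        counts, _ = np.histogram(event_ts, bins=edges)
        above = counts > thresh
        rising = above & ~np.concatenate(([False], above[:-1]))
        return edges[:-1][rising]

    def clock_offset(dvs_onsets, computer_blinks):
        """Constant offset between the two clocks, estimated from matched
        blink onsets; the median is robust to missed detections."""
        n = min(len(dvs_onsets), len(computer_blinks))
        return np.median(np.asarray(computer_blinks[:n]) - dvs_onsets[:n])

    # Every logged DVS timestamp t_dvs then maps to the computer reference
    # as t_dvs + offset, so DVS events and key presses share one time base.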

3) Network implementation

The network has been implemented in Brian, the spiking neural network simulator. The events emitted by the DVS were fed to the feature map, which consisted of 50 x 50 spiking Elementary Motion Detectors (sEMDs). The sEMDs were tuned to three different velocity ranges. The output of the sEMD layer projected to the Randomly Connected Neuron (RCN) layer. The connection probability was set to 10 % and the weights were randomly initialized between 0 and 70 picoampere. The RCN layer was fully connected to the motor layer with plastic synapses. For the plastic synapses, we used Fusi synapses, which adapt their weights not only based on the time difference between pre- and post-synaptic spikes, as in standard STDP, but also model the Ca level of the post-synaptic neuron. The motor neurons were stimulated with a teacher signal, which was extracted from the keyboard events: the time between two keyboard presses was used to stimulate the corresponding motor neuron at a constant frequency.
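A condensed Brian 2 sketch of this architecture follows. Only the layer structure and connectivity statistics (50 x 50 x 3 feature map, 10 % input connectivity, 0-70 pA weights, all-to-all plastic output) follow the text; the RCN and motor layer sizes, time constants, and teacher rate are assumed placeholders, the sEMD front end is replaced by a Poisson stand-in, and the Ca-gated bistable Fusi rule is replaced by a plain pair-based STDP rule clipped to the weight range.

    from brian2 import *

    # Layer sizes: 50 x 50 sEMDs per velocity range, three ranges;
    # RCN and motor layer sizes are assumed.
    n_semd, n_rcn, n_motor = 50 * 50 * 3, 256, 4

    tau = 20*ms; tau_syn = 5*ms; C = 200*pF
    eqs = '''
    dv/dt = -v / tau + I / C : volt
    dI/dt = -I / tau_syn : amp
    '''

    semd = PoissonGroup(n_semd, rates=5*Hz)  # stand-in for the sEMD feature map
    rcn = NeuronGroup(n_rcn, eqs, threshold='v > 10*mV', reset='v = 0*mV', method='euler')
    motor = NeuronGroup(n_motor, eqs, threshold='v > 10*mV', reset='v = 0*mV', method='euler')

    # sEMD -> RCN: 10 % connection probability, weights drawn between 0 and 70 pA.
    s_in = Synapses(semd, rcn, model='w : amp', on_pre='I_post += w')
    s_in.connect(p=0.1)
    s_in.w = 'rand() * 70*pA'

    # RCN -> motor: all-to-all plastic synapses (pair-based STDP stand-in
    # for the Fusi rule used in the project).
    s_out = Synapses(rcn, motor,
                     model='''w : amp
                              dapre/dt = -apre / (20*ms) : amp (event-driven)
                              dapost/dt = -apost / (20*ms) : amp (event-driven)''',
                     on_pre='''I_post += w
                               apre += 0.5*pA
                               w = clip(w + apost, 0*pA, 70*pA)''',
                     on_post='''apost -= 0.5*pA
                                w = clip(w + apre, 0*pA, 70*pA)''')
    s_out.connect()
    s_out.w = 'rand() * 70*pA'

    # Teacher: between two key presses, the motor neuron of the pressed
    # command is driven at a constant rate (Poisson stand-in here).
    teacher = PoissonGroup(n_motor, rates=100*Hz)
    s_teach = Synapses(teacher, motor, on_pre='I_post += 100*pA')
    s_teach.connect(j='i')  # one teacher input per motor neuron

    run(100*ms)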

The network has been implemented, but tuning the parameters took too long, and thus we could not test the architecture.

Leaders

Moritz Milde
Yulia Sandamirskaya

Members

Alessandro Aimar
Arren Glover
Germain Haessig
Sio-Hoi Ieng
Giacomo Indiveri
Aleksandar Kodzhabashev
Vincent Lorrain
Bragi Lovetrue
Jianlin Lu
Moritz Milde
Florian Mirus
Guido Novati
Gary Pineda-Garcia
Claire Rind
Yulia Sandamirskaya
Arman Savran
Evangelos Stromatias
André van Schaik
Valentina Vasco
Nikolaos Vasileiadis
Borys Wrobel
Qi Xu
Yexin Yan