General-purpose neuromorphic chips, intended to implement the 'cognitive core', come in broadly two styles.
There are fixed-model designs, where the neuron and synapse models are 'hard-wired' into the (usually analogue) circuitry and the chip is then externally configured with a specific initial connectivity pattern to run simulations. Such chips can run very fast and at extremely low power indeed, but with the obvious limitation that the neuron model has been chosen for you.
There are also programmable designs, where little about the neuron model is fixed in hardware; instead, the user specifies the neuron and synapse models they want to run, as well as the topology of the network, and configures the chip to run the model in a similar way to the first type. The benefit is flexibility: you can experiment with different models. Power consumption is still reasonable, if not as aggressively low as with the first type, and the chip will have a slower maximum simulation speed (or, more precisely, time-scale factor: the ratio that determines how fast or slow the simulation runs relative to 'real' time). An interesting aspect of the second type is the ability to run several different neuron models within the same simulation, on the same chip. Among other things, this workgroup will experiment with such multi-model simulations. Also, if people are interested in implementing a new neuron model in hardware, this is the workgroup to do it! We will try to implement new neuron and synapse models which may have 'interesting' properties.
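To make the distinction concrete, here is a minimal sketch of the kind of per-timestep update loop a programmable core executes for a leaky integrate-and-fire (LIF) neuron. This is illustrative only, not any chip's actual code, and all parameter values are arbitrary choices for the example:

```python
# Illustrative sketch (not real chip firmware): the per-timestep
# update a programmable neuromorphic core might run for one LIF
# neuron. Parameter values are arbitrary for the example.

def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0, r_m=1.0):
    """Advance one LIF neuron by one timestep; return (v, spiked)."""
    dv = (-(v - v_rest) + r_m * i_syn) * (dt / tau_m)
    v = v + dv
    if v >= v_thresh:
        return v_reset, True   # fire and reset
    return v, False

# Drive the neuron with a constant input current and count spikes.
v, spikes = -65.0, 0
for t in range(200):           # 200 ms of simulated time at dt = 1 ms
    v, fired = lif_step(v, i_syn=20.0)
    spikes += fired
```

On a fixed-model chip this equation is baked into the circuitry; on a programmable chip it is just code, so swapping in a different model means swapping this function.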
The SpiNNaker chip, which we will have in this workgroup, is an example of the second type: an array of parallel ARM968 processors with a packet-switched interconnect between them that allows virtually any model to be specified and instantiated on-chip. We have created an automated toolchain which relieves you of the need to worry about the low-level 'nuts and bolts' of creating a network; instead you specify it in a high-level script and the toolchain builds it for you automatically. We will introduce the toolchain in depth and make sure you've got all the software installed to run models on the SpiNNaker boards, each containing 48 chips, with a (current) capacity of about 200,000 neurons per board. If you want to create new neuron models there will be some additional details on the toolchain, but again all you'll need to know is how to modify some core code that runs the model itself; no detailed knowledge of SpiNNaker internals is required. Documentation is available at SpiNNaker Arbitrary documentation.
From the second week on we expect to have a new release of the software, 'Another Fine Product from the Nonsense Factory', available; I (Alex Rast) will bring it, so if you want to try out new functionality and rid yourself of some limitations, this is the release to try.
The scripting language we are using is PyNN, a standard that works not only with SpiNNaker but with multiple other software and hardware platforms. This makes it easy to write a script and try it out in software first before running it on hardware. We will give a PyNN tutorial introducing the language and showing what you need to do to select your platform and run a simulation.
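As a flavour of what that looks like, here is a hedged sketch of a minimal PyNN script using the standard cell types (PyNN 0.9-style API; older releases differ slightly, and the exact backend module names depend on your installation):

```python
# Hedged sketch of a minimal PyNN script. Swapping the backend import
# is all that changes between platforms -- e.g. pyNN.nest for a
# software simulator, pyNN.spiNNaker for the boards (module names
# depend on your installation).
import pyNN.nest as sim

sim.setup(timestep=1.0)                       # ms

# 100 standard leaky integrate-and-fire cells, driven by Poisson noise.
pop = sim.Population(100, sim.IF_curr_exp(), label="cells")
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0))
sim.Projection(noise, pop, sim.OneToOneConnector(),
               synapse_type=sim.StaticSynapse(weight=0.5, delay=1.0))

pop.record("spikes")
sim.run(1000.0)                               # simulate 1 s
data = pop.get_data("spikes")
sim.end()
```

The point of the standard is that the body of the script stays the same whichever backend you import, which is what makes the software-first, hardware-second workflow practical.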
There are several directions for exploration in this workgroup; here are a few:
1) Large-scale simulation of cortical circuits: By linking together SpiNNaker boards, large simulations with potentially millions of neurons can be configured and run in real-time. Do you have a cortical model you'd like to try? Here's a chance to have a go.
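For a rough sense of scale, a back-of-envelope calculation using the figure of about 200,000 neurons per 48-chip board quoted above (real capacity depends on the neuron model and connectivity):

```python
# Back-of-envelope board count for a large model, using the rough
# ~200,000 neurons-per-board figure quoted above. Actual capacity
# depends on the neuron model and connectivity density.
import math

NEURONS_PER_BOARD = 200_000

def boards_needed(n_neurons):
    return math.ceil(n_neurons / NEURONS_PER_BOARD)

print(boards_needed(1_000_000))   # a million-neuron model -> 5 boards
```

So a million-neuron cortical model is a handful of linked boards rather than something exotic.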
2) New neuron models: With SpiNNaker it's possible to add new neuron and synapse models just by adding a relevant chunk of code. Bring your neuron model and we can show you how to analyse it and implement it on-chip. Note that there *are* limitations: a full Hodgkin-Huxley model, for example, would be a formidable challenge, would probably not run in real time, and would reduce the number of neurons per core. That said, you can try almost anything. While we're not volunteering to implement your model for you, we will have people available to consult on the best way to implement your model and to think through the options.
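To illustrate why model cost matters, here is the Izhikevich model, a popular middle ground: two state variables and a handful of multiply-adds per timestep, versus Hodgkin-Huxley's four state variables plus exponential gating-rate functions. The sketch below uses Izhikevich's published 'regular spiking' parameter set with a crude 1 ms Euler step, purely for illustration:

```python
# Hedged illustration of a cheap-per-step neuron model: the
# Izhikevich neuron, two state variables (v, u) and a few
# multiply-adds per timestep. Parameters are the published
# 'regular spiking' set; the 1 ms Euler step is crude but
# serves for the example.

def izhikevich_step(v, u, i, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich neuron; returns (v, u, spiked)."""
    v_new = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
    u_new = u + dt * a * (b * v - u)
    if v_new >= 30.0:            # spike cutoff
        return c, u_new + d, True
    return v_new, u_new, False

v, u, spikes = -65.0, -13.0, 0
for t in range(1000):            # 1 s at 1 ms steps, constant drive
    v, u, fired = izhikevich_step(v, u, i=10.0)
    spikes += fired
```

A Hodgkin-Huxley step would replace those few arithmetic operations with several exponential evaluations per neuron per timestep, which is exactly where the real-time budget and the neurons-per-core count get eaten.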
3) Interfacing with hardware: In cooperation with the AER protocols workgroup we will explore ways to link systems directly, passing AER messages back and forth to create a multi-chip simulation. This is a somewhat 'advanced' project: you will need some familiarity with FPGA/microcontroller programming (assuming your device has such an interface), but we have got some very preliminary links working in the past. This year we want to run a real neural simulation over an integrated link.
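The core idea of AER is simple: a spike is just an address word on a bus. Here is a hedged sketch of packing and unpacking such an event; the field widths (8-bit chip id, 16-bit neuron id) are hypothetical choices for the example, not the SpiNNaker packet format or any specific device's protocol:

```python
# Hedged sketch of address-event (AER) packing. Field widths here
# (8-bit chip id, 16-bit neuron id) are hypothetical choices for
# illustration, not the SpiNNaker packet format.

def pack_event(chip_id, neuron_id):
    """Pack a spike event into a 24-bit address word."""
    assert 0 <= chip_id < (1 << 8) and 0 <= neuron_id < (1 << 16)
    return (chip_id << 16) | neuron_id

def unpack_event(word):
    """Recover (chip_id, neuron_id) from a packed address word."""
    return (word >> 16) & 0xFF, word & 0xFFFF

word = pack_event(chip_id=3, neuron_id=1234)
assert unpack_event(word) == (3, 1234)
```

Linking two systems then amounts to agreeing on a word format like this and translating between the two sides' address spaces, which is where the FPGA/microcontroller glue logic comes in.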
4) Comparing platforms: PyNN offers the opportunity to compare results between different simulators, and it can be instructive to learn each one's peculiarities. A possible project would be to run the same large-scale network on two different supported PyNN back-ends and compare the results.
5) Or?? You may have many other ideas. Anything where you need access to a large-scale real-time simulation resource which can be run in a reasonably 'push-button' way (no tricky configuration - just set up your simulation and go) is a good candidate for a project.
What was accomplished by the end of the workshop
Two successful projects came out of the workgroup.
Alan Stokes collaborated with Claire Rind to implement a model of the locust 'looming circuit', which detects large objects approaching in the visual field so the animal can avoid them. The model worked on SpiNNaker; future work will involve integrating a plug-in retina.
We also collaborated with the Single Neuron Challenge workgroup to implement the tests they proposed on SpiNNaker; results are available on the Single Neuron Challenge workgroup page. Alan Stokes additionally created an early prototype of a StepCurrentSource, which still needed debugging at the end of the workshop but is a first step towards implementing this useful piece of PyNN on a hardware platform.