The learning principles used by the nervous system are subject to many constraints imposed by the nature of its computational substrate. Could it be that these constraints are useful for determining canonical learning mechanisms that can be used in multiple areas and domains, or for building cognitive agents that interact with the environment in real-time?
What are the constraints that are relevant for learning? Could these constraints restrict the space of all possible learning mechanisms in a useful way? Are these constraints easy to comply with when building neuromorphic learning circuits in CMOS and/or hybrid CMOS-resistive memory technologies?
We will start by compiling a list of what we think are the relevant constraints, then follow up by implementing models that comply with them, and by identifying problems and application areas that are well suited to these learning models.
16:00 - 17:00
In this discussion group we tried to identify the most relevant constraints that biology imposes on learning. The assumption is that, if we develop learning models, mechanisms, and circuits subject to these same constraints, we might be able to derive a "canonical learning" principle that can be applied to multiple tasks, ranging from pattern recognition of sensory signals, to learning to make associations and inferences about concepts, to learning to control actuators.
The other assumption is that, while this canonical learning principle might not be the best one for specific tasks or data sets, it might be very well suited for autonomous agents that have to re-use the same hardware configurations in different subsystems (e.g. the sensory periphery or motor control).
During the first meeting we determined the following list of constraints:
Constraints useful for learning