
Enabling continual learning in neural networks

Computer programs that learn to perform tasks also typically forget them very quickly. This post describes how the learning rule can be modified so that a program can remember old tasks while learning a new one. This is an important step towards smarter programs that can learn progressively and adaptively.

Deep neural networks are currently the most successful machine learning technique for solving a wide variety of tasks, including language translation, image classification, and image generation. However, they are typically designed to learn multiple tasks only if the data is presented all at once. As a network trains on a particular task, its parameters are adapted to solve that task. When a new task is introduced, the new adaptations overwrite the knowledge that the network had previously acquired. This phenomenon is known in cognitive science as 'catastrophic forgetting', and is considered one of the fundamental limitations of neural networks.

Our brains, by contrast, work in a very different way. We are able to learn incrementally, acquiring one skill at a time and applying our previous knowledge when learning new activities. As a starting point for the PNAS paper, which proposes an approach to overcoming catastrophic forgetting in neural networks, the researchers drew inspiration from neuroscience-based theories about how previously acquired skills and memories are consolidated in mammalian and human brains.

Neuroscientists have distinguished two kinds of consolidation that occur in the brain: systems consolidation and synaptic consolidation. Systems consolidation is the process by which memories acquired by the fast-learning parts of our brain are imprinted into the slow-learning parts. This imprinting is thought to be mediated by conscious and unconscious recall; for example, it can occur during dreaming. In the second kind, synaptic consolidation, connections between neurons are less likely to be overwritten if they were important in previously learned activities. The algorithm takes its inspiration specifically from this mechanism to address catastrophic forgetting.

A neural network consists of many connections, in much the same way as a brain. After learning a task, we compute how important each connection is to that task. When we then learn a new task, each connection is protected from modification by an amount proportional to its importance to the old tasks. It is therefore possible to learn the new task without overwriting what was learned previously, and without incurring a significant computational cost. In mathematical terms, the protection applied to each connection on a new task can be thought of as anchoring the parameter to its old value through a spring, whose stiffness is proportional to the connection's importance. For this reason, the algorithm is called Elastic Weight Consolidation (EWC).
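As an illustration only, here is a minimal PyTorch-style sketch of this idea, assuming a quadratic penalty weighted by a diagonal Fisher-style importance estimate. The names `ewc_penalty`, `estimate_fisher`, `fisher`, `old_params`, and `ewc_lambda` are ours, not from the paper, and the importance estimate shown is a common practical approximation rather than the exact procedure used by the authors.

```python
import torch


def ewc_penalty(model, fisher, old_params, ewc_lambda=1000.0):
    """Quadratic 'spring' penalty: each parameter is pulled back towards its
    value after the previous task, with stiffness given by its importance."""
    loss = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return (ewc_lambda / 2.0) * loss


def estimate_fisher(model, data_loader, loss_fn):
    """Diagonal importance estimate: average squared gradients of the task
    loss, used as a per-connection measure of how critical each weight is."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}
```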

To evaluate the algorithm, an agent was exposed to Atari games in sequence. Learning a single game from the score alone is challenging in itself, but learning several games in sequence is even harder, as each game requires its own strategy and techniques. As shown in the figure below, without EWC the agent quickly forgets each game after it stops playing it, which means that on average it barely learns a single game. With EWC (brown and red), however, the agent forgets far less readily and can learn to play several games, one after the other.
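The sequential setup could look roughly like the following sketch, which reuses the hypothetical `ewc_penalty` and `estimate_fisher` helpers above. The tasks here are generic supervised data loaders rather than the Atari reinforcement learning pipeline used in the paper; it is meant only to show where the penalty and the consolidation step fit in.

```python
import torch


def train_sequentially(model, tasks, loss_fn, epochs=1, lr=1e-3, ewc_lambda=1000.0):
    """Train on a list of tasks one after another, adding an EWC penalty for
    every previously consolidated task. `tasks` is a list of data loaders."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    consolidated = []  # (fisher, old_params) for each finished task

    for task_loader in tasks:
        for _ in range(epochs):
            for inputs, targets in task_loader:
                optimizer.zero_grad()
                loss = loss_fn(model(inputs), targets)
                # Pull important weights back towards their old values.
                for fisher, old_params in consolidated:
                    loss = loss + ewc_penalty(model, fisher, old_params, ewc_lambda)
                loss.backward()
                optimizer.step()

        # Consolidate: record where the parameters ended up and how important they are.
        fisher = estimate_fisher(model, task_loader, loss_fn)
        old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
        consolidated.append((fisher, old_params))
```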

Today's computer programs cannot learn from data adaptively and in real time. However, this work shows that catastrophic forgetting is not an insurmountable challenge for neural networks. The hope is that this research marks a step towards programs that can learn in a more flexible and efficient way.

The research also advances our understanding of how consolidation happens in the human brain. The neuroscientific theories on which the work is based have, in fact, mainly been demonstrated in very simple cases. By showing that the same theories can be applied in a more realistic and complex machine learning context, the hope is to lend further weight to the idea that synaptic consolidation is key to retaining memories and know-how.
