
In a study published in Nature, DeepMind introduced a form of memory-augmented neural network called a differentiable neural computer, and showed that it can learn to use its memory to answer questions about complex, structured data, including artificially generated stories, family trees, and even a map of the London Underground. The paper also shows that it can solve a moving-blocks puzzle using reinforcement learning.

Plato compared memory to a wax tablet on which an impression, once made, would stay fixed. He expressed in metaphor the modern notion of plasticity – that our brains can be shaped and reshaped by experience. However, the wax of our memories does not only form impressions; it also forms connections, from one memory to another. Philosophers such as John Locke believed that memories become connected if they are formed nearby in time and space. Rather than wax, the most potent metaphor expressing this is Marcel Proust’s madeleine cake: for Proust, one bite of the confection as an adult unleashed a torrent of associations from his childhood years. These episodic memories (memories of events) are known to depend on the hippocampus in the human brain.

Today, our metaphors for memory have been refined. We no longer see memory as a wax tablet but as a reconstructive process, in which experiences are reassembled from their constituent parts. And rather than a simple association between stimuli and behavioural responses, the relationship between memories and action is variable, conditioned on context and priorities. A simple piece of memorised knowledge, such as the layout of the London Underground, can be used to answer the question “How do you get from Piccadilly Circus to Moorgate?” as well as the question “What is directly adjacent to Moorgate, going north on the Northern line?” It all depends on the question; the contents of memory and their use can be separated. Another view holds that memories can be organised to perform computation. More like Lego than wax, memories can be recombined depending on the problem at hand.

Neural networks excel at pattern recognition and fast, reactive decision-making, but we are only just beginning to build neural networks that can think slowly – that is, deliberate or reason using knowledge. For instance, how could a neural network store memories of facts, like the connections in a transport network, and then logically reason about those pieces of knowledge to answer questions? In a recent paper, it was demonstrated that neural networks and memory systems can be combined to make learning machines that can store knowledge quickly and reason about it flexibly. These models, called differentiable neural computers (DNCs), can learn from examples like neural networks, but they can also store complex data like computers.

In a conventional computer, the processor can read and write data from and to random access memory (RAM). RAM gives the processor much more space to organise the intermediate results of computations. Temporary placeholders for data are called variables and are stored in memory. In a computer, it is a trivial operation to form a variable that holds a single number. And it is also easy to create data structures – variables in memory that contain links that can be followed to reach other variables. One of the simplest data structures is a list – a sequence of variables that can be read item by item. For instance, one could store a list of the names of the players on a sports team and then read out each name one by one. A more complicated data structure is a tree. In a family tree, for example, links from children to parents can be followed to read out a line of ancestry. One of the most complex and general data structures is a graph, such as the London Underground network.
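The three data structures above can be written out in a few lines of plain Python; all the names below are invented for illustration.

```python
# A list: a sequence of variables that can be read item by item.
team = ["Alba", "Boyd", "Cara"]  # players' names (invented)

# A tree: child-to-parent links, followed to read out a line of ancestry.
parent = {"Ada": "Beth", "Beth": "Cora"}

def ancestry(person):
    line = [person]
    while line[-1] in parent:        # follow links until we reach a root
        line.append(parent[line[-1]])
    return line

# A graph: items with arbitrary links, like stations on a transport map
# (a tiny made-up fragment, not the real Underground topology).
underground = {
    "Moorgate": ["Bank", "Old Street"],
    "Bank": ["Moorgate", "Holborn"],
    "Old Street": ["Moorgate"],
    "Holborn": ["Bank"],
}
```

Lists are read in order, trees by following links upward, and graphs by moving along any edge – which is what makes graphs general enough to represent paths and cycles.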

When DNCs were developed, the hope was for machines that could learn to form and navigate complex data structures on their own. At the heart of a DNC is a neural network called a controller, which is analogous to the processor in a computer. A controller is responsible for taking in input, reading from and writing to memory, and producing output that can be interpreted as an answer. The memory is a set of locations that can each store a vector of information.
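This division of labour can be sketched in a few lines; the sizes, names, and the stub controller below are our own illustration of the data flow, not the paper’s architecture.

```python
import numpy as np

N, W = 16, 8               # number of locations, width of each vector
memory = np.zeros((N, W))  # the DNC's external memory

def controller(x, read_vectors):
    # A real DNC controller is a trained neural network (e.g. an LSTM);
    # this stub only shows the data flow: everything the controller sees
    # is combined, and it emits an output (the answer) plus interface
    # signals that drive the reads and writes to memory.
    state = np.concatenate([x, read_vectors.ravel()])
    output = state.mean()        # stand-in for the network's answer
    interface = state[:W]        # stand-in for read/write signals
    return output, interface
```

The key point is that the controller never addresses memory directly by index; it emits signals from which read and write operations are derived, which is what keeps the whole system trainable.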

A controller can perform several operations on memory. At every tick of a clock, it chooses whether to write to memory or not. If it chooses to write, it can store information either at a new, unused location or at a location that already contains information the controller is searching for. This allows the controller to update what is stored at a location. If all the locations in memory are used up, the controller can decide to free locations, much like a computer can reallocate memory that is no longer needed. When the controller does write, it sends a vector of information to the chosen location in memory. Every time information is written, the locations are connected by links of association, which represent the order in which the information was stored.
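A simplified numerical sketch of such a write step is below, loosely following the paper’s idea of erase-and-add writes and a temporal link matrix; the variable names and the simplifications are ours, not the paper’s exact equations.

```python
import numpy as np

N, W = 4, 3
memory = np.zeros((N, W))
link = np.zeros((N, N))   # link[i, j] > 0: i was written just after j
precedence = np.zeros(N)  # how recently each location was written to

def write(memory, link, precedence, w, v, e):
    # w: write weighting over locations; v: write vector; e: erase vector.
    # Erase old content at the weighted locations, then add the new vector.
    memory = memory * (1 - np.outer(w, e)) + np.outer(w, v)
    # Strengthen links from the newly written location to the previous one.
    link = (1 - w[:, None] - w[None, :]) * link + np.outer(w, precedence)
    np.fill_diagonal(link, 0.0)
    precedence = (1 - w.sum()) * precedence + w
    return memory, link, precedence
```

Because the weighting `w` is soft rather than a hard index, every step is differentiable, so the controller can be trained by gradient descent to decide where and what to write.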

In addition to writing, the controller can read from multiple locations in memory. Memory can be searched based on the content of each location, or the associative temporal links can be followed forwards or backwards to recall information in the order it was written, or in reverse. The information that is read out can be used to produce answers to questions or actions to take in an environment. Together, these operations give DNCs the ability to make choices about how they allocate memory, store information in memory, and easily find it again once it is there.
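The two read modes can be sketched as follows, assuming a link matrix in which `link[i, j]` marks that location `i` was written just after location `j`; this is our illustration, not the paper’s exact formulation.

```python
import numpy as np

def content_weighting(memory, key):
    # Content lookup: a softmax over the cosine similarity between the
    # query key and the vector stored at each location.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sims = memory @ key / norms
    return np.exp(sims) / np.exp(sims).sum()

def read_forward(link, last_read):
    # Move to locations written just AFTER the last read location.
    return link @ last_read

def read_backward(link, last_read):
    # Move to locations written just BEFORE the last read location.
    return link.T @ last_read
```

Content lookup answers “find the location that looks like this”, while the forward and backward reads replay a stored sequence in either direction.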

To a reader with a non-technical background, it may seem strange that we keep using phrases such as “the controller can” or “differentiable neural computers … make choices.” We speak like this because differentiable neural computers learn how to use memory and how to produce answers completely from scratch. They learn by the magic of optimisation: when a DNC produces an answer, we compare it to a desired correct answer. Over time, the controller learns to produce answers that are closer and closer to the correct answer. In the process, it learns how to use its memory.
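Here is a toy illustration of that training principle. A one-parameter “model” answers a question; the gap between its answer and the desired answer becomes a gradient that nudges the parameter. The DNC’s controller is trained the same way, only with far more parameters and with gradients flowing through its memory operations.

```python
w = 0.0        # the model's single trainable parameter
target = 4.0   # the desired correct answer

for _ in range(200):
    answer = 2.0 * w                       # the model's current answer
    grad = 2.0 * (answer - target) * 2.0   # gradient of squared error
    w -= 0.05 * grad                       # step towards the right answer
```

After a few hundred steps the answer is essentially 4.0; no one ever told the model what `w` should be, only how wrong each answer was.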

The researchers wanted to test DNCs on problems that involved constructing data structures and using those data structures to answer questions. Graph data structures are very important for representing data items that can be arbitrarily connected to form paths and cycles. In the paper, it was demonstrated that a DNC can learn on its own to write down a description of an arbitrary graph and answer questions about it. When the researchers described the stations and lines of the London Underground, they could ask a DNC to answer questions such as: “Starting at Bond Street, and taking the Central line in a direction one stop, the Circle line in a direction for four stops, and the Jubilee line in a direction for two stops, at which stop do you wind up?” Or, the DNC could plan routes given questions such as “How do you get from Moorgate to Piccadilly Circus?”
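The route-planning query amounts to a shortest-path search over the stored graph. A trained DNC discovers traversal behaviour like this from examples; below we simply write it down directly as a breadth-first search, over a tiny made-up fragment of the network rather than the real Underground topology.

```python
from collections import deque

# Adjacency map: station -> directly connected stations (illustrative).
underground = {
    "Piccadilly Circus": ["Leicester Square", "Green Park"],
    "Leicester Square": ["Piccadilly Circus", "Tottenham Court Road"],
    "Tottenham Court Road": ["Leicester Square", "Holborn"],
    "Holborn": ["Tottenham Court Road", "Bank"],
    "Bank": ["Holborn", "Moorgate"],
    "Moorgate": ["Bank"],
    "Green Park": ["Piccadilly Circus"],
}

def route(start, goal):
    # Breadth-first search: explores stations in order of distance, so
    # the first path that reaches the goal uses the fewest stops.
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in underground[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

The contrast is the point: here the traversal algorithm is hand-written, whereas the DNC had to learn an equivalent procedure purely from question-answer examples.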

With a family tree, it was demonstrated that a DNC could answer questions requiring complex deductions. For instance, even though only parent, child, and sibling relationships were described to the network, it could be asked questions such as “Who is Freya’s maternal great uncle?” It was also possible to analyse how DNCs used their memories by visualising which locations in memory were being read by the controller to produce which answers. Conventional neural networks used for comparison either could not store the information, or they could not learn to reason in a way that would generalise to new examples.
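The deduction involved can be sketched like this: only direct parent and sibling facts are stored, and “maternal great uncle” is composed from them (mother, then her mother, then that grandmother’s brothers). Every name except Freya is invented for illustration, and the paper’s actual tree is not reproduced here.

```python
# Stored facts: direct relationships only (illustrative names).
mother = {"Freya": "Astrid", "Astrid": "Ingrid"}
siblings = {"Ingrid": ["Olaf", "Sigrid"]}
male = {"Olaf"}

def maternal_great_uncle(person):
    # Two hops up the maternal line, then one hop sideways to brothers.
    maternal_grandmother = mother[mother[person]]
    return [s for s in siblings.get(maternal_grandmother, []) if s in male]
```

A DNC answering this question has to learn the equivalent chain of lookups by itself, reading one stored fact after another from memory.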

A DNC can also be trained through reinforcement learning. In this framework, the DNC produces actions but is never shown the answer. Instead, it is scored with points when it produces a good sequence of actions (much like the children’s game “hot or cold”). The researchers connected a DNC to a simple environment with coloured blocks arranged in piles. They could give it instructions for goals to achieve: “Put the light blue block below the green, the orange to the left of the red, the purple below the orange, the light blue to the right of the dark blue, the green below the red, and the purple to the left of the green.”
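The scoring side of this setup can be sketched as follows: the agent is never shown a correct arrangement, only a score for the arrangement its actions produced. The (block, relation, block) goal encoding and the coordinate convention below are our own illustration.

```python
def score(arrangement, goal):
    # arrangement: block -> (column, height); goal: list of constraints.
    # Returns the number of satisfied constraints, the agent's only signal.
    satisfied = 0
    for a, relation, b in goal:
        (ca, ha), (cb, hb) = arrangement[a], arrangement[b]
        if relation == "below" and ca == cb and ha == hb - 1:
            satisfied += 1
        elif relation == "left_of" and ca < cb:
            satisfied += 1
    return satisfied
```

Learning from a score alone is much harder than learning from shown answers: the agent must discover which of its actions earned the points.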

The researchers could set up many such possible goals and then ask the network to carry out, on command, the actions that would produce one or another goal state: in this case, much like a conventional computer, the DNC could store several subroutines in memory, one per possible goal, and execute one or another as requested.

The question of how human memory works is ancient, and our understanding is still evolving. The hope is that DNCs provide both a new tool for computer science and a new metaphor for cognitive science and neuroscience: here is a learning machine that, without prior programming, can organise information into connected facts and use those facts to solve problems.
