Google DeepMind's New Algorithm Adds "Memory" to A.I.
When DeepMind burst into prominent view in 2014, it taught its machine-learning systems how to play Atari games. The systems could learn to beat the games, scoring higher than humans, but they could not remember how they had done so.
A separate neural network had to be created for each Atari game: the same system could not play both Space Invaders and Breakout unless information for both games was given to it at the same time. Now a team of researchers from DeepMind and Imperial College London has created an algorithm that allows its neural networks to learn, retain that information, and use it again.
The group says it has been able to demonstrate "continual learning" based on "synaptic consolidation", a process in the human brain described as "the basis of learning and memory".
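The article does not spell out the mathematics, but the consolidation idea is often sketched as follows: after one task is learned, each connection weight is assigned an importance score, and while learning the next task the network is penalized for moving important weights away from their earlier values. The short Python sketch below is a hypothetical illustration of that penalty, not DeepMind's actual implementation; the function name, its arguments, and the example numbers are all assumptions chosen for clarity.

```python
# Hypothetical sketch of a "synaptic consolidation" style penalty.
# After learning task A, each weight receives an importance score;
# while training on task B, changing an important weight is penalized,
# so the knowledge encoded in it is retained.

def consolidation_penalty(weights, anchor_weights, importance, strength=1.0):
    """Quadratic penalty keeping weights near values learned on an earlier task.

    weights        -- current weights (list of floats)
    anchor_weights -- weight values after the earlier task
    importance     -- per-weight importance for that earlier task
    strength       -- overall scale of the penalty term
    """
    return 0.5 * strength * sum(
        imp * (w - a) ** 2
        for w, a, imp in zip(weights, anchor_weights, importance)
    )

# An important weight (importance 10.0) that drifts by 0.5 is penalized
# far more than an unimportant weight (importance 0.1) drifting by 0.2.
penalty = consolidation_penalty([1.5, 0.2], [1.0, 0.0], [10.0, 0.1])
```

In training, a term like this would simply be added to the loss for the new task, so the network trades off new learning against forgetting old skills.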