Transfer learning


Transfer learning is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. For example, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks. This area of research bears some relation to the long history of psychological literature on transfer of learning, although formal ties between the two fields are limited. From a practical standpoint, reusing or transferring information from previously learned tasks for the learning of new tasks has the potential to significantly improve the sample efficiency of a reinforcement learning agent.
To highlight its importance, Andrew Ng said in his NIPS 2016 tutorial that transfer learning would be the next driver of commercial success in machine learning after supervised learning.

History

In 1993, Lorien Pratt published a paper on transfer in machine learning, formulating the discriminability-based transfer algorithm.
In 1997, the journal Machine Learning published a special issue devoted to transfer learning, and by 1998, the field had advanced to include multi-task learning, along with a more formal analysis of its theoretical foundations. Learning to Learn, edited by Pratt and Sebastian Thrun, is a 1998 review of the subject.
Transfer learning has also been applied in cognitive science, with the journal Connection Science publishing a special issue on reuse of neural networks through transfer in 1996.

Definition

The definition of transfer learning is given in terms of domains and tasks. A domain $\mathcal{D}$ consists of a feature space $\mathcal{X}$ and a marginal probability distribution $P(X)$, where $X = \{x_1, \dots, x_n\} \in \mathcal{X}$. Given a specific domain $\mathcal{D} = \{\mathcal{X}, P(X)\}$, a task $\mathcal{T} = \{\mathcal{Y}, f(\cdot)\}$ consists of two components: a label space $\mathcal{Y}$ and an objective predictive function $f(\cdot)$, which is learned from training data consisting of pairs $(x_i, y_i)$, where $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y}$. The function $f(\cdot)$ can be used to predict the corresponding label $f(x)$ of a new instance $x$.
Given a source domain $\mathcal{D}_S$ and learning task $\mathcal{T}_S$, and a target domain $\mathcal{D}_T$ and learning task $\mathcal{T}_T$, transfer learning aims to help improve the learning of the target predictive function $f_T(\cdot)$ in $\mathcal{D}_T$ using the knowledge in $\mathcal{D}_S$ and $\mathcal{T}_S$, where $\mathcal{D}_S \neq \mathcal{D}_T$ or $\mathcal{T}_S \neq \mathcal{T}_T$.
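The setup above can be made concrete with a short sketch. The following is a minimal Python illustration, assuming scikit-learn; the synthetic datasets standing in for $\mathcal{D}_S$ and $\mathcal{D}_T$, and every hyperparameter, are illustrative assumptions rather than part of the definition. The source network's hidden representation plays the role of the knowledge carried over from $\mathcal{D}_S$ and $\mathcal{T}_S$ when fitting $f_T(\cdot)$.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Source domain D_S: abundant labeled pairs (x_i, y_i) for task T_S.
X_s, y_s = make_classification(n_samples=2000, n_features=20, random_state=0)

# Target domain D_T: same feature space, different distribution, few labels.
X_t, y_t = make_classification(n_samples=200, n_features=20, shift=0.5,
                               random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X_t, y_t, test_size=0.5,
                                          random_state=0)

# Learn the source predictive function f_S on the source data.
source = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                       random_state=0).fit(X_s, y_s)

def hidden(model, X):
    # Hidden-layer activations of the source network (ReLU is the
    # MLPClassifier default); this representation is what gets transferred.
    return np.maximum(0.0, X @ model.coefs_[0] + model.intercepts_[0])

# Fit only a small target model f_T on top of the transferred features.
target = LogisticRegression().fit(hidden(source, X_tr), y_tr)
print("target-domain accuracy:", target.score(hidden(source, X_te), y_te))

Because $f_T(\cdot)$ is fit on features learned in the source domain rather than on raw inputs, it can often reach useful accuracy from far fewer target examples than training from scratch would require.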

Applications

Algorithms are available for transfer learning in Markov logic networks and Bayesian networks. Transfer learning has also been applied to cancer subtype discovery, building utilization, general game playing, text classification, digit recognition, and spam filtering.
In 2020, it was discovered that, because of their similar physical natures, transfer learning is possible between electromyographic (EMG) signals from the muscles and electroencephalographic (EEG) brainwaves: classifiers trained in the gesture recognition domain could be transferred to the mental state recognition domain. The relationship was also shown to work in reverse, with EEG likewise usable to classify EMG. The experiments noted that the accuracy of both standard and convolutional neural networks was improved through transfer learning, both at the first epoch (relative to random initialization, before any training on the target data) and at the asymptote (the end of the learning process). That is, algorithms are improved by exposure to another domain.
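As a hedged sketch of how such cross-domain weight transfer can be set up, the following uses PyTorch; the network architecture, signal dimensions, label counts, and training details are illustrative assumptions, not the cited study's actual configuration.

import torch
import torch.nn as nn

def make_net(n_features, n_classes):
    # Identical architecture in both domains so weights can be copied directly.
    return nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                         nn.Linear(64, n_classes))

# Source domain: a network assumed to have been trained on EMG gesture data.
source_net = make_net(n_features=128, n_classes=4)
# ... train source_net on EMG recordings here ...

# Transfer: initialize the EEG mental-state network from the EMG weights
# instead of from random initialization, then fine-tune on EEG data.
target_net = make_net(n_features=128, n_classes=4)
target_net.load_state_dict(source_net.state_dict())

optimizer = torch.optim.Adam(target_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a random stand-in batch.
x = torch.randn(32, 128)        # stand-in for EEG feature vectors
y = torch.randint(0, 4, (32,))  # stand-in for mental-state labels
optimizer.zero_grad()
loss = loss_fn(target_net(x), y)
loss.backward()
optimizer.step()

Because the transferred weights replace random initialization, the target network can already perform above chance at the first epoch, which matches the early-epoch improvement described above.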