In statistical classification, including machine learning, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different means, differing in the degree of statistical modelling. Terminology is inconsistent, but three major types can be distinguished:
a generative model is a statistical model of the joint probability distribution $P(X, Y)$ on a given observable variable X and target variable Y;
a discriminative model is a model of the conditional probability $P(Y \mid X = x)$ of the target Y, given an observation x; and
classifiers computed without using a probability model are also referred to loosely as "discriminative".
The distinction between these last two classes is not consistently made; some authors refer to these three classes as generative learning, conditional learning, and discriminative learning, while others distinguish only two classes, calling them generative classifiers and discriminative classifiers, not distinguishing between the latter two classes. Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model. Standard examples of each, all of which are linear classifiers, are:
generative classifiers: naive Bayes classifier and linear discriminant analysis;
discriminative model: logistic regression;
classifiers without a probability model: perceptron and support vector machine.
In application to classification, one wishes to go from an observation x to a label y. One can compute this directly, without using a probability distribution (distribution-free classifier); one can estimate the probability of a label given an observation, $P(Y \mid X = x)$, and base classification on that (discriminative model); or one can estimate the joint distribution $P(X, Y)$, from that compute the conditional probability $P(Y \mid X = x)$, and then base classification on that (generative model). These are increasingly indirect, but increasingly probabilistic, allowing more domain knowledge and probability theory to be applied. In practice different approaches are used, depending on the particular problem, and hybrids can combine strengths of multiple approaches. A comparison of the three approaches on the same data is sketched below.
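As a concrete illustration, here is a minimal sketch, assuming scikit-learn and synthetic data (the dataset and the specific model choices are illustrative, not prescribed by the text above), that fits one linear classifier of each type to the same problem:

```python
# A sketch contrasting the three approaches on the same synthetic data:
# a generative model (Gaussian naive Bayes), a discriminative model
# (logistic regression), and a classifier with no probability model
# (perceptron).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, Perceptron
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "generative (GaussianNB)": GaussianNB(),              # models P(X | Y) and P(Y)
    "discriminative (LogisticRegression)": LogisticRegression(),  # models P(Y | X)
    "no probability model (Perceptron)": Perceptron(),    # direct map from x to y
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```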
Definition
An alternative division defines these symmetrically as:
a generative model is a model of the conditional probability of the observable X, given a target y, symbolically, $P(X \mid Y = y)$;
a discriminative model is a model of the conditional probability of the target Y, given an observation x, symbolically, $P(Y \mid X = x)$.
Regardless of precise definition, the terminology is constitutive because a generative model can be used to "generate" random instances, either of an observation and target $(x, y)$, or of an observation x given a target value y, while a discriminative model or discriminative classifier (without a model) can be used to "discriminate" the value of the target variable Y, given an observation x. The difference between "discriminate" (distinguish) and "classify" is subtle, and these are not consistently distinguished. The term "generative model" is also used to describe models that generate instances of output variables in a way that has no clear relationship to probability distributions over potential samples of input variables. Generative adversarial networks are examples of this class of generative models, and are judged primarily by the similarity of particular outputs to potential inputs. Such models are not classifiers.
Relationships between models
In application to classification, the observable X is frequently a continuous variable, the target Y is generally a discrete variable consisting of a finite set of labels, and the conditional probability $P(Y \mid X)$ can also be interpreted as a (non-deterministic) target function $f\colon X \to Y$, considering X as inputs and Y as outputs.

Given a finite set of labels, the two definitions of "generative model" are closely related. A model of the conditional distribution $P(X \mid Y = y)$ is a model of the distribution of each label, and a model of the joint distribution is equivalent to a model of the distribution of label values $P(Y)$, together with the distribution of observations given a label, $P(X \mid Y)$; symbolically, $P(X, Y) = P(X \mid Y)\, P(Y)$. Thus, while a model of the joint probability distribution is more informative than a model of the distribution of labels, it is a relatively small step, hence these are not always distinguished.

Given a model of the joint distribution, $P(X, Y)$, the distribution of the individual variables can be computed as the marginal distributions $P(X) = \sum_y P(X, Y = y)$ and $P(Y) = \int_x P(Y, X = x)\,dx$ (treating X as continuous, hence integrating over it, and Y as discrete, hence summing over it), and either conditional distribution can be computed from the definition of conditional probability: $P(X \mid Y) = P(X, Y) / P(Y)$ and $P(Y \mid X) = P(X, Y) / P(X)$.

Given a model of one conditional probability, and estimated probability distributions for the variables X and Y, denoted $P(X)$ and $P(Y)$, one can estimate the opposite conditional probability using Bayes' rule: $P(X \mid Y)\, P(Y) = P(Y \mid X)\, P(X)$. For example, given a generative model for $P(X \mid Y)$, one can estimate $P(Y \mid X) = P(X \mid Y)\, P(Y) / P(X)$, and given a discriminative model for $P(Y \mid X)$, one can estimate $P(X \mid Y) = P(Y \mid X)\, P(X) / P(Y)$. Note that Bayes' rule and the definition of conditional probability are frequently conflated as well.
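To make these identities concrete, the following sketch (with made-up discrete distributions, chosen only for illustration) starts from a prior $P(Y)$ and a generative model $P(X \mid Y)$, forms the joint, marginalizes to obtain $P(X)$, and recovers the discriminative $P(Y \mid X)$ via Bayes' rule:

```python
# Numeric check of the identities above on a small discrete example.
import numpy as np

p_y = np.array([0.6, 0.4])                # P(Y): prior over 2 labels
p_x_given_y = np.array([[0.7, 0.3],       # P(X | Y=0) over 2 observation values
                        [0.2, 0.8]])      # P(X | Y=1)

p_xy = p_x_given_y * p_y[:, None]         # joint: P(X, Y) = P(X | Y) P(Y)
p_x = p_xy.sum(axis=0)                    # marginal: P(X) = sum_y P(X, Y=y)
p_y_given_x = p_xy / p_x[None, :]         # Bayes' rule: P(Y | X) = P(X | Y) P(Y) / P(X)

print(p_y_given_x)                        # each column sums to 1
```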
Contrast with discriminative classifiers
A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? A discriminative algorithm does not care about how the data was generated; it simply categorizes a given signal. So, discriminative algorithms try to learn $P(Y \mid X)$ directly from the data and then try to classify data. On the other hand, generative algorithms try to learn $P(X, Y)$, which can be transformed into $P(Y \mid X)$ later to classify the data. One of the advantages of generative algorithms is that $P(X, Y)$ can be used to generate new data similar to existing data. On the other hand, discriminative algorithms generally give better performance in classification tasks. Although discriminative models do not need to model the distribution of the observed variables, they cannot generally express complex relationships between the observed and target variables, and they do not necessarily perform better than generative models at classification and regression tasks. The two classes are seen as complementary or as different views of the same procedure.
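As a sketch of the "generate new data" advantage, assuming NumPy and a toy dataset (the data, labels, and Gaussian model are all illustrative), one can fit a per-class Gaussian, the class-conditional model used by Gaussian naive Bayes, and then sample fresh observations from $P(X \mid Y)$:

```python
# Fit a simple generative model (one axis-aligned Gaussian per class) and
# sample new observations for a chosen label; a discriminative model of
# P(Y | X) offers no analogous sampling procedure.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=[0.0, 3.0], scale=1.0, size=(200, 2))   # toy training data
y = (X[:, 1] > 3.0).astype(int)                            # toy labels

# Maximum-likelihood fit of P(X | Y = 1) as an axis-aligned Gaussian.
mu = X[y == 1].mean(axis=0)
sigma = X[y == 1].std(axis=0)

new_points = rng.normal(mu, sigma, size=(5, 2))            # "generate new data"
print(new_points)
```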
Deep generative models
With the rise of deep learning, a new family of methods, called deep generative models (DGMs), has formed through the combination of generative models and deep neural networks. The trick of DGMs is that the neural networks used as generative models have a number of parameters significantly smaller than the amount of data they are trained on, so the models are forced to discover and efficiently internalize the essence of the data in order to generate it. Popular DGMs include variational autoencoders (VAEs), generative adversarial networks (GANs), and autoregressive models. There is a trend to build large deep generative models: for example, GPT-2 among autoregressive neural language models, BigGAN and VQ-VAE for image generation, Optimus as the largest VAE language model, and Jukebox as the largest VAE model for music generation. DGMs have many short-term applications, but in the long run they hold the potential to automatically learn the natural features of a dataset, whether categories or dimensions or something else entirely.
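As a minimal sketch of one such model, the following illustrates a variational autoencoder objective, assuming PyTorch; the architecture, sizes, and class name TinyVAE are illustrative only and do not correspond to any of the named systems above:

```python
# A minimal VAE: an encoder parameterizes q(z | x), a decoder parameterizes
# p(x | z), and the loss is the negative evidence lower bound (ELBO).
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # mean and log-variance of q(z | x)
        self.dec = nn.Linear(z_dim, x_dim)       # parameterizes p(x | z)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        x_hat = self.dec(z)
        recon = nn.functional.mse_loss(x_hat, x, reduction="sum")            # reconstruction term
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())         # KL(q(z|x) || N(0, I))
        return recon + kl                        # negative ELBO, to be minimized

vae = TinyVAE()
loss = vae(torch.rand(32, 784))                  # one training step would backpropagate this
loss.backward()
```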
If the observed data are truly sampled from the generative model, then fitting the parameters of the generative model to maximize the data likelihood is a common method. However, since most statistical models are only approximations to the true distribution, if the model's application is to infer about a subset of variables conditional on known values of others, then it can be argued that the approximation makes more assumptions than are necessary to solve the problem at hand. In such cases, it can be more accurate to model the conditional density functions directly using a discriminative model, although application-specific details will ultimately dictate which approach is most suitable in any particular case.
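A minimal sketch of this likelihood-maximization step, assuming NumPy and a univariate Gaussian generative model (chosen because its maximum-likelihood parameters have closed forms):

```python
# Fitting a generative model by maximum likelihood: for a univariate
# Gaussian, the likelihood-maximizing parameters are the sample mean and
# the (biased, ddof=0) sample variance.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=1000)   # stand-in for observed data

mu_hat = data.mean()    # argmax of the likelihood over the mean
var_hat = data.var()    # argmax over the variance (the MLE uses ddof=0)
print(mu_hat, var_hat)  # close to the true 2.0 and 1.5**2
```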
Suppose the input data is $x \in \{1, 2\}$, the set of labels for x is $y \in \{0, 1\}$, and there are the following 4 data points: $(x, y) = \{(1, 0), (1, 0), (2, 0), (2, 1)\}$. For the above data, estimating the joint probability distribution $p(x, y)$ from the empirical measure will be the following:

|       | y = 0 | y = 1 |
|-------|-------|-------|
| x = 1 | 1/2   | 0     |
| x = 2 | 1/4   | 1/4   |

while $p(y \mid x)$ will be the following:

|       | y = 0 | y = 1 |
|-------|-------|-------|
| x = 1 | 1     | 0     |
| x = 2 | 1/2   | 1/2   |
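The tables above can be reproduced from the four data points with a few lines of Python (a sketch using collections.Counter):

```python
# Empirical joint p(x, y) and conditional p(y | x) from the four data points.
from collections import Counter

points = [(1, 0), (1, 0), (2, 0), (2, 1)]
joint = Counter(points)

n = len(points)
for x in (1, 2):
    for y_ in (0, 1):
        p_xy = joint[(x, y_)] / n                        # empirical p(x, y)
        p_x = sum(joint[(x, y2)] for y2 in (0, 1)) / n   # empirical p(x)
        print(f"p(x={x}, y={y_}) = {p_xy}, p(y={y_} | x={x}) = {p_xy / p_x}")
```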
Text generation
Shannon (1948) gives an example in which a table of frequencies of English word pairs is used to generate a sentence beginning with "representing and speedily is an good", which is not proper English but which will increasingly approximate proper English as the table is moved from word pairs to word triplets, etc.
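A minimal sketch of the word-pair idea, using a toy corpus rather than Shannon's actual frequency tables:

```python
# Tabulate frequencies of consecutive word pairs (bigrams) in a toy corpus,
# then generate text by repeatedly sampling the next word given the current one.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

pairs = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    pairs[w1][w2] += 1                    # frequency table of word pairs

rng = random.Random(0)
word, sentence = "the", ["the"]
for _ in range(10):
    nxt = pairs.get(word)
    if not nxt:                           # no observed successor: stop
        break
    word = rng.choices(list(nxt), weights=nxt.values())[0]  # sample next word
    sentence.append(word)
print(" ".join(sentence))
```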