NNPDF
NNPDF is the acronym used to identify the parton distribution functions from the NNPDF Collaboration. NNPDF parton densities are
extracted from global fits to data based on a combination of a Monte Carlo method for uncertainty estimation and the use of
neural networks as basic interpolating functions.
Methodology
The NNPDF approach can be divided into four main steps:
- The generation of a large sample of Monte Carlo replicas of the original experimental data, in such a way that the central values, errors and correlations are reproduced to sufficient accuracy (see the first sketch after this list).
- The training of a set of PDFs parametrized by neural networks on each of the above MC replicas of the data. The PDFs are parametrized at the initial evolution scale and then evolved to the scale of the experimental data by means of the DGLAP equations. Since the PDF parametrization is redundant, the minimization strategy is based on genetic algorithms as well as on gradient-descent minimizers.
- The neural network training is stopped dynamically before entering the overlearning regime, so that the PDFs learn the physical law underlying the experimental data without simultaneously fitting its statistical noise (see the second sketch after this list).
- Once the training of the MC replicas has been completed, a set of statistical estimators can be applied to the resulting ensemble of PDFs in order to assess the statistical consistency of the results. For example, the stability with respect to the PDF parametrization can be explicitly verified.
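The pseudo-data generation of the first step can be illustrated with a minimal sketch in Python. The numbers below (central values, errors, correlation matrix) are purely illustrative and not an actual NNPDF data set, and the sketch ignores refinements such as the treatment of normalization uncertainties: replicas are simply drawn from a multivariate Gaussian centred on the measured values, so that the sample mean, standard deviations and correlations approach the experimental ones as the number of replicas grows.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy experimental input (illustrative numbers, not real data):
# central values, uncorrelated errors and a correlation matrix.
central = np.array([0.52, 0.48, 0.41, 0.35])
sigma = np.array([0.02, 0.02, 0.03, 0.03])
corr = np.array([
    [1.0, 0.6, 0.3, 0.1],
    [0.6, 1.0, 0.6, 0.3],
    [0.3, 0.6, 1.0, 0.6],
    [0.1, 0.3, 0.6, 1.0],
])
cov = np.outer(sigma, sigma) * corr

# Draw n_rep pseudo-data replicas from a multivariate Gaussian.
n_rep = 1000
replicas = rng.multivariate_normal(central, cov, size=n_rep)

# Check that central values, errors and correlations are reproduced
# to the accuracy allowed by the finite replica sample.
print("mean       :", replicas.mean(axis=0))
print("std        :", replicas.std(axis=0))
print("correlation:\n", np.corrcoef(replicas, rowvar=False).round(2))
```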
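The dynamical stopping of the third step is, in essence, cross-validation: part of the data is held out, and the fit is stopped once the held-out loss stops improving. Below is a minimal, self-contained sketch of that idea with a toy one-hidden-layer network trained by plain gradient descent; the toy function, network size, learning rate and patience window are arbitrary choices for illustration and do not reproduce the actual NNPDF architecture or stopping criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data": noisy samples of a smooth function standing in for a PDF shape.
x = np.linspace(0.01, 1.0, 200)[:, None]
y = x * (1 - x) ** 3 + rng.normal(0.0, 0.01, size=(200, 1))

# Random training/validation split, as in cross-validation stopping.
idx = rng.permutation(len(x))
tr, va = idx[:120], idx[120:]

# Tiny one-hidden-layer network (hypothetical size, not the NNPDF architecture).
n_hidden = 20
W1 = rng.normal(0, 1.0, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1)); b2 = np.zeros(1)

def forward(xs):
    h = np.tanh(xs @ W1 + b1)
    return h @ W2 + b2, h

def mse(xs, ys):
    pred, _ = forward(xs)
    return float(np.mean((pred - ys) ** 2))

lr, patience, best_val, since_best, best = 0.1, 200, np.inf, 0, None
for epoch in range(20000):
    # Plain full-batch gradient-descent step on the training split.
    pred, h = forward(x[tr])
    err = 2 * (pred - y[tr]) / len(tr)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x[tr].T @ dh; gb1 = dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    # Dynamical stopping: keep the parameters with the best validation loss
    # and stop once it has not improved for `patience` epochs.
    val = mse(x[va], y[va])
    if val < best_val:
        best_val, since_best = val, 0
        best = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
    else:
        since_best += 1
        if since_best > patience:
            break

W1, b1, W2, b2 = best
print(f"stopped at epoch {epoch}, best validation MSE {best_val:.5f}")
```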
Example
The image below shows the gluon distribution at small-x.
Releases
The NNPDF releases are summarised in the following table:

PDF set | DIS data | Drell-Yan data | Jet data | LHC data | Independent param. of s and s̄ | Heavy quark masses | NNLO
All PDF sets are available through the LHAPDF interface.
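As an illustration of how a released set is used, the sketch below evaluates the gluon with the LHAPDF Python bindings and estimates the PDF uncertainty as the standard deviation over the Monte Carlo replicas. It assumes that LHAPDF and the grid files of the NNPDF23_nlo_as_0118 set are installed; any other NNPDF Monte Carlo set works the same way.

```python
import numpy as np
import lhapdf  # requires LHAPDF with Python bindings and the grid files installed

# Load all members of an NNPDF set: member 0 is the central (average) member,
# members 1..N are the Monte Carlo replicas. The set name is just an example.
members = lhapdf.mkPDFs("NNPDF23_nlo_as_0118")

Q = 10.0  # scale in GeV
for x in (1e-4, 1e-3, 1e-2, 1e-1):
    # xg(x, Q) for each replica; PDG code 21 is the gluon.
    vals = np.array([pdf.xfxQ(21, x, Q) for pdf in members[1:]])
    print(f"x = {x:.0e}:  xg = {vals.mean():.4f} +- {vals.std():.4f}")
```

The mean and standard deviation over replicas are the standard way to quote central values and uncertainties for a Monte Carlo PDF set such as NNPDF.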