Concept drift
In predictive analytics and machine learning, concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes.
The term concept refers to the quantity to be predicted. More generally, it can also refer to other phenomena of interest besides the target concept, such as an input variable, but in the context of concept drift the term commonly refers to the target variable.
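Formally, concept drift between two points in time t0 and t1 can be described as a change in the joint distribution of the inputs and the target. A common way of writing this (following the terminology of the Gama et al. survey listed under Reviews below; the exact notation here is an assumption of this summary) is

\[
\exists X :\; p_{t_0}(X, y) \neq p_{t_1}(X, y)
\]

where p_t(X, y) denotes the joint distribution at time t of the input variables X and the target y. A change that affects p(y | X) is often called real concept drift, while a change in p(X) alone is often referred to as virtual drift (or covariate shift); only the former changes the quantity the model is asked to predict.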
Examples
In a fraud detection application, the target concept may be a binary attribute FRAUDULENT with values "yes" or "no" that indicates whether a given transaction is fraudulent. In a weather prediction application, there may be several target concepts such as TEMPERATURE, PRESSURE, and HUMIDITY.

The behavior of the customers in an online shop may change over time. Suppose, for example, that weekly merchandise sales are to be predicted, and that a predictive model has been developed that initially works satisfactorily. The model may use inputs such as the amount of money spent on advertising, promotions being run, and other metrics that may affect sales. The model is likely to become less and less accurate over time – this is concept drift. In the merchandise sales application, one reason for concept drift may be seasonality, which means that shopping behavior changes seasonally. Perhaps there will be higher sales in the winter holiday season than during the summer, for example.
Possible remedies
To prevent deterioration in prediction accuracy because of concept drift, both active and passive solutions can be adopted. Active solutions rely on triggering mechanisms, e.g., change-detection tests, to explicitly detect concept drift as a change in the statistics of the data-generating process. In stationary conditions, any fresh information made available can be integrated to improve the model. Conversely, when concept drift is detected, the current model is no longer up-to-date and must be replaced with a new one to maintain prediction accuracy. Passive solutions, by contrast, update the model continuously, e.g., by retraining it on the most recently observed samples or by maintaining an ensemble of classifiers.

Contextual information, when available, can be used to better explain the causes of concept drift: for instance, in the sales prediction application, concept drift might be compensated for by adding information about the season to the model. By providing information about the time of the year, the rate at which the model deteriorates is likely to decrease, but concept drift is unlikely to be eliminated altogether. This is because actual shopping behavior does not follow any static, finite model: new factors that influence shopping behavior may arise at any time, and the influence of the known factors or their interactions may change.
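As an illustration of the active approach, the sketch below (Python with scikit-learn; the synthetic stream, window size, and error-rate threshold are illustrative assumptions, and the detector is only a simplified heuristic in the spirit of methods such as DDM, not an implementation of any particular published test) monitors the model's online error rate and retrains on a window of recent samples when the error rises well above its observed minimum:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n, concept):
    """Synthetic two-class stream; the labelling rule depends on `concept`."""
    X = rng.uniform(0, 10, size=(n, 2))
    threshold = 8.0 if concept == 0 else 5.0       # abrupt drift: the boundary moves
    y = (X[:, 0] + X[:, 1] <= threshold).astype(int)
    return X, y

WINDOW, ALPHA = 500, 1.5                           # illustrative detector parameters
model = SGDClassifier(loss="log_loss", random_state=0)
X0, y0 = make_batch(WINDOW, concept=0)
model.fit(X0, y0)                                  # initial model on the first concept

window_X, window_y = [], []                        # sliding window of recent samples
errors, min_err = [], 1.0                          # running error record for the detector

for t in range(6000):
    X, y = make_batch(1, concept=0 if t < 3000 else 1)
    errors.append(int(model.predict(X)[0] != y[0]))        # test first, then store the sample
    window_X.append(X[0]); window_y.append(y[0])
    window_X, window_y = window_X[-WINDOW:], window_y[-WINDOW:]
    if len(errors) >= WINDOW:
        cur_err = float(np.mean(errors[-WINDOW:]))
        min_err = min(min_err, cur_err)
        if cur_err > ALPHA * min_err + 0.05:               # drift signalled: accuracy degraded
            model = SGDClassifier(loss="log_loss", random_state=0)
            model.fit(np.array(window_X), np.array(window_y))  # replace the outdated model
            errors, min_err = [], 1.0                      # reset the detector
```

A passive variant would skip the detector and simply keep updating the model; a periodic-refresh version of that idea is sketched after the next paragraph.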
Concept drift cannot be avoided for complex phenomena that are not governed by fixed laws of nature. All processes that arise from human activity, such as socioeconomic processes, as well as biological processes, are likely to experience concept drift. Therefore, periodic retraining, also known as refreshing, of any model is necessary.
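A minimal sketch of such periodic refreshing, again on an assumed synthetic stream (the refresh schedule, window size, and choice of scikit-learn model are illustrative choices, not a prescribed recipe):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def observe(t):
    """One labelled observation; the labelling rule drifts gradually after t = 5000."""
    x = rng.uniform(0, 10, size=2)
    threshold = 8.0 - max(0, t - 5000) * 0.001     # gradual drift of the decision boundary
    return x, int(x[0] + x[1] <= threshold)

REFRESH_EVERY, WINDOW = 1000, 2000                 # illustrative schedule and window size
model = SGDClassifier(loss="log_loss", random_state=1)
recent = []                                        # sliding window of recent labelled data

for t in range(10_000):
    x, y = observe(t)
    recent.append((x, y))
    recent = recent[-WINDOW:]
    if t > 0 and t % REFRESH_EVERY == 0:
        X = np.array([xi for xi, _ in recent])
        Y = np.array([yi for _, yi in recent])
        model.fit(X, Y)                            # periodic refresh on recent observations
```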
Software
- RapidMiner: free open-source software for knowledge discovery, data mining, and machine learning, also featuring data stream mining, learning time-varying concepts, and tracking drifting concepts.
- EDDM: free open-source implementation of drift detection methods in Weka.
- MOA: free open-source software specifically for mining data streams with concept drift. It contains a prequential evaluation method, the EDDM concept drift methods, a reader of ARFF real datasets, and artificial stream generators such as SEA concepts, STAGGER, rotating hyperplane, random tree, and random radial basis function streams. MOA supports bi-directional interaction with Weka. (A minimal Python sketch of prequential evaluation follows this list.)
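The prequential ("test-then-train") evaluation mentioned above is easy to mimic outside MOA as well. The sketch below, in Python rather than MOA's Java and on an assumed synthetic drifting stream rather than MOA's own generators, tests the model on each example before using that example for an incremental update, so the running accuracy always reflects performance on unseen data:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
model = SGDClassifier(loss="log_loss")
correct, seen = 0, 0

for t in range(5000):
    x = rng.uniform(0, 10, size=(1, 2))
    # Drifting label rule: the threshold changes abruptly halfway through the stream.
    y = np.array([int(x[0, 0] + x[0, 1] <= (8.0 if t < 2500 else 5.0))])
    if seen > 0:
        correct += int(model.predict(x)[0] == y[0])   # test first ...
    model.partial_fit(x, y, classes=[0, 1])           # ... then train
    seen += 1

print("prequential accuracy:", correct / max(seen - 1, 1))
```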
Datasets
Real
- USP Data Stream Repository, 27 real-world stream datasets with concept drift, compiled by Souza et al.
- Airline, approximately 116 million flight arrival and departure records, compiled by E. Ikonomovska. Reference: Data Expo 2009 Competition.
- Chess.com and Luxembourg datasets compiled by I. Zliobaite.
- ECUE spam, 2 datasets, each consisting of more than 10,000 emails collected by an individual over a period of approximately 2 years. From S.J. Delany's webpage.
- Elec2, electricity demand, 2 classes, 45,312 instances. Reference: M. Harries, Splice-2 comparative evaluation: Electricity pricing, Technical report, The University of New South Wales, 1999. From J. Gama's webpage.
- PAKDD'09 competition data represents a credit evaluation task, collected over a five-year period. Unfortunately, the true labels were released only for the first part of the data.
- Sensor stream and Power supply stream datasets are available from X. Zhu's Stream Data Mining Repository.
- SMEAR, a benchmark data stream with many missing values, consisting of environmental observation data collected over 7 years; the task is to predict cloudiness.
- Text mining, a collection of text mining datasets with concept drift, maintained by I. Katakis.
- Gas Sensor Array Drift Dataset, a collection of 13,910 measurements from 16 chemical sensors utilized for drift compensation in a discrimination task of 6 gases at various concentration levels.
Other
- KDD'99 competition data contains simulated intrusions in a military network environment. It is often used as a benchmark to evaluate methods for handling concept drift.
Synthetic
- Extreme verification latency benchmark: Souza, V.M.A.; Silva, D.F.; Gama, J.; Batista, G.E.A.P.A.: Data Stream Classification Guided by Clustering on Nonstationary Environments and Extreme Verification Latency. SIAM International Conference on Data Mining, pp. 873–881, 2015. From the Nonstationary Environments Archive.
- Sine, Line, Plane, Circle and Boolean Data Sets: L.L. Minku, A.P. White, X. Yao, The Impact of Diversity on On-line Ensemble Learning in the Presence of Concept Drift, IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 5, pp. 730–742, 2010. From L. Minku's webpage.
- SEA concepts: W.N. Street, Y. Kim, A streaming ensemble algorithm for large-scale classification, KDD'01: Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, 2001. From J. Gama's webpage. (A minimal SEA-style generator sketch follows this list.)
- STAGGER: J.C. Schlimmer, R.H. Granger, Incremental Learning from Noisy Data, Machine Learning, vol. 1, no. 3, 1986.
- Mixed: J. Gama, P. Medas, G. Castillo, P. Rodrigues, Learning with drift detection, 2004.
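For a flavour of how such synthetic streams are constructed, the sketch below generates SEA-style data following the commonly described recipe (three attributes uniform on [0, 10], label 1 when the sum of the first two attributes is below a block-specific threshold, abrupt threshold changes between blocks, and optional class noise). The particular block size, thresholds, and noise level used here are configurable defaults and should be treated as illustrative assumptions rather than a faithful reproduction of the original benchmark:

```python
import numpy as np

def sea_stream(n_per_block=12_500, thresholds=(8.0, 9.0, 7.0, 9.5),
               noise=0.10, seed=0):
    """Yield (x, y) pairs of a SEA-style stream with abrupt concept drift.

    Each block uses a different threshold theta; the label is 1 when
    x1 + x2 <= theta, and a fraction `noise` of labels is flipped.
    """
    rng = np.random.default_rng(seed)
    for theta in thresholds:                       # one concept per block
        for _ in range(n_per_block):
            x = rng.uniform(0, 10, size=3)         # the third attribute is irrelevant
            y = int(x[0] + x[1] <= theta)
            if rng.random() < noise:               # class noise
                y = 1 - y
            yield x, y

# Example usage: generate a short stream and inspect the label balance.
labels = [y for _, y in sea_stream(n_per_block=1000)]
print(len(labels), "examples generated;", sum(labels), "positive labels")
```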
Data generation frameworks
- L.L. Minku, A.P. White, X. Yao, The Impact of Diversity on On-line Ensemble Learning in the Presence of Concept Drift, IEEE Transactions on Knowledge and Data Engineering, vol. 22, no. 5, pp. 730–742, 2010. From L. Minku's webpage.
- Lindstrom, P., Delany, S.J. & MacNamee, B., Autopilot: Simulating Changing Concepts in Real Data. In: Proceedings of the 19th Irish Conference on Artificial Intelligence & Cognitive Science, D. Bridge, K. Brown, B. O'Sullivan & H. Sorensen (eds.), pp. 272–263.
- Narasimhamurthy, A., Kuncheva, L.I., A framework for generating data to simulate changing environments, Proc. IASTED Artificial Intelligence and Applications, Innsbruck, Austria, 2007, pp. 384–389.
Projects
- Computational Intelligence Platform for Evolving and Robust Predictive Systems, Bournemouth University, Evonik Industries, Research and Engineering Centre
- Handling Concept Drift in Adaptive Information Systems, Eindhoven University of Technology
- Knowledge Discovery from Ubiquitous Streams, INESC Porto and Laboratory of Artificial Intelligence and Decision Support
- Adaptive Dynamic Ensemble Prediction Techniques, University of Manchester, University of Bristol
- Autonomous learning agents for decentralised data and information networks
Benchmarks
- The Numenta Anomaly Benchmark, a benchmark for evaluating algorithms for anomaly detection in streaming, real-time applications.
Meetings
- 2014
  * Special Session on "Concept Drift, Domain Adaptation & Learning in Dynamic Environments" at IEEE IJCNN 2014
- 2013
  * Real-World Challenges for Data Stream Mining Workshop-Discussion at ECML PKDD 2013, Prague, Czech Republic
  * The 1st International Workshop on Learning stratEgies and dAta Processing in nonStationary environments
- 2011
  * Special Session on Learning in evolving environments and its application on real-world problems at ICMLA'11
  * The 2nd International Workshop on Handling Concept Drift in Adaptive Information Systems
  * Track on Incremental Learning
  * Special Session on Concept Drift and Learning Dynamic Environments
  * Symposium on Computational Intelligence in Dynamic and Uncertain Environments
- 2010
  * International Workshop on Handling Concept Drift in Adaptive Information Systems: Importance, Challenges and Solutions
  * Special Session on Dynamic learning in non-stationary environments
  * Data Streams Track at the ACM Symposium on Applied Computing
  * International Workshop on Knowledge Discovery from Sensor Data
  * Novel Data Stream Pattern Mining Techniques
  * Concept Drift and Learning in Nonstationary Environments
  * Special Session on Machine Learning Methods for Data Streams at the 10th International Conference on Intelligent Systems Design and Applications, ISDA’10
Bibliographic references
Reviews
- Souza, V.M.A., Reis, D.M., Maletzke, A.G., Batista, G.E.A.P.A. Challenges in Benchmarking Stream Learning Algorithms with Real-world Data. Data Mining and Knowledge Discovery, pp. 1–54. https://link.springer.com/article/10.1007/s10618-020-00698-5
- Krawczyk, B., Minku, L.L., Gama, J., Stefanowski, J., Wozniak, M. "Ensemble Learning for Data Stream Analysis: a survey". Information Fusion, Vol. 37, pp. 132–156.
- Dal Pozzolo, A., Boracchi, G., Caelen, O., Alippi, C., & Bontempi, G. Credit card fraud detection and concept-drift adaptation with delayed supervised information. In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE.
- C. Alippi, "Learning in Nonstationary and Evolving Environments", chapter in Intelligence for Embedded Systems. Springer, 2014, 283 pp.
- Gama, J., Žliobaitė, I., Bifet, A., Pechenizkiy, M. and Bouchachia, A., 2014. A survey on concept drift adaptation. ACM Computing Surveys, 46, p. 44.
- C. Alippi, R. Polikar, Special Issue on Learning in Nonstationary and Evolving Environments, IEEE Transactions on Neural Networks and Learning Systems, Vol. 25, No. 1, January 2014.
- Dal Pozzolo, A., Caelen, O., Le Borgne, Y.A., Waterschoot, S., & Bontempi, G. Learned lessons in credit card fraud detection from a practitioner perspective. Expert Systems with Applications, 41, 4915–4928.
- Zliobaite, I., Learning under Concept Drift: an Overview. Technical Report. 2009, Faculty of Mathematics and Informatics, Vilnius University: Vilnius, Lithuania.
- Jiang, J., A Literature Survey on Domain Adaptation of Statistical Classifiers. 2008.
- Kuncheva, L.I., Classifier ensembles for detecting concept change in streaming data: Overview and perspectives, Proc. 2nd Workshop SUEMA 2008, Patras, Greece, 2008, pp. 5–10.
- Gaber, M.M., Zaslavsky, A., and Krishnaswamy, S., Mining Data Streams: A Review, ACM SIGMOD Record, Vol. 34, No. 1, June 2005.
- Kuncheva, L.I., Classifier ensembles for changing environments, Proceedings of the 5th International Workshop on Multiple Classifier Systems, MCS 2004, Cagliari, Italy, in F. Roli, J. Kittler and T. Windeatt (eds.), Lecture Notes in Computer Science, Vol. 3077, 2004, pp. 1–15.
- Tsymbal, A., The problem of concept drift: Definitions and related work. Technical Report. 2004, Department of Computer Science, Trinity College: Dublin, Ireland.