Predictive analytics


Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.
In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions.
The defining functional effect of these technical approaches is that predictive analytics provides a predictive score for each individual in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement.
Predictive analytics is used in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, mobility, healthcare, child protection, pharmaceuticals, capacity planning, social networking and other fields.
One of the best-known applications is credit scoring, which is used throughout financial services. Scoring models process a customer's credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time.

Definition

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. Its extension to the web, predictive web analytics, calculates statistical probabilities of future events online. Predictive analytics draws on statistical techniques including data modeling, machine learning, AI, deep learning algorithms and data mining. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown, whether it be in the past, present or future: for example, identifying suspects after a crime has been committed, or detecting credit card fraud as it occurs. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. The accuracy and usability of the results, however, depend greatly on the level of data analysis and the quality of the assumptions.
Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores for each individual organizational element. This distinguishes it from forecasting. For example, "Predictive analytics—Technology that learns from experience to predict the future behavior of individuals in order to drive better decisions." In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues, achieving near-zero breakdown, and to be integrated further into prescriptive analytics for decision optimization.

Types

Generally, the term predictive analytics is used to mean predictive modeling, "scoring" data with predictive models, and forecasting. However, people are increasingly using the term to refer to related analytical disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis, and are widely used in business for segmentation and decision making, but have different purposes and the statistical techniques underlying them vary.

Predictive models

Predictive modeling uses predictive models to analyze the relationship between the specific performance of a unit in a sample and one or more known attributes or features of the unit. The objective of the model is to assess the likelihood that a similar unit in a different sample will exhibit the specific performance. This category encompasses models in many areas, such as marketing, where they seek out subtle data patterns to answer questions about customer performance, or fraud detection models. Predictive models often perform calculations during live transactions, for example, to evaluate the risk or opportunity of a given customer or transaction in order to guide a decision. With advancements in computing speed, individual agent modeling systems have become capable of simulating human behavior or reactions to given stimuli or scenarios.
The available sample units with known attributes and known performances are referred to as the "training sample". The units in other samples, with known attributes but unknown performances, are referred to as "out-of-sample" units. Out-of-sample units do not necessarily bear a chronological relation to the training-sample units. For example, the training sample may consist of literary attributes of writings by Victorian authors with known attribution, and the out-of-sample unit may be a newly found writing of unknown authorship; a predictive model may aid in attributing the work to a known author. Another example is given by analysis of blood spatter in simulated crime scenes, in which the out-of-sample unit is the actual blood-spatter pattern from a crime scene. The out-of-sample unit may be from the same time as the training units, from a previous time, or from a future time.
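The following is a minimal sketch of this training/out-of-sample distinction, using scikit-learn; the synthetic data, feature count, and model choice are hypothetical stand-ins rather than a prescribed method.

    # Fit a model on a training sample with known attributes and known
    # performances, then score out-of-sample units whose performance is unknown.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 4))                   # known attributes
    y_train = (X_train[:, 0] + rng.normal(size=500) > 0).astype(int)  # known performance
    X_new = rng.normal(size=(10, 4))                      # attributes known, performance unknown

    model = LogisticRegression().fit(X_train, y_train)
    scores = model.predict_proba(X_new)[:, 1]             # predictive score per unit
    print(scores)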

Descriptive models

Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior, descriptive models identify many different relationships between customers or products. Descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do. Instead, descriptive models can be used, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can be utilized to develop further models that can simulate a large number of individualized agents and make predictions.
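As a minimal sketch of a descriptive model, a clustering algorithm can group customers without any outcome variable being predicted; the features and segment count below are hypothetical.

    # Segment customers into groups by (synthetic) behavioral features.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 3))    # e.g., spend, purchase frequency, tenure
    segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(segments))     # number of customers per segment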

Decision models

Decision models describe the relationship between all the elements of a decision (the known data, the decision, and the forecast results of the decision) in order to predict the results of decisions involving many variables. These models can be used in optimization, maximizing certain outcomes while minimizing others. Decision models are generally used to develop decision logic or a set of business rules that will produce the desired action for every customer or circumstance.
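A minimal sketch of such decision logic is a rule that combines a predictive score with payoffs; the margin and contact-cost figures here are hypothetical.

    # Contact a customer only when the expected profit of doing so is positive.
    def decide(p_response, margin=120.0, contact_cost=5.0):
        expected_profit = p_response * margin - contact_cost
        return "contact" if expected_profit > 0 else "skip"

    for p in (0.01, 0.05, 0.30):
        print(p, decide(p))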

Applications

Predictive analytics can be put to use in many applications; the following are a few examples where it has shown a positive impact in recent years.

Business

Analytical customer relationship management is a frequent commercial application of predictive analysis. Methods of predictive analysis are applied to customer data to construct a holistic view of the customer. CRM uses predictive analysis in applications for marketing campaigns, sales, and customer services. Analytical CRM can be applied throughout the customers' lifecycle.
Corporate organizations often collect and maintain abundant data, such as customer records or sales transactions. In these cases, predictive analytics can help analyze customers' spending, usage and other behavior, leading to efficient cross-selling, i.e., selling additional products to current customers.
Proper application of predictive analytics can lead to more proactive and effective retention strategies. By frequently examining a customer's past service usage, service performance, spending and other behavior patterns, predictive models can determine the likelihood of a customer terminating service in the near future. Intervening with lucrative offers can then increase the chance of retaining the customer. Predictive analytics can also predict silent attrition, the behavior of a customer who slowly but steadily reduces usage.
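A minimal sketch of such a retention rule follows; the synthetic features, model choice, and intervention threshold are hypothetical.

    # Score churn risk and flag high-risk customers for a retention offer.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 5))   # e.g., usage, spend, support calls, ...
    y = (X[:, 2] - X[:, 0] + rng.normal(size=1000) > 0.5).astype(int)  # 1 = churned

    model = GradientBoostingClassifier().fit(X, y)
    p_churn = model.predict_proba(X[:20])[:, 1]
    offer = p_churn > 0.6            # intervene on likely churners
    print(list(zip(np.round(p_churn, 2), offer)))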

Child protection

Some child welfare agencies have started using predictive analytics to flag high-risk cases. For example, in Hillsborough County, Florida, the child welfare agency's use of a predictive modeling tool has been credited with preventing abuse-related child deaths in the target population.

Clinical decision support systems

Predictive analysis has found use in health care primarily to determine which patients are at risk of developing conditions such as diabetes, asthma, or heart disease. Additionally, sophisticated clinical decision support systems incorporate predictive analytics to support medical decision making.
A 2016 study of neurodegenerative disorders provides an example of a CDS platform used to diagnose, track, predict and monitor the progression of Parkinson's disease.

Predicting outcomes of legal decisions

AI programs can be used to predict the outcomes of judicial decisions, and they can serve as assistive tools for professionals in the legal industry.

Portfolio, product or economy-level prediction

Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques. They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power.
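One way to apply machine learning here, as described above, is to transform the series into feature vectors of lagged values; the sketch below uses a synthetic series, and the lag count of seven is an arbitrary choice.

    # Turn a univariate series into lag-feature vectors and fit a learner.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)
    t = np.arange(400)
    series = 10 + 0.02 * t + np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.3, size=400)

    lags = 7
    X = np.column_stack([series[i:-(lags - i)] for i in range(lags)])  # lagged windows
    y = series[lags:]                                                  # next value
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    print(model.predict(series[-lags:].reshape(1, -1)))                # one-step forecast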

Underwriting

Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. Predictive analytics can inform underwriting of these risks by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application-level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default.
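In credit underwriting, for example, a predicted probability of default is commonly combined with loss parameters. A standard simplified decomposition of expected loss is

    EL = PD \times LGD \times EAD

where PD is the probability of default, LGD is the loss given default (the fraction of the exposure lost if default occurs), and EAD is the exposure at default.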

Technology and big data influences

Big data is a collection of data sets that are so large and complex that they become awkward to work with using traditional database management tools. The volume, variety and velocity of big data have introduced challenges across the board for capture, storage, search, sharing, analysis, and visualization. Examples of big data sources include web logs, RFID, sensor data, social networks, Internet search indexing, call detail records, military surveillance, and complex data in the astronomical, biogeochemical, genomic, and atmospheric sciences. Big data is the core of most predictive analytic services offered by IT organizations.
Thanks to technological advances in computer hardware (faster CPUs, cheaper memory, and MPP architectures) and new technologies such as Hadoop, MapReduce, and in-database and text analytics for processing big data, it is now feasible to collect, analyze, and mine massive amounts of structured and unstructured data for new insights. It is also possible to run predictive algorithms on streaming data. Today, exploring big data and using predictive analytics is within reach of more organizations than ever before, and new methods capable of handling such data sets continue to be proposed.

Analytical techniques

The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.

Regression techniques

Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics. Some of them are briefly discussed below.

Linear regression model

The linear regression model predicts the response variable as a linear function of the parameters with unknown coefficients. These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions.
The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares estimation.
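In standard notation, the linear model and the ordinary least squares criterion can be written as

    y_i = x_i^\top \beta + \varepsilon_i, \qquad \hat{\beta} = \arg\min_{\beta} \sum_i (y_i - x_i^\top \beta)^2 = (X^\top X)^{-1} X^\top y

where y_i is the response, x_i the vector of predictors, \beta the unknown coefficients, and \varepsilon_i the residual error.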

Discrete choice models

Multiple regression is generally used when the response variable is continuous and has an unbounded range. Often the response variable is not continuous but rather discrete. While it is mathematically feasible to apply multiple regression to discrete ordered dependent variables, some of the assumptions behind the theory of multiple linear regression no longer hold, and there are other techniques, such as discrete choice models, that are better suited for this type of analysis. If the dependent variable is discrete, some of these better-suited methods are logistic regression, multinomial logit and probit models. Logistic regression and probit models are used when the dependent variable is binary.

Logistic regression

In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model.
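In standard notation, the logistic model relates the probability p(x) of the positive outcome to the predictors via the logit transform:

    \ln \frac{p(x)}{1 - p(x)} = x^\top \beta, \qquad p(x) = \frac{1}{1 + e^{-x^\top \beta}}

which maps the bounded probability onto an unbounded continuous scale, as described above.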
The Wald test and the likelihood-ratio test are used to assess the statistical significance of each coefficient b in the model. A test assessing the goodness of fit of a classification model is the "percentage correctly predicted".

Probit regression

Probit models offer an alternative to logistic regression for modeling categorical dependent variables.
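In a probit model, the probability of the positive outcome is modeled through the standard normal cumulative distribution function \Phi:

    P(y = 1 \mid x) = \Phi(x^\top \beta)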

Multinomial logistic regression

An extension of the binary logit model to cases where the dependent variable has more than two categories is the multinomial logit model. In such cases, collapsing the data into two categories might not make good sense or may lead to a loss in the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered. Some authors have extended multinomial regression to include feature selection/importance methods such as random multinomial logit.
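In standard notation, with K categories the multinomial logit model assigns probabilities via the softmax form

    P(y = k \mid x) = \frac{e^{x^\top \beta_k}}{\sum_{j=1}^{K} e^{x^\top \beta_j}}

where one category's coefficient vector is conventionally fixed to zero for identifiability.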

Logit versus probit

The two regressions tend to behave similarly, except that the logistic distribution has slightly heavier tails than the normal. The coefficients obtained from the logit and probit models are usually close together. However, the odds ratio is easier to interpret in the logit model.
Practical reasons for choosing the probit model over the logistic model typically center on a belief that the latent variable underlying the outcome is normally distributed.

Time series models

Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend or seasonal variation) that should be accounted for. As a result, standard regression techniques cannot be applied to time series data, and methodology has been developed to decompose the trend, seasonal and cyclical components of the series.
Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive models and moving-average models. The Box–Jenkins methodology combines the AR and MA models to produce the ARMA model, which is the cornerstone of stationary time series analysis. ARIMA, on the other hand, are used to describe non-stationary time series.
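In standard notation, an ARMA(p, q) model for a stationary series y_t can be written as

    y_t = c + \sum_{i=1}^{p} \varphi_i y_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j \varepsilon_{t-j}

where \varepsilon_t is white noise; an ARIMA(p, d, q) model applies the same form after differencing the series d times.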
In recent years, time series models have become more sophisticated and attempt to model conditional heteroskedasticity. Such models include the ARCH model and the GARCH model, both frequently used for financial time series.

Survival or duration analysis

Survival analysis is another name for time-to-event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social sciences, such as economics, as well as in engineering.
Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative and therefore normality cannot be assumed when dealing with duration/survival data.
Duration models can be parametric, non-parametric or semi-parametric. Commonly used models include the Kaplan–Meier estimator and the Cox proportional hazards model.
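The Cox model, for example, is semi-parametric: it expresses the hazard at time t for a unit with covariates x as

    h(t \mid x) = h_0(t) \exp(x^\top \beta)

where the baseline hazard h_0(t) is left unspecified and only the covariate effects \beta are estimated.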

Classification and regression trees (CART)

Classification and regression trees are a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.
Decision trees are formed by a collection of rules based on variables in the modeling data set: rules that split the data into increasingly homogeneous groups are selected one at a time, and splitting continues until no further gain can be made or some pre-set stopping rule is met.
Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules.
A very popular method for predictive analytics is random forests.
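A minimal sketch of a single tree and a random forest fitted to the same synthetic data follows; the data and hyperparameters are hypothetical.

    # Fit a CART-style decision tree and a random forest on a rule-like target.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(600, 5))
    y = ((X[:, 0] > 0) & (X[:, 1] < 0.5)).astype(int)  # target defined by two rules

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(tree.score(X, y), forest.score(X, y))        # training accuracy of each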

Multivariate adaptive regression splines

Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions.
The MARS approach deliberately overfits the model and then prunes it back to reach the optimal model. The algorithm is computationally very intensive, and in practice an upper limit on the number of basis functions is specified.
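In standard notation, a MARS model is a weighted sum of basis functions,

    \hat{f}(x) = \beta_0 + \sum_{m=1}^{M} \beta_m B_m(x)

where each B_m(x) is a hinge function of the form \max(0, x_j - c) or \max(0, c - x_j), or a product of such hinges; the pruning step removes the basis functions that contribute least to the fit.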

Machine learning techniques

Machine learning includes a number of advanced statistical methods for regression and classification, and finds application in a wide variety of fields including medical diagnostics, credit card fraud detection, face and speech recognition, and analysis of the stock market.

Tools

Historically, using predictive analytics tools, as well as understanding the results they delivered, required advanced skills. However, modern predictive analytics tools are no longer restricted to IT specialists. As more organizations adopt predictive analytics into decision-making processes and integrate it into their operations, they are creating a shift in the market toward business users as the primary consumers of the information. Business users want tools they can use on their own. Vendors are responding by creating new software that removes the mathematical complexity, provides user-friendly graphic interfaces, and/or builds in shortcuts that can, for example, recognize the kind of data available and suggest an appropriate predictive model. Predictive analytics tools have become sophisticated enough to adequately present and dissect data problems, so that any data-savvy information worker can utilize them to analyze data and retrieve meaningful, useful results. For example, modern tools present findings using simple charts, graphs, and scores that indicate the likelihood of possible outcomes.
There are numerous tools available in the marketplace that help with the execution of predictive analytics. These range from those that need very little user sophistication to those that are designed for the expert practitioner. The difference between these tools is often in the level of customization and heavy data lifting allowed.

PMML

The Predictive Model Markup Language (PMML) was proposed as a standard language for expressing predictive models. Such an XML-based language provides a way for different tools to define predictive models and to share them. PMML 4.0 was released in June 2009.

Criticism

There are plenty of skeptics when it comes to computers' and algorithms' abilities to predict the future, including Gary King, a professor from Harvard University and the director of the Institute for Quantitative Social Science. People are influenced by their environment in innumerable ways. Predicting perfectly what people will do next requires that all the influential variables be known and measured accurately. "People's environments change even more quickly than they themselves do. Everything from the weather to their relationship with their mother can change the way people think and act. All of those variables are unpredictable. How they will impact a person is even less predictable. If put in the exact same situation tomorrow, they may make a completely different decision. This means that a statistical prediction is only valid in sterile laboratory conditions, which suddenly isn't as useful as it seemed before."
In a study of 1072 papers published in Information Systems Research and MIS Quarterly between 1990 and 2006, only 52 empirical papers attempted predictive claims, of which only 7 carried out proper predictive modeling or testing.