Computational complexity of machine learning algorithms

What is complexity? Good question, and one I should have addressed right away. It is a notion often covered in algorithmics classes, but rarely in machine learning classes… Simply put, say you have a model and that training it on $n$ points takes a certain amount of time. Now what happens if you train it on $2n$ points? If the training time merely doubles, the training time is linear in the number of points. Sometimes it is worse: the new training time may be multiplied by four, in which case the training time is quadratic in the number of points. If you already face a long training time on a few thousand points, do not expect to be able to run the method on millions of points (kernel SVMs, I miss you in large samples).

It is harder than one would think to evaluate the complexity of a machine learning algorithm: it may be implementation dependent, properties of the data may change the algorithmic path taken, and the training time often depends on parameters passed to the algorithm. Another caveat is that learning algorithms are complex and rely on other algorithms.

A theoretical point of view

Some bounds

Here, I propose upper bounds (since an implementation achieving each bound is described below), assuming the data is dense.

Calling $n$ the number of training samples, $p$ the number of features, $n_{trees}$ the number of trees (for tree-based methods), $n_{sv}$ the number of support vectors and $n_{l_i}$ the number of neurons at layer $i$ in a neural network, we have the following approximations.

Algorithm | Classification/Regression | Training | Prediction
Decision Tree | C+R | $O(n^2 p)$ | $O(p)$
Random Forest | C+R | $O(n^2 p \, n_{trees})$ | $O(p \, n_{trees})$
Random Forest | R (Breiman implementation) | $O(n^2 p \, n_{trees})$ | $O(p \, n_{trees})$
Random Forest | C (Breiman implementation) | $O(n^2 \sqrt{p} \, n_{trees})$ | $O(p \, n_{trees})$
Extremely Random Trees | C+R | $O(n p \, n_{trees})$ | $O(p \, n_{trees})$
Gradient Boosting ($n_{trees}$) | C+R | $O(n p \, n_{trees})$ | $O(p \, n_{trees})$
Linear Regression | R | $O(p^2 n + p^3)$ | $O(p)$
SVM (Kernel) | C+R | $O(n^2 p + n^3)$ | $O(n_{sv} p)$
k-Nearest Neighbours (naive) | C+R | $-$ | $O(n p)$
Nearest centroid | C | $O(n p)$ | $O(p)$
Neural Network | C+R | ? | $O(p \, n_{l_1} + n_{l_1} n_{l_2} + \dots)$
Naive Bayes | C | $O(n p)$ | $O(p)$

Justifications

Decision Tree based models

Obviously, ensemble methods multiply the complexity of the original model by the number of “voters” in the model, and replace the training size by the size of each bag.

When training a decision tree, a split has to be found at each node until a maximum depth has been reached.

The strategy for finding this split is to look, for each variable (there are $p$ of them), at the different thresholds (there are up to $n$ of them) and at the information gain achieved by each (an evaluation in $O(n)$).
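To make the counting concrete, here is a minimal, purely illustrative sketch of such an exhaustive split search (the helper names are mine, not scikit-learn's); the nested loops over the $p$ features, the up-to-$n$ thresholds and the $O(n)$ impurity evaluation are where the cost comes from.

import numpy as np

def gini(labels):
    # O(n) impurity evaluation for one side of a candidate split
    _, counts = np.unique(labels, return_counts=True)
    proportions = counts / counts.sum()
    return 1.0 - np.sum(proportions ** 2)

def best_split(X, y):
    # Exhaustive search: p features x up to n thresholds x O(n) evaluation
    n, p = X.shape
    best_feature, best_threshold, best_score = None, None, np.inf
    for feature in range(p):
        for threshold in X[:, feature]:
            mask = X[:, feature] <= threshold
            left, right = y[mask], y[~mask]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best_score:
                best_feature, best_threshold, best_score = feature, threshold, score
    return best_feature, best_threshold, best_score

X = np.random.rand(200, 5)
y = np.random.binomial(1, 0.5, 200)
print(best_split(X, y))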

In the Breiman implementation, and for classification, it is recommended to use $\sqrt{p}$ predictors for each (weak) classifier.

Extremely random trees

The search strategy for the optimal split simply does not take place in the case of ERTs. This makes it much simpler to find a (weaker) split.

However (in my experience), ERT implementations are not much faster than RF ones.

Linear regressions

The problem of finding the vector of weights $\beta$ in a linear regression boils down to evaluating the following expression: $\beta = (X^\top X)^{-1} X^\top y$.

The most computationally intensive parts are evaluating the product $X^\top X$, which requires $O(p^2 n)$ operations, and then inverting it, which requires $O(p^3)$ operations.

Though most implementations prefer to solve the system of equations $(X^\top X)\,\beta = X^\top y$ with a gradient descent, the complexity remains the same.
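A minimal NumPy sketch of these normal equations (this is not how scikit-learn's LinearRegression is implemented internally; a least-squares solver is used below only as a sanity check):

import numpy as np

n, p = 1000, 20
X = np.random.rand(n, p)
y = np.random.rand(n)

gram = X.T @ X                          # O(p^2 n): the dominant term when n >> p
beta = np.linalg.solve(gram, X.T @ y)   # O(p^3): solving the p x p system

# Sanity check against a library least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta, beta_lstsq))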

Support Vector Machine

For the training part, the classical algorithms require evaluating the kernel matrix $K$, the matrix whose general term is $K_{ij} = k(x_i, x_j)$, where $k$ is the specified kernel.

It is assumed that each evaluation of $k$ costs $O(p)$, as is true for common kernels (Gaussian, polynomial, sigmoid…), so that building the kernel matrix costs $O(n^2 p)$. This assumption may be wrong for other kernels.

Then, solving the constrained quadratic program is “morally equivalent to” inverting a square matrix of size $n$, whose complexity is assumed to be $O(n^3)$.
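As an illustration of where the $O(n^2 p)$ term comes from, here is a small sketch that materializes a Gaussian kernel matrix explicitly (solvers such as libsvm do not build it this way, so this is only illustrative):

import numpy as np

def gaussian_kernel_matrix(X, gamma=1.0):
    # n x n kernel matrix: n^2 entries, each requiring an O(p) distance computation
    sq_norms = np.sum(X ** 2, axis=1)
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))

X = np.random.rand(500, 10)
K = gaussian_kernel_matrix(X)
print(K.shape)  # (500, 500)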

k-Nearest Neighbours

In its simplest form, given a new data point $x$, the kNN algorithm looks for the $k$ closest points to $x$ in the training data and returns the most common label (or the averaged target values for a regression problem).

To achieve this, it is necessary to compute the distance between $x$ and every point in the data set, which amounts to $n$ distance evaluations. For the common distances (Euclidean, Manhattan…) each evaluation is performed in $O(p)$ operations, hence an overall $O(np)$ prediction cost. Note that kernel k-nearest neighbours have the same complexity (provided the kernels enjoy the same property).
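A minimal sketch of this naive prediction (the helper below is mine; scikit-learn's KNeighborsClassifier can use tree-based indexes instead):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    # O(n p): one O(p) Euclidean distance per training point
    distances = np.linalg.norm(X_train - x, axis=1)
    # A full sort costs O(n log n); np.argpartition would bring this step down to O(n)
    nearest = np.argsort(distances)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.random.rand(1000, 10)
y_train = np.random.binomial(1, 0.5, 1000)
print(knn_predict(X_train, y_train, np.random.rand(10)))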

However, many implementations pre-index the training points (with quadtrees, k-d trees or similar structures), which dramatically lowers the number of comparisons needed at prediction time.

Likewise, in the case of a sparse data set with an inverted index over the rows, it is not necessary to compute all the pairwise distances.

The practical point of view

All this is nice, but what about real-life examples? Below, we focus on the scikit-learn implementations.

The assumption will be that the complexities take the form $O(n^\alpha p^\beta)$, and $\alpha$ and $\beta$ will be estimated using randomly generated samples with $n$ and $p$ varying. Then, using a log-log regression, the exponents are estimated.
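Concretely (a one-line derivation of why a log-log regression recovers the exponents, under this power-law assumption):

$$ T(n, p) \approx C \, n^{\alpha} p^{\beta} \quad\Longrightarrow\quad \log T \approx \log C + \alpha \log n + \beta \log p $$

which is exactly the linear regression fitted by the ComplexityEvaluator class shown in the code section below.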

Though this assumption is wrong, it should help to have a better idea of how the algorithms work and it will reveal some implementation details / difference between the default settings of the same algorithm that one may overlook.

The results

Method | $\alpha$ (exponent of $n$) | $\beta$ (exponent of $p$)
RandomForestRegressor | 1.21 | 0.89
ExtraTreesRegressor | 1.03 | 0.88
AdaBoostRegressor | 0.71 | 1.01
LinearRegression | 0.72 | 1.3
SVR | 1.96 | 0.42
RandomForestClassifier | 1.09 | 0.38
ExtraTreesClassifier | 0.81 | 0.31
AdaBoostClassifier | 0.89 | 0.79
SVC | 2.05 | 0.52
LogisticRegression(solver=liblinear) | 0.9 | 0.88
LogisticRegression(solver=sag) | 1.03 | 0.95

Surprisingly, some methods are sublinear in $n$. Perhaps the sample sizes were too small. As expected, the support vector machines show an exponent for $n$ close to 2, which does not scale well with the sample size.

Another interesting point is the complexity in $p$ for the random forests and extra trees: the exponent of $p$ differs depending on whether we are performing a regression or a classification. A quick look at the documentation explains it: the default behaviors differ between the two problems!

For the regression:

max_features : int, float, string or None, optional (default=”auto”)

The number of features to consider when looking for the best split:

If int, then consider max_features features at each split.
If float, then max_features is a percentage and int(max_features * n_features) features are considered at each split.
If “auto”, then max_features=n_features.
If “sqrt”, then max_features=sqrt(n_features).
If “log2”, then max_features=log2(n_features).
If None, then max_features=n_features.

Whereas the classification default behavior is

If “auto”, then max_features=sqrt(n_features).

The code

For those who would like to run the code on other algorithms, here is the method I used.

import numpy as np
import pandas as pd
import time
from sklearn.linear_model import LinearRegression
import math


class ComplexityEvaluator:

    def __init__(self, nrow_samples, ncol_samples):
        self._nrow_samples = nrow_samples
        self._ncol_samples = ncol_samples

    def _time_samples(self, model, random_data_generator):
        rows_list = []
        for nrow in self._nrow_samples:
            for ncol in self._ncol_samples:
                train, labels = random_data_generator(nrow, ncol)

                start_time = time.time()
                model.fit(train, labels)
                elapsed_time = time.time() - start_time

                result = {"N": nrow, "P": ncol, "Time": elapsed_time}
                rows_list.append(result)

        return rows_list

    def Run(self, model, random_data_generator):
        data = pd.DataFrame(self._time_samples(model, random_data_generator))
        print(data)
        data = data.applymap(math.log)
        linear_model = LinearRegression(fit_intercept=True)
        linear_model.fit(data[["N", "P"]], data[["Time"]])
        return linear_model.coef_

And a small unit test (that can just be appended at the bottom of the previous file).

if __name__ == "__main__":
    class TestModel:

        def __init__(self):
            pass

        def fit(self, x, y):
            time.sleep(x.shape[0] / 1000.)

    def random_data_generator(n, p):
        return np.random.rand(n, p), np.random.rand(n, 1)

    model = TestModel()

    complexity_evaluator = ComplexityEvaluator(
            [200, 500, 1000, 2000, 3000], [1,5,10])

    res = complexity_evaluator.Run(model, random_data_generator)

    print(res)

After a small unit test, everything seems consistent.

N   P      Time
0    200   1  0.200263
1    200   5  0.200421
2    200  10  0.200520
3    500   1  0.500928
4    500   5  0.500711
5    500  10  0.501064
6   1000   1  1.001396
7   1000   5  1.001420
8   1000  10  1.000629
9   2000   1  2.002225
10  2000   5  2.002096
11  2000  10  2.000759
12  3000   1  3.003331
13  3000   5  3.003340
14  3000  10  3.003350
[[  9.99583664e-01   1.13487892e-05]]

So let’s enjoy the number of algorithms offered by sklearn. The following list may be updated as new algorithms are tested.

import numpy as np
import ComplexityEvaluator
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor, AdaBoostRegressor
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, AdaBoostClassifier
from sklearn.svm import SVR, SVC
from sklearn.linear_model import LogisticRegression


def random_data_regression(n, p):
    return np.random.rand(n, p), np.random.rand(n)


def random_data_classification(n, p):
    return np.random.rand(n, p), np.random.binomial(1, 0.5, n)


regression_models = [RandomForestRegressor(),
                     ExtraTreesRegressor(),
                     AdaBoostRegressor(),
                     LinearRegression(),
                     SVR()]

classification_models = [RandomForestClassifier(),
                         ExtraTreesClassifier(),
                         AdaBoostClassifier(),
                         SVC(),
                         LogisticRegression(),
                         LogisticRegression(solver='sag')]

names = ["RandomForestRegressor",
         "ExtraTreesRegressor",
         "AdaBoostRegressor",
         "LinearRegression",
         "SVR",
         "RandomForestClassifier",
         "ExtraTreesClassifier",
         "AdaBoostClassifier",
         "SVC",
         "LogisticRegression(solver=liblinear)",
         "LogisticRegression(solver=sag)"]

complexity_evaluator = ComplexityEvaluator.ComplexityEvaluator(
    [500, 1000, 2000, 5000, 10000, 15000, 20000],
    [5, 10, 20, 50, 100, 200])

i = 0
for model in regression_models:
    res = complexity_evaluator.Run(model, random_data_regression)[0]
    print(names[i] + ' | ' + str(round(res[0], 2)) +
          ' | ' + str(round(res[1], 2)))
    i = i + 1

for model in classification_models:
    res = complexity_evaluator.Run(model, random_data_classification)[0]
    print(names[i] + ' | ' + str(round(res[0], 2)) +
          ' | ' + str(round(res[1], 2)))
    i = i + 1

Learning more

The elements of statistical learning by Trevor Hastie, Robert Tibshirani, Jerome Friedman is a brilliant introduction to the topic and will help you have a better understanding of most of the algorithms presented in this article!

Foundations of Machine Learning by Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar and Francis Bach provides a theoretical framework to various machine learning algorithms and a detailed implementation of some of them.

Introduction to q-methodology

“In Q methodology there is little more reason to understand the mathematics involved than there is to understand mechanics in order to drive a car.” S.R. Brown

Context

This methodology is particularly relevant when one wants to assess the order of importance of variables as seen by a sample of individuals. For example, “A good computer… must be fast, must be beautiful, must be resistant to shocks…”. One approach could be the usual Likert scale analysis. However, the issue is that some (many) individuals may think that every factor is very important.

This is why, somehow, q methodology was born. It simply forces the respondent to order their preferences. How? The assertions have to be placed in a pyramid, and the position of each element for each participant constitutes the dataset. Each individual will fill a pyramid with different assertions, like the one below.

Fig. 1: A q sort (a filled pyramid)

Who uses Q-methodology?

Some will use q-methodology to study the conceptualization of democracy by individuals [1], the quality of education as seen by teachers [2], or the goals and values of beef farmers in Brazil [3], to name a few published applications. But it can also be used to improve customer behavior analysis, or to define relevant (at last!) opinion surveys!

A detailed example

Assertions

Imagine one is asking which assertions describe a good runner.

  • A good runner must have a steady speed (speed)
  • A good runner eats well (food)
  • A good runner trains regularly (training)
  • A good runner does not drink alcohol (alcohol)

In parentheses are the abbreviations used in the following descriptions.

Encoding the data

Once the data is gathered, it looks like:

Individual | speed | food | training | alcohol
1 | 0 | 0 | +1 | -1
2 | +1 | 0 | 0 | -1
3 | +1 | 0 | 0 | -1
… | … | … | … | …
n | 0 | +1 | 0 | -1

For example, row number one would be the encoding of the responses in the first figure. Each line is called a q sort.

The method

Once the data is gathered and encoded, a PCA (principal component analysis) is performed on the dataset, followed by an orthogonal rotation (usually varimax). The algebraic details can be found in [5].
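For intuition only, here is a minimal NumPy sketch of that pipeline (PCA on the correlations between the q sorts, then a varimax rotation). The qmethod R package used later does this properly; the helper below is the textbook iterative-SVD varimax scheme and the data is random, so the numbers are meaningless.

import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    # Textbook iterative varimax rotation of a loadings matrix (rows x factors)
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        col_norms = np.diag(np.sum(rotated ** 2, axis=0))
        u, s, vt = np.linalg.svd(loadings.T @ (rotated ** 3 - (gamma / p) * rotated @ col_norms))
        rotation = u @ vt
        if np.sum(s) < total * (1.0 + tol):
            break
        total = np.sum(s)
    return loadings @ rotation

# Toy data: 10 statements (rows) x 15 individuals / q sorts (columns), as in the R example below
data = np.random.randint(-2, 3, size=(10, 15))
corr = np.corrcoef(data, rowvar=False)            # correlations between the q sorts
eigval, eigvec = np.linalg.eigh(corr)             # PCA on the correlation matrix
top = np.argsort(eigval)[::-1][:2]                # keep the two leading factors
loadings = eigvec[:, top] * np.sqrt(eigval[top])  # unrotated loadings, one row per individual
print(varimax(loadings).round(2))                 # rotated loadings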

In the end, the data looks like:

Statement | 1st factor | 2nd factor
speed | +1 | 0
food | 0 | +1
training | 0 | -1
alcohol | -1 | 0

Analysis

The analysis is then performed by observing the axes after the rotation. Quoting [7]:

The most important aspect of the study file will nonetheless be the factor arrays themselves. These will be found in a table, ‘Item or Factor Scores’. […] The interpretative task in Q methodology involves the production of a series of summarizing accounts, each of which explicates the viewpoint being expressed by a particular factor.

Basically, each factor will be a point of view (or will represent several similar points of view) shared by many individuals in the sample. This is only a first step into factor analysis, and I recommend reading [7] in more detail for an analysis with actual data.

Implementations

The “Q sort” data collection procedure is traditionally done using a paper template and the sample of statements or other stimuli printed on individual cards. Wikipedia.

Setting up an online survey

Unless you have a lot of time to write the assertions on small cards and take pictures of the way they are arranged on a board, I recommend using an online survey! This GitHub repository does a nice job: easy-htmlq. You really do not need a lot of web development knowledge to get it running :)

If you do not want to go through the hassle of running your own server, you can use a Firebase service (possible with this version of the code) to keep the data, and deploy the pages on Netlify or on GitHub Pages (depending on your preferences).

Let’s have a look at the contents.

├── fonts
│   ├── glyphicons-halflings-regular.eot
│   ├── [...] 
│   └── glyphicons-halflings-regular.woff
├── index.html
├── LICENSE
├── logo_center.jpg
├── logo.jpg
├── logo_right.jpg
├── README.md
├── settings
│   ├── config.xml
│   ├── firebaseInfo.js
│   ├── language.xml 
│   ├── map.xml 
│   └── statements.xml 
├── src
│   ├── angular.min.js
│   ├── [...]
│   └── xml2json.min.js
├── stylesheets
│   ├── bootstrap.min.css
│   └── htmlq.css
└── templates
    ├── dropEventText.html
    ├── [...] 
    └── thanks.html

The only things that need to be changed are in settings/, which mostly contains plain XML files.

statements.xml and map.xml

These correspond to the statements to be sorted by the respondent and to the shape of the pyramid. For the first figure, we would have used the following configuration.

<?xml version="1.0" encoding="UTF-8"?>
<statements version="1.0" htmlParse="false">
  <statement id="1">
    A good runner must have a steady speed 
  </statement>
  <statement id="2">
    A good runner eats well 
  </statement>
  <statement id="3">
    A good runner trains regularly 
  </statement>
  <statement id="4">
    A good runner does not drink alcohol 
  </statement>
</statements>

Note that map.xml must be modified accordingly:

<map version="1.0" htmlParse="false">
  <column id="-1" colour="FFD5D5">1</column>
  <column id=" 0" colour="E9E9E9">2</column>
  <column id="+1" colour="9FDFBF">1</column>
</map>

config.xml

Corresponds to other questions that can be asked (checkboxes…)

firebaseInfo.js

Where to put your Firebase tokens.

// Initialize Firebase
var config = {
  apiKey: "",
  authDomain: "",
  databaseURL: "",
  projectId: "",
  storageBucket: "",
  messagingSenderId: ""
};
firebase.initializeApp(config);
var rootRef = firebase.database().ref();

language.xml

Enables you to change various elements of language.

Analysis and simulations, in R

I do not have data that I can publicly disclose, so the analysis will be on simulated data.

Let’s assume you collected the data and want to analyze it. There is an R package that takes care of the analysis [4]. Let’s have a look at it with simulated data. N will be the number of individuals, and target_sort a distribution that is the “actual order of preferences” of the individuals, in which we will randomly swap some elements across individuals. The following code does the job.

require("qmethod")

N <- 15
target_sort <- c(-2, 2, 1, -1, 1, 0, 0, -1, 1, 0)

data <- t(replicate(N, target_sort))

# Randomly swap two positions in each individual's q sort
for (i in 1:nrow(data))
{
  switch_indices <- sample(x = ncol(data), 2)
  tmp <- data[i, switch_indices[1]]
  data[i, switch_indices[1]] <- data[i, switch_indices[2]]
  data[i, switch_indices[2]] <- tmp
}

# Statements in rows, individuals (q sorts) in columns
data <- t(data)
rownames(data) <- paste0("assertion_", 1:10)
colnames(data) <- paste0("individual_", 1:N)

qmethod(data, nfactors = 2)

Now let’s look at the output of qmethod. It is basically a very long console output divided into several blocks. The first block is just a summary of the parameters and of the data provided to the method.

Q-method analysis.
Finished on:             Mon Apr 16 18:10:10 2018
Original data:           10 statements, 15 Q-sorts
Forced distribution:     TRUE
Number of factors:       2
Rotation:                varimax
Flagging:                automatic
Correlation coefficient: pearson

Then we have more details about the data sent to the qmethod function.

Original data :
             individual_1 individual_2 individual_3 individual_4 individual_5
assertion_1            -2            0           -2           -2           -2
assertion_2             2            2            2            0            2
assertion_3             1            1            1            1            1
assertion_4            -1           -1           -1           -1            1
assertion_5            -1            1           -1            1            1
assertion_6             0            0            0            0            0
assertion_7             0            0            0            0            0
assertion_8             1           -1            1           -1           -1
assertion_9             1            1            1            1           -1
assertion_10            0           -2            0            2            0
             individual_6 individual_7 individual_8 individual_9 individual_10
assertion_1            -2            1           -2            0            -2
assertion_2             2            2            2            2             2
assertion_3             1            1            1            1            -1
assertion_4            -1           -1           -1           -1             1
assertion_5             1            1           -1            1             1
assertion_6             0            0            0           -2             0
assertion_7            -1            0            0            0             0
assertion_8             0           -1            1           -1            -1
assertion_9             1           -2            1            1             1
assertion_10            0            0            0            0             0

Then come the loadings. In a nutshell, they represent how close each individual's q sort is to each factor. Note that the individuals whose values -2 and 2 were untouched by the random swap are the ones with the highest loadings on f1. This may be particularly interesting if the individuals are heterogeneous, and to test whether one (or some) of them are actually really close to the “consensual preferences” (i.e. the principal component).

Q-sort factor loadings :
                 f1    f2
individual_1   0.93 0.085
individual_2   0.26 0.762
individual_3   0.93 0.085
individual_4   0.56 0.286
individual_5   0.43 0.553
individual_6   0.80 0.517
individual_7  -0.21 0.732
individual_8   0.93 0.085
individual_9   0.27 0.849
individual_10  0.54 0.406
 (...) See item '...$loa' for the full data.

This part is seldom reported; I omitted some lines on purpose.

Flagged Q-sorts :
              flag_f1 flag_f2
individual_1  " TRUE" "FALSE"
[...]
individual_10 "FALSE" "FALSE"
 (...) See item '...$flagged' for the full data.

This part is seldom reported; I omitted some lines on purpose.

Statement z-scores :
             zsc_f1 zsc_f2
assertion_1  -1.841 -0.290
[...]
assertion_10 -0.088 -0.382

Finally, we have the figures that are usually reported in papers relying on q sorts. Note that the first column corresponds (almost) to c(-2, 2, 1, -1, 1, 0, 0, -1, 1, 0), as expected! Increasing N in the code allows one to recover the original vector exactly.

Statement factor scores :
             fsc_f1 fsc_f2
assertion_1      -2      0
assertion_2       2      2
assertion_3       1      1
assertion_4      -1     -1
assertion_5      -1      1
assertion_6       0     -2
assertion_7       0      0
assertion_8       1     -1
assertion_9       1      1
assertion_10      0      0
Factor characteristics:
   General factor characteristics: 
   av_rel_coef nload eigenvals expl_var reliability se_fscores
f1         0.8     6       6.1       41        0.96        0.2
f2         0.8     6       5.2       35        0.96        0.2

   Correlation between factor z-scores: 
       zsc_f1 zsc_f2
zsc_f1   1.00   0.52
zsc_f2   0.52   1.00

   Standard error of differences between factors: 
     f1   f2
f1 0.28 0.28
f2 0.28 0.28
Distinguishing and consensus statements :
              dist.and.cons f1_f2 sig_f1_f2
assertion_1  Distinguishing -1.55      ****
assertion_2       Consensus -0.17          
assertion_3       Consensus -0.08          
assertion_4       Consensus  0.17          
assertion_5  Distinguishing -1.50      ****
assertion_6  Distinguishing  1.07       ***
assertion_7       Consensus -0.11          
assertion_8  Distinguishing  1.59      ****
assertion_9       Consensus  0.29          
assertion_10      Consensus  0.29          

Final words

That’s it! Q methodology is a vast topic and deserves a whole book! Covering the theory of principal components, rotations, possibly the mathematics underlying the method (like varimax and the other rotation options), detailing various surveys and how their analyses were performed, setting up tests… As I was writing this post I realized how optimistic I was when I thought I could describe the whole method. Anyway, I hope this will be enough for a reader to set up their own survey and analyze it.

Sources and external sites

[1] Rune Holmgaard Andersen, Jennie L. Schulze, and Külliki Seppel, “Pinning Down Democracy: A Q-Method Study of Lived Democracy,” Polity 50, no. 1 (January 2018): 4-42.

[2] Grover, Vijay Kumar (2015, August). Developing indicators of quality school education as perceived by teachers using Q-methodology approach. Zenith International Journal of Multidisciplinary Research, 5(8), 54-65.

[3] Pereira, Mariana A., John R. Fairweather, Keith B. Woodford, & Peter L. Nuthall (2016, April). Assessing the diversity of values and goals amongst Brazilian commercial-scale progressive beef farmers using Q-methodology. Agricultural Systems, 144, 1-8.

[4] Aiora Zabala. qmethod: A Package to Explore Human Perspectives Using Q Methodology. The R Journal, 6(2):163-173, Dec 2014.

[5] H. Abdi, “Factor Rotations in Factor Analyses”

[6] S. R. Brown, “A primer on Q methodology,” Operant Subjectivity, 1993.

[7] S. Watts and P. Stenner, “Doing Q methodology: theory, method and interpretation,” p. 26.

What to expect from a model when there is nothing to learn?

An imbalanced binary classification problem

Has it ever happened to you that a model had a lower accuracy than constant guessing (i.e. predicting the most common class)? It happened to me recently, and I was quite puzzled: 200 data points, two classes, 60% of the sample belonging to class one and the rest to class two.

After running a random forest, I observed an accuracy of 50% on the out-of-bag predictions. This seemed really low: if there were nothing to learn from the data, why did the model not simply predict the most common class and achieve an accuracy of 60%?

Playing with the parameters (increasing min_samples_leaf) improved the performance (but it forced the trees to have a ridiculously low depth).

So I decided to simulate the distribution of the out-of-sample error of my model with the following snippet (in R)!

library("randomForest")

N <- 100
P <- 0.6
TRIALS <- 10000

# Random labels: there is nothing to learn from the features
random_response <- as.factor(rbinom(n = N, size = 1, prob = P))

evaluate_error_rate <- function(blob)
{
  # Random forest on pure noise features; return the final out-of-bag error rate
  model <-
    randomForest(x = matrix(rnorm(n = N * 5), nrow = N), y = random_response)
  model$err.rate[nrow(model$err.rate), 1]
}

res <- sapply(1:TRIALS, evaluate_error_rate)

hist(res, main = 'Distribution of the out of sample error',
     xlab = 'Out of sample error (percent of mismatches)')
mean(res)        # 0.417235 seems good...
mean(res > 0.5)  # probability of doing worse than a coin flip
mean(res > 0.4)  # probability of doing worse than the constant classifier

Illustration: histogram of the distribution of the out-of-sample error.

What were the results? The mean of the sample seems to converge to the error rate of a constant predictor, which is good. However, the probability that the model makes more mistakes than a random guess is not that low.

What is even better is that, given a training procedure, a number of points, a number of variables and an imbalance between the classes, this distribution does not change. So one could even imagine building a statistical test, “did my model actually learn something?”, and attach a probability to the rejection of the null hypothesis. I leave this to the reader :)
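As a sketch of what such a test could look like (my formulation, not a standard named test): under the null hypothesis “there is nothing to learn”, the simulated out-of-sample errors $e_1, \dots, e_T$ above form a null distribution, and the observed error $e_{\text{obs}}$ of the real model gets the empirical p-value

$$ p = \frac{1}{T} \sum_{t=1}^{T} \mathbf{1}\{e_t \le e_{\text{obs}}\} $$

with small values of $p$ suggesting that the model did learn something beyond the class imbalance.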

Learning more

The elements of statistical learning by Trevor Hastie, Robert Tibshirani, Jerome Friedman is a brilliant introduction to the topic and will help you have a better understanding of most of the algorithms presented in this article!

Applied Predictive Modeling by Max Kuhn and Kjell Johnson is a good introduction to R (used in the code snippets) and machine learning.

Should I Scale my data?

If you are asking this, then you probably do not fully understand the algorithm you are using. That is a bad habit to start with, but if you do not want to dig deeper, or do not have the time or the interest, the following tables should be a decent starting point.

Some definitions

Centering: centering a variable consists in subtracting the sample mean from each value, so that the new variable has a sample mean equal to 0.

Reducing: reducing a variable consists in dividing every value of the sample by the sample standard deviation.

Scaling: here we will call “scaling” the action of centering the data and then reducing it. After scaling, the sample has a mean of 0 and a standard deviation of 1.
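For concreteness, a minimal NumPy/scikit-learn sketch of these three operations (Python here, even though the snippets later in this post are in R):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 3) * 10 + 5

centered = X - X.mean(axis=0)                   # centering: sample mean becomes 0
reduced = X / X.std(axis=0)                     # reducing: sample standard deviation becomes 1
scaled = (X - X.mean(axis=0)) / X.std(axis=0)   # scaling: centering then reducing

# StandardScaler performs the same centering-then-reducing operation
print(np.allclose(scaled, StandardScaler().fit_transform(X)))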

Generalities about algorithms regarding the scaling of the data

Supervised learning

Algorithm | Scaling needed
Decision Tree | No
Random Forest | No
Gradient Boosting | No
Linear Regression | No
Penalized Linear Regression | Yes, probably
SVM (Kernel) | Yes, probably
k-Nearest Neighbours | Yes, probably
Nearest centroid | Yes, probably
Neural Network | Yes, probably

Unsupervised learning

Algorithm | Scaling needed
PCA | Yes, probably
Random projections | Yes, probably
t-SNE | Yes, probably

These tables should be read as follows: if scaling is not needed, you should not see changes between the results obtained with or without scaling.

If it says “Yes, probably”, it means that scaling is usually useful, as the features should have the same order of magnitude for the algorithm to work properly. However, it does not mean that performance will necessarily increase.

For example, in the presence of features lying on a bounded scale (when converting an image to grayscale and feeding it to a neural network, or when turning a text into a TF-IDF matrix), scaling is not recommended.

When the scaling is performed before applying the algorithm

Note that some libraries (especially in R) take care of the scaling before applying the algorithm. Though this seems to be a bad idea (the behaviour of the algorithm when a column is constant becomes implementation dependent, for example), it may save you some effort.

svm(x, y = NULL, scale = TRUE, type = NULL, kernel =
    "radial", degree = 3, gamma = if (is.vector(x)) 1 else 1 / ncol(x),
    coef0 = 0, cost = 1, nu = 0.5,
    class.weights = NULL, cachesize = 40, tolerance = 0.001, epsilon = 0.1,
    shrinking = TRUE, cross = 0, probability = FALSE, fitted = TRUE,
    ..., subset, na.action = na.omit)

This is the svm function as presented in the e1071 R package. Note the default value of scale.

glmnet(x, y, family=c("gaussian","binomial","poisson","multinomial","cox","mgaussian"),
    weights, offset=NULL, alpha = 1, nlambda = 100,
    lambda.min.ratio = ifelse(nobs<nvars,0.01,0.0001), lambda=NULL,
    standardize = TRUE, intercept=TRUE, thresh = 1e-07, dfmax = nvars + 1,
    pmax = min(dfmax * 2+20, nvars), exclude, penalty.factor = rep(1, nvars),
    lower.limits=-Inf, upper.limits=Inf, maxit=100000,
    type.gaussian=ifelse(nvars<500,"covariance","naive"),
    type.logistic=c("Newton","modified.Newton"),
    standardize.response=FALSE, type.multinomial=c("ungrouped","grouped"))

In the glmnet package the argument is now called standardize. Note that here, the response can be standardized as well. This topic will not be covered in this post. Now coming back to the dangers of such an approach, look at the following sample:

N <- 100
P <- 5
X <- matrix(data = rnorm(N*P), nrow = N)
Y <- matrix(rnorm(N), nrow = N)

X[,1] <- X[,1]*0 # some evil action 

require("glmnet")
model <- glmnet(x = X, y = Y)

# Runs without issues

require("e1071")
model2 <- svm(x = X, y = Y)

# Warning message:
#  In svm.default(x = X, y = Y) :
#  Variable(s) ‘X1’ constant. Cannot scale data.

If you run many models on many datasets (or many combinations of features), some will scale the data and others will not (if one of the features is constant), and the latter may report poor performance simply because the scaling was not performed…

Is it always possible to scale the data?

Theoretical point of view

The assumption when subtracting the mean and dividing by the standard deviation is that they both exist! Though with finite samples we can always evaluate the sample mean and sample variance, if the variables come from, say, a Cauchy distribution, the coefficients used for scaling may vary dramatically when the sample is enriched with new points.
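A quick way to see this (a toy simulation, nothing more): the sample standard deviation of Gaussian data stabilizes as points are added, while that of Cauchy data keeps jumping around.

import numpy as np

rng = np.random.default_rng(0)
gaussian = rng.normal(size=100_000)
cauchy = rng.standard_cauchy(size=100_000)

# Sample standard deviation computed on growing prefixes of each sample
for n in [1_000, 10_000, 100_000]:
    print(n, round(gaussian[:n].std(), 2), round(cauchy[:n].std(), 2))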

However, in the case where one is trying to learn anything from distributions that are not integrable, there will be many other issues to deal with.

Practical point of view

With a sparse dataset, scaling (as defined above) is not a good idea: the centering step would turn the many entries that are 0 in the original dataset into non-zero values, destroying the sparsity. But reducing the variables is still possible! And it turns out that some algorithms are not affected by whether the data is centered or not.

A better approach

As we saw, there are actually three types of algorithms: those that do not change under monotonic transformations of the inputs, those that do not change under translations of the inputs, and those that do not fit in the first two categories.

Note that “monotonic transformation invariance” is the stronger property, as a translation is just a particular monotonic transformation.
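As a quick illustration (a toy check in Python with scikit-learn, not a proof): a decision tree partitions the training points identically whether or not the features go through a strictly increasing transformation such as exp, while a Gaussian-kernel SVM does not.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(300, 4)
y = (X[:, 0] + X[:, 1] > 1).astype(int)

# exp is a strictly increasing (monotonic) transformation applied feature-wise
X_mono = np.exp(X)

tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y).predict(X)
tree_mono = DecisionTreeClassifier(random_state=0).fit(X_mono, y).predict(X_mono)
print("tree predictions unchanged:", np.array_equal(tree_raw, tree_mono))

svc_raw = SVC().fit(X, y).decision_function(X)
svc_mono = SVC().fit(X_mono, y).decision_function(X_mono)
print("svm decision values unchanged:", np.allclose(svc_raw, svc_mono))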

So the algorithms would enjoy a better representation in this table:

Supervised learning

Algorithm | Translation invariant | Monotonic transformation invariant
Decision Tree | X | X
Random Forest | X | X
Gradient Boosting | X | X
Linear Regression | X |
Penalized Linear Regression | |
SVM (Gaussian kernel) | X |
SVM (Other kernels) | |
k-Nearest Neighbours | X |
Nearest centroid | X |
Neural Network | |

Unsupervised learning

Algorithm | Translation invariant | Monotonic transformation invariant
PCA | |
Random projections | |
t-SNE | X |

Learning more

The elements of statistical learning by Trevor Hastie, Robert Tibshirani, Jerome Friedman is a brilliant introduction to the topic and will help you have a better understanding of most of the algorithms presented in this article!