Why Should We Do Feature Scaling?
The first question we need to address: why do we need to scale the variables in our dataset? Some machine learning algorithms are sensitive to feature scaling, while others are virtually invariant to it. Let me explain that in more detail.
Gradient Descent Based Algorithms:
Machine learning algorithms like linear regression, logistic regression, neural networks, etc. that use gradient descent as an optimization technique require the data to be scaled. Features on very different scales produce an elongated loss surface, so gradient descent takes uneven steps and converges slowly.
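As a minimal sketch of what scaling means in practice, here is standardization (z-score scaling) applied by hand with NumPy. The feature values and column meanings are made up for illustration:

```python
import numpy as np

# Hypothetical feature matrix with very different ranges:
# column 0 ~ age (tens), column 1 ~ income (tens of thousands).
X = np.array([[25.0, 50_000.0],
              [32.0, 64_000.0],
              [47.0, 120_000.0],
              [51.0, 98_000.0]])

# Standardization: subtract the column mean, divide by the column std.
mean = X.mean(axis=0)
std = X.std(axis=0)
X_scaled = (X - mean) / std

# After scaling, every column has mean ~0 and std ~1, so gradient
# descent takes comparably sized steps along each feature axis.
print(X_scaled.mean(axis=0))
print(X_scaled.std(axis=0))
```

In a real pipeline you would typically use `sklearn.preprocessing.StandardScaler`, which does the same arithmetic but also remembers the training-set mean and std so the identical transform can be applied to test data.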
Distance-Based Algorithms:
Distance-based algorithms like KNN, K-Means, and SVM are among the most affected by the range of features. This is because, behind the scenes, they use distances between data points to determine similarity, and a feature with a larger range contributes disproportionately to that distance.
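A small illustration of how the feature with the larger range dominates a Euclidean distance. The points, feature meanings, and min-max bounds below are invented for the example:

```python
import numpy as np

# Two hypothetical points: [age in years, income in dollars].
a = np.array([25.0, 50_000.0])
b = np.array([60.0, 51_000.0])

# Raw Euclidean distance: the income gap (1,000) dwarfs the age gap (35),
# so income alone effectively decides which points look "similar".
raw_dist = np.linalg.norm(a - b)

# Min-max scale each feature to [0, 1] using assumed known bounds.
lo = np.array([18.0, 20_000.0])
hi = np.array([70.0, 200_000.0])
a_scaled = (a - lo) / (hi - lo)
b_scaled = (b - lo) / (hi - lo)

# After scaling, both features contribute on a comparable footing.
scaled_dist = np.linalg.norm(a_scaled - b_scaled)
print(raw_dist, scaled_dist)
```

Before scaling, the raw distance is almost exactly the income difference; after scaling, the large age gap is no longer drowned out.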
Tree-Based Algorithms:
Tree-based algorithms, on the other hand, are fairly insensitive to the scale of the features. Think about it: a decision tree splits a node based on a single feature, choosing the threshold on that feature that increases the homogeneity of the node. This split is not influenced by the other features.
So, there is virtually no effect of the remaining features on the split. This is what makes them invariant to the scale of the features!