
Feature Scaling

Why Should We Do Feature Scaling?

The first question we need to address is: why do we need to scale the variables in our dataset? Some machine learning algorithms are sensitive to feature scaling, while others are virtually invariant to it. Let me explain that in more detail.




Gradient Descent Based Algorithms:

Machine learning algorithms like linear regression, logistic regression, and neural networks that use gradient descent as an optimization technique require the data to be scaled. If the features sit on very different ranges, the gradient steps differ wildly across features and the optimizer converges slowly or zig-zags, so bringing the features onto a similar scale speeds up convergence.
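
To make that concrete, here is a minimal sketch (synthetic data, not code from the original post): one feature lives on the scale of tens, the other on the scale of tens of thousands, and scikit-learn's StandardScaler brings them onto comparable scales before a gradient-descent model such as SGDRegressor is fit.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
age = rng.uniform(20, 60, 500)               # feature on the scale of tens
income = rng.uniform(20_000, 120_000, 500)   # feature on the scale of tens of thousands
X = np.column_stack([age, income])
y = 3 * age + 0.001 * income + rng.normal(0, 1.0, 500)

# Without scaling, the gradient steps are dominated by the wide-range feature
# and SGD converges slowly or can even diverge; with StandardScaler both
# features contribute on comparable terms.
model = make_pipeline(StandardScaler(), SGDRegressor(max_iter=1000, tol=1e-3, random_state=0))
model.fit(X, y)
print(model.score(X, y))  # R^2 on the training data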


Distance-Based Algorithms:

Distance-based algorithms like KNN, K-Means, and SVM are most affected by the range of the features. This is because, behind the scenes, they use distances between data points to determine similarity, so a feature with a larger range dominates the distance calculation.
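
A quick illustration with made-up numbers: with age in years and income in dollars, the raw Euclidean distance is almost entirely determined by income, and standardizing the features changes which points look similar.

import numpy as np
from sklearn.preprocessing import StandardScaler

# Three points as (age, income) -- made-up values for illustration.
X = np.array([[25.0, 40_000.0],
              [26.0, 90_000.0],
              [60.0, 41_000.0]])

def dist(a, b):
    return np.linalg.norm(a - b)

# Raw distances: the 60-year-old with a similar income looks roughly 50x closer
# to point 0 than the 26-year-old does, purely because income has the larger range.
print(dist(X[0], X[1]), dist(X[0], X[2]))

# After standardization, both features contribute on a comparable scale and
# the distances are no longer dominated by income alone.
X_scaled = StandardScaler().fit_transform(X)
print(dist(X_scaled[0], X_scaled[1]), dist(X_scaled[0], X_scaled[2]))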

Tree-Based Algorithms:

Tree-based algorithms, on the other hand, are fairly insensitive to the scale of the features. Think about it: a decision tree splits a node on a single feature, choosing the split that increases the homogeneity of the node, and that split is not influenced by the other features. So the remaining features have virtually no effect on the split. This is what makes tree-based algorithms invariant to the scale of the features!
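
As a quick sanity check (a sketch on a toy dataset, not from the original post), fitting the same decision tree on raw and on standardized features should give equivalent splits and the same predictions, because standardization is a monotonic rescaling of each individual feature.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = np.column_stack([rng.uniform(0, 1, 200),          # small-range feature
                     rng.uniform(0, 100_000, 200)])   # large-range feature
y = (X[:, 1] > 50_000).astype(int)  # label depends only on the second feature

X_scaled = StandardScaler().fit_transform(X)

tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(X_scaled, y)

# The split thresholds differ numerically (they live on different scales),
# but the resulting partitions -- and hence the predictions -- should match.
print(np.array_equal(tree_raw.predict(X), tree_scaled.predict(X_scaled)))  # expected: True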


Scene from Cinema Paradiso


