Machine Learning [Python] – K-Nearest Neighbors (KNN) – Classification


In this tutorial, we will learn how to predict the class of a data point using K-Nearest Neighbors.

K-Nearest Neighbors (KNN) is a supervised machine learning algorithm useful for classification problems. The model is "trained" on data points labeled with their classes; to predict the class of a new point, it calculates the distance from that point to the labeled data, takes the 'K' nearest points, and assigns the majority class among them.

Here’s a visualization of the K-Nearest Neighbors algorithm.

[Figure: K-Nearest Neighbors classification example, with data points of Class A and Class B and a test point (star); source: [1]]

In this case, we have data points of Class A and Class B, and we want to predict the class of the star (the test data point). If we consider a k value of 3 (the 3 nearest data points), we obtain a prediction of Class B. Yet if we consider a k value of 6, we obtain a prediction of Class A. This is why the choice of k matters.
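To make this concrete, here is a minimal sketch with made-up 2-D points (the coordinates and class layout are ours, chosen so that k = 3 and k = 6 disagree, as in the figure):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two Class B points sit closest to the test point; four Class A points lie a bit farther out
X_demo = np.array([[0.4, 0.0], [0.0, 0.4],               # Class B
                   [0.8, 0.0], [0.0, 0.8],
                   [0.8, 0.8], [-0.8, 0.0]])             # Class A
y_demo = np.array(['B', 'B', 'A', 'A', 'A', 'A'])
star = np.array([[0.0, 0.0]])                            # the test point ("star")

for k in (3, 6):
    pred = KNeighborsClassifier(n_neighbors=k).fit(X_demo, y_demo).predict(star)
    print(f'k={k}: predicted class {pred[0]}')
# k=3 -> B (two of the three nearest neighbors are B)
# k=6 -> A (four of the six nearest neighbors are A)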

Parts Required

  • A Python environment (Spyder, Jupyter Notebook, etc.).

Procedure

Following are the steps required to complete this tutorial.

Packages Needed

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing

Dataset

Imagine a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, the company can customize offers for individual prospective customers. This is a classification problem: given a dataset with predefined labels, we need to build a model that predicts the class of a new or unknown case.

The target field, called custcat, has four possible values that correspond to the four customer groups:

  • 1: Basic Service
  • 2: E-Service
  • 3: Plus Service
  • 4: Total Service
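For readable reports later on, it can help to map these numeric codes to their names. A small sketch (the dictionary name custcat_labels is ours; the mapping comes from the description above):

# Hypothetical helper: map custcat codes to the group names listed above
custcat_labels = {
    1: 'Basic Service',
    2: 'E-Service',
    3: 'Plus Service',
    4: 'Total Service',
}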

The objective is to build a classifier that predicts the class of unknown cases.

Load Data From CSV File

dataset = pd.read_csv('teleCust1000t.csv')
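To confirm the file loaded as expected, a quick preview with standard pandas calls helps (the CSV is assumed to sit in the working directory):

dataset.head()     # first five rows of the customer data
dataset.shape      # (rows, columns)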

Data Visualization and Analysis

dataset['custcat'].value_counts()          # number of customers per class
dataset.hist(column='income', bins=50)     # distribution of the income column

Feature Set

Let's define the feature set, X:

dataset.columns

To use the scikit-learn library, we have to convert the Pandas DataFrame to a NumPy array:

X = dataset[['region', 'tenure', 'age', 'marital', 'address', 'income', 'ed', 'employ', 'retire', 'gender', 'reside']].values
X[0:5]

What are our labels?

y = dataset['custcat'].values
y[0:5]

Normalize Data

Data standardization gives the data zero mean and unit variance. It is good practice, especially for algorithms such as KNN that are based on the distance between cases:

X = preprocessing.StandardScaler().fit(X).transform(X.astype(float))
X[0:5]
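As a sanity check, the standardized columns should now have (approximately) zero mean and unit variance:

print(X.mean(axis=0).round(6))   # each entry should be ~0
print(X.std(axis=0).round(6))    # each entry should be ~1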

Train Test Split

Out-of-sample accuracy is the percentage of correct predictions that the model makes on data it has NOT been trained on. Training and testing on the same dataset will most likely yield low out-of-sample accuracy, because the model is likely to overfit.

It is important that our models have high out-of-sample accuracy, because the purpose of any model is to make correct predictions on unknown data. So how can we improve out-of-sample accuracy? One way is to use an evaluation approach called Train/Test Split.

Train/Test Split involves splitting the dataset into mutually exclusive training and testing sets. You then train with the training set and test with the testing set.

This provides a more accurate estimate of out-of-sample accuracy because the testing data were not used to train the model.

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print('Train set:', X_train.shape, y_train.shape)
print('Test set:', X_test.shape, y_test.shape)

Classification

K nearest neighbor (KNN)

Import library

from sklearn.neighbors import KNeighborsClassifier

Training

Let's start the algorithm with k = 4 for now:

k = 4
# Train the model
neigh = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
neigh

Predicting

We can use the model to predict the test set:

yhat = neigh.predict(X_test)
yhat[0:5]

Accuracy Evaluation

We use accuracy_score, which computes the fraction of predictions that exactly match the true labels (in multilabel classification it computes subset accuracy: the predicted label set must exactly match the true label set). Essentially, it measures how closely the actual and predicted labels match in the test set.

from sklearn import metrics
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat))

Build the model with k=6

k = 6
neigh6 = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
yhat6 = neigh6.predict(X_test)
print("Train set Accuracy: ", metrics.accuracy_score(y_train, neigh6.predict(X_train)))
print("Test set Accuracy: ", metrics.accuracy_score(y_test, yhat6))

What about other values of K?

K in KNN is the number of nearest neighbors to examine, and it must be specified by the user. So how can we choose the right value for K? The general solution is to reserve part of your data for testing the accuracy of the model. Then choose k = 1, use the training part for modeling, and calculate the accuracy of prediction using all samples in your test set. Repeat this process, increasing k, and see which k is best for your model.

We can calculate the accuracy of KNN for different Ks.

Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))

for n in range(1, Ks):
    # Train the model and predict for each k
    neigh = KNeighborsClassifier(n_neighbors=n).fit(X_train, y_train)
    yhat = neigh.predict(X_test)
    mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
    std_acc[n-1] = np.std(yhat == y_test) / np.sqrt(yhat.shape[0])

mean_acc   # inspect the accuracy obtained for each k

plt.plot(range(1, Ks), mean_acc, 'g')
plt.fill_between(range(1, Ks), mean_acc - 1 * std_acc, mean_acc + 1 * std_acc, alpha=0.10)
plt.fill_between(range(1, Ks), mean_acc - 3 * std_acc, mean_acc + 3 * std_acc, alpha=0.10, color="green")
plt.legend(('Accuracy', '+/- 1xstd', '+/- 3xstd'))
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()
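Once mean_acc is filled, the best k within the tested range can be read off directly (argmax returns a zero-based index, hence the + 1):

print("The best accuracy was", mean_acc.max(), "with k =", mean_acc.argmax() + 1)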

One way to find the best value of K is to plot K against the corresponding error rate for the dataset.

Let's plot the mean error of the test-set predictions for all K values from 1 to 799 (with 800 training samples, K cannot exceed 800). To do so, we first calculate the mean error for every K in that range:

error = []

# Calculating the error rate for K values from 1 to 799
for i in range(1, 800):
    knn = KNeighborsClassifier(n_neighbors=i)
    knn.fit(X_train, y_train)
    pred_i = knn.predict(X_test)
    error.append(np.mean(pred_i != y_test))

The script above runs a loop from K = 1 to K = 799. In each iteration, the mean error of the test-set predictions is calculated and appended to the error list. The next step is to plot the error values against the K values.

plt.figure(figsize=(12, 6))
plt.plot(range(1, 800), error, color='red', linestyle='dashed', marker='o',
         markerfacecolor='blue', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K Value')
plt.ylabel('Mean Error')
plt.show()

The output graph looks like this:

[Figure: mean error plotted against K value, for K from 1 to 799]

From the output we can see that the mean error is lowest when K is roughly between 35 and 40.
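Based on that observation, one could retrain at a value inside that range; k = 38 below is an illustrative pick, not a tuned optimum:

# Illustrative: retrain at a k inside the low-error region (38 is an arbitrary choice)
knn_final = KNeighborsClassifier(n_neighbors=38).fit(X_train, y_train)
print("Test set Accuracy:", metrics.accuracy_score(y_test, knn_final.predict(X_test)))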

Advantages of kNN

  • Simple implementation.
  • Makes no prior assumptions about the data.

Disadvantages of kNN

Prediction is slow on large datasets, because the algorithm computes the distance from the query point to every training point.
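scikit-learn can offset part of this cost: KNeighborsClassifier accepts an algorithm parameter ('ball_tree', 'kd_tree', 'brute', or the default 'auto') that switches to tree-based neighbor search, which is typically faster on low-dimensional data. A minimal sketch:

# Tree-based neighbor search; 'auto' would let scikit-learn pick a strategy from the data
neigh_tree = KNeighborsClassifier(n_neighbors=4, algorithm='kd_tree').fit(X_train, y_train)
neigh_tree.predict(X_test[0:5])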

References

[1] https://equipintelligence.medium.com/k-nearest-neighbor-classifier-knn-machine-learning-algorithms-ed62feb86582

[2] https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html

[3] https://stackabuse.com/k-nearest-neighbors-algorithm-in-python-and-scikit-learn/

[4] IBM – Machine Learning with Python – A Practical Introduction

[5] Udemy – The Data Science Course 2020: Complete Data Science Bootcamp – 365 Careers

[6] Udemy – Machine Learning and Data Science (Python)
