Prediction via KNN (K Nearest Neighbours) R codes: Part 2



In the previous post (Part 1), I explained the concepts of KNN and how it works. In this post, I will explain how to use KNN to predict whether a patient's cancer is Benign or Malignant. This example is taken from Brett Lantz's book [1]. Imagine that we have a dataset of laboratory results for patients, some of whom are already diagnosed as Benign or Malignant. See the picture below.

The first column is the patient ID; the second is the diagnosis for each patient: B stands for Benign and M stands for Malignant. The other columns are laboratory results (I am not good at interpreting them!).


We want to create a prediction model: given a new patient's specific laboratory results, we want to predict whether this patient's diagnosis will be Benign or Malignant.

For this demo, I will use the R environment in Visual Studio. Hence, after opening Visual Studio 2015, select File, New File, and then under the General tab find "R". I am going to write the R code in an R script (number 4), so create an R script there.


After creating an empty R script, I am going to import the data: choose "R Tools", then the Data menu, then click on "Import Dataset into R session".


You will see the window below. It shows all the columns and a sample of the data. The CSV file that I used for this post was produced by [1]. It is a CSV file delimited (number 1) by commas.


After importing the dataset, we look at the structure of the data with the function str(). This function shows each column's name, data type, and a sample of its values.
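Since the screenshots are not reproduced here, the call looks like this; the tiny data frame below is a synthetic stand-in for the imported dataset (the real CSV has 569 rows and 32 columns):

```r
# A tiny synthetic stand-in for the imported dataset.
wbcd <- data.frame(
  id = c(842302, 842517, 84300903),
  diagnosis = c("M", "M", "M"),
  radius_mean = c(17.99, 20.57, 19.69),
  smoothness_mean = c(0.1184, 0.0847, 0.1096)
)
str(wbcd)   # prints each column's name, type, and first few values
```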

The result will be:

We want to keep the original dataset intact, so we copy the data into a working variable, wbcd.

The first column of the data, "id", is not useful for prediction, so we eliminate it from the dataset.
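Dropping a column by negative index looks like this; a two-row synthetic frame stands in for the real data:

```r
# Synthetic stand-in for the real dataset.
wbcd <- data.frame(
  id = c(842302, 842517),
  diagnosis = c("M", "B"),
  radius_mean = c(17.99, 20.57)
)
wbcd <- wbcd[-1]   # remove the first column ("id")
names(wbcd)        # "diagnosis" "radius_mean"
```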

We also want to look at the statistical summary of each column: the min, max, median, and mean values.

The result of running the code is shown below. As you can see, for the first column (we have already deleted the id column) we have 357 Benign cases and 212 Malignant cases. For all the other laboratory measurements we can see the min, max, median, mean, 1st Qu., and 3rd Qu.
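The call itself is simply summary(); synthetic values stand in for the real measurements here:

```r
# summary() reports min, 1st Qu., median, mean, 3rd Qu., and max
# for each numeric column.
wbcd <- data.frame(
  diagnosis = c("B", "B", "M", "M"),
  radius_mean = c(11.4, 12.0, 17.99, 20.57)
)
summary(wbcd)
```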

Data Wrangling

First of all, we want a dataset that is easy to read. The first cleaning step replaces the value "B" with Benign and "M" with Malignant in the diagnosis column; this replacement makes the data more informative. Hence we employ the code below:

factor() is a function that converts a column into a categorical variable with a fixed set of levels, and lets us attach readable labels without consuming extra memory.
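The relabelling step looks like this, again on a small synthetic stand-in:

```r
# Relabel the diagnosis column: "B" -> "Benign", "M" -> "Malignant".
wbcd <- data.frame(
  diagnosis = c("B", "M", "B"),
  radius_mean = c(11.4, 17.99, 12.0)
)
wbcd$diagnosis <- factor(wbcd$diagnosis, levels = c("B", "M"),
                         labels = c("Benign", "Malignant"))
table(wbcd$diagnosis)   # counts per readable label
```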

There is another issue in the data: the numbers are not normalized!

What is data normalization? The columns are not on the same scale. For instance, radius_mean takes values between 6 and 29, while smoothness_mean ranges between 0.05 and 0.17. Because KNN prediction relies on distance calculations (Part 1), it is important that all numbers be in the same range [1].

Normalization can be done with min-max scaling: subtract the column minimum and divide by the column range, so every value falls between 0 and 1.
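Written as an R function, min-max normalization is:

```r
# Min-max normalization: rescale a numeric vector to the [0, 1] range.
normalize <- function(x) {
  (x - min(x)) / (max(x) - min(x))
}
normalize(c(1, 2, 5))   # 0.00 0.25 1.00
```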

Now we are going to apply this function to all numeric columns in the wbcd dataset. There is a function in R that applies a function over a dataset:

lapply() takes the dataset and a function name, then applies the function to every column. In this example, because the first column is text (diagnosis), we apply the normalize function only to columns 2 to 31. Now our data is ready for creating a KNN model.
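A sketch of that step, with a three-column synthetic frame in place of the real 31-column dataset:

```r
normalize <- function(x) (x - min(x)) / (max(x) - min(x))

# Synthetic stand-in: one text column plus two numeric columns;
# the real dataset's numeric columns are 2 to 31.
wbcd <- data.frame(
  diagnosis = c("B", "M", "B"),
  radius_mean = c(11.4, 17.99, 12.0),
  smoothness_mean = c(0.09, 0.12, 0.10)
)
wbcd_n <- as.data.frame(lapply(wbcd[2:3], normalize))
range(wbcd_n$radius_mean)   # 0 1
```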

Following the machine learning process, we need one dataset for training the model and another for testing it (as in the Market Basket Analysis post).


Hence, we should have two different datasets for training and testing. In this example, rows 1 to 469 will be used for training and creating the model, and rows 470 to 569 for testing it.

So wbcd_train holds 469 rows of data and wbcd_test holds the rest. We also need the diagnosis labels for both sets so we can check the results later.
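The split looks like this; here a randomly generated 569 × 30 frame stands in for the normalized dataset:

```r
set.seed(42)
# Synthetic stand-in for the normalized dataset: 569 rows, 30 columns.
wbcd_n <- as.data.frame(matrix(runif(569 * 30), nrow = 569))
diagnosis <- factor(sample(c("Benign", "Malignant"), 569, replace = TRUE))

wbcd_train <- wbcd_n[1:469, ]     # rows 1-469 for training
wbcd_test  <- wbcd_n[470:569, ]   # rows 470-569 for testing
wbcd_train_labels <- diagnosis[1:469]
wbcd_test_labels  <- diagnosis[470:569]
```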

The data is ready; now we are going to train the model with the KNN algorithm.

To use KNN, we need to install the package "class".

Now we are able to call the knn() function to predict the patients' diagnoses. knn() accepts the training dataset as its first argument and the test dataset as its second; it also needs the training labels to produce a result. Based on the discussion in Part 1, to pick the number K (the K nearest neighbours) we calculate the square root of the number of training observations; for 469 observations, K is 21.
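A runnable sketch of the knn() call; since the real dataset is not included here, two well-separated synthetic clusters stand in for the training data:

```r
library(class)   # knn() lives in the "class" package
set.seed(123)

# Synthetic stand-in: two 2-D clusters of 50 points each.
train <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
               matrix(rnorm(100, mean = 5), ncol = 2))
train_labels <- factor(rep(c("Benign", "Malignant"), each = 50))
test <- rbind(c(0, 0), c(5, 5))

# k = 21, since sqrt(469) is roughly 21.7
wbcd_test_pred <- knn(train = train, test = test,
                      cl = train_labels, k = 21)
wbcd_test_pred   # Benign Malignant
```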

The variable "wbcd_test_pred" holds the result of the KNN prediction.

We want to evaluate the result of the model, so we install "gmodels", a package that reports evaluation performance.


We employ a function named CrossTable(). It takes the true labels as its first argument and the prediction results as its second.
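CrossTable() comes from the gmodels CRAN package; base R's table() produces the same confusion-matrix counts, shown here on a tiny synthetic example:

```r
# Tiny synthetic labels and predictions to illustrate the table.
actual    <- factor(c("Benign", "Benign", "Malignant", "Malignant"),
                    levels = c("Benign", "Malignant"))
predicted <- factor(c("Benign", "Malignant", "Malignant", "Malignant"),
                    levels = c("Benign", "Malignant"))
cm <- table(actual, predicted)
cm
# With gmodels installed, the equivalent call is:
# library(gmodels)
# CrossTable(x = actual, y = predicted, prop.chisq = FALSE)
```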

The result of CrossTable will be as below. We have 100 test observations, and the table shows how accurate the KNN prediction is. Taking Malignant as the positive class: the first row, first column counts cases that are actually Benign and that KNN predicts as Benign (true negatives, TN); the first row, second column counts Benign cases that KNN predicts as Malignant (false positives, FP); the second row, first column counts Malignant cases that KNN predicts as Benign (false negatives, FN); and the second row, second column counts Malignant cases that KNN correctly predicts as Malignant (true positives, TP).

The higher the TN and TP counts, the better the prediction. In our example TN is 61 and TP is 37, while FP is 0 and FN is just 2, which is good.

To calculate the accuracy, we follow the formula below:

accuracy <- (tp + tn) / (tp + tn + fp + fn)

Accuracy will be (37 + 61) / (37 + 61 + 0 + 2) = 98%.
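Plugging the counts from the cross table into the formula:

```r
# Counts from the cross table, with Malignant as the positive class.
tn <- 61; tp <- 37; fp <- 0; fn <- 2
accuracy <- (tp + tn) / (tp + tn + fp + fn)
accuracy   # 0.98
```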

In the next post I will explain how to perform KNN in Power BI (data wrangling, modelling, and visualization).



[1] Brett Lantz, Machine Learning with R, Packt Publishing, 2015.



Leila Etaati
Trainer, Consultant, Mentor
