Day 7: Support Vector Machines (SVM)



Mastering machine learning is not that difficult; you just need to put in a lot of time and effort. Many young learners start with classification or regression because they are the easiest, and then hesitate to move forward into the more complex forms of ML.

Hello World! This is Manas from – “Your one stop destination for everything computer science!”. I am back with a freshly brewed ML tutorial. I’m sorry I couldn’t post on time; I had a few school assignments to complete…9th grade has a board exam too, so school is hellbent on completing the portions 🙁  Anyway, now that I am back, we can cover a new topic called Support Vector Machines.

Let us take a simple analogy: assume we have an arsenal filled with weapons like swords, daggers, axes and knives. Here, regression acts as a sword that can efficiently slice and dice data, but it cannot deal with data that is complex and small in size. Classification acts as an axe on wood: it chops the data into categories but can’t do much more than that. A Support Vector Machine is like the dagger: it lets the user perform quick moves on targets at close range, in this case complex, smaller-sized datasets, and it can handle them with much more efficiency than regression would.

What is a Support Vector Machine?

A “Support Vector Machine” (SVM) is a supervised machine learning algorithm that can be used for both classification and regression challenges. However, it is mostly used for classification problems. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate. Then we perform classification by finding the hyper-plane that differentiates the two classes best. In this sense it acts like a classifier, but it can also work with more complex forms of data. As the name suggests, an SVM works by manipulating and using values from vectors: everything that happens inside an SVM happens in vector space. We must also know that in most cases an SVM is what is called a binary classifier, i.e. it separates the data into only two groups.

Decision Making and the Learning Process of an SVM

Decision making is fairly simple in an SVM. Once the best separating hyper-plane (line) is found, any unknown data point is classified based on which side of the hyper-plane it falls on. The training process consists of different methods of finding the best separating hyper-plane for your data. To keep the intuition simple, I will not discuss here how the SVM finds this hyper-plane; there are plenty of sources online that describe the learning algorithm in much more detail.

SVMs are very similar to classifiers in that they perform the same basic operation: classifying data into groups.
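To make the “which side of the hyper-plane” idea concrete, here is a tiny sketch using scikit-learn (the library we will use below). The toy points are made up purely for illustration: after fitting, decision_function reports the signed distance of a point from the hyper-plane, and the sign of that distance is exactly the side test described above.

```python
from sklearn import svm

# Two well-separated toy groups (purely illustrative data)
X = [[1, 1], [2, 1], [8, 9], [9, 8]]
y = [0, 0, 1, 1]

clf = svm.SVC(kernel="linear")
clf.fit(X, y)

# Negative signed distance means the point lies on the class-0 side,
# positive means the class-1 side; predict() applies exactly this test.
print(clf.decision_function([[1.5, 1.0]]))
print(clf.predict([[1.5, 1.0], [8.5, 8.5]]))
```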

Writing our own simple project

We have learnt the basic intuition of an SVM. Now we will implement it in Python 3, working on the Wisconsin Breast Cancer prediction problem. We need a few things before we start, firstly the dataset, which you can download from here. After visiting the link, right-click on the file named ‘’ and click ‘Save link as’. Save it to your working directory and open a text editor of your choice. Now we are ready to roll!

Before trying this problem, open cmd and run ‘pip install scikit-learn’ if scikit-learn is not installed already. With this we are ready to start.



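The code screenshot from the original post did not survive here, so below is a minimal sketch reconstructing the steps described in the walkthrough that follows. The file name ‘breast-cancer-wisconsin.data’ and the column names are assumptions based on the standard UCI Wisconsin dataset, not something guaranteed by the original code, and the interactive part lives in a main() function you can call once the dataset is downloaded.

```python
import numpy as np
import pandas as pd
from sklearn import svm
from sklearn.model_selection import train_test_split

# Column names assumed from the UCI Wisconsin breast cancer dataset
COLUMNS = ["id", "clump_thickness", "cell_size", "cell_shape",
           "marginal_adhesion", "epithelial_size", "bare_nuclei",
           "bland_chromatin", "normal_nucleoli", "mitoses", "class"]

def train_classifier(df):
    """Clean the data frame, train an SVC, and return (model, accuracy)."""
    df = df.replace("?", -99999)   # missing values become obvious outliers
    df = df.drop(["id"], axis=1)   # the patient id is a useless feature
    X = df.drop(["class"], axis=1).astype(float).values
    y = df["class"].astype(int).values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    clf = svm.SVC()
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)

def label(prediction):
    # 2 and 4 are the class labels used in the dataset itself
    return "Benign" if prediction == 2 else "Malignant"

def main():
    df = pd.read_csv("breast-cancer-wisconsin.data", names=COLUMNS)
    clf, accuracy = train_classifier(df)
    print("Accuracy:", accuracy)
    # Collect the nine measurements for a new patient
    values = [float(input(name + ": ")) for name in COLUMNS[1:-1]]
    sample = np.array(values).reshape(1, -1)   # the shape the model expects
    print(label(clf.predict(sample)[0]))

# main()  # uncomment after downloading the dataset
```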
This program is a little difficult, so let me break it down for you! In lines 1–4 we import the necessary modules. In lines 6–19 we read the data from the dataset and format it into training and validation sets of the correct size. In line 8 we replace all the empty data values with -99999, which the model treats as an outlier and effectively ignores. In line 9 we remove the patient id, as it doesn’t contribute anything to the final output and is therefore a useless feature. In line 19 we create an object from the SVC() class of the svm module and save it to a variable called ‘clf’. In lines 21–31 we collect data about a new patient through input. In lines 36–37 we put all the values entered by the user into a numpy array, and in line 38 we reshape it to match the correct number of dimensions. In line 39 we get the prediction from our model. In lines 41–51 we check which type of breast cancer the model classified: here 2 = “Benign” and 4 = “Malignant”. These numbers are the labels used in the dataset itself, and benign and malignant are the medical terms for harmless and harmful tumours, so don’t break your head over that.



As we can see here, I have entered some values for the characteristics of a test subject, and my model predicts that the test patient has a malignant form of breast cancer, with an accuracy of 95%! Can you believe it? We have made a successful breast cancer classifier using an SVM.

Conclusion and more information

To sum it all up, a Support Vector Machine is a machine learning algorithm that can separate highly complex data relatively easily! The time required to build an SVM is short, and it also achieves good accuracy when used correctly! Now the question arises: when should you use an SVM over a plain classification algorithm? The answer: when our data has very high dimensionality and is very complex, we are better off using an SVM, as it is much better suited to these kinds of tasks and handles them very efficiently. Here are a few pros and cons of a Support Vector Machine:

  • Pros:
    • It works really well when there is a clear margin of separation.
    • It is effective in high-dimensional spaces.
    • It is highly effective in cases where the number of dimensions is greater than the number of samples.
    • An SVM uses a subset of the training points in the decision function (called support vectors), so it is also memory efficient.
  • Cons:
    • It doesn’t perform well on large datasets, because the required training time is higher.
    • It also doesn’t perform very well when the dataset has a lot of noise, i.e. when the target classes overlap.
    • An SVM doesn’t directly provide probability estimates; in scikit-learn’s SVC these are calculated using an expensive five-fold cross-validation, which means slower training.

A few parameters can also be passed to the svm.SVC() constructor in Python. Firstly, there is a parameter called C, the penalty parameter of the SVM, which helps decide the trade-off between a smooth decision boundary and correctly classifying every training point. Another parameter we can specify is gamma, also known as the kernel coefficient, which controls how closely the model fits the data. A very high gamma can cause a problem known as overfitting, where a model fits the training data so exactly that new points are wrongly classified.
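As a sketch of how these parameters are passed (the toy data and the particular values here are my own, chosen just for illustration):

```python
from sklearn import svm

# Toy points in two well-separated groups, labelled 2 and 4
# like the breast cancer dataset above (illustrative data only)
X = [[0, 0], [1, 1], [2, 2], [8, 8], [9, 9], [10, 10]]
y = [2, 2, 2, 4, 4, 4]

# C trades a smooth boundary against classifying every training point;
# gamma controls how far one training example's influence reaches
# (very high gamma risks the overfitting described above).
clf = svm.SVC(C=1.0, gamma="scale")
clf.fit(X, y)
print(clf.predict([[1, 2], [9, 8]]))   # one point near each group
```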

So that was my small brief on SVMs. I hope you guys liked this intuitive explanation of the fundamentals of a Support Vector Machine, along with a practical tutorial. Thank you so much for spending your valuable time here. In the next blog post I will be discussing another commonly used algorithm called K-Nearest Neighbors.
Until then, have a nice day and enjoy Deep Learning. 🙂




Manas Hejmadi

I am a boy who studies in 9th grade in Bangalore! I have a good knowledge of computer programming, AI and UI design. I aspire to create a tech startup of my own!
