Machine Learning relies on different algorithms to solve data problems. Data scientists like to point out that there's no single one-size-fits-all algorithm that is best for every problem. The algorithm you employ depends on the kind of problem you wish to solve, the number of variables, the kind of model that suits the data best, and so on. Here's a quick look at some of the commonly used algorithms in machine learning (ML).

Machine Learning categories

ML tasks are broadly classified into several categories, chief among them supervised learning, unsupervised learning, active learning, and reinforcement learning.

Supervised learning

The algorithm builds a model from a data set that contains both the inputs and the desired outputs. Examples of supervised learning algorithms include Decision Trees, Random Forest, and Logistic Regression, among others.

Unsupervised learning

The algorithm builds a model from a data set that contains only inputs and no desired outputs. Unsupervised learning algorithms are used to find structure and patterns in the data. K-means clustering is an example of unsupervised learning.
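To make the idea concrete, here is a minimal from-scratch sketch of K-means in plain Python. The data points and the choice of k are invented for illustration; the algorithm alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its cluster.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 2-D points into k groups by alternating assignment and update."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # pick k initial centers at random
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                          + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(x for x, _ in cl) / len(cl),
                                sum(y for _, y in cl) / len(cl))
    return centroids, clusters

# Two visually obvious groups around (1, 1) and (8, 8).
points = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.3, 7.9), (7.8, 8.2)]
centroids, clusters = kmeans(points, k=2)
```

Note that real implementations add a convergence check and smarter initialization (e.g. k-means++); this sketch just runs a fixed number of iterations.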

Machine Learning & NLP

Natural Language Processing (NLP) algorithms are usually based on machine learning algorithms. ML algorithms enable NLP engines to learn rules rapidly and automatically and to make statistical inferences. ML applications in NLP include speech recognition, text classification, machine translation, and document summarization, to name a few. For more details on NLP and how it works, read our blog.

Types of Machine Learning algorithms

Naive Bayes

Naive Bayes is a predictive-modeling algorithm based on Bayes' theorem. A Naive Bayesian model is easy to build and is particularly useful when very large data sets are involved. It can often outperform more sophisticated classification techniques on complex problems. The model is called "naive" because it assumes that each input variable is independent of every other variable, an assumption that rarely holds in real data yet works surprisingly well in practice.
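As an illustration, here is a minimal Gaussian Naive Bayes classifier in plain Python (the data set and class labels are made up for the example). Notice how the prediction step simply adds per-feature log-likelihoods, which is exactly the independence assumption at work:

```python
import math
from collections import defaultdict

def train_gaussian_nb(X, y):
    """Estimate a prior and per-feature mean/variance for each class."""
    groups = defaultdict(list)
    for features, label in zip(X, y):
        groups[label].append(features)
    model = {}
    for label, rows in groups.items():
        prior = len(rows) / len(X)
        stats = []
        for col in zip(*rows):                     # iterate over feature columns
            mean = sum(col) / len(col)
            var = sum((v - mean) ** 2 for v in col) / len(col) + 1e-9
            stats.append((mean, var))
        model[label] = (prior, stats)
    return model

def predict(model, x):
    """Pick the class with the highest log-posterior, summing per-feature
    log-likelihoods as if the features were independent (the 'naive' part)."""
    best, best_lp = None, -math.inf
    for label, (prior, stats) in model.items():
        lp = math.log(prior)
        for v, (mean, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy training data: two well-separated classes.
X = [[1.0, 2.1], [1.2, 1.9], [0.8, 2.0], [5.0, 6.1], [5.2, 5.9], [4.8, 6.0]]
y = ["a", "a", "a", "b", "b", "b"]
model = train_gaussian_nb(X, y)
```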

Linear Regression

Linear regression is a technique borrowed from statistics. In predictive modeling, linear regression is used to find the coefficients that minimize the model's prediction error, making its predictions as accurate as possible. Linear regression assumes a linear relationship between the input variables and the output variable. With a single input variable the method is called simple linear regression; with multiple input variables it is referred to as multiple linear regression.
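For a single input variable the least-squares coefficients have a closed form, which a short sketch in plain Python can show (the sample data is invented):

```python
def fit_simple_linear(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Closed-form solution: covariance(x, y) / variance(x).
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x, with noise
slope, intercept = fit_simple_linear(xs, ys)
```

With multiple input variables the same idea generalizes to solving the normal equations, which libraries handle via linear algebra routines rather than an explicit formula like this.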

Logistic Regression

This is a classification algorithm used for binary classification problems, that is, problems with two class values. Instead of fitting a straight line as linear regression does, it passes a linear combination of the inputs through the logistic (sigmoid) function to predict the probability that an instance belongs to a class. The coefficients of the logistic regression model are estimated from the training data using what is known as maximum-likelihood estimation.
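A minimal sketch in plain Python follows, using per-example gradient ascent on the log-likelihood, one common way to carry out maximum-likelihood estimation in practice (the toy data, learning rate, and epoch count are invented):

```python
import math

def sigmoid(z):
    """The logistic function: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights by gradient ascent on the log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)                    # last weight is the bias
    for _ in range(epochs):
        for features, label in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, features)) + w[-1]
            err = label - sigmoid(z)               # gradient of the log-likelihood
            for i, xi in enumerate(features):
                w[i] += lr * err * xi
            w[-1] += lr * err
    return w

def predict_proba(w, x):
    """Predicted probability of class 1 for input x."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])

# Toy binary problem: class 1 when x > 2.
X = [[0.5], [1.0], [1.5], [2.5], [3.0], [3.5]]
y = [0, 0, 0, 1, 1, 1]
w = fit_logistic(X, y)
```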

Tree-based learning algorithms

Apart from linear regression and logistic regression, tree-based learning algorithms are another popular class of machine learning algorithms. They can produce predictive models with high accuracy and stability. Tree-based learning algorithms such as Decision Trees and Random Forest are commonly used supervised learning methods. Unlike linear models, they can map non-linear relationships and can be adapted to solve many kinds of data problems.

Decision Tree

A decision tree is a type of supervised learning algorithm mostly used to solve classification problems. The model is represented as a binary tree. The leaf nodes of the tree contain an output value which is used to make a prediction, while internal nodes split the data into progressively more homogeneous sub-nodes. The decision tree algorithm is most useful when there is a non-linear relationship between the dependent and independent variables, and when you need a model that is easy to interpret. Trees are quick to train and can solve a wide range of data problems without requiring much data preparation. They are especially effective when used in an ensemble.
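The splitting idea can be sketched in plain Python. This toy version (the data set and depth limit are invented for illustration) greedily chooses the feature/threshold split that minimizes Gini impurity, i.e. yields the most homogeneous sub-nodes:

```python
def gini(labels):
    """Gini impurity: lower means a more homogeneous set of labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    """Try every feature/threshold; keep the split with lowest weighted impurity."""
    best, best_score = None, float("inf")
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best, best_score = (f, t), score
    return best

def build_tree(X, y, depth=0, max_depth=3):
    """Recursively split until nodes are pure or the depth limit is hit."""
    if len(set(y)) == 1 or depth == max_depth:
        return max(set(y), key=y.count)            # leaf: majority class
    split = best_split(X, y)
    if split is None:
        return max(set(y), key=y.count)
    f, t = split
    left = [(row, lab) for row, lab in zip(X, y) if row[f] <= t]
    right = [(row, lab) for row, lab in zip(X, y) if row[f] > t]
    return (f, t,
            build_tree([r for r, _ in left], [l for _, l in left], depth + 1, max_depth),
            build_tree([r for r, _ in right], [l for _, l in right], depth + 1, max_depth))

def predict_tree(node, x):
    """Walk from the root to a leaf, following the splits."""
    while isinstance(node, tuple):
        f, t, lo, hi = node
        node = lo if x[f] <= t else hi
    return node

# XOR-like data: not linearly separable, but a depth-2 tree handles it.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = ["no", "yes", "yes", "no"]
tree = build_tree(X, y)
```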

Random Forest

Random forest is a learning method that uses an ensemble of decision trees for classification, regression, and other tasks. It builds on the ensemble technique called bootstrap aggregation, or bagging for short. Random forests average multiple deep decision trees to reduce the high variance that individual trees are prone to. Multiple bootstrap samples of the data are used to train the trees, and when new data comes in, each tree makes a prediction. In a regression task, the predictions are averaged to give a more accurate value; in a classification task, the forest selects the most popular class across all the trees.
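A bare-bones bagging sketch in plain Python shows the mechanism. It uses one-split "stumps" instead of deep trees and invented data, so it illustrates bagging rather than a full random forest, which would additionally consider only a random subset of features at each split:

```python
import random
from collections import Counter

def majority(labels):
    """The most common label in a list."""
    return Counter(labels).most_common(1)[0][0]

def train_stump(X, y):
    """A one-split decision tree: choose the feature/threshold with fewest errors."""
    best, best_err = None, float("inf")
    for f in range(len(X[0])):
        for t in {row[f] for row in X}:
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lm, rm = majority(left), majority(right)
            err = sum(lab != lm for lab in left) + sum(lab != rm for lab in right)
            if err < best_err:
                best, best_err = (f, t, lm, rm), err
    if best is None:                               # degenerate sample: always predict majority
        lab = majority(y)
        best = (0, float("-inf"), lab, lab)
    return best

def predict_stump(stump, x):
    f, t, lm, rm = stump
    return lm if x[f] <= t else rm

def bagged_forest(X, y, n_trees=25, seed=0):
    """Bootstrap aggregation: train each stump on a sample drawn with replacement."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]   # bootstrap sample of the rows
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict_forest(forest, x):
    """Classification: each stump votes, the most common class wins."""
    return majority([predict_stump(s, x) for s in forest])

X = [[1.0], [1.5], [2.0], [6.0], [6.5], [7.0]]
y = ["low", "low", "low", "high", "high", "high"]
forest = bagged_forest(X, y)
```

Because every stump sees a different bootstrap sample, their individual errors partly cancel when the votes are combined, which is the variance-reduction effect the section describes.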


This is by no means an exhaustive list of popular ML algorithms; others include Support Vector Machines (SVM), Gradient Boosting algorithms, K-Nearest Neighbors, and K-Means, among many more. There are proprietary software suites equipped with ML algorithms, such as Microsoft Azure Machine Learning and Amazon Machine Learning, as well as free and open-source software such as TensorFlow, Yooreeka, and Apache SystemML, to name a few. As ML algorithms continue to revolutionize how workflows are processed, they will open new opportunities for business owners worldwide.