Last week's post gave you an overview of Neural Networks and Deep Learning. This week's post aims to give you a deeper understanding of the applications of Neural Networks.

Introduction

Modeled loosely on the human brain, Neural Networks are essentially a set of Machine Learning algorithms designed to process complex data. They normally comprise thousands of interconnected nodes organized into layers in a feed-forward manner. Feed-forward means that a particular node receives data from the nodes in the layer below it and sends data to the nodes in the layer above it.

Neural Networks are typically used to train computers to perform tasks based on training examples, i.e., data sets that have been labeled in advance. This allows them to uncover patterns that are strongly correlated with the labels, which can then be applied to unlabeled data. A good example is image recognition, where a neural network may be fed thousands of labeled images of, say, dogs, until it learns the patterns in images that are characteristic of a dog. It can then use these patterns to identify whether or not a given picture contains a dog.

How do Neural Networks work?

At its simplest, a Neural Network works through weighted connections. A node assigns a number (a "weight") to each of its incoming connections and multiplies the data item received over that connection (another number) by that weight. It then sums all the weighted data from its connections. If the sum exceeds a threshold, the node 'fires' and passes the result along all of its outgoing connections; otherwise, it passes nothing forward.
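To make this concrete, here is a minimal sketch of a single node in plain Python. The inputs, weights, and threshold are made-up illustrative values, not taken from any real network.

```python
# A minimal sketch of one node: weighted sum of inputs, then a threshold check.
# All numbers below are made-up illustrative values.

def node_fires(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Example: a node with three incoming connections.
inputs = [0.9, 0.3, 0.5]      # data received over each connection
weights = [0.4, -0.2, 0.7]    # weight assigned to each connection
print(node_fires(inputs, weights, threshold=0.5))  # 0.36 - 0.06 + 0.35 = 0.65 > 0.5, so the node fires (1)
```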

During training, all weights and thresholds are initially assigned random values. The data is fed to the input layer and passes through all subsequent layers, getting transformed through multiplications and additions until it finally arrives at the output layer. The weights and thresholds are then continuously adjusted until the network's predictions consistently match the training labels.
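As a toy illustration of that adjustment process, the sketch below trains a single node with a perceptron-style update rule, which is just one simple way of nudging weights toward the labels. The data set (a logical AND), learning rate, and number of passes are arbitrary choices for the example.

```python
import numpy as np

# A toy training loop: start with random weights and a random threshold (kept as a bias term),
# then nudge them whenever the prediction disagrees with the training label.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])                      # training labels (logical AND)

weights = rng.normal(size=2)                    # random initial weights
bias = rng.normal()                             # random initial threshold, as a bias
lr = 0.1                                        # learning rate

for epoch in range(50):                         # enough passes for this tiny data set
    for xi, target in zip(X, y):
        prediction = int(np.dot(xi, weights) + bias > 0)    # does the node fire?
        error = target - prediction
        weights += lr * error * xi                          # adjust weights toward the label
        bias += lr * error                                  # adjust the threshold too

print(weights, bias)
print([int(np.dot(xi, weights) + bias > 0) for xi in X])    # should reproduce the labels
```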

With this brief introduction to Neural Networks, let’s take a look at some of the major applications of Neural Networks.

Image and Character Recognition

As we mentioned earlier, image recognition with Neural Networks works by training the network on labeled images. Once the network has learned to recognize an image class, a new image can be fed into the network to see whether it yields a result consistent with the training set.
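A minimal sketch of that workflow is below, assuming the tf.keras framework and the MNIST handwritten-digit data set purely as stand-ins for any labeled image data.

```python
# A sketch of training an image/character recogniser on labeled examples.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0        # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),        # input layer: one node per pixel
    tf.keras.layers.Dense(128, activation="relu"),        # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),      # output layer: one node per digit label
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)                     # learn patterns from labeled images
model.evaluate(x_test, y_test)                            # check predictions on unseen images
```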

Financial Predictions

Neural Networks are often cited as a powerful tool for predictions in areas like the stock market, currency and futures prediction, debt risk assessment, credit approval, etc. Good results have been observed with layered back-propagation networks; however, the major task lies in identifying which set of parameters and variables (previous performance, economic indicators, market valuation, etc.) leads to the optimal neural network.
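As a rough illustration, the sketch below fits a back-propagation-trained multi-layer perceptron (scikit-learn's MLPClassifier) to synthetic data standing in for credit-approval records; the features and labels are placeholders, not real financial data.

```python
# A sketch of a back-propagation network for a credit-approval style problem.
# The data is synthetic; in practice the hard part is choosing real predictors.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for historical applicant records: 5 numeric features, approve/deny label.
X, y = make_classification(n_samples=1000, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=42)
model.fit(X_train, y_train)                  # weights adjusted by back-propagation
print(model.score(X_test, y_test))           # accuracy on held-out records
```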

Weather Forecasting

Similar to financial predictions, Neural Networks can be trained on past data using weather variables like atmospheric pressure, temperature, wind, etc. They can then be used to make predictions about future weather patterns from current data. However, with the rapidly changing environmental landscape, the challenge again lies in assembling a cohesive set of variables that can account for variance over time as well as weather anomalies.
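A comparable sketch for a regression-style forecast follows, again on randomly generated placeholder data rather than real weather records.

```python
# A sketch of regressing tomorrow's temperature from today's weather variables.
# The data is randomly generated as a placeholder for real historical records.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Columns stand in for pressure, temperature, and wind speed on a given day (made-up units).
past_days = rng.normal(size=(500, 3))
# Placeholder target: next-day temperature, loosely tied to today's variables plus noise.
next_day_temp = 0.5 * past_days[:, 1] - 0.2 * past_days[:, 0] + rng.normal(scale=0.1, size=500)

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)
model.fit(past_days[:400], next_day_temp[:400])        # train on earlier days
print(model.predict(past_days[400:405]))               # forecast from more recent days
```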

Self-Driving Cars

A major aspect of self-driving cars is computer vision, i.e., what autonomous vehicles see and how they classify what they see. Both of these areas rely heavily on neural networks. Convolutional Neural Networks have additional layers (convolutional layers) that can detect complex features such as faces, which makes them extremely useful for self-driving vehicles.
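The sketch below defines a small Convolutional Neural Network in tf.keras; the input shape and the three example classes are illustrative assumptions, not a real self-driving pipeline.

```python
# A minimal Convolutional Neural Network: the Conv2D layers are the "convolutional layers"
# that pick out visual features before classification. Input shape and classes are illustrative.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),      # e.g. pedestrian / vehicle / sign
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```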

Optimization Problems

Optimization problems typically consist of an objective function and a set of constraints on the variables of the function. The goal is to find the values of the variables that lead to the function's optimal value (maximum/minimum) while making sure that all the constraints are satisfied. However, some of these problems have an enormous number of feasible solutions from which the optimal solution must be selected (on the order of O(2^N) or O(N!)). For such NP-hard problems, e.g. the Travelling Salesman Problem, conventional exhaustive approaches quickly become impractical. For problems like these, neural networks like the Hopfield Network can be used.
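The sketch below shows the basic mechanics of a Hopfield Network in NumPy on a small pattern-recall task; the stored patterns are arbitrary, and encoding the Travelling Salesman Problem itself would require a problem-specific energy function on top of this machinery.

```python
# A minimal Hopfield Network: symmetric weights learned by a Hebbian rule,
# and asynchronous updates that never increase the network's energy.
import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1],     # stored pattern A (arbitrary +/-1 states)
    [1, 1, -1, -1, 1, 1],      # stored pattern B
])
n = patterns.shape[1]

# Hebbian learning: weights are the sum of outer products, with no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, sweeps=10):
    """Repeatedly update each unit from its weighted inputs until the state settles."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = np.array([1, -1, 1, -1, -1, -1])   # pattern A with one unit flipped
print(recall(noisy))                        # settles back to the stored pattern A
```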