
### Table of Contents

- What is Regularization in Machine Learning?
- Why do we need Regularization?
- Different Types of Regularization Techniques in Deep Learning

#### 1. Regularization in Machine Learning

Before discussing Regularization techniques, there are two basic Machine Learning concepts we need to understand:

- Bias
- Variance

#### Bias

The term bias was first used by Tom Mitchell in a 1980 research paper titled The Need for Biases in Learning Generalizations. A model with high bias makes overly strong assumptions about the data, so it cannot identify the differences between data items. We cannot learn much from the data when bias is high, and this results in less accurate outcomes.


There are four common types of bias in Machine Learning:

- Sample Bias
- Model Bias
- Systematic Bias
- Cultural Bias

#### Variance

Variance is a well-known term in probability and statistics, where it measures how widely data points are spread around their mean. In Machine Learning, a model's variance describes how much its predictions change when it is trained on different samples of the data: a model with very high variance fits the noise in its particular training set rather than the underlying pattern, which makes its accuracy unreliable.

#### Trade-off Between Bias and Variance

A Machine Learning algorithm is said to be good if it satisfies the following criteria:

- Low Bias
- Low Variance

In practice these two pull against each other: making a model more flexible lowers its bias but raises its variance, so we look for a balance between them.

That completes the background we need for Regularization. Regularization is the process of controlling these two quantities so that the model produces accurate results; it was introduced to get rid of an imbalance between bias and variance.

#### 2. Why We Need Regularization, and What Happens Without It

The two main problems associated with the bias-variance trade-off are

- Underfitting
- Overfitting

#### Underfitting

Underfitting means our algorithm is unable to learn the insights in the data, which results in poor accuracy. An underfit model is too simple to capture the patterns present in the training data.

#### Overfitting

Overfitting means learning the training data too well, including its noise, so the model produces bad results on new, unseen data.

By using regularization techniques, we can control both underfitting and overfitting.
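To make the underfitting/overfitting contrast concrete, here is a minimal numpy sketch (my own illustration, not from the post): we fit polynomials of increasing degree to noisy quadratic data and watch the training error.

```python
import numpy as np

# Noisy samples from a quadratic function (seeded for reproducibility).
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
y = x**2 + rng.normal(0, 0.05, size=x.shape)

def train_error(degree):
    # Fit a polynomial of the given degree and return its training MSE.
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    return float(np.mean((y - pred) ** 2))

# A degree-1 line underfits (high training error); a degree-15 polynomial
# drives training error near zero but will generalise poorly (overfits).
```

Training error always shrinks as the model gets more flexible, which is exactly why training error alone cannot tell us when we have started overfitting.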

#### Different Types of Regularization Techniques in Machine Learning/Deep Learning

There are six different approaches commonly used to regularize Machine Learning or Deep Learning algorithms. They are

- Lasso Regression (L1 Regularization)
- Ridge Regression (L2 Regularization)
- Elastic-Net Regression
- Dropout Regularization
- Early Stopping
- Data Augmentation

#### 1. Lasso Regression

In Machine Learning, LASSO (least absolute shrinkage and selection operator) regression performs both variable selection and regularization, in order to improve prediction accuracy and the interpretability of the fitted model.

The cost function for Simple Linear Regression is

$$J(w) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$

The cost function for Lasso Regression adds an L1 penalty on the weights:

$$J(w) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 + \lambda \sum_{j=1}^{d}\lvert w_j \rvert$$
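As a sketch of how Lasso works under the hood, here is a minimal numpy coordinate-descent solver for the L1-penalised least-squares objective. The function names and the exact objective scaling are my own choices, not from the post; real projects would typically use a library implementation.

```python
import numpy as np

def soft_threshold(rho, lam):
    # Proximal operator of the L1 penalty: shrinks toward zero,
    # and sets exactly to zero when |rho| <= lam.
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iters=100):
    # Minimise (1/2n)||y - Xw||^2 + lam * ||w||_1 by cyclic coordinate descent.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        for j in range(d):
            # Residual with coordinate j removed from the current fit.
            residual = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ residual / n
            z = (X[:, j] @ X[:, j]) / n
            w[j] = soft_threshold(rho, lam) / z
    return w
```

The soft-thresholding step is what gives Lasso its variable-selection behaviour: weights on uninformative columns are driven exactly to zero rather than merely shrunk.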

#### 2. Ridge Regression

Ridge regression, also called the L2 regularization technique, mainly deals with the problem of multicollinearity, which arises in multiple linear regression. When we train an algorithm on several input variables, some of them may be nearly linearly dependent on one another; this makes the ordinary least-squares estimates unstable and leads to less accurate results. Ridge counteracts this by adding an L2 penalty, $\lambda \sum_{j} w_j^2$, to the cost, which shrinks the coefficients and stabilises the solution.
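Ridge has a closed-form solution, which makes it easy to sketch in a few lines of numpy (the function name here is my own; this is an illustration, not a production implementation):

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y.
    # The lam * I term keeps the matrix invertible even when the
    # columns of X are (nearly) collinear.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

With `lam = 0` and collinear columns this system would be singular; any positive `lam` restores a unique, stable solution.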

#### 3. Elastic-Net Regression

Elastic-Net regression is an algorithm that combines the L1 and L2 regularization penalties, so it inherits the variable selection of Lasso and the stability of Ridge.
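The combined objective is simply the least-squares cost plus both penalties. A minimal numpy sketch of evaluating it (the function name and the $\lambda_1$/$\lambda_2$ split are my own notation):

```python
import numpy as np

def elastic_net_cost(w, X, y, lam1, lam2):
    # (1/2n)||y - Xw||^2 + lam1 * ||w||_1 + (lam2/2) * ||w||_2^2
    # lam1 controls the Lasso (L1) part, lam2 the Ridge (L2) part.
    n = len(y)
    resid = y - X @ w
    return (resid @ resid) / (2 * n) + lam1 * np.abs(w).sum() + 0.5 * lam2 * (w @ w)
```

Setting `lam2 = 0` recovers the Lasso objective and `lam1 = 0` recovers Ridge, which is why Elastic-Net is often described as interpolating between the two.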

#### 4. Dropout Regularization

This is a remarkably simple and efficient regularization technique in Deep Learning. During training, a random subset of the neurons in a layer (not of the data) is temporarily dropped, which prevents the network from overfitting. It is mostly used in neural networks, especially in image-processing projects: a dropout layer is added between layers of the network to randomly zero out some activations during training, which typically yields a better image classification/recognition model.
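Here is a minimal numpy sketch of the standard "inverted dropout" trick (the function name and scaling convention are mine, not from the post): each unit is zeroed with probability `rate`, and the survivors are rescaled so the expected activation is unchanged, which lets inference skip dropout entirely.

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    # Inverted dropout: zero out each unit with probability `rate`
    # and rescale survivors by 1/(1 - rate) so the expected value of
    # the layer's output is the same with and without dropout.
    if not training or rate == 0.0:
        return activations
    keep = 1.0 - rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep
```

At inference time (`training=False`) the activations pass through untouched, which is exactly how frameworks like Keras and PyTorch behave.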

#### 5. Early Stopping

Early stopping is a run-time regularization technique. While the model trains, the best weights seen so far are saved to a weights file. If the validation metric stops improving for a number of epochs, training is stopped and the last saved (best) weights are loaded back into the model.
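The stopping rule can be sketched in pure Python. This toy version (my own illustration; names like `patience` follow common framework conventions) takes a list of per-epoch validation losses and reports where training would stop:

```python
def train_with_early_stopping(losses, patience=3):
    # losses: validation loss per epoch. Stop after `patience` epochs
    # without improvement; return the best epoch and its loss, which is
    # where the saved "best weights" would be restored from.
    best_loss = float("inf")
    best_epoch = 0
    wait = 0
    for epoch, loss in enumerate(losses):
        if loss < best_loss:
            best_loss, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch, best_loss
```

Note that the reported epoch is the *best* one, not the one where training halted: the whole point is to roll back to the weights saved before overfitting set in.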

#### 6. Data Augmentation

Data Augmentation is a popular regularization technique in Deep Learning. It is the process of training the model on as many variations of the data as possible by generating new images from the given ones. By training on more images, the model learns the true insights of the data rather than memorising the original training images.

Data Augmentation applies the following kinds of changes to the data:

- Shifting the image left by a number of pixels
- Shifting the image right by a number of pixels
- Rotation by an angle
- Horizontal flipping
- Vertical flipping
- Applying shearing
- Applying zooming to the image data
- Image translation
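A few of the transformations above can be sketched directly in numpy (this is my own minimal illustration; real pipelines would use a library such as torchvision or Keras preprocessing, and rotation/shear/zoom need proper interpolation):

```python
import numpy as np

def augment(image):
    # Generate simple variants of one image: flips and pixel shifts.
    return [
        np.fliplr(image),            # horizontal flip
        np.flipud(image),            # vertical flip
        np.roll(image, 2, axis=1),   # shift right by 2 pixels
        np.roll(image, -2, axis=0),  # shift up by 2 pixels
    ]
```

Each call turns one training image into several, so the effective dataset grows without collecting any new data.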

Thank you for your time!! 🤩

Contact us if you have any queries.

