**Regularization** is a technique used to address the overfitting problem in machine learning models.

### What is overfitting?

Overfitting is a phenomenon that occurs when a model learns the detail and noise in the training data to such an extent that it hurts the model's performance on new data.

Overfitting is therefore a major problem, because a model that overfits does not generalize well to new data.

Regularization to the rescue.

Generally, a good model does not give excessive weight to any particular feature; the weights stay small and fairly evenly distributed. Regularization helps achieve this by penalizing large weights.

**There are two types of regularization:**

1. **L1 Regularization, or Lasso Regularization**
2. **L2 Regularization, or Ridge Regularization**

### L1 Regularization or Lasso Regularization

L1 Regularization or Lasso Regularization adds a penalty to the error function. The penalty is
the sum of the **absolute** values of weights.

Regularized error = Error + p × Σ |wᵢ|

Here `p` is the tuning parameter that decides how much we want to penalize the model.
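
As a minimal sketch (the weight values, the base error, and the value of `p` below are made-up numbers, purely for illustration), the L1 penalty can be computed like this:

```python
import numpy as np

# Illustrative values only: a small weight vector, a base training error, and a tuning parameter p.
weights = np.array([0.5, -1.2, 3.0, 0.0])
error = 2.4   # unregularized error on the training data (e.g. mean squared error)
p = 0.1       # tuning parameter: how strongly we penalize large weights

# L1 (Lasso) penalty: p times the sum of the absolute values of the weights.
l1_penalty = p * np.sum(np.abs(weights))

regularized_error = error + l1_penalty
print(regularized_error)  # 2.4 + 0.1 * (0.5 + 1.2 + 3.0 + 0.0) = 2.87
```

In practice a library usually applies this penalty for you, for example `Lasso(alpha=0.1)` from `sklearn.linear_model`, where `alpha` plays the role of `p`.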

### L2 Regularization or Ridge Regularization

L2 Regularization or Ridge Regularization also adds a penalty to the error function. But the penalty here is
the sum of the **squared** values of weights.

Regularized error = Error + p × Σ wᵢ²

As with L1, `p` is the tuning parameter that decides how much we want to penalize the model.
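
Here is the corresponding sketch for L2, using the same made-up values as the L1 example above:

```python
import numpy as np

# Same illustrative values as in the L1 sketch above.
weights = np.array([0.5, -1.2, 3.0, 0.0])
error = 2.4
p = 0.1

# L2 (Ridge) penalty: p times the sum of the squared values of the weights.
l2_penalty = p * np.sum(weights ** 2)

regularized_error = error + l2_penalty
print(regularized_error)  # 2.4 + 0.1 * (0.25 + 1.44 + 9.0 + 0.0) = 3.469
```

The scikit-learn counterpart is `Ridge(alpha=0.1)` from `sklearn.linear_model`. Because L2 squares the weights, it shrinks large weights strongly but rarely drives them exactly to zero, whereas L1 tends to push less useful weights all the way to zero.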

This is **Regularization**. That's it for now.
