Machine learning algorithms are classified into two distinct groups: parametric and non-parametric. Here, being parametric is about the relationship between model complexity and the number of rows in the training set. We can classify an algorithm as non-parametric when the model becomes more complex as the number of samples in the training set increases. Vice versa, a model is parametric if its complexity stays stable when the number of examples in the training set increases.
Non-parametric models
Consider decision tree algorithms. If we increase the number of instances, the decision tree that is built becomes more complex, because more decision rules can inherently be created from those new instances.
The depth of the tree might increase. Besides, the values in the decision rules would change as well. That's why all tree-based algorithms are non-parametric, including regular decision tree algorithms and boosted trees.
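As a quick illustration, here is a minimal sketch (scikit-learn and a synthetic make_classification dataset are my own choices here, not from the original post) showing that an unconstrained decision tree keeps growing as the training set grows:

```python
# Minimal sketch: an unconstrained decision tree grows with the training set.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

for n_samples in [100, 1000, 10000]:
    X, y = make_classification(n_samples=n_samples, n_features=20, random_state=42)
    tree = DecisionTreeClassifier(random_state=42).fit(X, y)
    # Node count and depth both tend to increase with more training data.
    print(n_samples, tree.tree_.node_count, tree.get_depth())
```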
SVM is a non-parametric model as well: its decision boundary is defined by support vectors taken from the training data, and the number of support vectors can grow as the training set grows.
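A minimal sketch of the same effect for SVM (again scikit-learn with synthetic data, both illustrative assumptions): the fitted model keeps more support vectors as the training set grows.

```python
# Minimal sketch: an RBF SVM keeps more support vectors as data grows.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

for n_samples in [100, 1000, 5000]:
    X, y = make_classification(n_samples=n_samples, n_features=20, random_state=42)
    svm = SVC(kernel="rbf").fit(X, y)
    # n_support_ holds the support vector count per class.
    print(n_samples, svm.n_support_.sum())
```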
Pros of non-parametric models
Non-parametric models handle much of the feature engineering for us. We can feed all the features we have to a non-parametric algorithm, and it can largely ignore the unimportant ones, so irrelevant features are less likely to cause overfitting.
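For instance, in this minimal sketch (a random forest on a synthetic dataset, my own illustrative setup), pure-noise columns appended to the data end up with near-zero importance scores:

```python
# Minimal sketch: tree ensembles assign near-zero importance to noise features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=5, n_informative=5,
                           n_redundant=0, random_state=0)
noise = np.random.RandomState(0).normal(size=(2000, 5))  # 5 unrelated features
X_all = np.hstack([X, noise])
forest = RandomForestClassifier(random_state=0).fit(X_all, y)
# The last five (noise) columns receive much smaller importance scores.
print(forest.feature_importances_.round(3))
```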
Parametric models
What about neural networks?
We first build the neural network's structure. The number of inputs and outputs, the number of hidden layers, and the number of nodes in each layer are pre-determined. All of these settings fix the network's structure, and therefore its number of trainable weights, in advance. In this case, increasing the number of instances will not make your model more complex. It stays stable: it will have the same number of layers and nodes.
Increasing the training set size will only increase the training time, and that has nothing to do with model complexity. That's why regular neural networks are parametric models.
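A minimal sketch (using Keras, which is an assumption on my part; the original post does not name a framework) makes the point: the parameter count is fixed by the architecture, not by the data size.

```python
# Minimal sketch: a network's parameter count is fixed by its architecture.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(20,)),              # 20 input features
    layers.Dense(64, activation="relu"),    # one hidden layer, 64 nodes
    layers.Dense(1, activation="sigmoid"),  # one output node
])
# Same count whether we later train on 1K or 1M rows.
print(model.count_params())
```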
Deep learning models, including convolutional neural networks and LSTMs, are parametric models as well.
So are logistic and linear regression, because the equation in both algorithms is pre-defined. Feeding more data only changes the coefficients in the equation.
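Here is a minimal sketch (scikit-learn with synthetic data, both illustrative choices) showing that linear regression always learns exactly one coefficient per feature plus an intercept, no matter how many rows it sees:

```python
# Minimal sketch: linear regression's size is fixed by the number of features.
import numpy as np
from sklearn.linear_model import LinearRegression

for n_samples in [100, 100000]:
    rng = np.random.RandomState(0)
    X = rng.normal(size=(n_samples, 3))
    y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=n_samples)
    reg = LinearRegression().fit(X, y)
    # Always 3 coefficients and 1 intercept; only their values change.
    print(n_samples, reg.coef_.shape, reg.intercept_)
```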
Cons of parametric models
Feature engineering is important in parametric models, because you can poison a parametric model by feeding it lots of unrelated features. Unlike non-parametric models, they cannot simply ignore a feature. You have to feed exactly the features that matter, neither more nor less.
Transfer learning
These two families behave differently in transfer learning.
The size of the model structure and its pre-trained weights is predictable in deep learning because it is a parametric method. For example, the VGG model is about 500 MB. If you trained this model from scratch on your own custom training set, its size would still be about 500 MB, because the structure remains the same and the number of weights does not change.
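As a minimal sketch (assuming Keras, which ships a VGG16 implementation), building the architecture without any training already determines its parameter count, and hence its size on disk:

```python
# Minimal sketch: VGG16's parameter count is fixed by its architecture alone.
from tensorflow.keras.applications import VGG16

model = VGG16(weights=None)  # same architecture, randomly initialized
# ~138M parameters, i.e. roughly 500 MB of 32-bit floats, trained or not.
print(model.count_params())
```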
On the other hand, decision tree based algorithms such as GBM build totally different trees for different training sets. Pruning leads to trees of different depths as well. That's why the size of a non-parametric model is not predictable.
Support this blog if you like it!