Feature importance is a common way to build interpretable machine learning models and to explain existing ones. It enables us to …
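As a hedged sketch of the idea (not code from the post itself): permutation importance is one model-agnostic way to score features for an already-built model. The dataset, model choice, and parameters below are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any fitted estimator would do.
X, y = load_iris(return_X_y=True)
names = load_iris().feature_names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure
# how much the held-out score drops; works for any fitted model.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```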
Feature Importance in Decision Trees
A decision tree is an explainable machine learning algorithm all by itself. Beyond its transparency, feature importance is a common way …
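As an illustrative sketch, a fitted scikit-learn decision tree exposes impurity-based importance scores directly; the wine dataset and tree depth here are assumptions, not the post's own example.

```python
from sklearn.datasets import load_wine
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
names = load_wine().feature_names

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# feature_importances_ sums the impurity decrease contributed by each
# feature over the splits where it is used, normalized to sum to 1.
ranked = sorted(zip(names, tree.feature_importances_), key=lambda t: -t[1])
for name, score in ranked:
    if score > 0:
        print(f"{name}: {score:.3f}")
```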
Interpretable Machine Learning with H2O and SHAP
Previously, we explained h2o.ai models with LIME. LIME enables us to question the predictions made by a built model. Herein, SHAP …
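A hedged sketch of how such a pairing could look: wrapping an H2O model's predict() so that SHAP's model-agnostic KernelExplainer can query it. The synthetic data, column names, and the predict_fn wrapper are assumptions for illustration, not necessarily the post's exact setup.

```python
import h2o
import numpy as np
import pandas as pd
import shap
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

# Assumed toy data; replace with your own frame and columns.
rng = np.random.default_rng(0)
features = ["x1", "x2", "x3"]
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=features)
df["y"] = (df["x1"] + df["x2"] > 0).astype(int)

train = h2o.H2OFrame(df)
train["y"] = train["y"].asfactor()
model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=features, y="y", training_frame=train)

def predict_fn(data):
    # SHAP passes a numpy array; convert to an H2OFrame and return
    # the positive-class probability column of H2O's prediction frame.
    frame = h2o.H2OFrame(pd.DataFrame(data, columns=features))
    return model.predict(frame).as_data_frame()["p1"].values

background = df[features].sample(50, random_state=0).values
explainer = shap.KernelExplainer(predict_fn, background)
shap_values = explainer.shap_values(df[features].iloc[:5].values)
print(shap_values)
```

KernelExplainer treats the model as a pure black box, so each evaluation round-trips through H2O; it is slow but requires nothing from the model beyond a predict function.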
Explaining H2O Models with LIME
Interpretability and accuracy are inversely proportional concepts. Models offering higher accuracy, such as deep learning or GBM, tend to be poorly interpretable. …
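One plausible way to connect the two, sketched under assumptions (synthetic data, column names, a hand-rolled predict_proba wrapper): LIME's LimeTabularExplainer only needs a function returning class probabilities, which an H2O model can supply.

```python
import h2o
import numpy as np
import pandas as pd
from h2o.estimators import H2OGradientBoostingEstimator
from lime.lime_tabular import LimeTabularExplainer

h2o.init()

# Assumed toy data standing in for a real training frame.
rng = np.random.default_rng(1)
features = ["x1", "x2", "x3"]
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=features)
df["y"] = (df["x1"] - df["x3"] > 0).astype(int)

train = h2o.H2OFrame(df)
train["y"] = train["y"].asfactor()
model = H2OGradientBoostingEstimator(ntrees=50)
model.train(x=features, y="y", training_frame=train)

def predict_proba(data):
    # LIME expects class probabilities shaped (n_samples, n_classes);
    # H2O's binomial prediction frame provides p0 and p1 columns.
    frame = h2o.H2OFrame(pd.DataFrame(data, columns=features))
    return model.predict(frame).as_data_frame()[["p0", "p1"]].values

explainer = LimeTabularExplainer(
    df[features].values, feature_names=features,
    class_names=["0", "1"], mode="classification",
)
explanation = explainer.explain_instance(
    df[features].iloc[0].values, predict_proba, num_features=3
)
print(explanation.as_list())
```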
How SHAP Can Keep You From Black Box AI
Machine learning interpretability and explainable AI are hot topics in the data world nowadays. A model that merely works will …
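As a minimal sketch of the idea, assuming a scikit-learn gradient boosting model rather than whatever the post itself uses: SHAP's TreeExplainer decomposes each prediction into per-feature contributions, turning a black-box ensemble into something you can inspect.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative dataset and model choice.
data = load_breast_cancer()
X, y = data.data, data.target

model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row decomposes one prediction into per-feature contributions
# that sum to (model output - explainer.expected_value).
shap.summary_plot(shap_values, X, feature_names=data.feature_names)
```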