
Names Features Importance Plot After Preprocessing

Before building a model I scale the features like this: X = StandardScaler(with_mean=0, with_std=1).fit_transform(X), and afterwards build a feature importance plot with xgb.plot_importance(bst), but the plot shows generic feature names (f0, f1, ...) instead of the original ones.

Solution 1:

First, get the list of feature names before preprocessing:

dtrain = xgb.DMatrix(X, label=y)
dtrain.feature_names

Then map the generic f0/f1/... keys back to those names:

bst.get_fscore()
mapper = {'f{0}'.format(i): v for i, v in enumerate(dtrain.feature_names)}
mapped = {mapper[k]: v for k, v in bst.get_fscore().items()}
mapped
xgb.plot_importance(mapped, color='red')

That's all.
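The renaming step is plain dictionary work, so it can be sketched without training anything. The feature names and f-score values below are made-up stand-ins for what `dtrain.feature_names` and `bst.get_fscore()` would return:

```python
# Hypothetical feature names (as they were before preprocessing) and a
# made-up get_fscore()-style result. Features a tree never split on
# are absent from the f-score dict, which is why 'f1' is missing here.
feature_names = ["age", "income", "score"]
fscore = {"f0": 3, "f2": 12}

# Build a lookup from the generic 'fN' keys to the real names,
# then rewrite the f-score dict with those names as keys.
mapper = {"f{0}".format(i): name for i, name in enumerate(feature_names)}
mapped = {mapper[k]: v for k, v in fscore.items()}
print(mapped)  # {'age': 3, 'score': 12}
```

The renamed dict can then be passed straight to `xgb.plot_importance(mapped)`.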

Solution 2:

You can retrieve the feature importances of an XGBoost model trained with the scikit-learn-like API via:

xgb.feature_importances_

To check which type of importance it is, read xgb.importance_type. The importance type can be set in the XGBoost constructor. You can read about the different ways to compute feature importance in XGBoost in this post.

Solution 3:

For xgboost 0.82, the answer is quite simple: just overwrite the model's feature names attribute with the list of feature name strings.

trained_xgbmodel.feature_names = feature_name_list
xgboost.plot_importance(trained_xgbmodel)
