In this tutorial, we will examine an easy yet powerful technique called the Decision Tree. There are many variations of the technique, from the most basic (the Classical Decision Tree) to advanced cousins (such as the Random Forest). In this tutorial, we will focus on one variant: the Classical Decision Tree, which we will train with the rpart library. So, let’s load the libraries!
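A minimal setup sketch, assuming the rpart and rpart.plot packages (the latter used later for plotting) are installed:

```r
library(rpart)       # CART-style Classical Decision Trees
library(rpart.plot)  # nicer tree rendering than base plot.rpart()
```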

We will use the HR dataset to demonstrate the powerful yet simple Classical Decision Tree algorithm.

But before we can use the algorithm, we need to prepare the data: one-hot encode the department and salary columns, and rename a few columns to be a little more self-explanatory.
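A sketch of that preparation. The file and column names below are assumptions based on the Kaggle "HR Analytics" dataset; adjust them to match your copy of the data.

```r
# Assumed file/column names; adjust to your copy of the HR dataset
hr <- read.csv("HR_comma_sep.csv", stringsAsFactors = TRUE)

# Rename the oddly named columns to something self-explanatory
names(hr)[names(hr) == "sales"] <- "department"
names(hr)[names(hr) == "average_montly_hours"] <- "avg_monthly_hours"

# One-hot encode department and salary with model.matrix()
# ("- 1" drops the intercept so the levels become indicator columns)
dummies <- model.matrix(~ department + salary - 1, data = hr)
hr <- cbind(hr[, !(names(hr) %in% c("department", "salary"))], dummies)
```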

As usual, we will split the data into train and test sets.
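A simple way to do the split, assuming the prepared data frame is named hr (the seed is arbitrary, only for reproducibility):

```r
set.seed(42)  # arbitrary seed for reproducibility
train_idx <- sample(nrow(hr), size = floor(0.75 * nrow(hr)))
train <- hr[train_idx, ]
test  <- hr[-train_idx, ]
```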

75% of the data, or 11,249 observations (rows), will be the train set, while the rest (25%, or 3,750) will be the test set. Now we are ready, so let’s build the Classical Decision Tree model!
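A sketch of the training call, assuming the outcome column is named left (1 = the employee left) and the split data frames are named train/test; xval = 10 is an illustrative value:

```r
model <- rpart(left ~ .,                # predict `left` from all other columns
               data    = train,
               method  = "class",       # classification tree
               control = rpart.control(xval = 10, cp = 0),
               parms   = list(split = "information"))
```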

There are three critical parameters in the code: xval, cp, and split.

  • xval controls the number of cross-validation folds (the more, the better).
  • cp (Complexity Parameter) controls the splits (the higher, the more stringent the split). Its default value is 0.01: if a split does not improve the fit by at least cp, it is not made. So when I set it to 0, I simply want the algorithm to split as much as it likes.
  • split controls how a tree gets split. The rpart algorithm supports several options; I generally use either Gini or Information Gain. In all honesty, I try everything (it is super easy, just change one word) and pick the best one for the situation 😝.
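Trying the different criteria really is a one-word change; a sketch with hypothetical object names:

```r
# Same call, only the split criterion changes
m_gini <- rpart(left ~ ., data = train, method = "class",
                parms = list(split = "gini"))
m_info <- rpart(left ~ ., data = train, method = "class",
                parms = list(split = "information"))
```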

Next, we will plot the cross-validation error chart.
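rpart ships a one-liner for this (assuming the fitted model is named model):

```r
plotcp(model)  # cross-validated error (xerror) vs. tree size / cp
```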

The error drops significantly at the fourth split and slows down until there is hardly any improvement after the ninth split.

Let’s see the numbers.
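Again a one-liner, assuming the fitted model is named model:

```r
printcp(model)  # cp table: CP, nsplit, rel error, xerror, xstd
```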

After we let the algorithm loose, there are 54 splits! Whoa, okay, that is way too much. Surely it is time to prune. This is where things can get entirely subjective. Maybe your client doesn’t want the tree to have more than 10 splits, or your boss wants the most thorough tree possible regardless of the complexity. If there is no mandate from an Ivory Tower resident, I personally use the \(cp\) from the lowest-split tree whose error falls within the range \(xerror \pm xstd\) computed at the highest split. In this case, the range is \(0.102 \pm 0.0064\), or roughly 0.096 to 0.1084. Looking at the result, the 11th row, with 12 splits, is the smallest tree whose \(xerror\) falls within that range. We then use its \(cp\) in the prune() function.
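A sketch of that rule in code (the object names and the row selection are illustrative, taken from the discussion above):

```r
# Pick the cp of the smallest tree (here: 12 splits) whose xerror
# falls within xerror ± xstd of the largest tree, then prune with it.
cp_table <- as.data.frame(model$cptable)
best_cp  <- cp_table$CP[cp_table$nsplit == 12]
pruned   <- prune(model, cp = best_cp)
```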

Now it’s time to plot the tree.
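rpart.plot handles layout and fonts better than base plot() + text(); the arguments below are one possible configuration, assuming the pruned tree is named pruned:

```r
# type/extra control node labels; cex shrinks the font to fit
rpart.plot(pruned, type = 2, extra = 104, cex = 0.6)
```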

My apologies for the font size; it was quite a pain to adjust the look and feel of the Decision Tree plot. From the chart, we can see that the algorithm mainly uses average monthly hours, satisfaction level, and the number of projects. With this information, we could hard-code the prediction using ifelse() and apply it to the test set. But why would we do that when we have the predict() function? 😝
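With predict(), scoring the held-out data is a one-liner (assuming the pruned model and test set from earlier):

```r
pred <- predict(pruned, newdata = test, type = "class")
```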

Next, we evaluate the prediction.
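A minimal evaluation sketch, assuming the outcome column is named left and the predictions are named pred:

```r
# Confusion matrix and overall accuracy
cm <- table(predicted = pred, actual = test$left)
accuracy <- sum(diag(cm)) / sum(cm)
```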

Oh, that is even better than Logistic Regression. The accuracy is \(\frac{2850+815}{(2850+815+68+17)}\) or 97.7%.

Despite the excellent performance, if you look closely, you will see that the algorithm didn’t use the categorical variables at all (e.g., department, salary). This is due to the exhaustive search bias. The issue is addressed in other Decision Tree variants, one of which is the Conditional Inference Tree.

TL;DR: In this example, the Classical Decision Tree predicted with a 97.7% accuracy rate despite being a super easy algorithm to implement. However, this is not the best yet… there are far more advanced cousins: the Conditional Inference Tree and the Random Forest. 🙂