Decision Trees, Random Forests, AdaBoost & XGBoost in Python

In this free online course, learn about the methods involved in decision trees and ensemble techniques in Python.

Publisher: Start-Tech Academy
Decision trees are one of the most popular techniques in machine learning since they are visually appealing and easy to interpret. In this free online course, learn about the creation and pruning of decision trees, ensemble techniques, and the stopping criteria for controlling the growth of decision trees. Boost your regression and classification trees knowledge and skills by studying this comprehensive course.
  • Duration: 6-10 Hours
  • Students: 31
  • Accreditation: CPD

Description

Do you want to become an expert in using decision trees to solve complex business problems? This course covers all the steps you should take when solving business problems with decision trees in Python. Business analysts and data scientists widely use tree-based models because they are easy to interpret and lead straightforwardly to a clear decision point. You will use them to present business concepts to people who might not be enthusiastic about numbers and complex mathematical models. Discover that a decision tree is a business decision support tool that uses a tree-like model of decisions and their possible consequences, including event outcomes, resource costs and utilities. It is a way of displaying an algorithm that contains only conditional control statements. The course will discuss the essential directories in Python and the process of building a machine learning model.
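To see what "an algorithm that contains only conditional control statements" means in practice, here is a minimal sketch of a hand-written decision tree in plain Python. The loan-approval scenario, feature names and thresholds are all made up for illustration:

```python
# A fitted decision tree is just nested conditional control statements.
# Hypothetical example: routing a loan application using two features
# (the thresholds and outcomes below are invented for illustration).

def approve_loan(income: float, credit_score: int) -> str:
    """Walk a hand-written decision tree and return the leaf's decision."""
    if income < 30_000:            # root split on income
        return "reject"
    if credit_score < 650:         # internal node split on credit score
        return "manual review"
    return "approve"               # leaf node

print(approve_loan(25_000, 700))   # -> reject
print(approve_loan(50_000, 600))   # -> manual review
print(approve_loan(80_000, 720))   # -> approve
```

Each path from the root `if` to a `return` corresponds to one leaf of the tree, which is why trees are so easy to explain to a non-technical audience.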

At first, you will be introduced to the three methods of opening a Jupyter Notebook: Anaconda Navigator, the Anaconda prompt, and the command prompt. Then, you will find out that regression trees are suitable for continuous quantitative target variables, whilst classification trees are ideal for discrete categorical target variables. You will understand the steps to take when building a regression tree and gain the ability to manipulate the decision tree and interpret the results more efficiently than someone who only knows the technical aspects of making decision trees in Python. Discover that the recursive binary splitting top-down approach is a greedy approach: at each step of the tree-building process, the best split is made at that particular step, rather than looking ahead and picking a split that will lead to a better tree in some future step. The course will also discuss missing value treatment in Python and dummy variable creation.
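The greedy step of recursive binary splitting can be sketched in a few lines of plain Python. For a regression tree, each candidate threshold on a feature is scored by the total sum of squared errors (SSE) of the two resulting halves, and the best-scoring split is kept. This is a stdlib-only sketch on a single feature with made-up data; real libraries scan every feature at every node:

```python
# One greedy step of recursive binary splitting for a regression tree:
# try every candidate threshold and keep the split that minimises the
# total sum of squared errors (SSE) of the two resulting groups.

def sse(ys):
    """Sum of squared errors of a group around its own mean."""
    if not ys:
        return 0.0
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def best_split(xs, ys):
    """Return (threshold, total_sse) of the greedy best binary split."""
    best = (None, float("inf"))
    for threshold in sorted(set(xs))[1:]:      # candidate cut points
        left  = [y for x, y in zip(xs, ys) if x <  threshold]
        right = [y for x, y in zip(xs, ys) if x >= threshold]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (threshold, total)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [5, 6, 5, 20, 21, 20]
threshold, err = best_split(xs, ys)
print(threshold)   # -> 10, separating the low-y group from the high-y group
```

Because the procedure only looks at the current step, it can miss a split that would pay off deeper in the tree, which is exactly the greedy behaviour described above.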

Next, you will learn that when running the code in Python, you need to specify stopping criteria for the decision tree to control the tree's size and avoid overfitting. The three methods of controlling the decision tree's growth are setting the minimum observations required at an internal node, setting the minimum observations required at a leaf node, and capping the tree's maximum depth. Gain insight into the three main prediction models based on decision trees: bagging, random forests and boosting. You will then understand that decision trees mirror human decision-making more closely than other regression and classification approaches, and that they can easily handle qualitative predictors without the need to create dummy variables. Finally, you will study how to evaluate a model's performance in Python, plot a decision tree, analyse data, and apply ensemble techniques. This course will be of interest to data scientists, executives, or students interested in learning about decision trees. Why wait? Start this course today and become a decision tree and problem-solving expert.
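The three stopping criteria described above map directly onto parameters of scikit-learn's tree estimators. A minimal sketch, assuming scikit-learn is installed and using made-up toy data:

```python
# The three ways of controlling tree growth, expressed as
# scikit-learn DecisionTreeRegressor parameters (toy data invented
# purely for illustration).
from sklearn.tree import DecisionTreeRegressor

X = [[i] for i in range(20)]
y = [i % 5 for i in range(20)]        # repetitive target, easy to overfit

tree = DecisionTreeRegressor(
    min_samples_split=4,   # minimum observations at an internal node
    min_samples_leaf=2,    # minimum observations at a leaf node
    max_depth=3,           # maximum depth of the tree
    random_state=0,
)
tree.fit(X, y)
print(tree.get_depth())    # the fitted depth never exceeds max_depth
```

Without these limits the tree would keep splitting until every leaf is nearly pure, memorising the training data instead of generalising from it.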
