

Decision Trees, Random Forests, AdaBoost & XGBoost in Python

Learn how to create decision trees and master ensemble techniques in this free online Python course.

Publisher: Start-Tech Academy
This free online Python course explains decision trees, one of the most popular techniques in machine learning because they are visually intuitive and easy to interpret. We cover their creation and pruning, ensemble techniques and the stopping criteria that control the growth of decision trees. Boost your regression and classification tree knowledge and skills with this comprehensive course and increase your professional value.
  • Duration: 6-10 Hours
  • Students: 94
  • Accreditation: CPD


Description

This course covers all the steps you should take when using decision trees in Python to solve business problems. Business analysts and data scientists widely use tree-based decision models because they are easy to interpret and make decision points clear, and you can use them to present business concepts to people who are not enthusiastic about numbers and complex mathematical models. A decision tree maps out event outcomes, resource costs and utilities, and represents an algorithm that contains only conditional control statements. This course examines the essential Python libraries and the process of building a machine learning model.
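
As a rough illustration of that workflow, the sketch below fits a small classification tree with pandas and scikit-learn. The file name, column names and parameter choices are placeholders rather than the course's own material.

```python
# Minimal sketch of fitting a decision tree in Python, assuming pandas and
# scikit-learn are installed; the dataset and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("business_data.csv")      # hypothetical dataset
X = df.drop(columns=["target"])            # predictor variables
y = df["target"]                           # discrete, categorical outcome

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

tree = DecisionTreeClassifier(random_state=42)
tree.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, tree.predict(X_test)))
```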

We introduce you to the three methods of opening the Jupyter Notebook: the Anaconda navigator, the Anaconda prompt and the Command prompt. We then demonstrate that regression trees suit continuous quantitative target variables, while classification trees suit discrete categorical target variables. You will learn the steps to take when building a regression tree and gain the ability to manipulate a decision tree and interpret its results more efficiently than someone who only knows the technical side of building decision trees in Python. You will also discover why the top-down recursive binary splitting approach is considered ‘greedy’: at each step of the tree-building process, the best split is made for that particular step rather than looking ahead and picking a split that would lead to a better tree later on. The course also covers missing value treatment in Python and dummy variable creation, as sketched below.
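
The preprocessing steps mentioned above (missing value treatment, dummy variable creation and fitting a regression tree for a continuous target) might look something like the following sketch. The dataset, column names and fill strategy are assumptions for illustration only, not the course's own example.

```python
# Hedged sketch: missing value treatment, dummy variables, and a regression tree.
# Column names ("price", "area", "city") are hypothetical placeholders.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("house_prices.csv")                 # hypothetical dataset

# Missing value treatment: fill numeric gaps with the column mean
df["area"] = df["area"].fillna(df["area"].mean())

# Dummy variable creation for a categorical predictor
df = pd.get_dummies(df, columns=["city"], drop_first=True)

X = df.drop(columns=["price"])
y = df["price"]                # continuous target, so a regression tree is used

reg_tree = DecisionTreeRegressor(random_state=42)
reg_tree.fit(X, y)
```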

Next, we explain that, when running the code in Python, you need to specify stopping criteria for the decision tree to control its depth and avoid overfitting. We unpack how to control the decision tree’s growth and provide insight into the three main tree-based prediction models: bagging, random forests and boosting. We then illustrate how decision trees mirror human decision-making more closely than other regression and classification approaches, since they can handle qualitative predictors without the need to create dummy variables. Finally, the course demonstrates how to evaluate a model’s performance in Python by plotting a decision tree and applying data analysis and ensemble techniques. This course suits executives and anyone else seeking practical guidance on using Python’s decision trees to solve complicated business problems. The ability to use data science to solve real-world problems can only enhance your professional standing.
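
As a hedged sketch of those ideas, the example below limits a tree’s growth with stopping criteria, plots the fitted tree and compares bagging, random forest and boosting ensembles on scikit-learn’s built-in iris data. The parameter values are illustrative, not the course’s recommendations.

```python
# Sketch: stopping criteria, tree plotting, and ensemble comparison.
# Uses scikit-learn's built-in iris data so it runs as-is; the course's own
# dataset and parameter choices may differ.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.ensemble import (
    BaggingClassifier,
    RandomForestClassifier,
    AdaBoostClassifier,
)

X, y = load_iris(return_X_y=True)

# Stopping criteria control tree growth and guard against overfitting
pruned_tree = DecisionTreeClassifier(
    max_depth=3,            # limit how deep the tree can grow
    min_samples_leaf=5,     # require a minimum number of observations per leaf
    random_state=42,
).fit(X, y)

plot_tree(pruned_tree, filled=True)   # visualise the fitted tree
plt.show()

# Ensemble techniques built on many trees
models = {
    "bagging": BaggingClassifier(n_estimators=100, random_state=42),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=42),
    "boosting (AdaBoost)": AdaBoostClassifier(n_estimators=100, random_state=42),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```

An XGBoost model could be added to the same comparison with xgboost.XGBClassifier, which requires installing the separate xgboost package.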

