- Which of the following statements is true about a Decision Tree?
A) It is an ensemble method combining multiple models
B) It splits the data based on feature values to create a tree structure
C) It always gives better accuracy than Random Forest
D) It cannot be used for classification tasks
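The splitting behavior described in option B can be seen in a minimal sketch (assuming scikit-learn is available; the dataset choice is illustrative):

```python
# Minimal sketch: a decision tree recursively splits the data on
# feature/threshold tests to build a tree structure (option B).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Each internal node tests one feature against a learned threshold;
# leaves hold class predictions, so the tree does classification.
print("features used at first nodes:", tree.tree_.feature[:3])
print("training accuracy:", tree.score(X, y))
```

Note that the tree handles classification directly, contradicting option D, and it is a single model, not an ensemble (option A).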
- Bagging (Bootstrap Aggregating) primarily helps to:
A) Reduce bias in the model
B) Reduce variance in the model
C) Increase the depth of a single decision tree
D) Select the most important feature only
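A quick sketch of bagging (option B): many trees are trained independently on bootstrap resamples and their votes are averaged, which smooths out the high variance of any single tree. This assumes scikit-learn, whose `BaggingClassifier` defaults to decision-tree base learners:

```python
# Sketch: bagging trains each tree on a bootstrap sample and
# averages the votes, reducing variance (option B), not bias.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)

single = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(n_estimators=50, random_state=0)  # default base: decision tree

single_cv = cross_val_score(single, X, y, cv=5).mean()
bagged_cv = cross_val_score(bagged, X, y, cv=5).mean()
print("single tree CV accuracy:", single_cv)
print("bagged trees CV accuracy:", bagged_cv)
```

On most splits the bagged ensemble generalizes at least as well as the single tree, because averaging independent bootstrap models cancels variance without deepening any individual tree.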
- Random Forest differs from Bagging mainly because it:
A) Uses boosting instead of bagging
B) Randomly selects a subset of features at each split
C) Builds only a single decision tree
D) Increases bias intentionally
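The distinguishing trait in option B corresponds to the `max_features` setting in scikit-learn's `RandomForestClassifier` (a sketch, with illustrative parameter values):

```python
# Sketch: Random Forest = bagging + a random feature subset at each
# split (option B). max_features controls the size of that subset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(
    n_estimators=100,
    max_features="sqrt",  # consider only sqrt(n_features) candidates per split
    random_state=0,
)
rf.fit(X, y)
print("training accuracy:", rf.score(X, y))
```

Restricting the candidate features at each split decorrelates the trees, which is what pushes Random Forest's variance below plain bagging's.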
- AdaBoost works by:
A) Training multiple weak learners sequentially, focusing on misclassified points
B) Training multiple models independently and averaging predictions
C) Using only deep decision trees as base learners
D) Randomly dropping features to reduce correlation
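Option A's sequential weak learners can be sketched with scikit-learn's `AdaBoostClassifier`, whose default base learner is a depth-1 decision stump (the dataset is illustrative):

```python
# Sketch: AdaBoost fits weak learners (depth-1 stumps by default)
# one after another, reweighting the data so each new stump focuses
# on the points the previous ones misclassified (option A).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier

X, y = load_breast_cancer(return_X_y=True)
ada = AdaBoostClassifier(n_estimators=50, random_state=0)
ada.fit(X, y)
print("training accuracy:", ada.score(X, y))
```

Contrast this with bagging (option B), where the models are trained independently rather than sequentially, and note that deep trees (option C) would make poor weak learners.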
- Which of the following is true regarding Random Forest?
A) It is sensitive to outliers
B) It is a type of boosting method
C) It reduces overfitting compared to a single decision tree
D) It cannot handle categorical variables
- In AdaBoost, the weights of misclassified observations:
A) Decrease in the next iteration
B) Remain the same
C) Increase in the next iteration
D) Are ignored
- Which method is most likely to reduce both variance and overfitting?
A) Single Decision Tree
B) Bagging
C) Random Forest
D) AdaBoost