✅ Explainable AI (XAI) – 50 Solved MCQs
Basic Concepts of XAI (1–20)
1. What is Explainable AI (XAI)?
A. AI that creates documents
B. AI that explains how decisions are made
C. AI with hidden layers
D. AI used for entertainment
✅ Correct Answer: B
Explanation: XAI refers to systems that make their decisions understandable to humans.
2. Why is explainability important in AI?
A. To reduce training time
B. To make predictions faster
C. To build trust, transparency, and accountability
D. To reduce cost
✅ Correct Answer: C
3. XAI is especially critical in:
A. Games
B. Medical diagnosis, finance, and law
C. Sports predictions
D. Online shopping
✅ Correct Answer: B
4. A black-box model is:
A. Fully interpretable
B. A system with no outputs
C. A model whose internal workings are hard to interpret
D. A model that cannot be trained
✅ Correct Answer: C
5. An interpretable model is:
A. One that needs deep learning
B. Easy for machines to understand
C. Easily understood by humans
D. Hidden and encrypted
✅ Correct Answer: C
6. Which of the following is NOT a goal of XAI?
A. Interpretability
B. Accuracy
C. Transparency
D. Deception
✅ Correct Answer: D
7. Which type of model is more explainable by default?
A. Neural Networks
B. Decision Trees
C. Deep CNNs
D. Random Forest
✅ Correct Answer: B
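Why decision trees count as explainable: every prediction is a chain of human-readable threshold tests. A minimal sketch, using an invented loan-approval tree with made-up thresholds, where each prediction returns its own explanation:

```python
# A decision tree's prediction is just a sequence of readable threshold
# tests, which is why trees are often called "white-box" models.
# Hypothetical loan-approval tree; the thresholds are invented.

def predict_loan(income, credit_score):
    """Return (decision, explanation) so every output explains itself."""
    if credit_score < 600:
        return "deny", "credit_score < 600"
    if income >= 40000:
        return "approve", "credit_score >= 600 and income >= 40000"
    return "review", "credit_score >= 600 and income < 40000"

decision, why = predict_loan(income=55000, credit_score=700)
print(decision, "-", why)  # → approve - credit_score >= 600 and income >= 40000
```

A deep network computing the same decision would give no such trace, which is the sense in which trees are "more explainable by default."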
8. Post-hoc explanations refer to:
A. Explanations designed before model creation
B. Pre-training steps
C. Explanations generated after the model makes a prediction
D. Raw data interpretation
✅ Correct Answer: C
9. What does LIME stand for?
A. Local Interpretable Model-agnostic Explanations
B. Learning in Machine Environments
C. Linear Interpolation for Model Explanations
D. Local Instance Mapping
✅ Correct Answer: A
10. LIME is considered:
A. Global explanation technique
B. Local explanation technique
C. Rule-based model
D. Supervised model
✅ Correct Answer: B
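The idea behind LIME (questions 9–10) can be sketched without the `lime` library itself: sample points near one instance, weight them by proximity, and fit a simple linear surrogate to the black-box model. The surrogate's slope is the local explanation. The black-box function here is an invented stand-in:

```python
import math
import random

def black_box(x):          # stand-in for an opaque model
    return x * x

def local_slope(x0, n_samples=500, width=0.5, seed=0):
    """Weighted least-squares slope of a local linear surrogate at x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # proximity kernel: samples close to x0 count more
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    wsum = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / wsum
    ybar = sum(w * y for w, y in zip(ws, ys)) / wsum
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    return num / den

# Near x0 = 3, f(x) = x^2 behaves like a line with slope about 2*x0 = 6.
print(local_slope(3.0))
```

The explanation is *local*: at x0 = -3 the same procedure would report a slope near -6, so the surrogate only describes behavior around the chosen instance, exactly as question 10 states.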
11. Which explanation technique is used to visualize neural networks’ decisions?
A. SVM
B. Decision rules
C. Saliency maps
D. KNN
✅ Correct Answer: C
12. Which term describes whether a human can understand why a model made a certain prediction?
A. Generalization
B. Transparency
C. Interpretability
D. Optimization
✅ Correct Answer: C
13. What does SHAP stand for?
A. Structured Heuristic Attribute Prediction
B. SHadow and Prediction
C. SHapley Additive exPlanations
D. Smart Heuristic Applied Prediction
✅ Correct Answer: C
14. SHAP values are based on:
A. Random sampling
B. Game theory
C. Linear regression
D. Loss functions
✅ Correct Answer: B
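The game-theoretic idea behind SHAP (questions 13–14) can be shown exactly on a toy cooperative game: a feature's Shapley value is its average marginal contribution over all orders in which features could "join" the prediction. The coalition payouts below are invented for illustration:

```python
from itertools import permutations

# Toy value function: payout of each coalition of features A, B, C.
payout = {
    frozenset(): 0,
    frozenset("A"): 10,
    frozenset("B"): 20,
    frozenset("C"): 0,
    frozenset("AB"): 40,
    frozenset("AC"): 10,
    frozenset("BC"): 20,
    frozenset("ABC"): 40,
}

def shapley(player, players="ABC"):
    """Average marginal contribution of `player` over all join orders."""
    perms = list(permutations(players))
    total = 0.0
    for order in perms:
        before = frozenset(order[: order.index(player)])
        total += payout[before | {player}] - payout[before]
    return total / len(perms)

values = {p: shapley(p) for p in "ABC"}
print(values)  # → {'A': 15.0, 'B': 25.0, 'C': 0.0}
```

Note the efficiency property that SHAP inherits: the values sum exactly to the full coalition's payout (15 + 25 + 0 = 40), so the prediction is fully distributed among the features.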
15. Which type of XAI technique uses decision rules to explain outcomes?
A. LIME
B. SHAP
C. Rule-based methods
D. CNN
✅ Correct Answer: C
16. Counterfactual explanations answer the question:
A. What is the root cause?
B. What change to the input would have led to a different outcome?
C. What data was missing?
D. How fast was the model?
✅ Correct Answer: B
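A counterfactual explanation can be sketched as a search for the smallest input change that flips a decision. The credit rule and numbers here are invented for illustration:

```python
def approves(income, debt):
    """Toy linear credit rule standing in for a real model."""
    return income - 2 * debt >= 50

def counterfactual_income(income, debt, step=1, limit=1000):
    """Smallest income increase that flips a denial into an approval."""
    if approves(income, debt):
        return None  # already approved; no counterfactual needed
    for extra in range(step, limit + 1, step):
        if approves(income + extra, debt):
            return extra
    return None

# Reads as: "Your application was denied; with 20 more income it
# would have been approved."
print(counterfactual_income(income=40, debt=5))  # → 20
```

This is the sense of question 16: the explanation names a concrete alternative input that would have produced a different outcome, rather than describing the model's internals.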
17. Which model is usually considered “white-box”?
A. Deep learning
B. Decision trees
C. GANs
D. Autoencoders
✅ Correct Answer: B
18. Which of the following is a limitation of explainable AI?
A. Always accurate
B. May reduce model performance
C. Works only with text
D. Makes models smaller
✅ Correct Answer: B
19. Trust in AI can be improved by:
A. High latency
B. Incomplete explanations
C. Clear and understandable model decisions
D. Obfuscating rules
✅ Correct Answer: C
20. What is the main trade-off in XAI?
A. Between cost and accuracy
B. Between model complexity and interpretability
C. Between speed and memory
D. Between training and inference
✅ Correct Answer: B
XAI Techniques & Tools (21–40)
21. Model-agnostic techniques:
A. Only work with CNNs
B. Require model internals
C. Work across various models
D. Are specific to KNN
✅ Correct Answer: C
22. What is the purpose of saliency maps in CNNs?
A. To train the model
B. To visualize training loss
C. To highlight input regions important for the prediction
D. To reduce computation
✅ Correct Answer: C
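The mechanism behind a saliency map can be sketched without a CNN: estimate how sensitive the model's output is to each input via finite differences, and treat inputs with larger gradients as more "salient." The scoring function below is an invented stand-in for a network:

```python
def model(x):
    # toy differentiable scoring function standing in for a network
    return 3 * x[0] + 0.5 * x[1] ** 2 - x[2]

def saliency(f, x, eps=1e-5):
    """Absolute finite-difference gradient of f at x, per input."""
    grads = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grads.append(abs((f(bumped) - f(x)) / eps))
    return grads

# Gradients at [1, 2, 3] are roughly [3, 2, 1]: input 0 is most salient.
print(saliency(model, [1.0, 2.0, 3.0]))
```

In real saliency maps the same gradient is taken with respect to image pixels via backpropagation, and the magnitudes are rendered as a heatmap over the input.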
23. A global explanation explains:
A. One prediction
B. A random variable
C. Overall model behavior
D. A random sample
✅ Correct Answer: C
24. Which of the following is NOT an XAI tool or library?
A. SHAP
B. LIME
C. TensorBoard
D. Pandas
✅ Correct Answer: D
25. Feature importance techniques in XAI help to:
A. Speed up training
B. Select datasets
C. Identify which features contributed most to predictions
D. Perform normalization
✅ Correct Answer: C
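One common feature-importance technique, permutation importance, can be sketched in a few lines: shuffle one feature's column and measure how much accuracy drops. The model and data here are toy stand-ins; the model deliberately uses only feature 0:

```python
import random

def model(row):                      # uses feature 0 only
    return 1 if row[0] > 0.5 else 0

random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [model(row) for row in X]        # labels the model predicts perfectly

def accuracy(X, y):
    return sum(model(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=1):
    """Accuracy drop when one feature's column is shuffled."""
    rng = random.Random(seed)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    Xp = [row[:feature] + [v] + row[feature + 1:] for row, v in zip(X, col)]
    return accuracy(X, y) - accuracy(Xp, y)

# Shuffling the used feature hurts accuracy; shuffling the unused one
# changes nothing, revealing which feature the predictions depend on.
print(permutation_importance(X, y, 0), permutation_importance(X, y, 1))
```

The same idea underlies `sklearn.inspection.permutation_importance`, applied to real estimators and metrics.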
26. Explainable AI improves regulatory compliance by:
A. Hiding features
B. Allowing traceability of decisions
C. Encrypting models
D. Ignoring user feedback
✅ Correct Answer: B
27. Local explanations are useful when:
A. You want to summarize entire model behavior
B. Explaining a single instance prediction
C. Optimizing datasets
D. Training multiple models
✅ Correct Answer: B
28. Which sector has high demand for explainable AI due to legal constraints?
A. Gaming
B. Food delivery
C. Finance
D. Fitness apps
✅ Correct Answer: C
29. Which technique explains how changes to the input would alter the output?
A. SHAP
B. Counterfactuals
C. Attention maps
D. Data augmentation
✅ Correct Answer: B
30. In SHAP, higher SHAP value means:
A. Feature is irrelevant
B. Feature negatively influences prediction
C. Greater contribution to prediction
D. Less weight during training
✅ Correct Answer: C
31. Integrated gradients is a method for:
A. Feature scaling
B. Explaining neural networks
C. Model compression
D. Feature encoding
✅ Correct Answer: B
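Integrated gradients can be demonstrated on a simple differentiable function rather than a neural network: each feature's attribution is the path integral of the gradient from a baseline to the input, approximated here by a Riemann sum. The function and its analytic gradient are chosen for illustration:

```python
def f(x):
    return x[0] * x[1]

def grad_f(x):
    return [x[1], x[0]]   # analytic gradient of x0 * x1

def integrated_gradients(x, baseline, steps=1000):
    """Riemann-sum approximation of the integrated-gradients attributions."""
    attr = [0.0] * len(x)
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [b + alpha * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_f(point)
        for i in range(len(x)):
            attr[i] += g[i] * (x[i] - baseline[i]) / steps
    return attr

x, baseline = [2.0, 3.0], [0.0, 0.0]
attr = integrated_gradients(x, baseline)
print(attr)  # each attribution is close to 3.0
```

The method's completeness axiom is easy to check here: the attributions sum to f(x) - f(baseline) = 6, up to discretization error. In practice the gradient comes from a deep network via autodiff (e.g. Captum's `IntegratedGradients`), not an analytic formula.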
32. In which stage is XAI most important?
A. After training (model evaluation)
B. During data collection
C. During model compilation
D. GPU configuration
✅ Correct Answer: A
33. Which XAI tool is part of the Captum library?
A. LIME
B. SHAP
C. Integrated Gradients
D. ELI5
✅ Correct Answer: C
34. XAI helps debug models by:
A. Cleaning code
B. Showing which features caused errors or bias
C. Compressing data
D. Improving RAM
✅ Correct Answer: B
35. Which visualization tool is often used with PyTorch for XAI?
A. Matplotlib
B. Captum
C. Sklearn
D. OpenCV
✅ Correct Answer: B
36. ELI5 is used for:
A. Explaining linear models and tree-based models
B. Translating data
C. Label encoding
D. Cloud deployment
✅ Correct Answer: A
37. Explainable models help AI become:
A. More secretive
B. More interpretable and socially acceptable
C. Less accurate
D. More expensive
✅ Correct Answer: B
38. What is a “glass box” model?
A. Complex, deep model
B. Transparent model with interpretable internals
C. Model with no output
D. Audio-only model
✅ Correct Answer: B
39. In sensitive domains, XAI helps detect:
A. Training speed
B. Bias, fairness, and safety issues
C. Hyperparameters
D. GPU types
✅ Correct Answer: B
40. One key advantage of SHAP over LIME is:
A. Model-specific explanation
B. Use of deep learning
C. Consistency with global model behavior
D. Fast computation
✅ Correct Answer: C
Ethics, Applications, and Challenges (41–50)
41. Which of the following is a major challenge in XAI?
A. Model training
B. Data visualization
C. Balancing explainability and accuracy
D. GPU compatibility
✅ Correct Answer: C
42. Explainable AI can help prevent:
A. Overfitting
B. Ethical violations and bias
C. High resolution
D. Compilation errors
✅ Correct Answer: B
43. GDPR regulations require that AI decisions:
A. Remain confidential
B. Be made quickly
C. Be explainable to the user
D. Be hidden from authorities
✅ Correct Answer: C
44. A black-box attack in XAI refers to:
A. Model crashing
B. Reverse engineering the model
C. Random sampling
D. Data imbalance
✅ Correct Answer: B
45. Which industry is NOT currently emphasizing XAI strongly?
A. Healthcare
B. Finance
C. Legal systems
D. Gaming
✅ Correct Answer: D
46. XAI supports responsible AI by promoting:
A. Incomplete answers
B. Fairness, accountability, transparency
C. Black-box models
D. Closed-source tools
✅ Correct Answer: B
47. Which ethical principle is closely tied to XAI?
A. Profit maximization
B. Transparency
C. Competition
D. Complexity
✅ Correct Answer: B
48. What is “faithfulness” in XAI?
A. The model remains unchanged
B. The explanation accurately reflects the model's reasoning
C. The output is always correct
D. It doesn’t need training
✅ Correct Answer: B
49. A faithful explanation is one that:
A. Uses complex math
B. Makes false assumptions
C. Matches the model’s actual logic
D. Is entertaining
✅ Correct Answer: C
50. The ultimate goal of XAI is to:
A. Replace humans
B. Build complicated models
C. Make AI decisions understandable, fair, and trustworthy
D. Speed up processing
✅ Correct Answer: C