XAI Methods for AI Security Specialists: Protecting Against Model Poisoning, Adversarial Attacks, and Data Privacy Breaches

Master Explainable AI (XAI) techniques to interpret model predictions, detect emerging AI security threats, and harden AI systems against model poisoning, adversarial attacks, and data privacy breaches.

Foundations of XAI for Threat Detection

Unit 1: Introduction to Explainable AI

Unit 2: Model-Agnostic XAI Techniques

Unit 3: Model-Specific XAI Techniques

Unit 4: Evaluating XAI Methods
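To preview the model-agnostic techniques covered in Unit 2, here is a minimal sketch of permutation feature importance: shuffle one feature at a time and measure the drop in accuracy. The data, the hand-set linear scorer standing in for a black-box model, and all names below are illustrative assumptions, not part of the course material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: labels depend strongly on feature 0, weakly on feature 1, not on feature 2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def model_predict(X):
    # Hand-set linear scorer standing in for any black-box classifier (assumption).
    return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, rng=rng):
    """Mean drop in accuracy when each feature column is shuffled."""
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature/label link
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

imp = permutation_importance(model_predict, X, y)
```

Because the method only calls `predict`, it applies unchanged to any model; here shuffling feature 0 costs the most accuracy and shuffling the unused feature 2 costs nothing.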

Applying XAI to AI Security Threats

Unit 1: XAI for Model Poisoning Detection

Unit 2: XAI for Adversarial Attack Analysis

Unit 3: XAI for Data Privacy Breach Detection
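As a taste of Unit 2's adversarial attack analysis, the sketch below flags suspicious inputs by watching their attributions rather than their raw features: gradient-times-input attributions are computed against a clean baseline, and inputs whose attribution profile drifts far from that baseline are flagged. The linear model, the FGSM-style perturbation, and every name here are illustrative assumptions for a toy setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear "model": score = w @ x, so the gradient w.r.t. x is simply w (assumption).
w = np.array([1.5, -2.0, 0.5])

def attributions(X):
    # Gradient-times-input attribution for the linear scorer.
    return X * w

# Clean inputs and FGSM-style adversarial copies (step along the gradient sign).
X_clean = rng.normal(size=(200, 3))
eps = 2.0
X_adv = X_clean + eps * np.sign(w)

# Baseline attribution statistics from clean traffic.
A_clean = attributions(X_clean)
mu, sigma = A_clean.mean(axis=0), A_clean.std(axis=0)

def flag_outliers(X, z_thresh=2.5):
    # Flag any input whose attribution deviates > z_thresh sigmas on some feature.
    z = np.abs((attributions(X) - mu) / sigma)
    return z.max(axis=1) > z_thresh

adv_rate = flag_outliers(X_adv).mean()
clean_rate = flag_outliers(X_clean).mean()
```

The design point previewed here: adversarial perturbations that are small in input space can still produce attribution patterns well outside the clean distribution, so monitoring explanations gives a detection signal that raw-input monitoring misses.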