Adversarial AI Attacks, Mitigations, and Defense Strategies

Explore uCertify's Adversarial AI Attacks, Mitigations, and Defense Strategies course and virtual labs to start building essential security skills today.

Course code: AI-ATCK-DEF.AJ1
Course components: Lessons, Labs, and AI Tutor (add-on)

About This Course

Adversarial AI Attacks, Mitigations, and Defense Strategies is a hands‑on, practitioner‑focused course designed to help you understand, break, defend, and secure modern AI systems. From classic ML pipelines to cutting‑edge LLMs and generative AI, you’ll explore how adversarial AI attacks work—and how to stop them.

AI is everywhere—and so are adversarial AI attacks. Models can be poisoned, stolen, manipulated, or tricked into leaking sensitive data. This course teaches you how attackers think, where AI systems break, and how to build resilient defenses using AI security and MLSecOps best practices.

You’ll not only learn what can go wrong, but also how to fix it.

Skills You’ll Get

  • Launching & Mitigating Attacks: Execute and defend against a full spectrum of adversarial AI attacks, including poisoning, evasion, model extraction, and emerging LLM prompt injection (see the sketch after this list).
  • Defense Architectures: Implement robust defense strategies such as adversarial training, differential privacy, and other privacy-preserving AI techniques.
  • Secure by Design: Apply threat modeling and risk assessment across the entire AI lifecycle.
  • MLSecOps & Governance: Integrate security into the machine learning pipeline using the MLSecOps framework.
  • Trustworthy AI Principles: Master the pillars of trustworthy AI to ensure your systems are secure, fair, transparent, and reliable.
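
To give a flavor of the hands-on work, here is a minimal, illustrative sketch of one classic evasion technique the course covers: the Fast Gradient Sign Method (FGSM). The model and input below are stand-ins chosen for this sketch (an untrained toy PyTorch classifier and a random "image"), not material from the course labs.

```python
import torch
import torch.nn as nn

# Toy stand-in classifier (untrained); the course labs target real CNN services.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

# Stand-in 28x28 grayscale "image" and an assumed true label.
x = torch.rand(1, 1, 28, 28, requires_grad=True)
y = torch.tensor([3])

# Compute the loss gradient with respect to the input pixels.
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

# FGSM: step each pixel in the direction that increases the loss,
# then clamp back into the valid pixel range.
epsilon = 0.1  # attack budget: maximum per-pixel change
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

Against a trained model, a small budget like this often flips the prediction while the perturbed image looks unchanged to a human eye; defenses such as adversarial training, also covered in the course, retrain the model on exactly these perturbed examples.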

Lessons

Lesson 1: Preface

  • Who this course is for
  • What this course covers
  • To get the most out of this course

Lesson 2: Getting Started with AI

  • Understanding AI and ML
  • Types of ML and the ML life cycle
  • Key algorithms in ML
  • Neural networks and deep learning
  • ML development tools
  • Summary

Lesson 3: Building Our Adversarial Playground

  • Technical requirements
  • Setting up your development environment
  • Hands-on basic baseline ML
  • Developing our target AI service with CNNs
  • ML development at scale
  • Summary

Lesson 4: Security and Adversarial AI

  • Technical requirements
  • Security fundamentals
  • Securing our adversarial playground
  • Securing code and artifacts
  • Bypassing security with adversarial AI
  • Summary

Lesson 5: Poisoning Attacks

  • Basics of poisoning attacks
  • Staging a simple poisoning attack
  • Backdoor poisoning attacks
  • Hidden-trigger backdoor attacks
  • Clean-label attacks
  • Advanced poisoning attacks
  • Mitigations and defenses
  • Summary

Lesson 6: Model Tampering with Trojan Horses and Model Reprogramming

  • Injecting backdoors using pickle serialization
  • Injecting Trojan horses with Keras Lambda layers
  • Trojan horses with custom layers
  • Neural payload injection
  • Attacking edge AI
  • Model hijacking
  • Summary

Lesson 7: Supply Chain Attacks and Adversarial AI

  • Traditional supply chain risks and AI
  • AI supply chain risks
  • Data poisoning
  • AI/ML SBOMs
  • Summary

Lesson 8: Evasion Attacks against Deployed AI

  • Fundamentals of evasion attacks
  • Perturbations and image evasion attack techniques
  • NLP evasion attacks with BERT using TextAttack
  • Universal Adversarial Perturbations (UAPs)
  • Black-box attacks with transferability
  • Defending against evasion attacks
  • Summary

Lesson 9: Privacy Attacks – Stealing Models

  • Understanding privacy attacks
  • Stealing models with model extraction attacks
  • Defenses and mitigations
  • Summary

Lesson 10: Privacy Attacks – Stealing Data

  • Understanding model inversion attacks
  • Types of model inversion attacks
  • Example model inversion attack
  • Understanding inference attacks
  • Attribute inference attacks
  • Example attribute inference attack
  • Membership inference attacks
  • Summary

Lesson 11: Privacy-Preserving AI

  • Privacy-preserving ML and AI
  • Simple data anonymization
  • Advanced anonymization
  • Differential privacy (DP)
  • Federated learning (FL)
  • Split learning
  • Advanced encryption options for privacy-preserving ML
  • Advanced ML encryption techniques in practice
  • Applying privacy-preserving ML techniques
  • Summary

Lesson 12: Generative AI – A New Frontier

  • A brief introduction to generative AI
  • Using GANs
  • Using pre-trained GANs
  • Summary

Lesson 13: Weaponizing GANs for Deepfakes and Adversarial Attacks

  • Use of GANs for deepfakes and deepfake detection
  • Using GANs in cyberattacks and offensive security
  • Defenses and mitigations
  • Summary

Lesson 14: LLM Foundations for Adversarial AI

  • A brief introduction to LLMs
  • Developing AI applications with LLMs
  • Hello LLM with Python
  • Hello LLM with LangChain
  • Bringing your own data
  • How LLMs change adversarial AI
  • Summary

Lesson 15: Adversarial Attacks with Prompts

  • Adversarial inputs and prompt injection
  • Direct prompt injection
  • Automated gradient-based prompt injection
  • Risks from bringing your own data
  • Indirect prompt injection
  • Data exfiltration with prompt injection
  • Privilege escalation with prompt injection
  • RCE with prompt injection
  • Defenses and mitigations
  • Summary

Lesson 16: Poisoning Attacks and LLMs

  • Poisoning embeddings in RAG
  • Poisoning attacks on fine-tuning LLMs
  • Summary

Lesson 17: Advanced Generative AI Scenarios

  • Supply-chain attacks in LLMs
  • Privacy attacks and LLMs
  • Model inversion and training data extraction attacks on LLMs
  • Inference attacks on LLMs
  • Model cloning with LLMs using a secondary model
  • Defenses and mitigations for privacy attacks
  • Summary

Lesson 18: Secure by Design and Trustworthy AI

  • Secure by design AI
  • Building our threat library
  • Industry AI threat taxonomies
  • AI threat taxonomy mapping
  • Threat modeling for AI
  • Threat modeling in action
  • Enhanced FoodieAI threat model
  • Risk assessment and prioritization
  • Security design and implementation
  • Testing and verification
  • Shifting left – embedding security into the AI life cycle
  • Live operations
  • Beyond security – Trustworthy AI
  • Summary

Lesson 19: AI Security with MLSecOps

  • The MLSecOps imperative
  • Toward an MLSecOps 2.0 framework
  • Building a primary MLSecOps platform
  • MLSecOps in action
  • Integrating MLSecOps with LLMOps
  • Advanced MLSecOps with SBOMs
  • Summary

Lesson 20: Maturing AI Security

  • Enterprise AI security challenges
  • Foundations of enterprise AI security
  • Protecting AI with enterprise security
  • Operational AI security
  • Iterative enterprise security
  • Summary

Hands-On Labs

Lab 1: Building Our Adversarial Playground

Lab 2: Security and Adversarial AI

Lab 3: Poisoning Attacks

Lab 4: Model Tampering with Trojan Horses and Model Reprogramming

Lab 5: Supply Chain Attacks and Adversarial AI

Lab 6: Evasion Attacks against Deployed AI

Lab 7: Privacy Attacks – Stealing Models

Lab 8: Privacy Attacks – Stealing Data

Lab 9: Privacy-Preserving AI

Lab 10: Generative AI – A New Frontier

Lab 11: Weaponizing GANs for Deepfakes and Adversarial Attacks

Lab 12: LLM Foundations for Adversarial AI

Lab 13: Adversarial Attacks with Prompts

Lab 14: Poisoning Attacks and LLMs

Lab 15: Advanced Generative AI Scenarios

Lab 16: Secure by Design and Trustworthy AI

  • Understanding Secure Design, Threats, and Trustworthy AI

Lab 17: AI Security with MLSecOps

Lab 18: Maturing AI Security

  • Strengthening Enterprise AI Security Maturity

Any questions? Check out the FAQs below.

Q: What are adversarial AI attacks, mitigations, and defense strategies?
A: They are techniques used to attack AI systems and the corresponding defenses used to protect models, data, and pipelines.

Q: Does this course cover LLMs and generative AI?
A: Yes! You’ll learn prompt injection, RAG poisoning, LLM privacy attacks, and GenAI defenses.

Q: Are there hands-on labs?
A: Absolutely. Performance‑based labs let you practice real adversarial AI attacks and mitigations.

Q: Where does MLSecOps fit in?
A: MLSecOps helps operationalize AI security across the ML lifecycle, from training to production.

Want to Learn More?
Contact Us Now

Ready to Defend AI Like a Pro?

  Enroll now and master adversarial AI attacks, mitigations, and defense strategies. Learn how to outthink attackers, secure AI systems, and build trustworthy AI with confidence.

$167.99

Pre-Order Now
