Pentesting GenAI LLM models: Securing Large Language Models

Master LLM Security: Penetration Testing, Red Teaming & MITRE ATT&CK for Secure Large Language Models
4.33 (12 reviews)
Udemy
platform
English
language
Network & Security
category
Pentesting GenAI LLM models: Securing Large Language Models
3 063
students
3.5 hours
content
May 2025
last update
$19.99
regular price

What you will learn

Understand the unique vulnerabilities of large language models (LLMs) in real-world applications.

Explore key penetration testing concepts and how they apply to generative AI systems.

Master the red teaming process for LLMs using hands-on techniques and real attack simulations.

Analyze why traditional benchmarks fall short in GenAI security and learn better evaluation methods.

Dive into core vulnerabilities such as prompt injection, hallucinations, biased responses, and more.

Use the MITRE ATT&CK framework to map out adversarial tactics targeting LLMs.

Identify and mitigate model-specific threats like excessive agency, model theft, and insecure output handling.

Conduct and report on exploitation findings for LLM-based applications.

Course Gallery

Pentesting GenAI LLM models: Securing Large Language Models – Screenshot 1
Screenshot 1Pentesting GenAI LLM models: Securing Large Language Models
Pentesting GenAI LLM models: Securing Large Language Models – Screenshot 2
Screenshot 2Pentesting GenAI LLM models: Securing Large Language Models
Pentesting GenAI LLM models: Securing Large Language Models – Screenshot 3
Screenshot 3Pentesting GenAI LLM models: Securing Large Language Models
Pentesting GenAI LLM models: Securing Large Language Models – Screenshot 4
Screenshot 4Pentesting GenAI LLM models: Securing Large Language Models

Loading charts...

6514281
udemy ID
12/03/2025
course created date
26/05/2025
course indexed date
Bot
course submited by