Agentic AI Security Professional (CAASP)
Secure autonomous and multi-agent AI systems before they become the source of the next breach.
A critical industry shift driven by exponential AI growth
Roles this program prepares you for
These are emerging, high-impact roles shaped by autonomous and agentic AI systems.
Secures autonomous and goal-driven AI systems in production.
- Secure agent workflows, tools, and integrations
- Analyze and mitigate agentic attack vectors
- Design guardrails for autonomous behavior
- Enterprises are rapidly adopting agentic AI systems
- Traditional AppSec and CloudSec roles lack agent-level coverage
This role requires specialized agentic AI security skills.

Actively tests and breaks AI systems to expose hidden risks.
- Simulate prompt, toolchain, and agent manipulation attacks
- Red team multi-agent workflows and decision paths
- Identify emergent and cascading failure risks
- Organizations need offensive testing for autonomous AI
- Agentic failures are harder to detect through defensive tools alone

Ensures autonomous AI systems comply with global regulations.
- Design governance frameworks for agentic deployments
- Align AI systems with ISO/IEC 42001, the NIST AI RMF, and the EU AI Act
- Support audit readiness for high-risk AI use cases
- Regulations are accelerating faster than AI governance maturity
- Autonomous AI introduces new accountability and risk challenges

These roles require skills beyond traditional AI and cybersecurity training.
CAASP curriculum: a 120-hour modular program
This module establishes the technical and conceptual foundation required to understand agentic AI systems and their security implications.
- Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and autonomous agents
- Agent architectures and core attack surfaces
- STRIDE-GPT methodology for AI threat modeling
- Introduction to MITRE ATLAS for adversarial AI techniques
This module focuses on real-world attack techniques targeting autonomous and goal-driven AI agents.
- Memory poisoning in agentic systems
- Tool misuse and privilege escalation via agent actions
- Goal and authorization hijacking attacks
- Cascading failures across interconnected agents
- RAG poisoning and agentic AI supply chain attacks
This module addresses security challenges unique to multi-agent and distributed AI ecosystems.
- Inter-agent communication security and poisoning attacks
- Emergent behavioral vulnerabilities in multi-agent environments
- Agent impersonation, collusion, and coordinated malicious behavior
- Governance and accountability models for multi-agent systems
This module covers securing real-world AI applications and agent integrations at scale.
- API and tool integration security for agentic systems
- RAG pipeline security and exploitation techniques
- Agent sandboxing, isolation, and resource control mechanisms
- Privacy risks and autonomous data handling
- Incident response and forensic considerations for autonomous systems
This module focuses on regulatory, governance, and risk frameworks for deploying autonomous AI systems responsibly.
- ISO/IEC 42001 for agentic AI deployments
- NIST AI Risk Management Framework (AI RMF)
- EU AI Act implications for high-risk autonomous systems
- Governance design for autonomous workflows and decision-making
This module provides offensive security techniques to evaluate and harden agentic AI systems.
- Red team methodologies for AI and agentic systems
- Prompt and toolchain exploitation techniques
- Multi-agent red teaming scenarios
- Automated adversarial testing for agentic AI environments
Capstone: design, attack, and secure an agentic AI system
Learners complete a full end-to-end security assessment of a realistic autonomous AI deployment.

Scenario: autonomous AI agents supporting financial analysis and decision workflows.
- Build agentic workflows for financial data processing
- Integrate tools for analysis and reporting
- Execute memory and RAG poisoning attacks
- Exploit tool misuse and privilege escalation
- Trigger cascading agent decision failures
- Design guardrails for financial decision autonomy
- Implement monitoring and incident response for agent actions
This capstone demonstrates real-world readiness for securing autonomous AI systems.
Ready to secure autonomous AI systems?
Apply to join a specialized program designed to prepare professionals for securing agentic and multi-agent AI systems in real-world environments.