CompTIA launched SecAI+ on February 17, 2026. It's the first vendor-neutral certification that tests both sides of the AI-security coin: how to protect AI systems from attack, and how to use AI tools inside security operations. If you work in a SOC, a GRC team, or any security role where AI is showing up in your toolchain (which at this point is most of them), this cert validates that you know what you're doing with it.
The exam is 60 questions in 60 minutes. That's one minute per question with no room to waste. The passing score is 600 on a 100-to-900 scale. Questions are a mix of multiple-choice and performance-based items (PBQs), and CompTIA recommends 3 to 4 years in IT with at least 2 years of hands-on cybersecurity experience before attempting it. The exam voucher runs approximately $425.
The Four Domains
The domain weights tell you exactly where to spend your time. Securing AI Systems at 40% is worth more than the other three domains combined in some scoring scenarios. Study accordingly.
- AI Concepts & Applications (17%) — Machine learning fundamentals, neural networks, NLP, generative AI, AI use cases in cybersecurity
- Securing AI Systems (40%) — Threat modeling for AI, adversarial attacks, data poisoning, prompt injection, model security, AI supply chain risks
- AI-Assisted Security Operations (24%) — AI in threat detection, SOAR integration, automated triage, AI-enhanced log analysis, false positive reduction
- AI Governance & Risk (19%) — NIST AI RMF, ISO/IEC 42001, bias and fairness, explainability, regulatory compliance, ethical AI deployment
Domain 1: AI Concepts and Applications
At 17%, this is the smallest domain, but it's foundational. You can't secure an AI system if you don't understand how it works. The exam expects you to distinguish between supervised, unsupervised, and reinforcement learning; know what training data, feature engineering, and model inference mean; and understand the basic architecture of neural networks, including CNNs and transformers.
You don't need to write code or train models. The questions test whether you understand the concepts well enough to reason about their security implications. For example: if a question describes a model that classifies network traffic as malicious or benign, you should recognize that as supervised learning with a labeled training dataset, and from there identify the attack surfaces (poisoned training data, adversarial inputs at inference time, model theft via repeated querying).
Generative AI gets specific attention. Know the difference between discriminative and generative models. Understand what large language models do, how they're fine-tuned, and why they produce hallucinations. The exam treats hallucinations as a security concern, not just an accuracy problem, because a hallucinating AI assistant in a security context can generate false intelligence, incorrect remediation steps, or fabricated log entries.
Domain 2: Securing AI Systems
This is 40% of your score. Almost half the exam. The attack taxonomy alone is extensive, and you need to know each attack type, how it works, and what controls mitigate it.
Adversarial Attacks and Input Manipulation
Adversarial attacks modify inputs to cause misclassification. In a cybersecurity context, an attacker might craft network packets that an AI-based IDS classifies as benign, or modify malware samples so an ML-based scanner misses them. The exam tests your understanding of how these attacks work (small, carefully calculated perturbations to input data) and what defenses exist (adversarial training, input validation, ensemble models, gradient masking).
Prompt injection targets LLM-based systems. Direct prompt injection feeds malicious instructions to the model through user input. Indirect prompt injection hides instructions in data the model processes (a poisoned webpage that an AI summarizer reads, for instance). The defenses the exam covers include prompt firewalls, input sanitization, output filtering, and least-privilege API access for LLM-integrated applications.
Data and Model Poisoning
Data poisoning corrupts training data so the model learns the wrong patterns. A targeted poisoning attack might insert examples that cause the model to misclassify one specific type of input while performing normally on everything else. This is harder to detect than a broad-spectrum poisoning attack that degrades overall accuracy.
Model poisoning goes further: the attacker modifies the model itself, either by compromising the training pipeline, injecting a backdoor during fine-tuning, or replacing the model file in storage. Supply chain attacks on ML pipelines fall here too. If your model was trained on a third-party dataset or uses a pre-trained model from a public repository, you've inherited whatever vulnerabilities exist upstream.
Defenses include data provenance tracking, integrity verification of training datasets, model versioning with checksums, access controls on training infrastructure, and anomaly detection during training (monitoring for unexpected accuracy changes or loss spikes).
Other Attack Types
The exam also covers model inversion (extracting training data from a model's outputs), model theft (recreating a model by querying it repeatedly and building a surrogate), membership inference (determining whether a specific data point was in the training set), and model denial of service (sending inputs specifically designed to consume maximum compute resources).
Know the OWASP Top 10 for LLM Applications. The exam references it directly. Also know MITRE ATLAS, which maps adversarial tactics and techniques against AI systems in the same way that MITRE ATT&CK maps tactics against traditional IT systems.
Domain 3: AI-Assisted Security Operations
This domain flips the perspective. Instead of defending AI systems, you're using AI as a defensive tool. At 24%, it's the second-largest domain.
AI in the SOC shows up in three main areas the exam covers. First, threat detection: ML models that analyze network traffic, endpoint telemetry, or user behavior to identify anomalies. Know the difference between signature-based detection (rules, known patterns) and anomaly-based detection (behavioral baselines, statistical models). The exam will give you scenarios and ask which approach fits.
Second, SOAR integration. Security Orchestration, Automation, and Response platforms increasingly incorporate AI for alert triage, enrichment, and automated playbook execution. A typical exam scenario: an LLM-powered component that reads alert data, correlates it with threat intelligence feeds, and recommends a response action. You need to understand both the benefits (speed, consistency, reduced analyst fatigue) and the risks (automated actions based on AI recommendations without human review, hallucinated threat intelligence, adversarial manipulation of the AI component).
Third, log analysis and false positive reduction. AI systems that process SIEM data can surface high-priority events and suppress noise. The exam tests whether you understand how to validate AI-generated findings before acting on them, and what happens when the AI gets it wrong (missed true positives are worse than false positives in most security contexts).
Performance-based questions in this domain may ask you to configure an AI-assisted workflow, evaluate an AI tool's output for accuracy, or identify where human oversight should be inserted into an automated pipeline.
Domain 4: AI Governance and Risk
At 19%, this domain tests whether you can put guardrails around AI deployments. It's heavy on frameworks and standards.
Two frameworks matter most:
- NIST AI Risk Management Framework (AI RMF): Organized around four functions: Govern, Map, Measure, Manage. Know what each function covers. Govern sets organizational context and policies. Map identifies and categorizes AI risks. Measure assesses those risks quantitatively. Manage implements controls. This will come up.
- ISO/IEC 42001: The international standard for AI management systems. It specifies requirements for establishing, implementing, and continually improving an AI management system. Think of it as ISO 27001 but for AI.
Beyond frameworks, the exam tests bias and fairness concepts. You should understand how bias enters AI systems (biased training data, biased feature selection, biased labeling), how to measure it (demographic parity, equalized odds, calibration across groups), and what organizational processes reduce it (diverse teams, bias audits, third-party assessments).
Explainability is another testable concept. Some models (linear regression, decision trees) are inherently interpretable. Others (deep neural networks) are not, and require post-hoc explanation techniques like SHAP values or LIME. The exam asks about explainability in a governance context: when must a model's decisions be explainable? When is a black-box model acceptable? Regulated industries (finance, healthcare) generally require explainability. Internal security tools may not.
TechPrep SecAI+
2,050+ practice questions across all four CY0-001 domains. Confidence calibration, spaced repetition, and exam readiness tracking built on cognitive science research.
Study Strategy
The 60-minute time limit changes how you prepare. You need fast recall, not slow reasoning. If you're spending 30 seconds remembering what MITRE ATLAS is, you're behind. The concepts need to be automatic.
CompTIA recommends Security+ or equivalent as a prerequisite. If you have that background, plan for 3 to 5 weeks of study. Without it, you'll need to build the security fundamentals first, which adds significant time.
Weeks 1-2: Cover Domain 1 (AI concepts) and Domain 2 (securing AI systems) together. Start with the AI fundamentals so you have the vocabulary, then move into the attack taxonomy. For each attack type, write down: what it targets, how it works, one real-world example, and at least two defenses. Read through the OWASP Top 10 for LLMs and MITRE ATLAS during this phase.
Week 3: Domain 3, AI-assisted security operations. If you work in a SOC, a lot of this will feel familiar. If you don't, spend time with SOAR documentation and AI-powered SIEM case studies. The PBQ format means you may need to walk through a scenario step by step: receive alert, enrich with AI, validate, respond. Practice that workflow.
Week 4: Domain 4, governance and risk. Read the NIST AI RMF in full (it's not long). Study ISO/IEC 42001 at the overview level. Focus on bias, fairness, and explainability concepts. Then take timed practice exams.
Week 5: Full practice exams under timed conditions. 60 questions in 60 minutes is fast. You should be finishing practice tests with 5 to 10 minutes to spare before sitting the real exam. If you're running out of time on practice tests, spend more time on the domains where you're slowest.
PBQs and Time Management
Performance-based questions appear early in CompTIA exams, and they take longer than multiple-choice. A common strategy: skip PBQs on the first pass, answer all the multiple-choice questions, then come back to the PBQs with whatever time remains. This ensures you don't burn 8 minutes on one PBQ and then rush through 20 multiple-choice questions.
On the CY0-001 specifically, PBQs will likely involve configuring an AI security control, mapping a scenario to a framework, or evaluating an AI tool's output. Practice these types of tasks during your study so they don't surprise you.
The 600/900 passing score means you need roughly 67% correct. That's a lower bar than many CompTIA exams, but the content is newer and study resources are still catching up. First-generation exams tend to have thinner prep material, which makes hands-on experience and framework knowledge more important than finding the right study guide.
Why This Cert Exists Now
Security teams are dealing with AI from both directions. Attackers are using AI to generate phishing emails, automate reconnaissance, create polymorphic malware, and find vulnerabilities faster. Defenders are using AI for threat detection, alert triage, and response automation. And organizations are deploying AI systems that need to be protected like any other critical asset.
The SecAI+ fills a gap. Security+ doesn't cover AI threats. The AI certifications from cloud vendors don't cover security operations. The CY0-001 is the first cert that expects you to understand both sides: protecting AI and using AI to protect everything else. Whether the cert itself becomes a market standard remains to be seen, but the knowledge it validates is already required in most mid-to-senior security roles.