The AWS Certified AI Practitioner (AIF-C01) is the first AWS certification built specifically around generative AI. It launched in 2024, and it fills a gap that the Cloud Practitioner exam doesn't touch: how to choose, apply, and govern AI and ML services on AWS. No coding required. But "foundational" doesn't mean shallow. Over half the exam covers generative AI concepts and foundation model applications, and the questions expect you to know which AWS service solves which problem.

Here's what the exam actually covers, how the five domains break down, and how to study efficiently.

Exam Format

65 questions. 90 minutes. Passing score: 700 out of 1,000. The exam uses multiple-choice (one correct answer from four) and multiple-response (two or more correct from five or more options) question types. At 90 minutes for 65 questions, you have about 83 seconds per question, which is tight for scenario-based questions but comfortable for the factual recall items.

The exam costs $100 USD. AWS recommends about 6 months of experience working with AI/ML technologies on their platform, though candidates with strong AI fundamentals from other contexts often pass with less AWS-specific experience.

The Five Domains

Domain 1: AI and ML Fundamentals (20%)

This domain tests whether you can distinguish between AI, ML, and deep learning; identify supervised versus unsupervised versus reinforcement learning; and match problem types to algorithm categories. Classification, regression, clustering, recommendation. The questions aren't asking you to implement gradient descent. They're asking you to look at a business scenario and pick the right ML approach.

Specific areas that come up frequently:

  • Supervised learning patterns. Customer churn prediction (binary classification), house price estimation (regression), image categorization (multi-class classification). Know the difference and which metrics apply to each.
  • Unsupervised learning. Customer segmentation (clustering), anomaly detection, dimensionality reduction. The exam tests whether you understand when labeled data isn't available and what alternatives exist.
  • Training, validation, and test splits. Why you need all three, what overfitting looks like, and how cross-validation works at a conceptual level.
  • Amazon SageMaker basics. Not the deep pipeline knowledge from the MLA-C01, but understanding what SageMaker is, what SageMaker JumpStart provides (pre-trained models and solution templates), and when you'd use SageMaker versus a purpose-built AI service.
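The three-way split from the bullet above can be sketched in a few lines of plain Python. The 70/15/15 ratios and the toy dataset are illustrative choices, not AWS-mandated values:

```python
import random

def train_val_test_split(data, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle a dataset and split it into train/validation/test portions."""
    items = list(data)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]                # held out until final evaluation
    val = items[n_test:n_test + n_val]   # used to detect overfitting during training
    train = items[n_test + n_val:]       # used to fit the model
    return train, val, test

train, val, test = train_val_test_split(range(100))
print(len(train), len(val), len(test))  # 70 15 15
```

If validation accuracy keeps dropping while training accuracy climbs, that gap is what the exam means by overfitting.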

Domain 2: Generative AI Concepts (24%)

This is where the AIF-C01 diverges from every previous AWS cert. Nearly a quarter of the exam is dedicated to how generative AI works, what foundation models are, and how they differ from traditional ML.

Foundation models are large neural networks pre-trained on broad datasets that can be adapted to many downstream tasks. The exam expects you to understand this at a functional level: what makes them different from task-specific models, why they need fine-tuning or prompt engineering for specific use cases, and the tradeoffs between training your own model versus using a pre-trained one.

Key concepts that get tested:

  • Large language models (LLMs). How they generate text, what tokens are, why context windows matter, and temperature as a parameter that controls output randomness.
  • Prompt engineering. Zero-shot versus few-shot prompting. System prompts versus user prompts. How to structure prompts to get consistent, useful outputs. The exam frames these as practical business decisions, not research topics.
  • Fine-tuning versus prompt engineering. When prompt engineering is enough and when you need to fine-tune. Fine-tuning requires labeled data and incurs compute cost; prompt engineering requires neither, but it can only steer a model so far toward a specialized task.
  • Retrieval Augmented Generation (RAG). The pattern of combining a foundation model with an external knowledge base to reduce hallucinations and ground responses in specific data. AWS implements this through Amazon Bedrock's Knowledge Bases feature. The exam tests both the concept and the AWS implementation.
  • Embeddings and vector databases. How text gets converted to numerical representations for similarity search, and why this matters for RAG and semantic search applications.
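The last two bullets fit together in a short sketch: toy three-dimensional vectors stand in for real embedding-model output (a production system would use something like Titan Embeddings, with hundreds of dimensions), and cosine similarity picks the document to hand to the model as grounding context — the retrieval step of RAG. The documents, query, and vectors here are invented for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" standing in for real embedding-model output.
documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "warranty terms": [0.8, 0.2, 0.1],
}
query = [0.85, 0.15, 0.05]  # pretend embedding of "how do I get my money back?"

# Retrieval step of RAG: rank stored documents by similarity to the query,
# then pass the top match to the foundation model as grounding context.
ranked = sorted(documents, key=lambda d: cosine_similarity(query, documents[d]),
                reverse=True)
print(ranked[0])  # refund policy
```

A vector database does the same ranking at scale, with indexes that avoid comparing the query against every stored vector.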

Domain 3: Foundation Model Applications (28%)

The largest domain. It's about using foundation models through AWS services, not building them. Amazon Bedrock is the center of gravity here.

Amazon Bedrock is AWS's managed service for accessing foundation models from multiple providers (Anthropic, Meta, Amazon, Cohere, and others) through a single API. The exam tests:

  • Which foundation models are available on Bedrock and their general strengths. You don't need to memorize every model, but you should know that different providers optimize for different tasks.
  • Amazon Titan models specifically: Titan Text for generation, Titan Text Embeddings for vector representations, Titan Image Generator for image generation. These are AWS's own foundation models.
  • Bedrock's customization features: fine-tuning with your data, continued pre-training, Knowledge Bases for RAG, Agents for multi-step task execution, and Guardrails for content filtering.
  • Model evaluation on Bedrock: comparing models against custom metrics like accuracy, toxicity, and robustness before selecting one for production.
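To make the invocation path concrete, here is a hedged sketch of the request body an Anthropic model on Bedrock expects. The model ID, prompt, and parameter values are illustrative, and the actual boto3 call is left as a comment because it needs AWS credentials; note that temperature is just a field in the request:

```python
import json

# Request body in the Anthropic "messages" format used on Bedrock.
# temperature near 0 gives more deterministic output; higher values
# increase randomness. Values here are examples, not recommendations.
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "temperature": 0.2,
    "messages": [
        {"role": "user", "content": "Summarize our refund policy in two sentences."}
    ],
}
payload = json.dumps(body)

# With credentials configured, the call would look roughly like this
# (model ID is an example -- check the Bedrock model catalog for current IDs):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=payload,
#   )
#   text = json.loads(response["body"].read())["content"][0]["text"]
```

The exam won't ask you to write this code, but it does expect you to know that temperature, token limits, and prompts are inference-time knobs, not training-time ones.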

Beyond Bedrock, this domain covers the AWS AI service portfolio. These are purpose-built services that handle specific AI tasks without requiring you to select or manage models:

  • Amazon Rekognition for image and video analysis: object detection, facial analysis, content moderation, text in images.
  • Amazon Comprehend for natural language processing: sentiment analysis, entity extraction, key phrase detection, language detection.
  • Amazon Textract for extracting text, forms, and tables from scanned documents. Know how it differs from basic OCR.
  • Amazon Transcribe for speech-to-text conversion. Amazon Polly for text-to-speech.
  • Amazon Lex for building conversational interfaces (chatbots). It handles intent recognition and slot filling.
  • Amazon Translate for real-time language translation.
  • Amazon Personalize for recommendation systems. Amazon Forecast for time-series predictions.
  • Amazon Kendra for intelligent enterprise search.
  • Amazon Q for AI-powered business assistance.

The exam's favorite question pattern in this domain: here's a business problem, which AWS service solves it? A company needs to extract invoice data from scanned PDFs. That's Textract, not Comprehend, not Rekognition. A company needs to detect offensive content in user-uploaded images. That's Rekognition content moderation, not a custom model. Get the service mapping right and this domain becomes straightforward.
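One way to drill that mapping is to write it down as data. The dictionary below is purely a study aid, not an AWS API, and the scenario phrasings are invented:

```python
# Study aid: exam-style scenario keywords mapped to the purpose-built service.
SERVICE_MAP = {
    "extract tables from scanned documents": "Amazon Textract",
    "sentiment of customer reviews": "Amazon Comprehend",
    "moderate user-uploaded images": "Amazon Rekognition",
    "search across enterprise documents": "Amazon Kendra",
    "speech to text": "Amazon Transcribe",
    "text to speech": "Amazon Polly",
    "chatbot with intent recognition": "Amazon Lex",
}

def pick_service(scenario):
    """Return the mapped service, or None if the scenario isn't listed."""
    return SERVICE_MAP.get(scenario)

print(pick_service("extract tables from scanned documents"))  # Amazon Textract
```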

Domain 4: Responsible AI (14%)

Smaller by weight but easy to lose points on if you haven't studied it. AWS has a specific framework for responsible AI, and the exam tests whether you know its principles.

The core areas:

  • Fairness and bias. Training data bias, selection bias, measurement bias. How biased training data produces biased model outputs, and what mitigation strategies exist (data augmentation, bias testing, human review).
  • Explainability. Why black-box models create problems in regulated industries. SageMaker Clarify for bias detection and model explainability. SHAP values as a method for explaining individual predictions.
  • Transparency. Model cards, data sheets, documentation of training data sources and known limitations. The exam expects you to know that transparency is both an ethical and a practical requirement.
  • Hallucination in generative AI. What causes it (the model generating plausible but factually wrong content), and how RAG, grounding, and human-in-the-loop review reduce it.
  • Bedrock Guardrails. Content filtering policies that block harmful or off-topic outputs. Know how to configure them and why they matter for production deployments.
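Bedrock Guardrails are configured through the console or API, but the underlying idea fits in a few lines. The sketch below is a toy stand-in for the concept, not the Guardrails API: intercept a model response and block it when it touches a denied topic.

```python
# Toy content filter illustrating the idea behind Bedrock Guardrails.
# Topic names and the blocked message are invented for this sketch.
DENIED_TOPICS = {"medical advice", "legal advice"}
BLOCKED_MESSAGE = "Sorry, I can't help with that topic."

def apply_guardrail(model_output, detected_topics):
    """Replace the response if any detected topic is on the deny list."""
    if DENIED_TOPICS & set(detected_topics):
        return BLOCKED_MESSAGE
    return model_output

print(apply_guardrail("Take two aspirin and...", ["medical advice"]))
# Sorry, I can't help with that topic.
```

The real service layers on word filters, PII redaction, and harmful-content categories, but the interception point is the same: between the model and the user.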

Domain 5: Security and Compliance (14%)

This domain overlaps with general AWS security knowledge but focuses it through an AI lens:

  • Data privacy. How Bedrock handles customer data (it doesn't use your data to train base models), encryption at rest and in transit, data residency considerations.
  • IAM for AI services. Least-privilege access for Bedrock, SageMaker, and the purpose-built AI services. Service-linked roles.
  • Compliance frameworks. HIPAA, SOC, GDPR as they apply to AI workloads. The exam doesn't test compliance law in depth, but it expects you to know that these frameworks exist and that AWS services support them.
  • Data governance. Versioning training data, auditing model access, logging API calls through CloudTrail. The principle that you should be able to trace what data trained which model and who accessed the results.
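As an example of least-privilege access for Bedrock, here is a sketch of an IAM policy that allows invoking one specific foundation model and nothing else. The region and model ID are illustrative; the bedrock:InvokeModel action and the foundation-model ARN format are real:

```python
import json

# Least-privilege sketch: permit invoking a single Bedrock foundation model.
# Scope Resource to exactly the models a workload needs rather than "*".
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1",
        }
    ],
}
print(json.dumps(policy, indent=2))
```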

Study Strategy

The AIF-C01 is a broad exam that covers many services at a conceptual level. The most efficient study approach is to learn the service catalog first, then layer on the generative AI concepts.

Recommended order:

  1. Start with the AI service map. Make a list of every AWS AI service, what it does, and one example use case for each. Rekognition = image analysis, Comprehend = NLP, Textract = document extraction, and so on. This mapping exercise handles a third of the exam by itself.
  2. Learn Amazon Bedrock. Understand the service architecture: foundation model access, Knowledge Bases for RAG, Agents for task orchestration, Guardrails for safety. Try the console if you have an AWS account; even browsing the model catalog builds useful familiarity.
  3. Study generative AI concepts. LLMs, transformers (at a high level), prompt engineering, fine-tuning versus RAG, embeddings. You don't need to understand attention mechanisms mathematically, but you need to know why transformers process sequences better than RNNs and what that means for practical applications.
  4. Cover responsible AI and security. These two domains together are 28% of the exam. Read the AWS Responsible AI documentation and the Bedrock security section. Know SageMaker Clarify at a conceptual level.

For someone with existing AI/ML knowledge who needs to learn the AWS service layer, one to two weeks of focused study is realistic. If you're starting from scratch on both AI concepts and AWS, budget four to six weeks.

Where Candidates Lose Points

Mixing up similar services. Comprehend versus Textract versus Kendra. All three process text, but for completely different purposes. Comprehend analyzes meaning (sentiment, entities). Textract extracts structured data from documents. Kendra searches across documents. If the question says "extract table data from scanned invoices," it's Textract. If it says "find all documents related to a topic," it's Kendra. If it says "determine customer sentiment from reviews," it's Comprehend.

Skipping responsible AI. At 14%, some candidates treat it as optional. It's not. The questions are specific: what type of bias is present, how would you detect it, which AWS tool addresses it. Generic "AI should be fair" answers won't pass.

Overthinking the ML fundamentals. This is a practitioner exam, not an engineer exam. You need to know what supervised learning is and when to use it. You don't need to know how gradient descent converges or what a learning rate schedule looks like. If you're spending time on calculus or linear algebra, you're studying for the wrong cert.

Test Day

90 minutes goes fast. Budget your time by domain weight: the foundation model applications questions (28%) will take the most time because they tend to be scenario-based. The ML fundamentals and responsible AI questions are usually shorter and more factual.

When you hit a service-identification question you're unsure about, eliminate the services that clearly don't fit and pick from what's left. AWS exam questions are designed so that at least two options are obviously wrong if you know the service catalog. Getting it down to two choices and making an informed guess is better than burning three minutes trying to recall a detail from a documentation page you skimmed once.

Anthony C. Perry

M.S. Computer Science, M.S. Kinesiology. USAF veteran and founder of Meridian Labs.