Snowflake replaced the COF-C02 with the COF-C03 on February 16, 2026. The old exam retires on May 14. If you're studying now, you're studying for the C03, and it covers several topics that didn't exist in the previous version: Snowflake Cortex AI, Apache Iceberg tables, Snowflake Notebooks, and Git integration. Everything else from the C02 carried forward, so existing study materials still apply for the foundational content. But you'll need to fill the gaps on the new additions yourself.

The exam is 100 questions in 115 minutes, scored on a 1,000-point scale with 750 to pass. That's a 75% threshold, which sounds manageable until you realize that many questions are multi-select (choose two or three correct answers) and partial credit isn't guaranteed. The exam costs $175 per attempt, and the certification is valid for two years.

The Five Domains

The COF-C03 is organized into five weighted domains. Your study time should roughly track these percentages, with extra weight wherever your diagnostic scores are lowest.

  • Snowflake Architecture & Features (31%) — The three-layer architecture, virtual warehouses, micro-partitions, caching, cloning, time travel, Cortex AI
  • Data Querying & Performance (21%) — SQL functions, query profiling, clustering, materialized views, result set caching, query history
  • Account Access & Governance (20%) — RBAC, DAC, secondary roles, masking policies, row access policies, network policies, data classification
  • Data Loading & Transformation (18%) — Stages, file formats, COPY INTO, Snowpipe, Snowpipe Streaming, streams, tasks, dynamic tables
  • Data Sharing & Collaboration (10%) — Secure views, data sharing, Snowflake Marketplace, listings, reader accounts

Architecture and Features at 31% is the largest domain by a wide margin. If you have to pick one area to over-prepare, it's this one.

Snowflake's Three-Layer Architecture

Every SnowPro Core question assumes you understand how Snowflake separates storage, compute, and cloud services. This isn't just conceptual; questions will test whether you know which layer handles a specific function.

The storage layer holds all table data in a proprietary compressed columnar format. Data is organized into micro-partitions, immutable chunks of 50 to 500 MB of uncompressed data. You don't choose how data is partitioned; Snowflake does it automatically based on insertion order. Each micro-partition stores column-level metadata (min/max values, distinct counts), and the query engine uses this metadata to skip irrelevant partitions entirely. This is called pruning, and it's the primary mechanism behind Snowflake's query performance on large tables.

The compute layer consists of virtual warehouses. Each warehouse is an independent cluster of compute nodes. Warehouses don't share resources with each other, which means one team's heavy workload won't slow down another team's queries. You can suspend warehouses when they're idle and resume them in seconds. Warehouses come in T-shirt sizes (XS through 6XL), and each size up roughly doubles the compute capacity and cost. Multi-cluster warehouses can auto-scale by adding clusters during high concurrency.
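The warehouse parameters the exam cares about most show up directly in DDL. A minimal sketch, with an illustrative warehouse name:

```sql
-- Illustrative warehouse definition: auto-suspends after 60 seconds of
-- inactivity and auto-resumes on the next query that targets it.
CREATE WAREHOUSE IF NOT EXISTS analytics_wh
  WAREHOUSE_SIZE      = 'XSMALL'  -- T-shirt size; each step up roughly doubles credits
  AUTO_SUSPEND        = 60        -- seconds idle before suspending
  AUTO_RESUME         = TRUE
  INITIALLY_SUSPENDED = TRUE;     -- don't consume credits until first use
```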

The cloud services layer sits on top and handles authentication, metadata management, query parsing and optimization, access control, and infrastructure management. It's always running; you don't provision it. This layer also manages the metadata store that powers features like time travel, zero-copy cloning, and data sharing. On the exam, if a question asks about query compilation, transaction management, or access control enforcement, the answer is the cloud services layer.

Virtual Warehouses and Caching

Warehouse sizing questions are common. The key concept: scaling up (bigger warehouse) helps with complex queries on large datasets. Scaling out (more clusters via multi-cluster warehouses) helps with concurrency. If one user's query is slow, you need a bigger warehouse. If many users' queries are queuing, you need more clusters.
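The two scaling directions map to two different statements. A sketch, assuming a warehouse named bi_wh (multi-cluster warehouses require Enterprise edition or higher):

```sql
-- Scaling out for concurrency: a multi-cluster warehouse adds clusters
-- when queries queue and removes them as load drops.
CREATE WAREHOUSE IF NOT EXISTS bi_wh
  WAREHOUSE_SIZE    = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY    = 'STANDARD';  -- favors starting clusters over queuing

-- Scaling up for a single complex query: resize the warehouse instead.
ALTER WAREHOUSE bi_wh SET WAREHOUSE_SIZE = 'LARGE';
```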

Snowflake has three caching layers, and the exam tests all three:

  • Result cache: If you run the exact same query and the underlying data hasn't changed, Snowflake returns the cached result instantly. No warehouse needed. This cache persists for 24 hours, and each reuse resets that window, up to a maximum of 31 days.
  • Local disk cache (SSD cache): When a warehouse reads data from remote storage, it caches the raw data on local SSDs. Subsequent queries against the same data read from the local cache instead of remote storage. This cache only lives as long as the warehouse is running.
  • Metadata cache: The cloud services layer caches metadata like row counts, min/max values, and table statistics. Queries that only need metadata (like SELECT COUNT(*) FROM table) can return without touching the warehouse at all.

Suspending a warehouse clears the local disk cache. This is a frequently tested point. If you suspend and resume, the first queries will be slower because they're pulling from remote storage again.
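Two small experiments make these caches concrete, assuming a table named orders exists:

```sql
-- Disable the result cache for the session when benchmarking,
-- so repeated runs actually hit the warehouse (default is TRUE).
ALTER SESSION SET USE_CACHED_RESULT = FALSE;

-- Metadata-only query: answered by the cloud services layer
-- from cached statistics, without a running warehouse.
SELECT COUNT(*) FROM orders;
```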

Time Travel and Fail-Safe

Time travel lets you query data as it existed at a previous point in time, restore dropped tables, or undo accidental changes. The retention period depends on your Snowflake edition: Standard gets 1 day, Enterprise and higher get up to 90 days (configurable per table with DATA_RETENTION_TIME_IN_DAYS).

You can query historical data using AT or BEFORE clauses with timestamps, offsets, or statement IDs. For example: SELECT * FROM my_table AT(TIMESTAMP => '2026-03-01 12:00:00'::timestamp).

After the time travel period expires, data enters Fail-Safe, a 7-day window where Snowflake can recover data, but only through a support request. You can't self-serve Fail-Safe recovery. Temporary and transient tables have no Fail-Safe period at all, which makes them useful for staging data where you don't need long-term protection.
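The retention and recovery mechanics translate into a few statements worth practicing. A sketch with an illustrative table name (the statement ID is a placeholder for a real query ID):

```sql
-- Extend time travel on an Enterprise account (default is 1 day).
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 30;

-- Query the table as it looked one hour ago (offset is in seconds).
SELECT * FROM orders AT(OFFSET => -3600);

-- Or as it looked just before a specific statement ran.
SELECT * FROM orders BEFORE(STATEMENT => '<query_id>');

-- Restore a dropped table within its retention period; no support ticket needed.
DROP TABLE orders;
UNDROP TABLE orders;
```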

Data Loading: Stages, Snowpipe, Streams, and Tasks

Data loading is 18% of the exam and covers a lot of ground. Start with stages.

Stages are locations where data files sit before being loaded. Internal stages are managed by Snowflake (table stages, user stages, named stages). External stages point to S3, Azure Blob, or GCS. The COPY INTO command loads data from stages into tables. Know the difference between COPY INTO <table> (loading) and COPY INTO <location> (unloading).
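A minimal load-and-unload round trip, with illustrative stage, format, and table names:

```sql
-- Reusable file format and a named internal stage that defaults to it.
CREATE FILE FORMAT csv_fmt TYPE = 'CSV' SKIP_HEADER = 1;
CREATE STAGE raw_stage FILE_FORMAT = (FORMAT_NAME = 'csv_fmt');

-- Loading: stage -> table.
COPY INTO customers
  FROM @raw_stage/customers/
  ON_ERROR = 'SKIP_FILE';   -- skip a bad file instead of aborting the load

-- Unloading: table -> stage. Same command, reversed direction.
COPY INTO @raw_stage/export/
  FROM customers
  FILE_FORMAT = (FORMAT_NAME = 'csv_fmt');
```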

Snowpipe automates continuous data loading. It uses a queue-based system: when new files land in a stage, Snowpipe picks them up and loads them automatically using a Snowflake-managed warehouse (serverless). You don't pay for a dedicated warehouse; you pay per-file compute time. Snowpipe Streaming, new to the COF-C03 syllabus, allows row-level ingestion without staging files first.
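A pipe is essentially a saved COPY INTO statement plus an ingestion trigger. A sketch, assuming an external stage named ext_stage already points at the bucket and cloud event notifications (for example, S3 to SQS) are configured:

```sql
-- Auto-ingest pipe: serverless compute loads each new file as it lands.
CREATE PIPE customer_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO customers
  FROM @ext_stage/customers/
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);
```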

Streams capture change data on a table (inserts, updates, deletes). They track a point-in-time offset and give you a view of what changed since that offset. Streams are how you build CDC (change data capture) pipelines in Snowflake.

Tasks are scheduled SQL executions. You can chain tasks into DAGs (directed acyclic graphs) where a root task fires on a schedule and child tasks run after their parent completes. The common exam pattern: use a stream to detect changes, then trigger a task to process those changes. This stream-plus-task combination is a core data pipeline pattern in Snowflake and comes up repeatedly.
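The stream-plus-task pattern described above can be sketched in a few statements; the table and warehouse names are illustrative:

```sql
-- Stream tracks inserts/updates/deletes on the source since its offset.
CREATE STREAM orders_stream ON TABLE raw_orders;

-- Task polls on a schedule but only runs when the stream has new data.
CREATE TASK process_orders
  WAREHOUSE = etl_wh
  SCHEDULE  = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO clean_orders
  SELECT order_id, amount
  FROM orders_stream
  WHERE METADATA$ACTION = 'INSERT';

-- Tasks are created suspended; resume to start the schedule.
ALTER TASK process_orders RESUME;
```

Consuming the stream in a DML statement advances its offset, so the next task run sees only changes made after this one.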

Dynamic tables, also new to the COF-C03, let you define a target table as a SQL query. Snowflake automatically keeps the target table up to date. Think of them as self-refreshing materialized views with full DML tracking.
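The declarative flavor of dynamic tables is easiest to see in DDL. A sketch with illustrative names:

```sql
-- Declare the result you want; Snowflake keeps the table refreshed
-- to within TARGET_LAG of its source tables.
CREATE DYNAMIC TABLE daily_revenue
  TARGET_LAG = '15 minutes'
  WAREHOUSE  = etl_wh
AS
  SELECT order_date, SUM(amount) AS revenue
  FROM clean_orders
  GROUP BY order_date;
```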

Access Control and Governance

Snowflake uses a combination of Role-Based Access Control (RBAC) and Discretionary Access Control (DAC). Every object has an owner (the role that created it), and that owner can grant privileges to other roles. The system-defined roles form a hierarchy: ACCOUNTADMIN at the top, SECURITYADMIN and SYSADMIN directly beneath it, USERADMIN under SECURITYADMIN, and PUBLIC at the bottom.

Key concepts for the exam:

  • Secondary roles let a user activate multiple roles in a single session, combining their privileges. This is new to the COF-C03.
  • Masking policies (dynamic data masking) conditionally hide column values based on the querying user's role. A common example: show full SSNs to HR roles, show only the last four digits to everyone else.
  • Row access policies filter rows based on the querying role. Different users running the same query see different rows.
  • Network policies restrict access by IP address. They apply at the account or user level.
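Two of these concepts map to short, exam-relevant statements. A sketch, with hypothetical role, table, and policy names:

```sql
-- Dynamic data masking: HR sees full SSNs, everyone else the last four digits.
CREATE MASKING POLICY ssn_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('HR_ROLE') THEN val
       ELSE '***-**-' || RIGHT(val, 4)
  END;

-- Policies are defined once and attached to columns.
ALTER TABLE employees MODIFY COLUMN ssn SET MASKING POLICY ssn_mask;

-- Secondary roles: combine the privileges of all roles granted to the user
-- in the current session.
USE SECONDARY ROLES ALL;
```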

Governance questions often test whether you know which role can perform a specific action. ACCOUNTADMIN can do everything. SECURITYADMIN manages grants and roles. SYSADMIN manages warehouses and databases. USERADMIN creates users and roles. Mixing these up is a common mistake.

Data Sharing and Collaboration

This domain is only 10%, but it's straightforward and practically free points if you study it. Snowflake's data sharing model is "zero-copy": the provider creates a share, adds objects to it, and the consumer accesses the shared data without any data movement or duplication. The consumer pays for their own compute; the provider pays nothing extra for sharing.

Secure views are important here. A standard view exposes its definition to anyone who can query it. A secure view hides the definition, which matters when you're sharing data with external parties and don't want them to see your underlying table structure or business logic.

The Snowflake Marketplace lets providers publish datasets (free or paid) to a catalog that any Snowflake customer can browse. Reader accounts let you share data with organizations that don't have their own Snowflake account; the provider pays for the consumer's compute in that case.
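The provider-side workflow ties secure views and shares together. A sketch with illustrative object names (the consumer account identifier is a placeholder):

```sql
-- Hide the view definition from consumers before sharing it.
CREATE SECURE VIEW shared_db.public.v_sales AS
  SELECT region, SUM(amount) AS total
  FROM sales
  GROUP BY region;

-- Zero-copy share: grant access to the objects, then attach consumers.
CREATE SHARE sales_share;
GRANT USAGE  ON DATABASE shared_db               TO SHARE sales_share;
GRANT USAGE  ON SCHEMA   shared_db.public        TO SHARE sales_share;
GRANT SELECT ON VIEW     shared_db.public.v_sales TO SHARE sales_share;
ALTER SHARE sales_share ADD ACCOUNTS = <org_name>.<account_name>;
```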

What's New in the COF-C03

If you studied for the COF-C02, these are the topics you need to add:

  • Snowflake Cortex AI: SQL-callable AI functions (COMPLETE, EXTRACT_ANSWER, SENTIMENT, SUMMARIZE, TRANSLATE). Know what each function does and that they run inside Snowflake without external API calls.
  • Apache Iceberg tables: Snowflake can read and write Iceberg-format tables, enabling interoperability with other engines (Spark, Trino). Know when you'd choose an Iceberg table over a native Snowflake table.
  • Snowflake Notebooks: Built-in notebooks for Python and SQL, similar to Jupyter but running inside Snowflake. They can use warehouse compute or Snowpark Container Services.
  • Git integration: Version control for Snowflake objects. The exam may test awareness-level questions on CI/CD pipelines for Snowflake.
  • FinOps and cost attribution: Resource monitors, cost management views, usage governance. Expect questions on how to track and control spending.
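Two of the new areas are easy to try hands-on. A sketch, assuming a warehouse named etl_wh exists; the quota and thresholds are illustrative:

```sql
-- Cortex AI functions run inside Snowflake; no external API calls or keys.
SELECT SNOWFLAKE.CORTEX.SENTIMENT('The migration went smoothly.');
SELECT SNOWFLAKE.CORTEX.TRANSLATE('Guten Morgen', 'de', 'en');

-- FinOps: a resource monitor that notifies at 90% of the monthly credit
-- quota and suspends assigned warehouses at 100%.
CREATE RESOURCE MONITOR monthly_cap WITH
  CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 90  PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE etl_wh SET RESOURCE_MONITOR = monthly_cap;
```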

Study Plan

Most candidates with some Snowflake experience pass the COF-C03 with 2 to 4 weeks of focused study. If you're coming in cold, plan for 4 to 6 weeks. Here's a structure that works.

Week 1: Read the Snowflake documentation on architecture, virtual warehouses, and micro-partitions. These pages are the primary source material; the exam draws directly from them. Run a few queries in a trial account to see how warehouses auto-suspend, how caching works, and how query profiling surfaces scan statistics.

Week 2: Cover data loading end to end. Set up a stage, load files with COPY INTO, configure Snowpipe on an S3 bucket, create a stream on a table, and build a task that consumes it. Hands-on experience with these features sticks far better than reading about them.

Week 3: Governance, access control, and data sharing. Create custom roles, set up masking policies, build a share, and create a secure view. Also cover time travel and cloning by actually dropping and restoring tables. This is the week to also study the COF-C03-specific additions: Cortex AI functions, Iceberg tables, and Notebooks.

Week 4: Practice exams under timed conditions. Review every wrong answer and trace it back to the documentation. Pay attention to multi-select questions where you need to pick exactly two or three correct options; getting one wrong in a set can cost the entire question.

The Snowflake documentation is the single best study resource. The exam is written from it. Third-party courses can help structure your study, but they're supplements, not substitutes.

TechPrep SnowPro Core

2,050+ practice questions across all five COF-C03 domains. Confidence calibration, spaced repetition, and exam readiness tracking built on cognitive science research. Coming Spring 2026.

Common Mistakes on Exam Day

Time is generous at 115 minutes for 100 questions, but multi-select questions slow people down. Read the question stem carefully for how many answers it wants. "Choose two" means exactly two. Don't second-guess and add a third.

Watch for questions that look like they're about one domain but actually test another. A question about why a query is slow might seem like a performance question, but the answer could be about warehouse sizing (architecture domain) or about clustering keys (performance domain). Read the answer options before committing.

The exam doesn't penalize guessing. If you're stuck, eliminate what you can and pick. Don't leave questions blank.

One more thing: the exam environment allows you to flag questions and return to them. Unlike some adaptive exams, you can go back. Use this. Flag anything you're uncertain about, finish the rest, and revisit flagged questions with whatever time remains.

Anthony C. Perry

M.S. Computer Science, M.S. Kinesiology. USAF veteran and founder of Meridian Labs.