Exam4Training

Snowflake COF-C02 SnowPro Core Certification Exam Online Training

Question #1

The fail-safe retention period is how many days?

  • A . 1 day
  • B . 7 days
  • C . 45 days
  • D . 90 days

Correct Answer: B

Explanation:

Fail-safe is a feature in Snowflake that provides an additional layer of data protection. After the Time Travel retention period ends, Fail-safe offers a non-configurable 7-day period during which historical data may be recoverable by Snowflake. This period is designed to protect against accidental data loss and is not intended for customer access.

Reference: Understanding and viewing Fail-safe | Snowflake Documentation
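For context, a minimal sketch (table name hypothetical): Time Travel retention is configurable per object, while the Fail-safe period that follows it is fixed and managed entirely by Snowflake.

-- Time Travel retention is configurable (0-90 days, depending on edition);
-- the Fail-safe window that follows it is always 7 days for permanent tables.
ALTER TABLE my_table SET DATA_RETENTION_TIME_IN_DAYS = 1;
SHOW PARAMETERS LIKE 'DATA_RETENTION_TIME_IN_DAYS' IN TABLE my_table;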

Question #2

True or False: A 4X-Large Warehouse may, at times, take longer to provision than an X-Small Warehouse.

  • A . True
  • B . False

Correct Answer: A

Explanation:

Provisioning time can vary based on the size of the warehouse. A 4X-Large Warehouse typically has more resources and may take longer to provision compared to an X-Small Warehouse, which has fewer resources and can generally be provisioned more quickly.

Reference: Overview of Warehouses | Snowflake Documentation

Question #3

How would you determine the size of the virtual warehouse used for a task?

  • A . Root task may be executed concurrently (i.e. multiple instances), it is recommended to leave some margins in the execution window to avoid missing instances of execution
  • B . Querying (select) the size of the stream content would help determine the warehouse size. For example, if querying large stream content, use a larger warehouse size
  • C . If using the stored procedure to execute multiple SQL statements, it’s best to test run the stored procedure separately to size the compute resource first
  • D . Since task infrastructure is based on running the task body on schedule, it’s recommended to configure the virtual warehouse for automatic concurrency handling using Multi-cluster warehouse (MCW) to match the task schedule

Correct Answer: D

Explanation:

The size of the virtual warehouse for a task can be configured to handle concurrency automatically using a Multi-cluster warehouse (MCW). This is because tasks are designed to run their body on a schedule, and MCW allows for scaling compute resources to match the task’s execution needs without manual intervention.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Question #4

The Information Schema and Account Usage Share provide storage information for which of the following objects? (Choose three.)

  • A . Users
  • B . Tables
  • C . Databases
  • D . Internal Stages

Correct Answer: B, C, D

Explanation:

The Information Schema and Account Usage Share in Snowflake provide metadata and historical usage data for various objects within a Snowflake account. Specifically, they offer storage information for Tables, Databases, and Internal Stages. These schemas contain views and table functions that allow users to query object metadata and usage metrics, such as the amount of data stored and historical activity.

Tables: The storage information includes data on the daily average amount of data in database tables.

Databases: For databases, the storage usage is calculated based on all the data contained within the database, including tables and stages.

Internal Stages: Internal stages are locations within Snowflake for temporarily storing data, and their storage usage is also tracked.

Reference: The information is verified according to the SnowPro Core Certification Study Guide and Snowflake documentation

Question #5

What is the default File Format used in the COPY command if one is not specified?

  • A . CSV
  • B . JSON
  • C . Parquet
  • D . XML

Correct Answer: A

Explanation:

The default file format for the COPY command in Snowflake, when not specified, is CSV (Comma-Separated Values). This format is widely used for data exchange because it is simple, easy to read, and supported by many data analysis tools.

Question #6

True or False: Reader Accounts are able to extract data from shared data objects for use outside of Snowflake.

  • A . True
  • B . False

Correct Answer: B

Explanation:

Reader accounts in Snowflake are designed to allow users to read data shared with them but do not have the capability to extract data for use outside of Snowflake. They are intended for consuming shared data within the Snowflake environment only.

Question #7

True or False: Loading data into Snowflake requires that source data files be no larger than 16MB.

  • A . True
  • B . False

Correct Answer: B

Explanation:

Snowflake does not require source data files to be no larger than 16MB. In fact, Snowflake recommends that for optimal load performance, data files should be roughly 100-250 MB in size when compressed. However, it is not recommended to load very large files (e.g., 100 GB or larger) due to potential delays and wasted credits if errors occur. Smaller files should be aggregated to minimize processing overhead, and larger files should be split to distribute the load among compute resources in an active warehouse.

Reference: Preparing your data files | Snowflake Documentation

Question #8

True or False: A Virtual Warehouse can be resized while suspended.

  • A . True
  • B . False

Correct Answer: A

Explanation:

Virtual Warehouses in Snowflake can indeed be resized while they are suspended. Resizing a warehouse involves changing the number of compute resources (servers) allocated to it, which can be done to adjust performance and cost. When a warehouse is suspended, it is not currently running any queries, but its definition and metadata remain intact, allowing for modifications like resizing.

Reference: https://docs.snowflake.com/en/user-guide/warehouses-tasks.html#effects-of-resizing-a-suspended-warehouse
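As a brief illustration (warehouse name hypothetical), both statements below succeed while the warehouse is suspended; the new size takes effect once it resumes:

ALTER WAREHOUSE my_wh SUSPEND;
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';  -- allowed while suspended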

Question #9

True or False: When you create a custom role, it is a best practice to immediately grant that role to ACCOUNTADMIN.

  • A . True
  • B . False

Correct Answer: B

Explanation:

The ACCOUNTADMIN role is the most powerful role in Snowflake and should be limited to a select number of users within an organization. It is responsible for account-level configurations and should not be used for day-to-day object creation or management. Granting a custom role to ACCOUNTADMIN could inadvertently give broad access to users with this role, which is not a recommended security practice.

Reference: https://docs.snowflake.com/en/user-guide/security-access-control-considerations.html

Question #10

What are two ways to create and manage Data Shares in Snowflake? (Choose two.)

  • A . Via the Snowflake Web Interface (UI)
  • B . Via the data_share=true parameter
  • C . Via SQL commands
  • D . Via Virtual Warehouses

Correct Answer: A, C

Explanation:

In Snowflake, Data Shares can be created and managed in two primary ways:

Via the Snowflake Web Interface (UI): Users can create and manage shares through the graphical interface provided by Snowflake, which allows for a user-friendly experience.

Via SQL commands: Snowflake also allows the creation and management of shares using SQL commands. This method is more suited for users who prefer scripting or need to automate the process.

Reference: https://docs.snowflake.com/en/user-guide/data-sharing-provider.html
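A minimal sketch of the SQL path (object and account names hypothetical):

CREATE SHARE my_share;
GRANT USAGE ON DATABASE my_db TO SHARE my_share;
GRANT USAGE ON SCHEMA my_db.public TO SHARE my_share;
GRANT SELECT ON TABLE my_db.public.customers TO SHARE my_share;
ALTER SHARE my_share ADD ACCOUNTS = consumer_account;  -- grant the consumer account access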

Question #11

True or False: Fail-safe can be disabled within a Snowflake account.

  • A . True
  • B . False

Correct Answer: B

Explanation:

Reference: https://docs.snowflake.com/en/user-guide/data-failsafe.html

Separate and distinct from Time Travel, Fail-safe ensures historical data is protected in the event of a system failure or other catastrophic event, e.g. a hardware failure or security breach. The Fail-safe feature cannot be enabled or disabled by the user.

Question #12

True or False: It is possible for a user to run a query against the query result cache without requiring an active Warehouse.

  • A . True
  • B . False

Correct Answer: A

Explanation:

Snowflake’s architecture allows for the use of a query result cache that stores the results of queries for a period of time. If the same query is run again and the underlying data has not changed, Snowflake can retrieve the result from this cache without needing to re-run the query on an active warehouse, thus saving on compute resources.
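A hedged sketch of how this behavior is typically observed (table name hypothetical):

-- First execution runs on a warehouse and populates the result cache.
SELECT region, SUM(amount) FROM sales GROUP BY region;
-- An identical re-run within 24 hours, with unchanged underlying data,
-- is served from the result cache and succeeds even if the warehouse
-- is suspended. Reuse of cached results is controlled by this parameter:
ALTER SESSION SET USE_CACHED_RESULT = TRUE;  -- TRUE is the default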

Question #13

A virtual warehouse’s auto-suspend and auto-resume settings apply to which of the following?

  • A . The primary cluster in the virtual warehouse
  • B . The entire virtual warehouse
  • C . The database in which the virtual warehouse resides
  • D . The Queries currently being run on the virtual warehouse

Correct Answer: B

Explanation:

The auto-suspend and auto-resume settings in Snowflake apply to the entire virtual warehouse. These settings allow the warehouse to automatically suspend when it’s not in use, helping to save on compute costs. When queries or tasks are submitted to the warehouse, it can automatically resume operation. This functionality is designed to optimize resource usage and cost-efficiency.

Reference: SnowPro Core Certification Exam Study Guide

Snowflake documentation on virtual warehouses and their settings
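A minimal example (name and values illustrative); note that both settings are defined on the warehouse as a whole:

CREATE WAREHOUSE my_wh
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 300    -- suspend the entire warehouse after 300 seconds of inactivity
  AUTO_RESUME = TRUE;   -- resume the entire warehouse when a new query arrives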

Question #14

Which of the following Snowflake features provide continuous data protection automatically? (Select TWO).

  • A . Internal stages
  • B . Incremental backups
  • C . Time Travel
  • D . Zero-copy clones
  • E . Fail-safe

Correct Answer: C, E

Explanation:

Snowflake’s Continuous Data Protection (CDP) encompasses a set of features that help protect data stored in Snowflake against human error, malicious acts, and software failure. Time Travel allows users to access historical data (i.e., data that has been changed or deleted) for a defined period, enabling querying and restoring of data. Fail-safe is an additional layer of data protection that provides a recovery option in the event of significant data loss or corruption, which can only be performed by Snowflake.

Reference: Continuous Data Protection | Snowflake Documentation; Data Storage Considerations | Snowflake Documentation; Snowflake SnowPro Core Certification Study Guide; Snowflake Data Cloud Glossary

https://docs.snowflake.com/en/user-guide/data-availability.html

Question #15

Which of the following conditions must be met in order to return results from the results cache? (Select TWO).

  • A . The user has the appropriate privileges on the objects associated with the query
  • B . Micro-partitions have been reclustered since the query was last run
  • C . The new query is run using the same virtual warehouse as the previous query
  • D . The query includes a User Defined Function (UDF)
  • E . The query has been run within 24 hours of the previously-run query

Correct Answer: A, E

Explanation:

To return results from the results cache in Snowflake, certain conditions must be met:

Privileges: The user must have the appropriate privileges on the objects associated with the query.

This ensures that only authorized users can access cached data.

Time Frame: The query must have been run within 24 hours of the previously-run query. Snowflake’s results cache is designed to store the results of queries for a short period, typically 24 hours, to improve performance for repeated queries.

Question #16

Which of the following are benefits of micro-partitioning? (Select TWO)

  • A . Micro-partitions cannot overlap in their range of values
  • B . Micro-partitions are immutable objects that support the use of Time Travel.
  • C . Micro-partitions can reduce the amount of I/O from object storage to virtual warehouses
  • D . Rows are automatically stored in sorted order within micro-partitions
  • E . Micro-partitions can be defined on a schema-by-schema basis

Correct Answer: B, C

Explanation:

Micro-partitions in Snowflake are immutable objects, which means once they are written, they cannot be modified. This immutability supports the use of Time Travel, allowing users to access historical data within a defined period. Additionally, micro-partitions can significantly reduce the amount of I/O from object storage to virtual warehouses. This is because Snowflake’s query optimizer can skip over micro-partitions that do not contain relevant data for a query, thus reducing the amount of data that needs to be scanned and transferred.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

https://docs.snowflake.com/en/user-guide/tables-clustering-micropartitions.html

Question #17

What is the minimum Snowflake edition required to create a materialized view?

  • A . Standard Edition
  • B . Enterprise Edition
  • C . Business Critical Edition
  • D . Virtual Private Snowflake Edition

Correct Answer: B

Explanation:

Materialized views in Snowflake are a feature that allows for the pre-computation and storage of query results for faster query performance. This feature is available starting from the Enterprise Edition of Snowflake. It is not available in the Standard Edition, and while it is also available in higher editions like Business Critical and Virtual Private Snowflake, the Enterprise Edition is the minimum requirement.

Reference: Snowflake Documentation on CREATE MATERIALIZED VIEW; Snowflake Documentation on Working with Materialized Views

https://docs.snowflake.com/en/sql-reference/sql/create-materialized-view.html#:~:text=Materialized%20views%20require%20Enterprise%20Edition,upgrading%2C%20please%20contact%20Snowflake%20Support.

Question #18

What happens to the underlying table data when a CLUSTER BY clause is added to a Snowflake table?

  • A . Data is hashed by the cluster key to facilitate fast searches for common data values
  • B . Larger micro-partitions are created for common data values to reduce the number of partitions that must be scanned
  • C . Smaller micro-partitions are created for common data values to allow for more parallelism
  • D . Data may be colocated by the cluster key within the micro-partitions to improve pruning performance

Correct Answer: D

Explanation:

When a CLUSTER BY clause is added to a Snowflake table, it specifies one or more columns to organize the data within the table’s micro-partitions. This clustering aims to colocate data with similar values in the same or adjacent micro-partitions. By doing so, it enhances the efficiency of query pruning, where the Snowflake query optimizer can skip over irrelevant micro-partitions that do not contain the data relevant to the query, thereby improving performance.

Reference: Snowflake Documentation on Clustering Keys & Clustered Tables.

Community discussions on how source data’s ordering affects a table with a cluster key
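A short sketch (table and column names hypothetical) of adding a clustering key to an existing table:

ALTER TABLE sales CLUSTER BY (sale_date, region);
-- Snowflake then maintains co-location of rows with similar key values in the
-- background, improving partition pruning for queries that filter on these columns.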

Question #19

Which feature is only available in the Enterprise or higher editions of Snowflake?

  • A . Column-level security
  • B . SOC 2 type II certification
  • C . Multi-factor Authentication (MFA)
  • D . Object-level access control

Correct Answer: A

Explanation:

Column-level security is a feature that allows fine-grained control over access to specific columns within a table. This is particularly useful for managing sensitive data and ensuring that only authorized users can view or manipulate certain pieces of information. This feature is available in the Enterprise Edition and higher editions of Snowflake.

Reference: Snowflake Documentation on Snowflake Editions

https://docs.snowflake.com/en/user-guide/intro-editions.html

Question #20

Which of the following are valid methods for authenticating users for access into Snowflake? (Select THREE)

  • A . SCIM
  • B . Federated authentication
  • C . TLS 1.2
  • D . Key-pair authentication
  • E . OAuth
  • F . OCSP authentication

Correct Answer: B, D, E

Explanation:

Snowflake supports several methods for authenticating users, including federated authentication, key-pair authentication, and OAuth. Federated authentication allows users to authenticate using their organization’s identity provider. Key-pair authentication uses a public-private key pair for secure login, and OAuth is an open standard for access delegation commonly used for token-based authentication.

Reference: Authentication policies | Snowflake Documentation, Authenticating to the server | Snowflake Documentation, External API authentication and secrets | Snowflake Documentation.

Question #21

During periods of warehouse contention, which parameter controls the maximum length of time a warehouse will hold a query for processing?

  • A . STATEMENT_TIMEOUT_IN_SECONDS
  • B . STATEMENT_QUEUED_TIMEOUT_IN_SECONDS
  • C . MAX_CONCURRENCY_LEVEL
  • D . QUERY_TIMEOUT_IN_SECONDS

Correct Answer: B

Explanation:

The parameter STATEMENT_QUEUED_TIMEOUT_IN_SECONDS sets the limit on how long a query will wait in the queue for its chance to run on the warehouse. The query is canceled after reaching this limit. By default, the value of this parameter is 0, which means queries will wait indefinitely in the queue.

https://community.snowflake.com/s/article/Warehouse-Concurrency-and-Statement-Timeout-Parameters
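For illustration (warehouse name and value hypothetical), the parameter can be set on a warehouse or for a session:

ALTER WAREHOUSE my_wh SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 600;  -- cancel after 10 minutes in the queue
ALTER SESSION SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 600;          -- same control at session level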

Question #22

Which of the following indicates that it may be appropriate to use a clustering key for a table? (Select TWO).

  • A . The table contains a column that has very low cardinality
  • B . DML statements that are being issued against the table are blocked
  • C . The table has a small number of micro-partitions
  • D . Queries on the table are running slower than expected
  • E . The clustering depth for the table is large

Correct Answer: D, E

Explanation:

A clustering key in Snowflake is used to co-locate similar data within the same micro-partitions to improve query performance, especially for large tables where data is not naturally ordered or has become fragmented due to extensive DML operations. The appropriate use of a clustering key can lead to improved scan efficiency and better column compression, resulting in faster query execution times.

The indicators that it may be appropriate to use a clustering key for a table include:

D. Queries on the table are running slower than expected: This can happen when the data in the table is not well-clustered, leading to inefficient scans during query execution.

E. The clustering depth for the table is large: A large clustering depth indicates that the table’s data is spread across many micro-partitions, which can degrade query performance as more data needs to be scanned.

Reference: Snowflake Documentation on Clustering Keys & Clustered Tables Snowflake Documentation on SYSTEM$CLUSTERING_INFORMATION Stack Overflow discussion on cluster key selection in Snowflake
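A quick way to check the indicator in option E (table and column names hypothetical):

SELECT SYSTEM$CLUSTERING_DEPTH('my_table', '(order_date)');
SELECT SYSTEM$CLUSTERING_INFORMATION('my_table', '(order_date)');  -- returns depth, overlaps, and a histogram as JSON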

Question #23

Which Snowflake object enables loading data from files as soon as they are available in a cloud storage location?

  • A . Pipe
  • B . External stage
  • C . Task
  • D . Stream

Correct Answer: A

Explanation:

In Snowflake, a Pipe is the object designed to enable the continuous, near-real-time loading of data from files as soon as they are available in a cloud storage location. Pipes use Snowflake’s COPY command to load data and can be associated with a Stage object to monitor for new files. When new data files appear in the stage, the pipe automatically loads the data into the target table.

Reference: Snowflake Documentation on Pipes

SnowPro® Core Certification Study Guide

https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro.html
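A minimal auto-ingest pipe definition (names hypothetical; the external stage must point at a cloud storage location configured to send event notifications):

CREATE PIPE my_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO my_table
  FROM @my_ext_stage
  FILE_FORMAT = (TYPE = 'JSON');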

Question #24

A user needs to create a materialized view in the schema MYDB.MYSCHEMA.

Which statements will provide this access?

  • A . GRANT ROLE MYROLE TO USER USER1;
    CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;
  • B . GRANT ROLE MYROLE TO USER USER1;
    CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER USER1;
  • C . GRANT ROLE MYROLE TO USER USER1;
    CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO USER1;
  • D . GRANT ROLE MYROLE TO USER USER1;
    CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE;

Correct Answer: D

Explanation:

In Snowflake, to create a materialized view, the user must have the necessary privileges on the schema where the view will be created. These privileges are granted through roles, not directly to individual users. Therefore, the correct process is to grant the role to the user and then grant the privilege to create the materialized view to the role itself.

The statement GRANT ROLE MYROLE TO USER USER1; grants the specified role to the user, allowing them to assume that role and exercise its privileges. The subsequent statement CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO MYROLE; grants the privilege to create a materialized view within the specified schema to the role MYROLE. Any user who has been granted MYROLE can then create materialized views in MYDB.MYSCHEMA.

Reference:

Snowflake Documentation on Roles

Snowflake Documentation on Materialized Views
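For reference, the fully spelled-out form of this pattern in the documentation includes the GRANT keyword on both statements (names taken from the question):

GRANT ROLE MYROLE TO USER USER1;
GRANT CREATE MATERIALIZED VIEW ON SCHEMA MYDB.MYSCHEMA TO ROLE MYROLE;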

Question #25

What is the default character set used when loading CSV files into Snowflake?

  • A . UTF-8
  • B . UTF-16
  • C . ISO 8859-1
  • D . ANSI_X3.A

Correct Answer: A

Explanation:

https://docs.snowflake.com/en/user-guide/intro-summary-loading.html

For delimited files (CSV, TSV, etc.), the default character set is UTF-8. To use any other character sets, you must explicitly specify the encoding to use for loading. For the list of supported character sets, see Supported Character Sets for Delimited Files (in this topic).

Question #26

A sales table FCT_SALES has 100 million records.

The following query was executed:

SELECT COUNT(1) FROM FCT_SALES;

How did Snowflake fulfill this query?

  • A . Query against the result set cache
  • B . Query against a virtual warehouse cache
  • C . Query against the most-recently created micro-partition
  • D . Query against the metadata cache

Correct Answer: D

Explanation:

Snowflake is designed to optimize query performance by utilizing metadata for certain types of queries. When executing a COUNT query, Snowflake can often fulfill the request by accessing metadata about the table’s row count, rather than scanning the entire table or micro-partitions. This is particularly efficient for large tables like FCT_SALES with a significant number of records. The metadata layer maintains statistics about the table, including the row count, which enables Snowflake to quickly return the result of a COUNT query without the need to perform a full scan.

Reference: Snowflake Documentation on Metadata Management

SnowPro® Core Certification Study Guide

Question #27

Which cache type is used to cache data output from SQL queries?

  • A . Metadata cache
  • B . Result cache
  • C . Remote cache
  • D . Local file cache

Correct Answer: B

Explanation:

The Result cache is used in Snowflake to cache the data output from SQL queries. This feature is designed to improve performance by storing the results of queries for a period of time. When the same or similar query is executed again, Snowflake can retrieve the result from this cache instead of re-computing the result, which saves time and computational resources.

Reference: Snowflake Documentation on Query Results Cache

SnowPro® Core Certification Study Guide

Question #28

What is a key feature of Snowflake architecture?

  • A . Zero-copy cloning creates a mirror copy of a database that updates with the original
  • B . Software updates are automatically applied on a quarterly basis
  • C . Snowflake eliminates resource contention with its virtual warehouse implementation
  • D . Multi-cluster warehouses allow users to run a query that spans across multiple clusters
  • E . Snowflake automatically sorts DATE columns during ingest for fast retrieval by date

Correct Answer: C

Explanation:

One of the key features of Snowflake’s architecture is its unique approach to eliminating resource contention through the use of virtual warehouses. This is achieved by separating storage and compute resources, allowing multiple virtual warehouses to operate independently on the same data without affecting each other. This means that different workloads, such as loading data, running queries, or performing complex analytics, can be processed simultaneously without any performance degradation due to resource contention.

Reference: Snowflake Documentation on Virtual Warehouses

SnowPro® Core Certification Study Guide

Question #29

What is a limitation of a Materialized View?

  • A . A Materialized View cannot support any aggregate functions
  • B . A Materialized View can only reference up to two tables
  • C . A Materialized View cannot be joined with other tables
  • D . A Materialized View cannot be defined with a JOIN

Correct Answer: D

Explanation:

Materialized Views in Snowflake are designed to store the result of a query and can be refreshed to maintain up-to-date data. However, they have certain limitations, one of which is that they cannot be defined using a JOIN clause. This means that a Materialized View can only be created based on a single source table and cannot combine data from multiple tables using JOIN operations.

Reference: Snowflake Documentation on Materialized Views

SnowPro® Core Certification Study Guide

Question #30

What features does Snowflake Time Travel enable? (Select TWO)

  • A . Querying data-related objects that were created within the past 365 days
  • B . Restoring data-related objects that have been deleted within the past 90 days
  • C . Conducting point-in-time analysis for BI reporting
  • D . Analyzing data usage/manipulation over all periods of time

Correct Answer: B, C

Explanation:

Snowflake Time Travel is a powerful feature that allows users to access historical data within a defined period.

It enables two key capabilities:

B. Restoring data-related objects that have been deleted within the past 90 days: Time Travel can be used to restore tables, schemas, and databases that have been accidentally or intentionally deleted within the Time Travel retention period.

C. Conducting point-in-time analysis for BI reporting: It allows users to query historical data as it appeared at a specific point in time within the Time Travel retention period, which is crucial for business intelligence and reporting purposes.

While Time Travel does allow querying of past data, it is limited to the retention period set for the Snowflake account, which is typically 1 day for standard accounts and can be extended up to 90 days for enterprise accounts. It does not enable querying or restoring objects created or deleted beyond the retention period, nor does it provide analysis over all periods of time.

Reference: Snowflake Documentation on Time Travel

SnowPro® Core Certification Study Guide
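Two hedged examples of these capabilities (table name hypothetical):

-- Point-in-time query: the table as it looked one hour ago
SELECT * FROM orders AT (OFFSET => -3600);
-- Restore a dropped object within the Time Travel retention period
UNDROP TABLE orders;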

Question #31

Which statement about billing applies to Snowflake credits?

  • A . Credits are billed per-minute with a 60-minute minimum
  • B . Credits are used to pay for cloud data storage usage
  • C . Credits are consumed based on the number of credits billed for each hour that a warehouse runs
  • D . Credits are consumed based on the warehouse size and the time the warehouse is running

Correct Answer: D

Explanation:

Snowflake credits are the unit of measure for the compute resources used in Snowflake. The number of credits consumed depends on the size of the virtual warehouse and the time it is running. Larger warehouses consume more credits per hour than smaller ones, and credits are billed for the time the warehouse is active, regardless of the actual usage within that time.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Question #32

What Snowflake features allow virtual warehouses to handle high concurrency workloads? (Select TWO)

  • A . The ability to scale up warehouses
  • B . The use of warehouse auto scaling
  • C . The ability to resize warehouses
  • D . Use of multi-clustered warehouses
  • E . The use of warehouse indexing

Correct Answer: B, D

Explanation:

Snowflake’s architecture is designed to handle high concurrency workloads through several features, two of which are particularly effective:

B. The use of warehouse auto scaling: This feature allows Snowflake to automatically adjust the compute resources allocated to a virtual warehouse in response to the workload. If there is an increase in concurrent queries, Snowflake can scale up the resources to maintain performance.

D. Use of multi-clustered warehouses: Multi-clustered warehouses enable Snowflake to run multiple clusters of compute resources simultaneously. This allows for the distribution of queries across clusters, thereby reducing the load on any single cluster and improving the system’s ability to handle a high number of concurrent queries.

These features ensure that Snowflake can manage varying levels of demand without manual intervention, providing a seamless experience even during peak usage.

Reference: Snowflake Documentation on Virtual Warehouses

SnowPro® Core Certification Study Guide

Question #33

When reviewing the load for a warehouse using the load monitoring chart, the chart indicates that a high volume of queries are always queuing in the warehouse.

According to recommended best practice, what should be done to reduce the queue volume? (Select TWO).

  • A . Use multi-clustered warehousing to scale out warehouse capacity.
  • B . Scale up the warehouse size to allow Queries to execute faster.
  • C . Stop and start the warehouse to clear the queued queries
  • D . Migrate some queries to a new warehouse to reduce load
  • E . Limit user access to the warehouse so fewer queries are run against it.

Correct Answer: A, B

Explanation:

To address a high volume of queries queuing in a warehouse, Snowflake recommends two best practices:

A. Use multi-clustered warehousing to scale out warehouse capacity: additional clusters start automatically under load, so concurrent queries are distributed across clusters instead of waiting in the queue.

B. Scale up the warehouse size to allow queries to execute faster: a larger warehouse completes individual queries sooner, which frees capacity for queued queries.

Question #34

Which of the following objects can be shared through secure data sharing?

  • A . Masking policy
  • B . Stored procedure
  • C . Task
  • D . External table

Correct Answer: D

Explanation:

Secure data sharing in Snowflake allows users to share various objects between Snowflake accounts without physically copying the data, thus not consuming additional storage. Among the options provided, external tables can be shared through secure data sharing. External tables are used to query data directly from files in a stage without loading the data into Snowflake tables, making them suitable for sharing across different Snowflake accounts.

Reference: Snowflake Documentation on Secure Data Sharing

SnowPro™ Core Certification Companion: Hands-on Preparation and Practice

Question #35

Which of the following commands cannot be used within a reader account?

  • A . CREATE SHARE
  • B . ALTER WAREHOUSE
  • C . DROP ROLE
  • D . SHOW SCHEMAS
  • E . DESCRBE TABLE

Correct Answer: A

Explanation:

In Snowflake, a reader account is a type of account that is intended for consuming shared data rather than performing any data management or DDL operations. The CREATE SHARE command is used to share data from your account with another account, which is not a capability provided to reader accounts. Reader accounts are typically restricted from creating shares, as their primary purpose is to read shared data rather than to share it themselves.

Reference: Snowflake Documentation on Reader Accounts

SnowPro® Core Certification Study Guide

Question #36

A user unloaded a Snowflake table called mytable to an internal stage called mystage.

Which command can be used to view the list of files that have been uploaded to the stage?

  • A . list @mytable;
  • B . list @%mytable;
  • C . list @%mystage;
  • D . list @mystage;

Correct Answer: D

Explanation:

The command list @mystage; is used to view the list of files that have been uploaded to an internal stage in Snowflake. The list command displays the metadata for all files in the specified stage, which in this case is mystage. This command is particularly useful for verifying that files have been successfully unloaded from a Snowflake table to the stage and for managing the files within the stage.

Reference: Snowflake Documentation on Stages

SnowPro® Core Certification Study Guide

Question #37

Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)

  • A . Customer-managed encryption keys through Tri-Secret Secure
  • B . Automatic encryption of all data
  • C . Up to 90 days of data recovery through Time Travel
  • D . Object-level access control
  • E . Column-level security to apply data masking policies to tables and views

Correct Answer: B, D

Explanation:

In all Snowflake editions, two key capabilities are universally available:

B. Automatic encryption of all data: Snowflake automatically encrypts all data stored in its platform, ensuring security and compliance with various regulations. This encryption is transparent to users and does not require any configuration or management.

D. Object-level access control: Snowflake provides granular access control mechanisms that allow administrators to define permissions at the object level, including databases, schemas, tables, and views. This ensures that only authorized users can access specific data objects.

These features are part of Snowflake’s commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.

Reference: Snowflake Documentation on Security Features

SnowPro® Core Certification Exam Study Guide

Question #38

Which command is used to unload data from a Snowflake table into a file in a stage?

  • A . COPY INTO
  • B . GET
  • C . WRITE
  • D . EXTRACT INTO

Correct Answer: A

Explanation:

The COPY INTO command is used in Snowflake to unload data from a table into a file in a stage. This command allows for the export of data from Snowflake tables into flat files, which can then be used for further analysis, processing, or storage in external systems.

Reference: Snowflake Documentation on Unloading Data

Snowflake SnowPro Core: Copy Into Command to Unload Rows to Files in Named Stage

Question #39

How often are encryption keys automatically rotated by Snowflake?

  • A . 30 Days
  • B . 60 Days
  • C . 90 Days
  • D . 365 Days

Correct Answer: A

Explanation:

Snowflake automatically rotates encryption keys when they are more than 30 days old. Active keys are retired, and new keys are created. This process is part of Snowflake’s comprehensive security measures to ensure data protection and is managed entirely by the Snowflake service without requiring user intervention.

Reference: Understanding Encryption Key Management in Snowflake

Question #40

What are value types that a VARIANT column can store? (Select TWO)

  • A . STRUCT
  • B . OBJECT
  • C . BINARY
  • D . ARRAY
  • E . CLOB

Correct Answer: B, D

Explanation:

A VARIANT column in Snowflake can store semi-structured data types.

This includes:

B. OBJECT: An object is a collection of key-value pairs in JSON, and a VARIANT column can store this type of data structure.

D. ARRAY: An array is an ordered list of zero or more values, which can be of any variant-supported data type, including objects or other arrays.

The VARIANT data type is specifically designed to handle semi-structured data like JSON, Avro, ORC, Parquet, or XML, allowing for the storage of nested and complex data structures.

Reference: Snowflake Documentation on Semi-Structured Data Types SnowPro® Core Certification Study Guide

Question #41

A user has an application that writes a new file to a cloud storage location every 5 minutes.

What would be the MOST efficient way to get the files into Snowflake?

  • A . Create a task that runs a copy into operation from an external stage every 5 minutes
  • B . Create a task that puts the files in an internal stage and automate the data loading wizard
  • C . Create a task that runs a GET operation to intermittently check for new files
  • D . Set up cloud provider notifications on the file location and use Snowpipe with auto-ingest

Correct Answer: D

Explanation:

The most efficient way to get files into Snowflake, especially when new files are being written to a cloud storage location at frequent intervals, is to use Snowpipe with auto-ingest. Snowpipe is Snowflake’s continuous data ingestion service that loads data as soon as it becomes available in a cloud storage location. By setting up cloud provider notifications, Snowpipe can be triggered automatically whenever new files are written to the storage location, ensuring that the data is loaded into Snowflake with minimal latency and without the need for manual intervention or scheduling frequent tasks.

Reference: Snowflake Documentation on Snowpipe

SnowPro® Core Certification Study Guide

Question #42

Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).

  • A . Load files that are approximately 25 MB or smaller.
  • B . Remove all dates and timestamps.
  • C . Load files that are approximately 100-250 MB (or larger)
  • D . Avoid using embedded characters such as commas for numeric data types
  • E . Remove semi-structured data types

Correct Answer: C, D

Explanation:

When loading data into Snowflake, it is recommended to:

C. Load files that are approximately 100-250 MB (or larger): This size is optimal for parallel processing and can help to maximize throughput. Smaller files can lead to overhead that outweighs the actual data processing time.

D. Avoid using embedded characters such as commas for numeric data types: Embedded characters can cause issues during data loading as they may be interpreted incorrectly. It’s best to clean the data of such characters to ensure accurate and efficient data loading.

These best practices are designed to optimize the data loading process, ensuring that data is loaded quickly and accurately into Snowflake.

Reference: Snowflake Documentation on Data Loading Considerations

[COF-C02] SnowPro Core Certification Exam Study Guide

Question #43

A user has 10 files in a stage containing new customer data. The ingest operation completes with no errors, using the following command:

COPY INTO my_table FROM @my_stage;

The next day the user adds 10 files to the stage so that now the stage contains a mixture of new customer data and updates to the previous data. The user did not remove the 10 original files.

If the user runs the same COPY INTO command, what will happen?

  • A . All data from all of the files on the stage will be appended to the table
  • B . Only data about new customers from the new files will be appended to the table
  • C . The operation will fail with the error uncertain files in stage.
  • D . All data from only the newly-added files will be appended to the table.

Correct Answer: D

Explanation:

When the COPY INTO command is executed, Snowflake consults the load metadata it maintains for the target table (retained for 64 days) and skips any staged files that have already been loaded. Since the original 10 files are still tracked in that metadata, running the same COPY INTO command again loads only the 10 newly added files. Reloading the original files would require removing them and re-staging, or using the FORCE = TRUE copy option.

Reference: Snowflake Documentation on Data Loading

SnowPro® Core Certification Study Guide

Question #44

A user has unloaded data from Snowflake to a stage.

Which SQL command should be used to validate which data was loaded into the stage?

  • A . list @file_stage
  • B . show @file_stage
  • C . view @file_stage
  • D . verify @file_stage

Correct Answer: A

Explanation:

The list command in Snowflake is used to validate and display the list of files in a specified stage. When a user has unloaded data to a stage, running the list @file_stage command will show all the files that have been uploaded to that stage, allowing the user to verify the data that was unloaded.

Reference: Snowflake Documentation on Stages

SnowPro® Core Certification Study Guide

Question #45

What happens when a cloned table is replicated to a secondary database? (Select TWO)

  • A . A read-only copy of the cloned tables is stored.
  • B . The replication will not be successful.
  • C . The physical data is replicated
  • D . Additional costs for storage are charged to a secondary account
  • E . Metadata pointers to cloned tables are replicated

Correct Answer: C, E

Explanation:

When a cloned table is replicated to a secondary database in Snowflake, the following occurs:

C. The physical data is replicated: The actual data of the cloned table is physically replicated to the secondary database. This ensures that the secondary database has its own copy of the data, which can be used for read-only purposes or failover scenarios.

E. Metadata pointers to cloned tables are replicated: Along with the physical data, the metadata pointers that refer to the cloned tables are also replicated. This metadata includes information about the structure of the table and any associated properties.

It’s important to note that while the physical data and metadata are replicated, the secondary database is typically read-only and cannot be used for write operations. Additionally, while there may be additional storage costs associated with the secondary account, this is not a direct result of the replication process but rather a consequence of storing additional data.

Reference: SnowPro Core Exam Prep (Answers to Snowflake’s LEVEL UP: Backup and Recovery); Snowflake SnowPro Core Certification Exam Questions Set 10

Question #46

Which data types does Snowflake support when querying semi-structured data? (Select TWO)

  • A . VARIANT
  • B . ARRAY
  • C . VARCHAR
  • D . XML
  • E . BLOB

Correct Answer: A, B

Explanation:

Snowflake supports querying semi-structured data using specific data types that are capable of handling the flexibility and structure of such data.

The data types supported for this purpose are:

A. VARIANT: A universal semi-structured type that can hold a value of any other supported type, including OBJECT and ARRAY, making it the primary type for loading and querying JSON, Avro, ORC, Parquet, or XML data.

B. ARRAY: An ordered list of values, each of which can itself be a VARIANT, used to represent JSON arrays.

Question #47

Which of the following describes how multiple Snowflake accounts in a single organization relate to various cloud providers?

  • A . Each Snowflake account can be hosted in a different cloud vendor and region.
  • B . Each Snowflake account must be hosted in a different cloud vendor and region
  • C . All Snowflake accounts must be hosted in the same cloud vendor and region
  • D . Each Snowflake account can be hosted in a different cloud vendor, but must be in the same region.

Correct Answer: A

Explanation:

Snowflake’s architecture allows for flexibility in account hosting across different cloud vendors and regions. This means that within a single organization, different Snowflake accounts can be set up in various cloud environments, such as AWS, Azure, or GCP, and in different geographical regions. This allows organizations to leverage the global infrastructure of multiple cloud providers and optimize their data storage and computing needs based on regional requirements, data sovereignty laws, and other considerations.

https://docs.snowflake.com/en/user-guide/intro-regions.html

Question #48

A user is loading JSON documents composed of a huge array containing multiple records into Snowflake. The user enables the STRIP_OUTER_ARRAY file format option.

What does the STRIP_OUTER_ARRAY file format do?

  • A . It removes the last element of the outer array.
  • B . It removes the outer array structure and loads the records into separate table rows.
  • C . It removes the trailing spaces in the last element of the outer array and loads the records into separate table columns
  • D . It removes the NULL elements from the JSON object eliminating invalid data and enables the ability to load the records

Correct Answer: B

Explanation:

The STRIP_OUTER_ARRAY file format option in Snowflake is used when loading JSON documents that are composed of a large array containing multiple records. When this option is enabled, it removes the outer array structure, which allows each record within the array to be loaded as a separate row in the table. This is particularly useful for efficiently loading JSON data that is structured as an array of records.

Reference: Snowflake Documentation on JSON File Format

[COF-C02] SnowPro Core Certification Exam Study Guide
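A minimal load sketch (names hypothetical) with the option enabled inline:

COPY INTO my_table
FROM @my_stage/records.json
FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);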

Question #49

What are the default Time Travel and Fail-safe retention periods for transient tables?

  • A . Time Travel – 1 day. Fail-safe – 1 day
  • B . Time Travel – 0 days. Fail-safe – 1 day
  • C . Time Travel – 1 day. Fail-safe – 0 days
  • D . Transient tables are retained in neither Fail-safe nor Time Travel

Correct Answer: C

Explanation:

Transient tables in Snowflake have a default Time Travel retention period of 1 day, which allows users to access historical data within the last 24 hours. However, transient tables do not have a Fail-safe period. Fail-safe is an additional layer of data protection that retains data beyond the Time Travel period for recovery purposes in case of extreme data loss. Since transient tables are designed for temporary or intermediate workloads with no requirement for long-term durability, they do not include a Fail-safe period by default.

Reference: Snowflake Documentation on Storage Costs for Time Travel and Fail-safe
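For illustration (names hypothetical), the retention behavior is visible at creation time:

CREATE TRANSIENT TABLE staging_events (
  event_id NUMBER,
  payload  VARIANT
)
DATA_RETENTION_TIME_IN_DAYS = 1;  -- 1 day is the default and maximum for transient tables; Fail-safe is always 0 days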

Question #50

What is a best practice after creating a custom role?

  • A . Create the custom role using the SYSADMIN role.
  • B . Assign the custom role to the SYSADMIN role
  • C . Assign the custom role to the PUBLIC role
  • D . Add _CUSTOM to all custom role names

Correct Answer: B

Explanation:

Assigning the custom role to the SYSADMIN role is considered a best practice because it allows the SYSADMIN role to manage objects created by the custom role. This is important for maintaining proper access control and ensuring that the SYSADMIN can perform necessary administrative tasks on objects created by users with the custom role.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Section 1.3 – SnowPro Core Certification Study Guide
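A minimal sketch of the recommended pattern (role name hypothetical):

USE ROLE USERADMIN;                   -- a role with the CREATE ROLE privilege
CREATE ROLE analyst;
GRANT ROLE analyst TO ROLE SYSADMIN;  -- ties the custom role into the SYSADMIN hierarchy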

Question #51

Which of the following Snowflake objects can be shared using a secure share? (Select TWO).

  • A . Materialized views
  • B . Sequences
  • C . Procedures
  • D . Tables
  • E . Secure User Defined Functions (UDFs)

Correct Answer: D, E

Explanation:

Secure sharing in Snowflake allows users to share specific objects with other Snowflake accounts without physically copying the data, thus not consuming additional storage. Tables and Secure User Defined Functions (UDFs) are among the objects that can be shared using this feature. Materialized views, sequences, and procedures are not shareable objects in Snowflake.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Secure Data Sharing

Question #52

Will data cached in a warehouse be lost when the warehouse is resized?

  • A . Possibly, if the warehouse is resized to a smaller size and the cache no longer fits.
  • B . Yes, because the compute resource is replaced in its entirety with a new compute resource.
  • C . No. because the size of the cache is independent from the warehouse size
  • D . Yes, because the new compute resource will no longer have access to the cache encryption key.

Correct Answer: C

Explanation:

When a Snowflake virtual warehouse is resized, the data cached in the warehouse is not lost. This is because the cache is maintained independently of the warehouse size. Resizing a warehouse, whether scaling up or down, does not affect the cached data, ensuring that query performance is not impacted by such changes.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Virtual Warehouse Performance

Question #53

Which Snowflake partner specializes in data catalog solutions?

  • A . Alation
  • B . DataRobot
  • C . dbt
  • D . Tableau

Correct Answer: A

Explanation:

Alation is known for specializing in data catalog solutions and is a partner of Snowflake. Data catalog solutions are essential for organizations to effectively manage their metadata and make it easily accessible and understandable for users, which aligns with the capabilities provided by Alation.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake’s official documentation and partner listings

Question #54

What is the MOST performant file format for loading data in Snowflake?

  • A . CSV (Unzipped)
  • B . Parquet
  • C . CSV (Gzipped)
  • D . ORC

Correct Answer: B

Explanation:

Parquet is a columnar storage file format that is optimized for performance in Snowflake. It is designed to be efficient for both storage and query performance, particularly for complex queries on large datasets. Parquet files support efficient compression and encoding schemes, which can lead to significant savings in storage and speed in query processing, making it the most performant file format for loading data into Snowflake.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Loading

Question #55

Which COPY INTO command option outputs the data into one file?

  • A . SINGLE=TRUE
  • B . MAX_FILE_NUMBER=1
  • C . FILE_NUMBER=1
  • D . MULTIPLE=FALSE

Correct Answer: A

Explanation:

The COPY INTO <location> command unloads data into a single file when the SINGLE = TRUE copy option is specified. By default (SINGLE = FALSE), Snowflake splits the unloaded data into multiple files; when a large table must be unloaded to one file, the MAX_FILE_SIZE copy option can also be raised to accommodate the data.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Unloading
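A hedged unload sketch (names hypothetical):

COPY INTO @my_stage/unload/mytable.csv.gz
FROM mytable
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
SINGLE = TRUE                  -- produce exactly one output file
MAX_FILE_SIZE = 5368709120;    -- raise the per-file byte cap for a large single-file unload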

Question #56

Where would a Snowflake user find information about query activity from 90 days ago?

  • A . account_usage.query_history view
  • B . account_usage.query_history_archive view
  • C . information_schema.query_history view
  • D . information_schema.query_history_by_session view

Correct Answer: A

Explanation:

To find information about query activity from 90 days ago, a Snowflake user should use the account_usage.query_history view. The ACCOUNT_USAGE share retains query history for 365 days, well beyond the 7-day window of the information_schema.query_history table function, so it covers activity from 90 days ago.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Account Usage Schema
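A short sketch of retrieving 90-day-old activity from the ACCOUNT_USAGE share:

SELECT query_id, query_text, start_time, total_elapsed_time
FROM snowflake.account_usage.query_history
WHERE start_time BETWEEN DATEADD(day, -91, CURRENT_TIMESTAMP())
                     AND DATEADD(day, -89, CURRENT_TIMESTAMP())
ORDER BY start_time;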

Question #57

Which Snowflake technique can be used to improve the performance of a query?

  • A . Clustering
  • B . Indexing
  • C . Fragmenting
  • D . Using INDEX_HINTS

Correct Answer: A

Explanation:

Clustering is a technique used in Snowflake to improve the performance of queries. It involves organizing the data in a table into micro-partitions based on the values of one or more columns. This organization allows Snowflake to efficiently prune non-relevant micro-partitions during a query, which reduces the amount of data scanned and improves query performance.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Clustering

Question #58

User-level network policies can be created by which of the following roles? (Select TWO).

  • A . ROLEADMIN
  • B . ACCOUNTADMIN
  • C . SYSADMIN
  • D . SECURITYADMIN
  • E . USERADMIN

Correct Answer: B, D

Explanation:

User-level network policies in Snowflake can be created by roles with the necessary privileges to manage security and account settings. The ACCOUNTADMIN role has the highest level of privileges across the account, including the ability to manage network policies. The SECURITYADMIN role is specifically responsible for managing security objects within Snowflake, which includes the creation and management of network policies.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Network Policies

Section 1.3 – SnowPro Core Certification Study Guide

Question #59

Which command can be used to load data into an internal stage?

  • A . LOAD
  • B . COPY
  • C . GET
  • D . PUT

Correct Answer: D

Explanation:

The PUT command is used to load data into an internal stage in Snowflake. This command uploads data files from a local file system to a named internal stage, making the data available for subsequent loading into a Snowflake table using the COPY INTO command.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Loading
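A minimal example as run from a client such as SnowSQL (path and stage name hypothetical):

PUT file:///tmp/customers.csv @my_int_stage AUTO_COMPRESS = TRUE;
LIST @my_int_stage;  -- verify the uploaded (gzip-compressed) file is present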

Question #60

What happens when an external or an internal stage is dropped? (Select TWO).

  • A . When dropping an external stage, the files are not removed and only the stage is dropped
  • B . When dropping an external stage, both the stage and the files within the stage are removed
  • C . When dropping an internal stage, the files are deleted with the stage and the files are recoverable
  • D . When dropping an internal stage, the files are deleted with the stage and the files are not recoverable
  • E . When dropping an internal stage, only selected files are deleted with the stage and are not recoverable

Correct Answer: A, D

Explanation:

When an external stage is dropped in Snowflake, the reference to the external storage location is removed, but the actual files within the external storage (like Amazon S3, Google Cloud Storage, or Microsoft Azure) are not deleted. This means that the data remains intact in the external storage location, and only the stage object in Snowflake is removed.

On the other hand, when an internal stage is dropped, any files that were uploaded to the stage are deleted along with the stage itself. These files are not recoverable once the internal stage is dropped, as they are permanently removed from Snowflake’s storage.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Stages

Question #61

How long is Snowpipe data load history retained?

  • A . As configured in the create pipe settings
  • B . Until the pipe is dropped
  • C . 64 days
  • D . 14 days

Correct Answer: D

Explanation:

Snowpipe load history is stored in the metadata of the pipe for 14 days. (Bulk loads executed with the COPY command, by contrast, are tracked in the metadata of the target table for 64 days.) This retention period allows users to review and audit recent Snowpipe load operations, which can be crucial for troubleshooting and ensuring data integrity.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Snowpipe1

Question #62

In which use cases does Snowflake apply egress charges?

  • A . Data sharing within a specific region
  • B . Query result retrieval
  • C . Database replication
  • D . Loading data into Snowflake

Correct Answer: C

Explanation:

Snowflake applies egress charges in the case of database replication when data is transferred out of a Snowflake region to another region or cloud provider. This is because the data transfer incurs costs associated with moving data across different networks. Egress charges are not applied for data sharing within the same region, query result retrieval, or loading data into Snowflake, as these actions do not involve data transfer across regions.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Data Replication and Egress Charges

Question #63

Which ACCOUNT_USAGE views are used to evaluate the details of dynamic data masking? (Select TWO)

  • A . ROLES
  • B . POLICY_REFERENCES
  • C . QUERY_HISTORY
  • D . RESOURCE_MONITORS
  • E . ACCESS_HISTORY

Correct Answer: B, E

Explanation:

To evaluate the details of dynamic data masking, the POLICY_REFERENCES and ACCESS_HISTORY views in the ACCOUNT_USAGE schema are used. The POLICY_REFERENCES view provides information about the objects to which a masking policy is applied, and the ACCESS_HISTORY view contains details about access to the masked data, which can be used to audit and verify the application of dynamic data masking policies.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Dynamic Data Masking
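
For example, a query along these lines (the columns shown are from the ACCOUNT_USAGE.POLICY_REFERENCES view) lists where masking policies are applied:

SELECT policy_name, ref_entity_name, ref_column_name
FROM SNOWFLAKE.ACCOUNT_USAGE.POLICY_REFERENCES
WHERE policy_kind = 'MASKING_POLICY';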

Question #64

Query compilation occurs in which architecture layer of the Snowflake Cloud Data Platform?

  • A . Compute layer
  • B . Storage layer
  • C . Cloud infrastructure layer
  • D . Cloud services layer

Correct Answer: D

Explanation:

Query compilation in Snowflake occurs in the Cloud Services layer. This layer is responsible for coordinating and managing all aspects of the Snowflake service, including authentication, infrastructure management, metadata management, query parsing and optimization, and security. By handling these tasks, the Cloud Services layer enables the Compute layer to focus on executing queries, while the Storage layer is dedicated to persistently storing data.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Snowflake Architecture

Question #65

Which is the MINIMUM required Snowflake edition that a user must have if they want to use AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect?

  • A . Standard
  • B . Premium
  • C . Enterprise
  • D . Business Critical

Correct Answer: D

Explanation:

Private connectivity to the Snowflake service (AWS PrivateLink, Azure Private Link, or Google Cloud Private Service Connect) requires the Business Critical edition or higher.

https://docs.snowflake.com/en/user-guide/admin-security-privatelink.html

Question #66

In the query profiler view for a query, which components represent areas that can be used to help optimize query performance? (Select TWO)

  • A . Bytes scanned
  • B . Bytes sent over the network
  • C . Number of partitions scanned
  • D . Percentage scanned from cache
  • E . External bytes scanned

Correct Answer: A, C

Explanation:

In the query profiler view, the components that represent areas that can be used to help optimize query performance include ‘Bytes scanned’ and ‘Number of partitions scanned’. ‘Bytes scanned’ indicates the total amount of data the query had to read and is a direct indicator of the query’s efficiency. Reducing the bytes scanned can lead to lower data transfer costs and faster query execution. ‘Number of partitions scanned’ reflects how well the data is clustered; fewer partitions scanned typically means better performance because the system can skip irrelevant data more effectively.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Query Profiling
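
These same metrics can also be inspected outside the profiler; as a sketch, the ACCOUNT_USAGE.QUERY_HISTORY view exposes them as columns:

-- Find recent queries that scanned the most data
SELECT query_id, bytes_scanned, partitions_scanned, partitions_total
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
ORDER BY bytes_scanned DESC
LIMIT 10;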

Question #67

A marketing co-worker has requested the ability to change the warehouse size on their medium virtual warehouse called MKTG_WH.

Which of the following statements will accommodate this request?

  • A . ALLOW RESIZE ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
  • B . GRANT MODIFY ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
  • C . GRANT MODIFY ON WAREHOUSE MKTG_WH TO USER MKTG_LEAD;
  • D . GRANT OPERATE ON WAREHOUSE MKTG_WH TO ROLE MARKETING;

Correct Answer: B

Explanation:

The request is accommodated by granting the MODIFY privilege on the warehouse to the MARKETING role. The MODIFY privilege allows a role to change warehouse properties, including its size; OPERATE only permits starting, suspending, and resuming the warehouse, and privileges are granted to roles rather than directly to users.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Access Control Privileges
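
Concretely, after the grant, a user with the MARKETING role can resize the warehouse; a minimal sketch:

GRANT MODIFY ON WAREHOUSE MKTG_WH TO ROLE MARKETING;
-- A user with the MARKETING role can now change the size
ALTER WAREHOUSE MKTG_WH SET WAREHOUSE_SIZE = 'LARGE';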

Question #68

When reviewing a query profile, what is a symptom that a query is too large to fit in memory?

  • A . A single join node uses more than 50% of the query time
  • B . Partitions scanned is equal to partitions total
  • C . An AggregateOperator node is present
  • D . The query is spilling to remote storage

Correct Answer: D

Explanation:

When a query in Snowflake is too large to fit into the available memory, it will start spilling to remote storage. This is an indication that the memory allocated for the query is insufficient for its execution, and as a result, Snowflake uses remote disk storage to handle the overflow. This spill to remote storage can lead to slower query performance due to the additional I/O operations required.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Query Profile

SnowPro Core Certification Exam Flashcards
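
Spilling can also be spotted in query history; as a sketch, the ACCOUNT_USAGE.QUERY_HISTORY view includes spill columns:

-- Identify queries that spilled to remote storage (a sign of memory pressure)
SELECT query_id, bytes_spilled_to_local_storage, bytes_spilled_to_remote_storage
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE bytes_spilled_to_remote_storage > 0
ORDER BY bytes_spilled_to_remote_storage DESC;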

Question #69

Which stage type can be altered and dropped?

  • A . Database stage
  • B . External stage
  • C . Table stage
  • D . User stage

Correct Answer: B

Explanation:

External stages can be altered and dropped in Snowflake. An external stage points to an external location, such as an S3 bucket, where data files are stored. Users can modify the stage’s definition or drop it entirely if it is no longer needed. By contrast, table stages and user stages are implicit stages tied to a specific table or user and cannot be altered or dropped independently.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Stages

Question #70

From which Snowflake interface can the PUT command be used to stage local files?

  • A . SnowSQL
  • B . Snowflake classic web interface (UI)
  • C . Snowsight
  • D . .NET driver

Correct Answer: A

Explanation:

SnowSQL is the command-line client for Snowflake that allows users to execute SQL queries and perform all DDL and DML operations, including the PUT command for staging files for bulk data loading. PUT cannot be executed from the web interface, so a client such as SnowSQL is required for staging local files. It is specifically designed for scripting and automating tasks.

Reference: SnowPro Core Certification Exam Study Guide

Snowflake Documentation on SnowSQL

https://docs.snowflake.com/en/user-guide/snowsql-use.html

Question #71

What is the recommended file sizing for data loading using Snowpipe?

  • A . A compressed file size greater than 100 MB, and up to 250 MB
  • B . A compressed file size greater than 100 GB, and up to 250 GB
  • C . A compressed file size greater than 10 MB, and up to 100 MB
  • D . A compressed file size greater than 1 GB, and up to 2 GB

Correct Answer: C

Explanation:

For data loading using Snowpipe, the recommended file size is a compressed file greater than 10 MB and up to 100 MB. This size range is optimal for Snowpipe’s continuous, micro-batch loading process, allowing for efficient and timely data ingestion without overwhelming the system with files that are too large or too small.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Snowpipe

Question #72

Which services does the Snowflake Cloud Services layer manage? (Select TWO).

  • A . Compute resources
  • B . Query execution
  • C . Authentication
  • D . Data storage
  • E . Metadata

Correct Answer: C, E

Explanation:

The Snowflake Cloud Services layer manages a variety of services that are crucial for the operation of the Snowflake platform. Among these services, Authentication and Metadata management are key components. Authentication is essential for controlling access to the Snowflake environment, ensuring that only authorized users can perform actions within the platform. Metadata management involves handling all the metadata related to objects within Snowflake, such as tables, views, and databases, which is vital for the organization and retrieval of data.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation

https://docs.snowflake.com/en/user-guide/intro-key-concepts.html

Question #73

What data is stored in the Snowflake storage layer? (Select TWO).

  • A . Snowflake parameters
  • B . Micro-partitions
  • C . Query history
  • D . Persisted query results
  • E . Standard and secure view results

Correct Answer: B, D

Explanation:

The Snowflake storage layer is responsible for storing data in an optimized, compressed, columnar format. This includes micro-partitions, which are the fundamental storage units that contain the actual data stored in Snowflake. Additionally, persisted query results, which are the results of queries that have been materialized and stored for future use, are also kept within this layer. This design allows for efficient data retrieval and management within the Snowflake architecture.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Key Concepts & Architecture | Snowflake Documentation

Question #74

In which scenarios would a user have to pay Cloud Services costs? (Select TWO).

  • A . Compute Credits = 50 Credits Cloud Services = 10
  • B . Compute Credits = 80 Credits Cloud Services = 5
  • C . Compute Credits = 10 Credits Cloud Services = 9
  • D . Compute Credits = 120 Credits Cloud Services = 10
  • E . Compute Credits = 200 Credits Cloud Services = 26

Correct Answer: A, E

Explanation:

In Snowflake, Cloud Services usage is billed only for the portion that exceeds 10% of the daily compute credit usage. In scenario A, 10% of 50 compute credits is 5, so the 10 Cloud Services credits exceed the threshold and incur charges; in scenario E, 10% of 200 is 20, so the 26 Cloud Services credits also incur charges.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake's official documentation on billing and usage

Question #75

What transformations are supported in a CREATE PIPE … AS COPY … FROM (….) statement? (Select TWO.)

  • A . Data can be filtered by an optional where clause
  • B . Incoming data can be joined with other tables
  • C . Columns can be reordered
  • D . Columns can be omitted
  • E . Row level access can be defined

Correct Answer: A, D

Explanation:

In a CREATE PIPE … AS COPY … FROM (…) statement, the supported transformations include filtering data using an optional WHERE clause and omitting columns. The WHERE clause allows for the specification of conditions to filter the data that is being loaded, ensuring only relevant data is inserted into the table. Omitting columns enables the exclusion of certain columns from the data load, which can be useful when the incoming data contains more columns than are needed for the target table.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Simple Transformations During a Load
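
A sketch of a pipe using both supported transformations (the stage, table, and column positions are hypothetical):

CREATE PIPE sales_pipe AS
  COPY INTO sales (id, amount)                            -- columns omitted and reordered
  FROM (SELECT $1, $4 FROM @sales_stage WHERE $4 > 0)     -- optional WHERE filter
  FILE_FORMAT = (TYPE = CSV);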

Question #76

What is a responsibility of Snowflake’s virtual warehouses?

  • A . Infrastructure management
  • B . Metadata management
  • C . Query execution
  • D . Query parsing and optimization
  • E . Management of the storage layer

Correct Answer: C

Explanation:

The primary responsibility of Snowflake’s virtual warehouses is to execute queries. Virtual warehouses are one of the key components of Snowflake’s architecture, providing the compute power required to perform data processing tasks such as running SQL queries, performing joins, aggregations, and other data manipulations.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Virtual Warehouses

Question #77

Which of the following compute resources or features are managed by Snowflake? (Select TWO).

  • A . Execute a COPY command
  • B . Updating data
  • C . Snowpipe
  • D . AUTOMATIC_CLUSTERING
  • E . Scaling up a warehouse

Correct Answer: C, D

Explanation:

Snowpipe and Automatic Clustering (AUTOMATIC_CLUSTERING) are serverless features: Snowflake provisions, scales, and manages the compute resources they consume. Executing a COPY command, updating data, and scaling a warehouse up, by contrast, are performed by user-managed virtual warehouses or by explicit user action.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Snowpipe and Automatic Clustering

Question #78

What happens when a virtual warehouse is resized?

  • A . When increasing the size of an active warehouse the compute resource for all running and queued queries on the warehouse are affected
  • B . When reducing the size of a warehouse the compute resources are removed only when they are no longer being used to execute any current statements.
  • C . The warehouse will be suspended while the new compute resource is provisioned and will resume automatically once provisioning is complete.
  • D . Users who are trying to use the warehouse will receive an error message until the resizing is complete

Correct Answer: B

Explanation:

Resizing a running warehouse does not affect statements that are already executing. When a warehouse is increased in size, the additional compute resources, once provisioned, are used only for queued and new statements. When a warehouse is reduced in size, the compute resources are removed only when they are no longer being used to execute any current statements, which is what option B describes.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Virtual Warehouses

Question #79

A developer is granted ownership of a table that has a masking policy. The developer’s role is not able to see the masked data.

Will the developer be able to modify the table to read the masked data?

  • A . Yes, because a table owner has full control and can unset masking policies.
  • B . Yes, because masking policies only apply to cloned tables.
  • C . No, because masking policies must always reference specific access roles.
  • D . No, because ownership of a table does not include the ability to change masking policies

Correct Answer: D

Explanation:

Ownership of a table does not include the ability to change or remove masking policies applied to it. Masking policies are separate schema-level objects, and altering or unsetting them requires the APPLY MASKING POLICY privilege or appropriate privileges on the policy itself, which the developer's role does not have. The data therefore remains masked for that role.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Masking Policies
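
For context, a masking policy is created and attached as its own object, so controlling it requires privileges on the policy, not the table. A minimal sketch (policy, role, table, and column names are hypothetical):

CREATE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val ELSE '***MASKED***' END;
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;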

Question #80

Which of the following describes how clustering keys work in Snowflake?

  • A . Clustering keys update the micro-partitions in place with a full sort, and impact the DML operations.
  • B . Clustering keys sort the designated columns over time, without blocking DML operations
  • C . Clustering keys create a distributed, parallel data structure of pointers to a table’s rows and columns
  • D . Clustering keys establish a hashed key on each node of a virtual warehouse to optimize joins at run-time

Correct Answer: B

Explanation:

Clustering keys in Snowflake work by sorting the designated columns over time. This process is done in the background and does not block data manipulation language (DML) operations, allowing normal database operations to continue without interruption. The purpose of clustering keys is to organize the data within micro-partitions to optimize query performance.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Clustering
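
For example (the table and columns are hypothetical), a clustering key is defined with ALTER TABLE, and clustering health can then be checked with a system function:

ALTER TABLE sales CLUSTER BY (region, sale_date);
-- The background service re-sorts micro-partitions over time without blocking DML
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(region, sale_date)');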

Question #81

What is a machine learning and data science partner within the Snowflake Partner Ecosystem?

  • A . Informatica
  • B . Power BI
  • C . Adobe
  • D . DataRobot

Correct Answer: D

Explanation:

DataRobot is recognized as a machine learning and data science partner within the Snowflake Partner Ecosystem. It provides an enterprise AI platform that enables users to build and deploy accurate predictive models quickly. As a partner, DataRobot integrates with Snowflake to enhance data science capabilities.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Snowflake Documentation on Machine Learning & Data Science Partners

https://docs.snowflake.com/en/user-guide/ecosystem-analytics.html

Question #82

Which of the following is a valid source for an external stage when the Snowflake account is located on Microsoft Azure?

  • A . An FTP server with TLS encryption
  • B . An HTTPS server with WebDAV
  • C . A Google Cloud storage bucket
  • D . A Windows server file share on Azure

Correct Answer: C

Explanation:

External stages in Snowflake can reference cloud object storage only: Amazon S3 buckets, Google Cloud Storage buckets, or Microsoft Azure containers. This is independent of the cloud platform that hosts the Snowflake account, so an account running on Azure can still create an external stage that points to a Google Cloud Storage bucket. FTP servers, WebDAV servers, and Windows file shares are not supported stage locations.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Question #83

Which data type can be used to store geospatial data in Snowflake?

  • A . Variant
  • B . Object
  • C . Geometry
  • D . Geography

Correct Answer: D

Explanation:

Snowflake supports two geospatial data types: GEOGRAPHY and GEOMETRY. The GEOGRAPHY data type is used to store geospatial data that models the Earth as a perfect sphere, which is suitable for global geospatial data. This data type follows the WGS 84 standard and is used for storing points, lines, and polygons on the Earth’s surface. The GEOMETRY data type, on the other hand, represents features in a planar (Euclidean, Cartesian) coordinate system and is typically used for local spatial reference systems. Since the question specifically asks about geospatial data, which commonly refers to Earth-related spatial data, the correct answer is GEOGRAPHY.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide
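
A small sketch of using the GEOGRAPHY type (the table name and coordinates are illustrative):

CREATE TABLE places (name STRING, location GEOGRAPHY);
-- Store a WKT point (longitude, latitude)
INSERT INTO places
  SELECT 'Golden Gate Bridge', TO_GEOGRAPHY('POINT(-122.4783 37.8199)');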

Question #84

What can be used to view warehouse usage over time? (Select TWO).

  • A . The LOAD_HISTORY view
  • B . The QUERY_HISTORY view
  • C . The SHOW WAREHOUSES command
  • D . The WAREHOUSE_METERING_HISTORY view
  • E . The Billing & Usage tab in the Snowflake web UI

Correct Answer: B, D

Explanation:

To view warehouse usage over time, the QUERY_HISTORY view and the WAREHOUSE_METERING_HISTORY view can be used. The QUERY_HISTORY view allows users to monitor the performance of their queries and the load on their warehouses over a specified period. The WAREHOUSE_METERING_HISTORY view provides detailed information about the workload on a warehouse within a specified date range, including the credits consumed.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide
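
As a sketch, credits consumed per warehouse over the last 30 days can be pulled from the metering view:

SELECT warehouse_name, start_time, credits_used
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
ORDER BY start_time;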

Question #85

Which Snowflake feature is used for both querying and restoring data?

  • A . Cluster keys
  • B . Time Travel
  • C . Fail-safe
  • D . Cloning

Correct Answer: B

Explanation:

Snowflake’s Time Travel feature is used for both querying historical data in tables and restoring and cloning historical data in databases, schemas, and tables. It allows users to access historical data within a defined period (1 day by default, up to 90 days for Snowflake Enterprise Edition) and is a key feature for data recovery and management.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide
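
Both uses of Time Travel, querying and restoring, look like this in practice (the object names and offset are illustrative):

-- Query a table as it existed one hour ago
SELECT * FROM orders AT (OFFSET => -3600);
-- Restore by recreating the table from its historical state via cloning
CREATE TABLE orders_restored CLONE orders AT (OFFSET => -3600);
-- Or recover a dropped table outright
UNDROP TABLE orders;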

Question #86

A company strongly encourages all Snowflake users to self-enroll in Snowflake’s default Multi-Factor Authentication (MFA) service to provide increased login security for users connecting to Snowflake.

Which application will the Snowflake users need to install on their devices in order to connect with MFA?

  • A . Okta Verify
  • B . Duo Mobile
  • C . Microsoft Authenticator
  • D . Google Authenticator

Correct Answer: B

Explanation:

Snowflake’s default Multi-Factor Authentication (MFA) service is powered by Duo Security. Users are required to install the Duo Mobile application on their devices to use MFA for increased login security when connecting to Snowflake. This service is managed entirely by Snowflake, and users do not need to sign up separately with Duo.

Question #87

Which Snowflake objects track DML changes made to tables, like inserts, updates, and deletes?

  • A . Pipes
  • B . Streams
  • C . Tasks
  • D . Procedures

Correct Answer: B

Explanation:

In Snowflake, Streams are the objects that track Data Manipulation Language (DML) changes made to tables, such as inserts, updates, and deletes. Streams record these changes along with metadata about each change, enabling actions to be taken using the changed data. This process is known as change data capture (CDC).
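
A minimal sketch of change data capture with a stream (the table and values are hypothetical):

CREATE STREAM orders_stream ON TABLE orders;
INSERT INTO orders (id, amount) VALUES (1, 99.50);
-- The stream now returns the change rows plus METADATA$ACTION and METADATA$ISUPDATE columns
SELECT * FROM orders_stream;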

Question #88

What tasks can be completed using the COPY command? (Select TWO)

  • A . Columns can be aggregated
  • B . Columns can be joined with an existing table
  • C . Columns can be reordered
  • D . Columns can be omitted
  • E . Data can be loaded without the need to spin up a virtual warehouse

Correct Answer: C, D

Explanation:

The COPY command in Snowflake allows for the reordering of columns as they are loaded into a table, and it also permits the omission of columns from the source file during the load process. This provides flexibility in handling the schema of the data being ingested.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

Question #89

What feature can be used to reorganize a very large table on one or more columns?

  • A . Micro-partitions
  • B . Clustering keys
  • C . Key partitions
  • D . Clustered partitions

Correct Answer: B

Explanation:

Clustering keys in Snowflake are used to reorganize large tables based on one or more columns. This feature optimizes the arrangement of data within micro-partitions to improve query performance, especially for large tables where efficient data retrieval is crucial.

Reference: [COF-C02] SnowPro Core Certification Exam Study Guide

https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html

Question #90

What SQL command would be used to view all roles that were granted to USER1?

  • A . show grants to user USER1;
  • B . show grants of user USER1;
  • C . describe user USER1;
  • D . show grants on user USER1;

Correct Answer: A

Explanation:

The correct command to view all roles granted to a specific user in Snowflake is SHOW GRANTS TO USER <user_name>;. This command lists all access control privileges that have been explicitly granted to the specified user.

Reference: SHOW GRANTS | Snowflake Documentation

Question #91

Which of the following can be executed/called with Snowpipe?

  • A . A User Defined Function (UDF)
  • B . A stored procedure
  • C . A single COPY INTO statement
  • D . A single INSERT INTO statement

Correct Answer: C

Explanation:

Snowpipe is used for continuous, automated data loading into Snowflake. It uses a COPY INTO <table> statement within a pipe object to load data from files as soon as they are available in a stage. Snowpipe does not execute UDFs, stored procedures, or insert statements.

Reference: Snowpipe | Snowflake Documentation

Question #92

What Snowflake role must be granted for a user to create and manage accounts?

  • A . ACCOUNTADMIN
  • B . ORGADMIN
  • C . SECURITYADMIN
  • D . SYSADMIN

Correct Answer: B

Explanation:

The ORGADMIN role manages operations at the organization level, including creating new accounts and viewing and managing all accounts in the organization. ACCOUNTADMIN is the most powerful role within a single account, but it does not provide the ability to create and manage other accounts.

https://docs.snowflake.com/en/user-guide/security-access-control-considerations.html
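
As a sketch of what this enables (all values are hypothetical), a user with ORGADMIN can create a new account in the organization:

USE ROLE ORGADMIN;
CREATE ACCOUNT sales_account
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = 'StrongPassword1!'
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE;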

Question #93

When unloading to a stage, which of the following is a recommended practice or approach?

  • A . Set SINGLE = TRUE for larger files
  • B . Use OBJECT_CONSTRUCT ( * ) when using Parquet
  • C . Avoid the use of the CAST function
  • D . Define an individual file format

Correct Answer: D

Explanation:

When unloading data to a stage, it is recommended to define an individual (named) file format. This ensures that the data is unloaded in a consistent and expected format, which can be crucial for downstream processing and analysis.
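
For example (object names hypothetical), a named file format can be defined once and referenced when unloading:

CREATE FILE FORMAT my_unload_format
  TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = GZIP;
COPY INTO @my_stage/exports/
  FROM sales
  FILE_FORMAT = (FORMAT_NAME = 'my_unload_format');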

Question #94

When is the result set cache no longer available? (Select TWO)

  • A . When another warehouse is used to execute the query
  • B . When another user executes the query
  • C . When the underlying data has changed
  • D . When the warehouse used to execute the query is suspended
  • E . When it has been 24 hours since the last query

Correct Answer: C, E

Explanation:

The result set cache in Snowflake is invalidated when the underlying data of the query results has changed, ensuring that queries return the most current data. A cached result also expires once 24 hours have passed since the last query that reused it, after which it is no longer available.
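
When testing whether a query is being served from the result cache, the session parameter below (a real parameter; disabling it is just for experimentation) turns off reuse:

-- Disable result-cache reuse for the current session while benchmarking
ALTER SESSION SET USE_CACHED_RESULT = FALSE;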

Question #95

Which of the following describes external functions in Snowflake?

  • A . They are a type of User-defined Function (UDF).
  • B . They contain their own SQL code.
  • C . They call code that is stored inside of Snowflake.
  • D . They can return multiple rows for each row received

Correct Answer: A

Explanation:

External functions in Snowflake are a special type of User-Defined Function (UDF) that call code executed outside of Snowflake, typically through a remote service. Unlike traditional UDFs, external functions do not contain SQL code within Snowflake; instead, they interact with external services to process data.

https://docs.snowflake.com/en/sql-reference/external-functions.html

Question #96

What are ways to create and manage data shares in Snowflake? (Select TWO)

  • A . Through the Snowflake web interface (UI)
  • B . Through the DATA_SHARE=TRUE parameter
  • C . Through SQL commands
  • D . Through the enable_share=true parameter
  • E . Using the CREATE SHARE AS SELECT * TABLE command

Correct Answer: A, C

Explanation:

Data shares in Snowflake can be created and managed through the Snowflake web interface, which provides a user-friendly graphical interface for various operations. Additionally, SQL commands can be used to perform these tasks programmatically, offering flexibility and automation capabilities.
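
A minimal SQL sketch of creating and populating a share (the object and account names are hypothetical):

CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
-- Make the share visible to a consumer account
ALTER SHARE sales_share ADD ACCOUNTS = myorg.consumer_account;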

Question #97

A company’s security audit requires generating a report listing all Snowflake logins (e.g., date and user) within the last 90 days.

Which of the following statements will return the required information?

  • A . SELECT LAST_SUCCESS_LOGIN, LOGIN_NAME FROM ACCOUNT_USAGE.USERS;
  • B . SELECT EVENT_TIMESTAMP, USER_NAME
    FROM table(information_schema.login_history_by_user())
  • C . SELECT EVENT_TIMESTAMP, USER_NAME FROM ACCOUNT_USAGE.ACCESS_HISTORY;
  • D . SELECT EVENT_TIMESTAMP, USER_NAME FROM ACCOUNT_USAGE.LOGIN_HISTORY;

Correct Answer: D

Explanation:

To generate a report listing all Snowflake logins within the last 90 days, the ACCOUNT_USAGE.LOGIN_HISTORY view should be used. This view provides information about login attempts, including successful and unsuccessful logins, and is suitable for security audits.
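
In practice the statement would typically add the 90-day filter from the requirement, e.g.:

SELECT event_timestamp, user_name
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE event_timestamp >= DATEADD(day, -90, CURRENT_TIMESTAMP());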

Question #98

Which semi-structured file formats are supported when unloading data from a table? (Select TWO).

  • A . ORC
  • B . XML
  • C . Avro
  • D . Parquet
  • E . JSON

Correct Answer: D, E

Explanation:

Snowflake supports unloading table data in two semi-structured file formats: JSON and Parquet. These formats allow for efficient storage and downstream querying of semi-structured data. ORC, Avro, and XML are supported for loading but not for unloading.

https://docs.snowflake.com/en/user-guide/data-unload-prepare.html
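
As a sketch, unloading a table to JSON commonly pairs OBJECT_CONSTRUCT with a JSON file format (names hypothetical):

COPY INTO @my_stage/json_export/
  FROM (SELECT OBJECT_CONSTRUCT(*) FROM sales)
  FILE_FORMAT = (TYPE = JSON);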

Question #99

What is the purpose of an External Function?

  • A . To call code that executes outside of Snowflake
  • B . To run a function in another Snowflake database
  • C . To share data in Snowflake with external parties
  • D . To ingest data from on-premises data sources

Correct Answer: A

Explanation:

The purpose of an External Function in Snowflake is to call code that executes outside of the Snowflake environment. This allows Snowflake to interact with external services and leverage functionality that is not natively available within Snowflake, such as calling APIs or running custom code hosted on cloud services.

https://docs.snowflake.com/en/sql-reference/external-functions.html
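
A sketch of the definition (the function name, API integration, and proxy URL are placeholders, not real endpoints):

CREATE EXTERNAL FUNCTION calc_sentiment(msg STRING)
  RETURNS VARIANT
  API_INTEGRATION = my_api_integration
  AS 'https://example.execute-api.us-east-1.amazonaws.com/prod/sentiment';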

Question #100

A user created a new worksheet within the Snowsight UI and wants to share it with teammates. How can this worksheet be shared?

  • A . Create a zero-copy clone of the worksheet and grant permissions to teammates
  • B . Create a private Data Exchange so that any teammate can use the worksheet
  • C . Share the worksheet with teammates within Snowsight
  • D . Create a database and grant all permissions to teammates

Correct Answer: C

Explanation:

Worksheets in Snowsight can be shared directly with other Snowflake users within the same account. This feature allows for collaboration and sharing of SQL queries or Python code, as well as other data manipulation tasks.
