What privileges are required to create a task?
- A . The global privilege create task is required to create a new task.
- B . Tasks are created at the Application level and can only be created by the Account Admin role.
- C . Many Snowflake DDLs are metadata operations only, and create task DDL can be executed without virtual warehouse requirement or task specific grants.
- D . The role must have access to the target schema and the create task privilege on the schema itself.
D
Explanation:
To create a task, the role needs USAGE on the database and schema containing the task, plus the CREATE TASK privilege on the schema itself. In addition, all tasks in a simple tree must have the same task owner (i.e. a single role must have the OWNERSHIP privilege on all of the tasks in the tree) and must exist in the same schema.
https://docs.snowflake.com/en/user-guide/tasks-intro.html#creating-tasks
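A minimal sketch of the grants involved, using hypothetical role, schema, and warehouse names (my_task_role, mydb.myschema, my_wh):

```sql
-- Hypothetical names; adjust to your environment.
-- The role needs USAGE on the database and schema:
GRANT USAGE ON DATABASE mydb TO ROLE my_task_role;
GRANT USAGE ON SCHEMA mydb.myschema TO ROLE my_task_role;
-- CREATE TASK is granted on the schema itself:
GRANT CREATE TASK ON SCHEMA mydb.myschema TO ROLE my_task_role;

-- The role can then create a task in that schema:
CREATE TASK mydb.myschema.nightly_refresh
  WAREHOUSE = my_wh
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
  INSERT INTO mydb.myschema.audit_log SELECT CURRENT_TIMESTAMP();
```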
True or False: AWS PrivateLink provides a secure connection from the customer’s on-premise data center to Snowflake.
- A . True
- B . False
B
Explanation:
AWS PrivateLink establishes a private connection between the customer’s VPC (Virtual Private Cloud) and Snowflake, ensuring that traffic stays on the Amazon network and is not exposed to the public internet. On its own, however, it does not connect an on-premises data center to Snowflake; reaching Snowflake over PrivateLink from on-premises additionally requires AWS Direct Connect (or a VPN) into the VPC.
Why would a customer size a Virtual Warehouse from an X-Small to a Medium?
- A . To accommodate more queries
- B . To accommodate more users
- C . To accommodate fluctuations in workload
- D . To accommodate a more complex workload
D
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-considerations.html
When a Pipe is recreated using the CREATE OR REPLACE PIPE command:
- A . The Pipe load history is reset to empty
- B . The REFRESH parameter is set to TRUE
- C . Previously loaded files will be ignored
- D . All of the above
A
Explanation:
When a pipe is recreated using the CREATE OR REPLACE PIPE command, the pipe load history is reset to empty. REFRESH is not a pipe parameter that gets set to TRUE, and previously loaded files are not automatically ignored after the reset, so care is needed to avoid reloading (and duplicating) files that were already ingested.
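A sketch of the command with hypothetical pipe, table, and stage names:

```sql
-- Hypothetical names; adjust to your environment.
CREATE OR REPLACE PIPE mydb.myschema.my_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO mydb.myschema.my_table
  FROM @mydb.myschema.my_stage
  FILE_FORMAT = (TYPE = 'CSV');
-- After the replace, the pipe's load history (visible via the
-- COPY_HISTORY table function) starts empty again.
```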
True or False: Query ID’s are unique across all Snowflake deployments and can be used in communication with Snowflake Support to help troubleshoot issues.
- A . True
- B . False
True or False: During data unloading, only JSON and CSV files can be compressed.
- A . True
- B . False
B
Explanation:
Reference: https://docs.snowflake.com/en/sql-reference/sql/create-file-format.html
What is the minimum Snowflake edition that customers planning on storing protected information in Snowflake should consider for regulatory compliance?
- A . Standard
- B . Premier
- C . Enterprise
- D . Business Critical Edition
D
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/intro-editions.html
What is the minimum Snowflake edition that provides multi-cluster warehouses and up to 90 days of Time Travel?
- A . Standard
- B . Premier
- C . Enterprise
- D . Business Critical Edition
C
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/intro-editions.html
True or False: Snowflake allows its customers to directly access the micro-partition files that make up its tables.
- A . True
- B . False
Which type of table corresponds to a single Snowflake session?
- A . Temporary
- B . Transient
- C . Provisional
- D . Permanent
A
Explanation:
Snowflake supports creating temporary tables for storing non-permanent, transitory data (e.g. ETL data, session-specific data). Temporary tables only exist within the session in which they were created and persist only for the remainder of the session. https://docs.snowflake.com/en/user-guide/tables-temp-transient.html
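A short sketch of session scoping, with a hypothetical table name:

```sql
-- A temporary table is visible only to the session that created it
-- and is dropped automatically when the session ends.
CREATE TEMPORARY TABLE session_scratch (id INT, payload VARIANT);
INSERT INTO session_scratch (id) SELECT 1;
-- A different session querying session_scratch would get an
-- "object does not exist" error.
```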
If a Small Warehouse is made up of 2 servers/cluster, how many servers/cluster make up a Medium Warehouse?
- A . 4
- B . 16
- C . 32
- D . 128
A
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-overview.html
Select the different types of Internal Stages: (Choose three.)
- A . Named Stage
- B . User Stage
- C . Table Stage
- D . Schema Stage
A,B,C
Explanation:
Reference: https://dwgeek.com/type-of-snowflake-stages-how-to-create-and-use.html/#Snowflake-Internal-Named-Stage
True or False: When a new Snowflake object is created, it is automatically owned by the user who created it.
- A . True
- B . False
B
Explanation:
When a new Snowflake object is created, it is not automatically owned by the user who created it. Instead, the object is owned by the role that was active for the user when the object was created. This means that ownership and associated privileges belong to the role, allowing for easier management of access control and permissions across multiple users within an organization.
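This can be sketched with hypothetical role and table names:

```sql
-- Objects are owned by the active role at creation time, not the user.
USE ROLE analyst_role;
CREATE TABLE mydb.myschema.t1 (c1 INT);
-- SHOW GRANTS reports OWNERSHIP held by ANALYST_ROLE, not the user:
SHOW GRANTS ON TABLE mydb.myschema.t1;
```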
Which of the following statements are true of Snowflake releases: (Choose two.)
- A . They happen approximately weekly
- B . They roll up and release approximately monthly, but customers can request early release application
- C . During a release, new customer requests/queries/connections transparently move over to the newer version
- D . A customer is assigned a 30 minute window (that can be moved anytime within a week) during which the system will be unavailable and customer is upgraded
A,C
Explanation:
https://docs.snowflake.com/en/user-guide/intro-releases.html
True or False: Snowflake supports federated authentication in all editions.
- A . True
- B . False
B
Explanation:
Snowflake supports federated authentication only in the Enterprise and higher editions. Federated authentication allows users to authenticate with Snowflake using their existing identity provider (IdP) credentials, enabling single sign-on (SSO) and simplifying the management of user access and authentication. If federated authentication is a requirement for a customer, they need to consider at least the Enterprise edition of Snowflake.
Which of the following statements are true of Virtual Warehouses? (Choose all that apply.)
- A . Customers can change the size of the Warehouse after creation
- B . A Warehouse can be resized while running
- C . A Warehouse can be configured to suspend after a period of inactivity
- D . A Warehouse can be configured to auto-resume when new queries are submitted
A,B,C,D
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-multicluster.html
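All four statements can be illustrated with a hypothetical warehouse:

```sql
-- Hypothetical warehouse name.
CREATE WAREHOUSE my_wh
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 300      -- suspend after 300 s of inactivity (C)
  AUTO_RESUME = TRUE;     -- resume when a new query arrives (D)

-- The size can be changed after creation, even while running (A, B):
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'MEDIUM';
```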
True or False: Snowflake charges a premium for storing semi-structured data.
- A . True
- B . False
B
Explanation:
Reference: https://snowflakecommunity.force.com/s/question/0D50Z00008ckwNuSAI/does-snowflakecharges-premium-for-storing-semi-structured-data
True or False: Reader Accounts incur no additional Compute costs to the Data Provider since they are simply reading the shared data without making changes.
- A . True
- B . False
B
Explanation:
Reference: https://interworks.com/blog/bdu/2020/02/05/zero-to-snowflake-secure-data-sharing/
True or False: The user has to specify which cluster a query will run on in multi-clustering Warehouse.
- A . True
- B . False
B
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-multicluster.html
The PUT command: (Choose two.)
- A . Automatically creates a File Format object
- B . Automatically uses the last Stage created
- C . Automatically compresses files using Gzip
- D . Automatically encrypts files
C,D
Explanation:
Reference: https://docs.snowflake.com/en/sql-reference/sql/put.html
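A sketch of PUT with hypothetical file and stage names (PUT is run from a client such as SnowSQL, not the web UI):

```sql
-- By default PUT gzip-compresses the file (AUTO_COMPRESS = TRUE),
-- and staged files are always encrypted.
PUT file:///tmp/data.csv @my_int_stage;
-- The staged file appears as data.csv.gz.
```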
How are Snowpipe charges calculated?
- A . Per-second/per Warehouse size
- B . Per-second/per-core granularity
- C . Number of Pipes in account
- D . Total storage bucket size
B
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/data-load-snowpipe-billing.html
True or False: A Snowflake account is charged for data stored in both Internal and External Stages.
- A . True
- B . False
B
Explanation:
A Snowflake account is charged for data stored in Internal Stages. However, data stored in External Stages such as Amazon S3, Google Cloud Storage, or Microsoft Azure Blob Storage is not billed by Snowflake. Storage costs for External Stages are billed separately by the respective cloud storage provider.
Which is true of Snowflake network policies? A Snowflake network policy: (Choose two.)
- A . Is available to all Snowflake Editions
- B . Is only available to customers with Business Critical Edition
- C . Restricts or enables access to specific IP addresses
- D . Is activated using an “ALTER DATABASE” command
A,C
Explanation:
Reference: https://docs.snowflake.com/en/sql-reference/sql/create-network-policy.html
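A minimal sketch with a hypothetical policy name and CIDR range; note that activation uses ALTER ACCOUNT (or ALTER USER), not ALTER DATABASE:

```sql
-- Restrict access to a single office network (hypothetical values).
CREATE NETWORK POLICY office_only
  ALLOWED_IP_LIST = ('192.168.1.0/24');

-- Activate the policy at the account level:
ALTER ACCOUNT SET NETWORK_POLICY = office_only;
```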
True or False: A Virtual Warehouse consumes Snowflake credits even when inactive.
- A . True
- B . False
B
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-multicluster.html
The number of queries that a Virtual Warehouse can concurrently process is determined by: Choose 2 answers
- A . The complexity of each query
- B . The CONCURRENT_QUERY_LIMIT parameter set on the Snowflake account
- C . The size of the data required for each query
- D . The tool that is executing the query
A,C
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-overview.html
True or False: A customer using SnowSQL / native connectors will be unable to also use the Snowflake Web Interface (UI) unless access to the UI is explicitly granted by Support.
- A . True
- B . False
B
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/connecting.html
True or False: Micro-partition metadata enables some operations to be completed without requiring Compute.
- A . True
- B . False
A
Explanation:
Reference: https://blog.ippon.tech/innovative-snowflake-features-caching/
Each incremental increase in Virtual Warehouse size (e.g. Medium to Large) generally results in what?
Select one.
- A . More micro-partitions
- B . Better query scheduling
- C . Double the number of servers in the compute cluster
- D . Higher storage costs
True or False: It is possible to query data from an Internal or named External stage without loading the data into Snowflake.
- A . True
- B . False
True or False: Data Providers can share data with only the Data Consumer.
- A . True
- B . False
True or False: Snowflake’s Global Services Layer gathers and maintains statistics on all columns in all micro-partitions.
- A . True
- B . False
A
Explanation:
Snowflake is a single, integrated platform delivered as-a-service. It features storage, compute, and global services layers that are physically separated but logically integrated.
True or False: It is possible to load data into Snowflake without creating a named File Format object.
- A . True
- B . False
A
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/data-load-external-tutorial-create-file-format.html
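Instead of a named File Format object, COPY accepts an inline file format specification. A sketch with hypothetical table and stage names:

```sql
COPY INTO mydb.myschema.my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'CSV' FIELD_DELIMITER = ',' SKIP_HEADER = 1);
```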
True or False: A table in Snowflake can only be queried using the Virtual Warehouse that was used to load the data.
- A . True
- B . False
True or False: A single database can exist in more than one Snowflake account.
- A . True
- B . False
B
Explanation:
Reference: https://docs.snowflake.com/en/sql-reference/ddl-database.html
True or False: Some queries can be answered through the metadata cache and do not require an active Virtual Warehouse.
- A . True
- B . False
A
Explanation:
Some aggregate queries can be answered through micro-partition metadata alone, without requiring a virtual warehouse to be running.
Which interfaces can be used to create and/or manage Virtual Warehouses?
- A . The Snowflake Web Interface (UI)
- B . SQL commands
- C . Data integration tools
- D . All of the above
D
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses.html
When can a Virtual Warehouse start running queries?
- A . 12am-5am
- B . Only during administrator defined time slots
- C . When its provisioning is complete
- D . After replication
C
Explanation:
https://docs.snowflake.com/en/user-guide/warehouses-overview.html
Which of the following DML commands isn’t supported by Snowflake?
- A . UPSERT
- B . MERGE
- C . UPDATE
- D . TRUNCATE TABLE
A
Explanation:
Reference: https://docs.snowflake.com/en/sql-reference/sql-dml.html
Which of the following are true of multi-cluster Warehouses? Select all that apply below.
- A . A multi-cluster Warehouse can add clusters automatically based on query activity
- B . A multi-cluster Warehouse can automatically turn itself off after a period of inactivity
- C . A multi-cluster Warehouse can scale down when query activity slows
- D . A multi-cluster Warehouse can automatically turn itself on when a query is executed against it
Which of the following statement is true of Snowflake?
- A . It was built specifically for the cloud
- B . It was built as an on-premises solution and then ported to the cloud
- C . It was designed as a hybrid database to allow customers to store data either on premises or in the cloud
- D . It was built for Hadoop architecture
- E . It’s based on an Oracle Architecture
True or False: Pipes can be suspended and resumed.
- A . True
- B . False
A
Explanation:
https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro.html#pausing-or-resuming-pipes
In addition to the pipe owner, a role that holds the minimum required privileges on the pipe can pause or resume it.
Which of the following connectors allow Multi-Factor Authentication (MFA) authorization when connecting? (Choose all that apply.)
- A . JDBC
- B . SnowSQL
- C . Snowflake Web Interface (UI)
- D . ODBC
- E . Python
A,B,C,D,E
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/security-mfa.html
Credit Consumption by the Compute Layer (Virtual Warehouses) is based on: (Choose two.)
- A . Number of users
- B . Warehouse size
- C . Amount of data processed
- D . # of clusters for the Warehouse
B,D
Explanation:
https://docs.snowflake.com/en/user-guide/credits.html#virtual-warehouse-credit-usage "Snowflake credits are charged based on the number of virtual warehouses you use, how long they run, and their size."
The Query History in the Snowflake Web Interface (UI) is kept for approximately:
- A . 60 minutes
- B . 24 hours
- C . 14 days
- D . 30 days
- E . 1 year
C
Explanation:
Reference: https://docs.snowflake.com/en/sql-reference/account-usage/query_history.html
A role is created and owns 2 tables. This role is then dropped. Who will now own the two tables?
- A . The tables are now orphaned
- B . The user that deleted the role
- C . SYSADMIN
- D . The assumed role that dropped the role
D
Explanation:
When a role is dropped in Snowflake, ownership of the objects it owned (such as these two tables) is transferred to the role that executed the DROP ROLE command. The tables are not orphaned; the new owning role can subsequently reassign ownership to another role if desired.
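Transferring table ownership to another role can be sketched as follows (hypothetical table and role names):

```sql
-- Reassign ownership of the two tables to an existing role.
GRANT OWNERSHIP ON TABLE mydb.myschema.t1 TO ROLE sysadmin;
-- COPY CURRENT GRANTS preserves outstanding privileges on the object:
GRANT OWNERSHIP ON TABLE mydb.myschema.t2 TO ROLE sysadmin COPY CURRENT GRANTS;
```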
Which of the following statements are true about Schemas in Snowflake? (Choose two.)
- A . A Schema may contain one or more Databases
- B . A Database may contain one or more Schemas
- C . A Schema is a logical grouping of Database Objects
- D . Each Schema is contained within a Warehouse
B,C
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/data-sharing-mutiple-db.html
When scaling out by adding clusters to a multi-cluster warehouse, you are primarily scaling for improved:
- A . Concurrency
- B . Performance
A
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-multicluster.html
What are the three layers that make up Snowflake’s architecture? Choose 3 answer
- A . Compute
- B . Tri-Secret Secure
- C . Storage
- D . Cloud Services
A,C,D
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/intro-key-concepts.html
Which of the following objects can be cloned? (Choose four.)
- A . Tables
- B . Named File Formats
- C . Schemas
- D . Shares
- E . Databases
- F . Users
A,B,C,E
Explanation:
The following objects can be cloned in Snowflake:
Tables: CREATE TABLE … CLONE creates a new table with the same structure and data as the source table at the time of cloning.
Named File Formats: CREATE FILE FORMAT … CLONE creates a copy of an existing file format object.
Schemas: CREATE SCHEMA … CLONE creates a new schema with the same objects as the source schema at the time of cloning.
Databases: CREATE DATABASE … CLONE creates a new database with the same objects as the source database at the time of cloning.
Shares (D) and Users (F) cannot be cloned.
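Zero-copy cloning can be sketched with hypothetical object names:

```sql
-- Each statement creates a clone without physically copying data.
CREATE TABLE orders_dev CLONE orders;
CREATE SCHEMA staging_dev CLONE staging;
CREATE DATABASE proddb_dev CLONE proddb;
CREATE FILE FORMAT my_csv_dev CLONE my_csv;
-- There is no CREATE USER ... CLONE or CREATE SHARE ... CLONE.
```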
True or False: An active warehouse is required to run a COPY INTO statement.
- A . True
- B . False
To run a Multi-Cluster Warehouse in auto-scale mode, a user would:
- A . Configure the Maximum Clusters setting to “Auto-Scale”
- B . Set the Warehouse type to “Auto”
- C . Set the Minimum Clusters and Maximum Clusters settings to the same value
- D . Set the Minimum Clusters and Maximum Clusters settings to the different values
D
Explanation:
Reference: https://help.pentaho.com/Documentation/9.1/Products/Modify_Snowflake_warehouse
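Auto-scale mode is enabled by setting different minimum and maximum cluster counts; equal values put the warehouse in maximized mode instead. A sketch with a hypothetical warehouse name:

```sql
CREATE WAREHOUSE scale_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1      -- different values => auto-scale mode
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';
```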
Which of the following statements describes a benefit of Snowflake’s separation of compute and storage? (Choose all that apply.)
- A . Growth of storage and compute are tightly coupled together
- B . Storage expands without the requirement to add more compute
- C . Compute can be scaled up or down without the requirement to add more storage
- D . Multiple compute clusters can access stored data without contention
B,C,D
Explanation:
Reference: https://towardsdatascience.com/why-you-are-throwing-money-away-if-your-cloud-data-warehousedoesnt-separate-storage-and-compute-65d2dffd450f
Which of the following roles is recommended to be used to create and manage users and roles?
- A . SYSADMIN
- B . SECURITYADMIN
- C . PUBLIC
- D . ACCOUNTADMIN
B
Explanation:
https://docs.snowflake.com/en/user-guide/security-access-control-overview.html "Security admin: Role that can manage any object grant globally, as well as create, monitor, and manage users and roles"
Which of the following are valid Snowflake Virtual Warehouse Scaling Policies? (Choose two.)
- A . Custom
- B . Economy
- C . Optimized
- D . Standard
B,D
Explanation:
Reference: https://community.snowflake.com/s/article/Snowflake-Visualizing-Warehouse-Performance
Which of the following statements is true of Snowflake?
- A . It was built specifically for the cloud
- B . It was built as an on-premises solution and then ported to the cloud
- C . It was designed as a hybrid database to allow customers to store data either on premises or in the cloud
- D . It was built for Hadoop architecture
- E . It’s based on an Oracle Architecture
A
Explanation:
Reference: https://www.stitchdata.com/resources/snowflake/
True or False: It is possible to unload structured data to semi-structured formats such as JSON and parquet.
- A . True
- B . False
A
Explanation:
Snowflake can unload structured (relational) data directly to semi-structured formats. The COPY INTO <location> command supports CSV/TSV, JSON, and Parquet as output file formats. For JSON, rows are typically converted to VARIANT first (e.g. with OBJECT_CONSTRUCT) in the SELECT that feeds the unload.
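Unloading to Parquet and JSON can be sketched with hypothetical table and stage names:

```sql
-- Unload directly to Parquet:
COPY INTO @my_stage/parquet_out/
FROM mydb.myschema.my_table
FILE_FORMAT = (TYPE = 'PARQUET');

-- For JSON, convert rows to VARIANT in the feeding SELECT:
COPY INTO @my_stage/json_out/
FROM (SELECT OBJECT_CONSTRUCT(*) FROM mydb.myschema.my_table)
FILE_FORMAT = (TYPE = 'JSON');
```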
Which of the following terms best describes Snowflake’s database architecture?
- A . Columnar shared nothing
- B . Shared disk
- C . Multi-cluster, shared data
- D . Cloud-native shared memory
C
Explanation:
https://www.snowflake.com/product/architecture/
Built from the ground up for the cloud, Snowflake’s unique multi-cluster shared data architecture delivers the performance, scale, elasticity, and concurrency today’s organizations require.
Which of the following objects is not covered by Time Travel?
- A . Tables
- B . Schemas
- C . Databases
- D . Stages
D
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/data-time-travel.html
Which of the following statements would be used to export/unload data from Snowflake?
- A . COPY INTO @stage
- B . EXPORT TO @stage
- C . INSERT INTO @stage
- D . EXPORT_TO_STAGE(stage => @stage, select => ‘select * from t1’);
A
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/data-unload-considerations.html
Which of the following are options when creating a Virtual Warehouse?
- A . Auto-suspend
- B . Auto-resume
- C . Local SSD size
- D . User count
True or False: The COPY command must specify a File Format in order to execute.
- A . True
- B . False
B
Explanation:
Create Stage:
https://docs.snowflake.com/en/sql-reference/sql/create-stage.html
Create Table (STAGE_FILE_FORMAT option):
https://docs.snowflake.com/en/sql-reference/sql/create-table.htmlCopy Into:
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html
True or False: Bulk unloading of data from Snowflake supports the use of a SELECT statement.
- A . True
- B . False
A
Explanation:
Reference: https://docs.snowflake.com/en/user-guide-data-unload.html
Which of the following statements are true of VALIDATION_MODE in Snowflake? (Choose two.)
- A . The validation_mode option is used when creating an Internal Stage
- B . validation_mode=return_all_errors is a parameter of the copy command
- C . The validation_mode option will validate data to be loaded by the copy statement while completing the load and will return the rows that could not be loaded without error
- D . The validation_mode option will validate data to be loaded by the copy statement without completing the load and will return possible errors
B,D
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/data-load-bulk-ts.html
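A dry-run load can be sketched as follows (hypothetical table and stage names); VALIDATION_MODE checks the files and reports errors without loading any rows:

```sql
COPY INTO mydb.myschema.my_table
FROM @my_stage/data/
FILE_FORMAT = (TYPE = 'CSV')
VALIDATION_MODE = RETURN_ALL_ERRORS;
```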
True or False: You can query the files in an External Stage directly without having to load the data into a table.
- A . True
- B . False
A
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/tables-external-intro.html
External tables are read-only, therefore no DML operations can be performed on them; however, external tables can be used for query and join operations. Views can be created against external tables.
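Querying staged files in place uses positional column references; a sketch with hypothetical stage, path, and file format names:

```sql
-- $1, $2, ... refer to columns within the staged file.
SELECT $1, $2
FROM @my_ext_stage/data/part1.csv (FILE_FORMAT => 'my_csv_format')
LIMIT 10;
```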
True or False: Snowflake’s data warehouse was built from the ground up for the cloud in lieu of using an existing database or a platform, like Hadoop, as a base.
- A . True
- B . False
A
Explanation:
Snowflake’s data warehouse was built from the ground up specifically for the cloud, instead of using an existing database or platform like Hadoop as a base. This design choice allowed Snowflake to create a unique architecture that takes advantage of the elasticity, scalability, and flexibility of cloud resources, resulting in a fully managed, highly performant, and cost-effective data warehouse solution. Snowflake’s multi-cluster, shared data architecture separates compute and storage resources, allowing for independent scaling and optimized performance for various workloads and use cases.
Snowflake is designed for which type of workloads? (Choose two.)
- A . OLAP (Analytics) workloads
- B . OLTP (Transactional) workloads
- C . Concurrent workloads
- D . On-premise workloads
A,C
Explanation:
Reference:
https://blog.couchbase.com/its-the-workload-stupid/
https://www.quora.com/Can-Snowflake-be-used-for-an-OLTP-system-or-is-it-only-best-suited-for-warehousing
What are the three things customers want most from their enterprise data warehouse solution? Choose 3 answers
- A . On-premise availability
- B . Simplicity
- C . Open source based
- D . Concurrency
- E . Performance
When reviewing a query profile, what is a symptom that a query is too large to fit into memory?
- A . A single join node uses more than 50% of the query time
- B . Partitions scanned is equal to partitions total
- C . An AggregateOperator node is present
- D . The query is spilling to remote storage
Which of the following are valid methods for authenticating users for access into Snowflake? (Select THREE)
- A . SCIM
- B . Federated authentication
- C . TLS 1.2
- D . Key-pair authentication
- E . OAuth
- F . OCSP authentication
A developer is granted ownership of a table that has a masking policy. The developer’s role is not able to see the masked data.
Will the developer be able to modify the table to read the masked data?
- A . Yes, because a table owner has full control and can unset masking policies.
- B . Yes, because masking policies only apply to cloned tables.
- C . No, because masking policies must always reference specific access roles.
- D . No, because ownership of a table does not include the ability to change masking policies
A virtual warehouse’s auto-suspend and auto-resume settings apply to which of the following?
- A . The primary cluster in the virtual warehouse
- B . The entire virtual warehouse
- C . The database in which the virtual warehouse resides
- D . The Queries currently being run on the virtual warehouse
B
Explanation:
https://docs.snowflake.com/en/user-guide/warehouses-overview.html
When reviewing the load for a warehouse using the load monitoring chart, the chart indicates that a high volume of queries are always queuing in the warehouse.
According to recommended best practice, what should be done to reduce the Queue volume? (Select TWO).
- A . Use multi-clustered warehousing to scale out warehouse capacity.
- B . Scale up the warehouse size to allow Queries to execute faster.
- C . Stop and start the warehouse to clear the queued queries
- D . Migrate some queries to a new warehouse to reduce load
- E . Limit user access to the warehouse so fewer queries are run against it.
Which ACCOUNT_USAGE views are used to evaluate the details of dynamic data masking? (Select TWO)
- A . ROLES
- B . POLICY_REFERENCES
- C . QUERY_HISTORY
- D . RESOURCE_MONITORS
- E . ACCESS_HISTORY
B,E
Explanation:
To evaluate the details of dynamic data masking in Snowflake, you can use the following account usage views:
POLICY_REFERENCES: This view provides information about the masking policies applied to various objects (tables, columns, etc.) within the account. It includes details like policy name, object name, and the masking policy expression.
ACCESS_HISTORY: This view provides a historical record of access attempts to objects (tables, columns, etc.) in the account, including whether the access was allowed or blocked, and if masking was applied during the access. It can help you understand how data masking policies are being enforced and identify any potential issues with the masking configuration.
How would you determine the size of the virtual warehouse used for a task?
- A . Root task may be executed concurrently (i.e. multiple instances), it is recommended to leave some margins in the execution window to avoid missing instances of execution
- B . Querying (select) the size of the stream content would help determine the warehouse size. For example, if querying large stream content, use a larger warehouse size
- C . If using the stored procedure to execute multiple SQL statements, it’s best to test run the stored procedure separately to size the compute resource first
- D . Since task infrastructure is based on running the task body on schedule, it’s recommended to configure the virtual warehouse for automatic concurrency handling using Multi-cluster warehouse (MCW) to match the task schedule
Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).
- A . Load files that are approximately 25 MB or smaller.
- B . Remove all dates and timestamps.
- C . Load files that are approximately 100-250 MB (or larger)
- D . Avoid using embedded characters such as commas for numeric data types
- E . Remove semi-structured data types
C,D
Explanation:
https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html
A company’s security audit requires generating a report listing all Snowflake logins (e.g.. date and user) within the last 90 days.
Which of the following statements will return the required information?
- A . SELECT LAST_SUCCESS_LOGIN, LOGIN_NAME FROM ACCOUNT_USAGE.USERS;
- B . SELECT EVENT_TIMESTAMP, USER_NAME FROM table(information_schema.login_history_by_user())
- C . SELECT EVENT_TIMESTAMP, USER_NAME FROM ACCOUNT_USAGE.ACCESS_HISTORY;
- D . SELECT EVENT_TIMESTAMP, USER_NAME FROM ACCOUNT_USAGE.LOGIN_HISTORY;
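The ACCOUNT_USAGE.LOGIN_HISTORY view retains roughly a year of login events, so a 90-day report can be sketched as:

```sql
SELECT event_timestamp, user_name
FROM snowflake.account_usage.login_history
WHERE event_timestamp >= DATEADD(day, -90, CURRENT_TIMESTAMP());
```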
What feature can be used to reorganize a very large table on one or more columns?
- A . Micro-partitions
- B . Clustering keys
- C . Key partitions
- D . Clustered partitions
B
Explanation:
https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html
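Defining or changing a clustering key can be sketched with a hypothetical table:

```sql
-- Cluster on the column(s) most often used in range/equality filters.
CREATE TABLE events (event_date DATE, id INT, payload VARIANT)
  CLUSTER BY (event_date);

-- The key can be changed later on an existing table:
ALTER TABLE events CLUSTER BY (event_date, id);
```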
In the query profiler view for a query, which components represent areas that can be used to help optimize query performance? (Select TWO)
- A . Bytes scanned
- B . Bytes sent over the network
- C . Number of partitions scanned
- D . Percentage scanned from cache
- E . External bytes scanned
What is a machine learning and data science partner within the Snowflake Partner Ecosystem?
- A . Informatica
- B . Power BI
- C . Adobe
- D . DataRobot
D
Explanation:
https://docs.snowflake.com/en/user-guide/ecosystem-analytics.html
Which command can be used to load data into an internal stage?
- A . LOAD
- B . COPY
- C . GET
- D . PUT
D
Explanation:
https://medium.com/@divyanshsaxenaofficial/snowflake-loading-unloading-of-data-part-1-internal-stages-7121cc3cc9
A user is loading JSON documents composed of a huge array containing multiple records into Snowflake. The user enables the STRIP_OUTER_ARRAY file format option.
What does the STRIP_OUTER_ARRAY file format do?
- A . It removes the last element of the outer array.
- B . It removes the outer array structure and loads the records into separate table rows,
- C . It removes the trailing spaces in the last element of the outer array and loads the records into separate table columns
- D . It removes the NULL elements from the JSON object eliminating invalid data and enables the ability to load the records
B
Explanation:
Data Size Limitations
The VARIANT data type imposes a 16 MB size limit on individual rows.
For some semi-structured data formats (e.g. JSON), data sets are frequently a simple concatenation of multiple documents. The JSON output from some software is composed of a single huge array containing multiple records. There is no need to separate the documents with line breaks or commas, though both are supported.
If the data exceeds 16 MB, enable the STRIP_OUTER_ARRAY file format option for the COPY INTO <table> command to remove the outer array structure and load the records into separate table rows:
copy into <table>
from @~/<file>.json
file_format = (type = 'JSON' strip_outer_array = true);
https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
Which COPY INTO option outputs the data into one file?
- A . SINGLE=TRUE
- B . MAX_FILE_NUMBER=1
- C . FILE_NUMBER=1
- D . MULTIPLE=FALSE
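Of the options above, SINGLE = TRUE is the documented copy option for producing one output file. A sketch with hypothetical names (MAX_FILE_SIZE may need raising for large results):

```sql
COPY INTO @my_stage/out/result.csv
FROM mydb.myschema.my_table
FILE_FORMAT = (TYPE = 'CSV')
SINGLE = TRUE;
```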
What happens when a virtual warehouse is resized?
- A . When increasing the size of an active warehouse, the compute resources for all running and queued queries on the warehouse are affected
- B . When reducing the size of a warehouse, the compute resources are removed only when they are no longer being used to execute any current statements.
- C . The warehouse will be suspended while the new compute resources are provisioned and will resume automatically once provisioning is complete.
- D . Users who are trying to use the warehouse will receive an error message until the resizing is complete
B
Explanation:
Resizing a running warehouse does not interrupt statements that are already executing. When a warehouse is resized to a smaller size, compute resources are removed only once they are no longer executing any current statements.
https://docs.snowflake.com/en/user-guide/warehouses-tasks.html
What happens to the underlying table data when a CLUSTER BY clause is added to a Snowflake table?
- A . Data is hashed by the cluster key to facilitate fast searches for common data values
- B . Larger micro-partitions are created for common data values to reduce the number of partitions that must be scanned
- C . Smaller micro-partitions are created for common data values to allow for more parallelism
- D . Data may be colocated by the cluster key within the micro-partitions to improve pruning performance
D
Explanation:
When a clustering key is defined, Snowflake reclusters the table so that rows with similar cluster key values are co-located within the same micro-partitions. Queries that filter on the clustering key can then skip micro-partitions whose value ranges do not match, improving pruning performance.
https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html
A user has unloaded data from Snowflake to a stage.
Which SQL command should be used to validate which data was unloaded to the stage?
- A . list @file_stage
- B . show @file_stage
- C . view @file_stage
- D . verify @file_stage
A
Explanation:
The LIST command returns the files that exist in a stage, including their paths, sizes, and MD5 checksums, so it can be used to verify which data files were unloaded. SHOW, VIEW, and VERIFY cannot be used with a stage reference in this way.
https://docs.snowflake.com/en/sql-reference/sql/list.html
A user has an application that writes a new file to a cloud storage location every 5 minutes.
What would be the MOST efficient way to get the files into Snowflake?
- A . Create a task that runs a copy into operation from an external stage every 5 minutes
- B . Create a task that puts the files in an internal stage and automate the data loading wizard
- C . Create a task that runs a GET operation to intermittently check for new files
- D . Set up cloud provider notifications on the file location and use Snowpipe with auto-ingest
D
Explanation:
https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro.html
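A minimal sketch of the auto-ingest pattern (the pipe, table, and stage names here are hypothetical):

```sql
-- A pipe with AUTO_INGEST = TRUE loads new files as soon as the cloud
-- provider's event notification (e.g. S3 -> SQS) announces their arrival;
-- no scheduled task or manual COPY is needed.
CREATE PIPE my_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO raw_events
  FROM @my_ext_stage
  FILE_FORMAT = (TYPE = 'JSON');
```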
What is the default File Format used in the COPY command if one is not specified?
- A . CSV
- B . JSON
- C . Parquet
- D . XML
A
Explanation:
Reference: https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html
True or False: A Virtual Warehouse can be resized while suspended.
- A . True
- B . False
A
Explanation:
Reference: https://docs.snowflake.com/en/user-guide/warehouses-tasks.html#effects-of-resizing-a-suspended-warehouse
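For instance (the warehouse name is hypothetical), a suspended warehouse can be resized, and the new size takes effect when the warehouse next resumes:

```sql
-- Resizing a suspended warehouse is a metadata-only change;
-- compute resources at the new size are provisioned on resume.
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'MEDIUM';
ALTER WAREHOUSE my_wh RESUME;
```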
What features does Snowflake Time Travel enable?
- A . Querying data-related objects that were created within the past 365 days
- B . Restoring data-related objects that have been deleted within the past 90 days
- C . Conducting point-in-time analysis for BI reporting
- D . Analyzing data usage/manipulation over all periods of time
C
Explanation:
Snowflake Time Travel is a feature that enables you to access historical data within a specified period, known as the data retention period. It allows you to query data as it existed at a specific point in time, which can be useful for point-in-time analysis, BI reporting, or data recovery in case of accidental data changes or deletions.
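As an illustration (the table name is hypothetical), Time Travel is exposed through the AT and BEFORE clauses and the UNDROP command:

```sql
-- Query the table as it existed one hour ago (within the retention period).
SELECT *
FROM orders AT(OFFSET => -60 * 60);

-- Restore a table that was dropped within the retention period.
UNDROP TABLE orders;
```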
In which use cases does Snowflake apply egress charges?
- A . Data sharing within a specific region
- B . Query result retrieval
- C . Database replication
- D . Loading data into Snowflake
B
Explanation:
Snowflake applies egress charges in cases where data is transferred out of Snowflake’s infrastructure, such as when retrieving query results. These charges are typically imposed by the cloud service provider (e.g., AWS, Azure, or Google Cloud Platform) for data transfer out of their respective data centers.
Which feature is only available in the Enterprise or higher editions of Snowflake?
- A . Column-level security
- B . SOC 2 type II certification
- C . Multi-factor Authentication (MFA)
- D . Object-level access control
A
Explanation:
https://docs.snowflake.com/en/user-guide/intro-editions.html
Which of the following is a valid source for an external stage when the Snowflake account is located on Microsoft Azure?
- A . An FTP server with TLS encryption
- B . An HTTPS server with WebDAV
- C . A Google Cloud storage bucket
- D . A Windows server file share on Azure
Which statement about billing applies to Snowflake credits?
- A . Credits are billed per-minute with a 60-minute minimum
- B . Credits are used to pay for cloud data storage usage
- C . Credits are consumed based on the number of credits billed for each hour that a warehouse runs
- D . Credits are consumed based on the warehouse size and the time the warehouse is running
D
Explanation:
Snowflake credits are used to pay for the consumption of resources on Snowflake. A Snowflake credit is a unit of measure, and it is consumed only when a customer is using resources, such as when a virtual warehouse is running, the cloud services layer is performing work, or serverless features are used. https://docs.snowflake.com/en/user-guide/what-are-credits.html
Which Snowflake technique can be used to improve the performance of a query?
- A . Clustering
- B . Indexing
- C . Fragmenting
- D . Using INDEX_HINTS
A
Explanation:
https://docs.snowflake.com/en/user-guide/tables-clustering-keys.html
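A brief sketch (table and column names hypothetical) of defining a clustering key and checking its effect:

```sql
-- Co-locate rows by event_date so queries filtering on it
-- can prune micro-partitions.
ALTER TABLE events CLUSTER BY (event_date);

-- Inspect how well the table is clustered on that key.
SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date)');
```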
What are the default Time Travel and Fail-safe retention periods for transient tables?
- A . Time Travel – 1 day. Fail-safe – 1 day
- B . Time Travel – 0 days. Fail-safe – 1 day
- C . Time Travel – 1 day. Fail-safe – 0 days
- D . Transient tables are retained in neither Fail-safe nor Time Travel
C
Explanation:
Transient tables have a default Time Travel retention period of 1 day (which can be set to 0) and no Fail-safe period. Once data ages out of the Time Travel window, it cannot be recovered, because Fail-safe does not apply to transient tables.
https://docs.snowflake.com/en/user-guide/tables-temp-transient.html
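For example (the table name is hypothetical):

```sql
-- Transient tables support Time Travel (0 or 1 day) but never Fail-safe,
-- making them cheaper for staging and intermediate data.
CREATE TRANSIENT TABLE staging_load (
  id   NUMBER,
  body VARIANT
)
DATA_RETENTION_TIME_IN_DAYS = 1;  -- the maximum for a transient table
```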
Which of the following are benefits of micro-partitioning? (Select TWO)
- A . Micro-partitions cannot overlap in their range of values
- B . Micro-partitions are immutable objects that support the use of Time Travel.
- C . Micro-partitions can reduce the amount of I/O from object storage to virtual warehouses
- D . Rows are automatically stored in sorted order within micro-partitions
- E . Micro-partitions can be defined on a schema-by-schema basis
B C
Explanation:
Micro-partitioning is a key feature of Snowflake’s architecture, providing several benefits:
Micro-partitions are immutable objects that support the use of Time Travel. Each time data is loaded or modified, new micro-partitions are created, preserving the previous versions of data to enable Time Travel and historical data querying.
Micro-partitions can reduce the amount of I/O from object storage to virtual warehouses. Snowflake’s query optimizer leverages metadata about micro-partitions, such as the range of values within them, to prune unnecessary micro-partitions from query scans. This reduces the amount of data read and improves query performance.
Which of the following conditions must be met in order to return results from the results cache? (Select TWO).
- A . The user has the appropriate privileges on the objects associated with the query
- B . Micro-partitions have been reclustered since the query was last run
- C . The new query is run using the same virtual warehouse as the previous query
- D . The query includes a User Defined Function (UDF)
- E . The query has been run within 24 hours of the previously-run query
A E
Explanation:
The results cache can be reused only when the user still has the required privileges on the objects referenced by the query and the result has not expired (results are retained for 24 hours, extended each time they are reused). The cache is independent of the virtual warehouse, but it is invalidated when the underlying data changes (e.g. after reclustering), and queries containing UDFs or non-deterministic functions are not served from it.
https://docs.snowflake.com/en/user-guide/querying-persisted-results.html
What happens when a cloned table is replicated to a secondary database? (Select TWO)
- A . A read-only copy of the cloned tables is stored.
- B . The replication will not be successful.
- C . The physical data is replicated
- D . Additional costs for storage are charged to a secondary account
- E . Metadata pointers to cloned tables are replicated
C D
Explanation:
Unlike a clone in the primary database, which shares micro-partitions with its source table, a cloned table is replicated physically: the secondary database receives its own copy of the data rather than metadata pointers. Because the replicated clone no longer shares storage with its source table, additional storage costs are incurred in the secondary account.
https://docs.snowflake.com/en/user-guide/database-replication-considerations.html
What can be used to view warehouse usage over time? (Select TWO).
- A . The LOAD_HISTORY view
- B . The QUERY_HISTORY view
- C . The SHOW WAREHOUSES command
- D . The WAREHOUSE_METERING_HISTORY view
- E . The Billing & Usage tab in the Snowflake web UI
D E
Explanation:
The WAREHOUSE_METERING_HISTORY view returns hourly credit consumption per warehouse, and the Billing & Usage tab in the web UI presents the same usage graphically over time. LOAD_HISTORY, QUERY_HISTORY, and SHOW WAREHOUSES do not report warehouse credit usage over time.
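A sketch of querying the account-usage view (requires access to the shared SNOWFLAKE database; up to 1 year of history, with some latency):

```sql
-- Credits consumed per warehouse per day over the last week.
SELECT warehouse_name,
       DATE_TRUNC('day', start_time) AS usage_day,
       SUM(credits_used)             AS credits
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY 1, 2
ORDER BY 1, 2;
```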
Will data cached in a warehouse be lost when the warehouse is resized?
- A . Possibly, if the warehouse is resized to a smaller size and the cache no longer fits.
- B . Yes, because the compute resource is replaced in its entirety with a new compute resource.
- C . No, because the size of the cache is independent from the warehouse size.
- D . Yes, because the new compute resource will no longer have access to the cache encryption key.
A
Explanation:
The local disk cache lives on the warehouse's compute resources. Increasing the size of a warehouse adds resources (and cache capacity) without discarding what is already cached, but decreasing the size removes compute resources along with their portion of the cache, so some cached data may be lost.