Latest Databricks-Certified-Professional-Data-Engineer Free Dumps - Databricks Certified Professional Data Engineer

A Delta Lake table representing metadata about content posts from users has the following schema:
* user_id LONG
* post_text STRING
* post_id STRING
* longitude FLOAT
* latitude FLOAT
* post_time TIMESTAMP
* date DATE
Based on the above schema, which column is a good candidate for partitioning the Delta Table?

Answer: A
Explanation: (visible to DumpTOP members only)
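Of the columns above, the low-cardinality `date` column is the usual partitioning choice; a high-cardinality key such as `post_id` would produce one tiny partition per value. A minimal pure-Python sketch (the sample rows are hypothetical) comparing candidate cardinalities:

```python
# Hypothetical sample of post-metadata rows, used only to illustrate
# why a low-cardinality column like `date` partitions well.
posts = [
    {"post_id": "p1", "user_id": 1, "date": "2024-01-01"},
    {"post_id": "p2", "user_id": 2, "date": "2024-01-01"},
    {"post_id": "p3", "user_id": 1, "date": "2024-01-02"},
    {"post_id": "p4", "user_id": 3, "date": "2024-01-02"},
]

# Distinct values per candidate column: fewer distinct values means
# fewer, larger partitions -- the shape Delta Lake partitioning prefers.
cardinality = {
    col: len({row[col] for row in posts})
    for col in ("post_id", "user_id", "date")
}
print(cardinality)  # {'post_id': 4, 'user_id': 3, 'date': 2}
```

The same reasoning rules out `post_text`, `longitude`, and `latitude`, which are effectively unique per row.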
A Databricks SQL dashboard has been configured to monitor the total number of records present in a collection of Delta Lake tables using the following query pattern:
SELECT COUNT(*) FROM table
Which of the following describes how results are generated each time the dashboard is updated?

Answer: E
Explanation: (visible to DumpTOP members only)
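Delta Lake can often answer a bare `COUNT(*)` from per-file statistics recorded in the transaction log rather than scanning the data files. A hedged pure-Python sketch (the file entries and `numRecords` values are illustrative, mirroring the per-file stats Delta writes):

```python
# Illustrative transaction-log entries: each data file carries a
# numRecords statistic, mirroring what Delta Lake records on write.
log_entries = [
    {"path": "part-0000.parquet", "stats": {"numRecords": 1_000}},
    {"path": "part-0001.parquet", "stats": {"numRecords": 2_500}},
    {"path": "part-0002.parquet", "stats": {"numRecords": 500}},
]

# COUNT(*) can be served by summing the per-file stats -- no data
# files need to be opened or scanned.
total = sum(entry["stats"]["numRecords"] for entry in log_entries)
print(total)  # 4000
```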
An upstream system has been configured to pass the date for a given batch of data to the Databricks Jobs API as a parameter. The notebook to be scheduled will use this parameter to load data with the following code:
df = spark.read.format("parquet").load(f"/mnt/source/{date}")
Which code block should be used to create the date Python variable used in the above code block?

Answer: C
Explanation: (visible to DumpTOP members only)
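In a Databricks notebook, job parameters are typically surfaced through widgets, e.g. `date = dbutils.widgets.get("date")`. A local stand-in sketch of how the variable then feeds the f-string path (the value is hypothetical, and `dbutils` only exists inside Databricks):

```python
# Inside a Databricks notebook this would be:
#   date = dbutils.widgets.get("date")
# Here a stand-in value is used so the snippet runs anywhere.
date = "2024-06-01"

# The f-string interpolates the parameter into the source path.
path = f"/mnt/source/{date}"
print(path)  # /mnt/source/2024-06-01
```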
A production cluster has 3 executor nodes and uses the same virtual machine type for the driver and executors.
When evaluating the Ganglia Metrics for this cluster, which indicator would signal a bottleneck caused by code executing on the driver?

Answer: A
Explanation: (visible to DumpTOP members only)
Which of the following is true of Delta Lake and the Lakehouse?

Answer: B
Explanation: (visible to DumpTOP members only)
A junior data engineer is migrating a workload from a relational database system to the Databricks Lakehouse. The source system uses a star schema, leveraging foreign key constraints and multi-table inserts to validate records on write.
Which consideration will impact the decisions made by the engineer while migrating this workload?

Answer: A
Explanation: (visible to DumpTOP members only)
A Databricks job has been configured with 3 tasks, each of which is a Databricks notebook. Task A does not depend on other tasks. Tasks B and C run in parallel, with each having a serial dependency on task A.
If tasks A and B complete successfully but task C fails during a scheduled run, which statement describes the resulting state?

Answer: B
Explanation: (visible to DumpTOP members only)
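The dependency structure described above can be expressed in a Jobs API payload: tasks B and C each list A under `depends_on`, so they start in parallel once A succeeds. A hedged sketch of the relevant fragment as a Python dict (task keys and notebook paths are hypothetical):

```python
# Hypothetical Jobs API 2.1-style task list: B and C both depend on A,
# so they run in parallel once A succeeds.
job_tasks = {
    "tasks": [
        {"task_key": "A",
         "notebook_task": {"notebook_path": "/Repos/etl/a"}},
        {"task_key": "B",
         "notebook_task": {"notebook_path": "/Repos/etl/b"},
         "depends_on": [{"task_key": "A"}]},
        {"task_key": "C",
         "notebook_task": {"notebook_path": "/Repos/etl/c"},
         "depends_on": [{"task_key": "A"}]},
    ]
}

# If C fails while A and B succeed, only C's branch is affected;
# the completed results of A and B stand.
parents_of_c = [d["task_key"] for t in job_tasks["tasks"]
                if t["task_key"] == "C" for d in t.get("depends_on", [])]
print(parents_of_c)  # ['A']
```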
A junior data engineer is working to implement logic for a Lakehouse table named silver_device_recordings.
The source data contains 100 unique fields in a highly nested JSON structure.
The silver_device_recordings table will be used downstream to power several production monitoring dashboards and a production model. At present, 45 of the 100 fields are being used in at least one of these applications.
The data engineer is trying to determine the best approach for dealing with schema declaration given the highly-nested structure of the data and the numerous fields.
Which of the following accurately presents information about Delta Lake and Databricks that may impact their decision-making process?

Answer: D
Explanation: (visible to DumpTOP members only)
The data science team has created and logged a production model using MLflow. The model accepts a list of column names and returns a new column of type DOUBLE.
The following code correctly imports the production model, loads the customers table containing the customer_id key column into a DataFrame, and defines the feature columns needed for the model.

Which code block will output a DataFrame with the schema "customer_id LONG, predictions DOUBLE"?

Answer: D
Explanation: (visible to DumpTOP members only)
The data architect has mandated that all tables in the Lakehouse should be configured as external (also known as "unmanaged") Delta Lake tables.
Which approach will ensure that this requirement is met?

Answer: A
Explanation: (visible to DumpTOP members only)
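An external (unmanaged) Delta table is declared by supplying an explicit `LOCATION` clause, so dropping the table leaves the underlying files in place. A hedged sketch of the DDL (the table name and path are hypothetical):

```python
# Hypothetical DDL for an external Delta table: the LOCATION clause is
# what makes the table unmanaged -- data lives at the given path, not
# in the metastore-managed storage root.
ddl = """
CREATE TABLE sales
USING DELTA
LOCATION '/mnt/external/sales'
"""

# Inside a Databricks notebook this would be executed as:
#   spark.sql(ddl)
```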
