Latest DP-300 Free Dumps - Microsoft Administering Relational Databases on Microsoft Azure
Hotspot Question
You have an Azure Data Lake Storage Gen2 account named account1 that stores logs as shown in the following table.

You do not expect that the logs will be accessed during the retention periods.
You need to recommend a solution for account1 that meets the following requirements:
- Automatically deletes the logs at the end of each retention period
- Minimizes storage costs
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: Store the infrastructure logs in the Cool access tier and the application logs in the Archive access tier.
Hot - Optimized for storing data that is accessed frequently.
Cool - Optimized for storing data that is infrequently accessed and stored for at least 30 days.
Archive - Optimized for storing data that is rarely accessed and stored for at least 180 days with flexible latency requirements, on the order of hours.
Box 2: Azure Blob storage lifecycle management rules
Blob storage lifecycle management offers a rich, rule-based policy that you can use to transition your data to the best access tier and to expire data at the end of its lifecycle.
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
Hotspot Question
You have an Azure subscription that is linked to a hybrid Azure Active Directory (Azure AD) tenant. The subscription contains an Azure Synapse Analytics SQL pool named Pool1.
You need to recommend an authentication solution for Pool1. The solution must support multi-factor authentication (MFA) and database-level authentication.
Which authentication solution or solutions should you include in the recommendation? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: Azure AD authentication
Azure Active Directory authentication supports Multi-Factor authentication through Active Directory Universal Authentication.
Box 2: Contained database users
Azure Active Directory uses contained database users to authenticate identities at the database level.
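As an illustrative sketch (the user principal name is hypothetical), a contained database user mapped to an Azure AD identity is created directly in the user database:

-- Run in the target database (not master); creates a contained database user
-- for an Azure AD identity, authenticated at the database level.
CREATE USER [dbadmin@contoso.com] FROM EXTERNAL PROVIDER;
-- Apply least privilege, for example read-only access:
ALTER ROLE db_datareader ADD MEMBER [dbadmin@contoso.com];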
Incorrect:
SQL authentication: To connect to dedicated SQL pool (formerly SQL DW), you must provide the following information:
- Fully qualified servername
- Specify SQL authentication
- Username
- Password
- Default database (optional)
Reference:
https://docs.microsoft.com/en-us/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-authentication
Hotspot Question
You have an on-premises server named Server1 that has Microsoft SQL Server 2022 installed.
You need to define an extended event to track user connections to Server1. The solution must ensure that all connection attempts are returned.
How should you complete the query? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: sqlserver.login
This is an abbreviated version of the process_login_finish event, containing only the columns required by the external monitoring telemetry pipeline.
Use this event for Box 1 because it captures fewer columns.
Box 2: sqlserver.process_login_finish
This event is generated when the server is done processing a login (success or failure).
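A minimal sketch of an event session that captures both events; the session name and file path are illustrative:

-- Track login processing on Server1 via both events
CREATE EVENT SESSION TrackLogins ON SERVER
ADD EVENT sqlserver.login,
ADD EVENT sqlserver.process_login_finish
ADD TARGET package0.event_file (SET filename = N'C:\Temp\TrackLogins.xel');
GO
ALTER EVENT SESSION TrackLogins ON SERVER STATE = START;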
Reference:
https://www.sqlskills.com/blogs/bobb/over-1000-xevents-in-sql-server-2016-ctp2-here-are-the-new-ones/
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have an Azure Synapse Analytics dedicated SQL pool that contains a table named Table1.
You have files that are ingested and loaded into an Azure Data Lake Storage Gen2 container named container1.
You plan to insert data from the files into Table1 and transform the data. Each row of data in the files will produce one row in the serving layer of Table1.
You need to ensure that when the source data files are loaded to container1, the DateTime is stored as an additional column in Table1.
Solution: In an Azure Synapse Analytics pipeline, you use a Get Metadata activity that retrieves the DateTime of the files.
Does this meet the goal?
Answer: B
Explanation: (Visible to DumpTOP members only)
Case Study 3 - Contoso, Ltd 2
Overview
Contoso, Ltd. is a clothing retailer based in Seattle. The company has 2,000 retail stores across the United States and an emerging online presence.
The network contains an Active Directory forest named contoso.com. The forest is integrated with an Azure Active Directory (Azure AD) tenant named contoso.com. Contoso has an Azure subscription associated to the contoso.com Azure AD tenant.
Existing Environment
Transactional Data
Contoso has three years of customer, transaction, operational, sourcing, and supplier data comprised of 10 billion records stored across multiple on-premises Microsoft SQL Server servers.
The SQL Server instances contain data from various operational systems. The data is loaded into the instances by using SQL Server Integration Services (SSIS) packages.
You estimate that combining all product sales transactions into a company-wide sales transactions dataset will result in a single table that contains 5 billion rows, with one row per transaction.
Most queries targeting the sales transactions data will be used to identify which products were sold in retail stores and which products were sold online during different time periods. Sales transaction data that is older than three years will be removed monthly.
You plan to create a retail store table that will contain the address of each retail store. The table will be approximately 2 MB. Queries for retail store sales will include the retail store addresses.
You plan to create a promotional table that will contain a promotion ID. The promotion ID will be associated to a specific product. The product will be identified by a product ID. The table will be approximately 5 GB.
Streaming Twitter Data
The ecommerce department at Contoso develops an Azure logic app that captures trending Twitter feeds referencing the company's products and pushes the products to Azure Event Hubs.
Planned Changes and Requirements
Planned Changes
Contoso plans to implement the following changes:
* Load the sales transaction dataset to Azure Synapse Analytics.
* Integrate on-premises data stores with Azure Synapse Analytics by using SSIS packages.
* Use Azure Synapse Analytics to analyze Twitter feeds to assess customer sentiments about products.
Sales Transaction Dataset Requirements
Contoso identifies the following requirements for the sales transaction dataset:
* Partition data that contains sales transaction records. Partitions must be designed to provide efficient loads by month. Boundary values must belong to the partition on the right (see the sketch after this list).
* Ensure that queries joining and filtering sales transaction records based on product ID complete as quickly as possible.
* Implement a surrogate key to account for changes to the retail store addresses.
* Ensure that data storage costs and performance are predictable.
* Minimize how long it takes to remove old records.
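The right-boundary requirement corresponds to a RANGE RIGHT partition function. A minimal sketch, with a hypothetical function name and boundary values for monthly partitions:

-- Each boundary value belongs to the partition on its right (RANGE RIGHT),
-- which keeps a full month together and supports efficient loads by month.
CREATE PARTITION FUNCTION pfSalesByMonth (datetime2)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');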
Customer Sentiment Analytics Requirements
Contoso identifies the following requirements for customer sentiment analytics:
* Allow Contoso users to use PolyBase in an Azure Synapse Analytics dedicated SQL pool to query the content of the data records that host the Twitter feeds. Data must be protected by using row-level security (RLS). The users must be authenticated by using their own Azure AD credentials.
* Maximize the throughput of ingesting Twitter feeds from Event Hubs to Azure Storage without purchasing additional throughput or capacity units.
* Store Twitter feeds in Azure Storage by using Event Hubs Capture. The feeds will be converted into Parquet files.
* Ensure that the data store supports Azure AD-based access control down to the object level.
* Minimize administrative effort to maintain the Twitter feed data records.
* Purge Twitter feed data records that are older than two years.
Data Integration Requirements
Contoso identifies the following requirements for data integration:
* Use an Azure service that leverages the existing SSIS packages to ingest on-premises data into datasets stored in a dedicated SQL pool of Azure Synapse Analytics and transform the data.
* Identify a process to ensure that changes to the ingestion and transformation activities can be version-controlled and developed independently by multiple data engineers.
You need to implement the surrogate key for the retail store table. The solution must meet the sales transaction dataset requirements.
What should you create?
Answer: C
Explanation: (Visible to DumpTOP members only)
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have the on-premises networks shown in the following table.

You have an Azure subscription that contains an Azure SQL Database server named SQL1.
SQL1 contains two databases named DB1 and DB2.
You need to configure access to DB1 and DB2. The solution must meet the following requirements:
- Ensure that DB1 can be accessed only by users in Branch1.
- Ensure that DB2 can be accessed only by users in Branch2.
Solution: You connect to DB1 and run the following command.
EXECUTE sp_set_firewall_rule 'Deny db1 users', '131.107.11.0', '131.107.11.255'
You connect to DB2 and run the following command.
EXECUTE sp_set_database_firewall_rule 'Deny db2 users', '131.107.10.0', '131.107.10.255'
Does this meet the goal?
Answer: A
Explanation: (Visible to DumpTOP members only)
Hotspot Question
You have an Azure SQL database.
You are reviewing a slow performing query as shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: Live Query Statistics
Live Query Statistics displays the live execution plan and shows progress as a percentage of the execution, as seen in the exhibit.
Box 2: Key Lookup
The use of a Key Lookup operator in a query plan indicates that the query might benefit from performance tuning. For example, query performance might be improved by adding a covering index.
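For example (table, index, and column names are hypothetical), a covering index includes the columns the query needs so the lookup is no longer required:

-- The key column supports the seek; the INCLUDE columns satisfy the
-- SELECT list, eliminating the Key Lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalDue);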
Reference:
https://docs.microsoft.com/en-us/sql/relational-databases/performance/live-query-statistics?view=sql-server-ver15
https://docs.microsoft.com/en-us/sql/relational-databases/showplan-logical-and-physical-operators-reference
You are developing an application that uses Azure Data Lake Storage Gen 2.
You need to recommend a solution to grant permissions to a specific application for a limited time period.
What should you include in the recommendation?
Answer: B
Explanation: (Visible to DumpTOP members only)
Hotspot Question
You have a SQL Server on Azure Virtual Machines instance that hosts a 10-TB SQL database named DB1.
You need to identify and repair any physical or logical corruption in DB1. The solution must meet the following requirements:
- Minimize how long it takes to complete the procedure.
- Minimize data loss.
How should you complete the command? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Answer:

Explanation:
Box 1: REPAIR_REBUILD
Performs repairs that have no possibility of data loss. This option may include quick repairs, such as repairing missing rows in nonclustered indexes, and more time-consuming repairs, such as rebuilding an index.
Box 2: PHYSICAL_ONLY
Limits the checking to the integrity of the physical structure of the page and record headers and the allocation consistency of the database. This check is designed to provide a small overhead check of the physical consistency of the database, but it can also detect torn pages, checksum failures, and common hardware failures that can compromise a user's data.
Incorrect:
TABLOCK
Causes DBCC CHECKDB to obtain locks instead of using an internal database snapshot. This includes a short-term exclusive (X) lock on the database.
TABLOCK will cause DBCC CHECKDB to run faster on a database under heavy load, but will decrease the concurrency available on the database while DBCC CHECKDB is running.
EXTENDED_LOGICAL_CHECKS
If the compatibility level is 100 ( SQL Server 2008) or higher, performs logical consistency checks on an indexed view, XML indexes, and spatial indexes, where present.
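A minimal sketch for DB1. Note that PHYSICAL_ONLY is not allowed together with a repair option in a single command, so the quick check and the repair run as separate steps, and repair options require single-user mode:

-- Fast physical-consistency check of the 10-TB database:
DBCC CHECKDB (DB1) WITH PHYSICAL_ONLY;
-- If corruption is reported, repair without the possibility of data loss:
ALTER DATABASE DB1 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (DB1, REPAIR_REBUILD);
ALTER DATABASE DB1 SET MULTI_USER;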
Reference:
https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql
You have an Azure subscription. The subscription contains an instance of SQL Server on Azure Virtual Machines named SQL1 and an Azure Automation account named account1.
You need to configure account1 to restart the SQL Server Agent service if the service stops.
Which setting should you configure?
Answer: B
Case Study 1 - Litware, Inc
Overview
Litware, Inc. is a renewable energy company that has a main office in Boston. The main office hosts a sales department and the primary datacenter for the company.
Physical Locations
Existing Environment
Litware has a manufacturing office and a research office in separate locations near Boston. Each office has its own datacenter and internet connection.
The manufacturing and research datacenters connect to the primary datacenter by using a VPN.
Network Environment
The primary datacenter has an ExpressRoute connection that uses both Microsoft peering and private peering. The private peering connects to an Azure virtual network named HubVNet.
Identity Environment
Litware has a hybrid Azure Active Directory (Azure AD) deployment that uses a domain named litwareinc.com. All Azure subscriptions are associated to the litwareinc.com Azure AD tenant.
Database Environment
The sales department has the following database workload:
* An on-premises server named SERVER1 hosts an instance of Microsoft SQL Server 2012 and two 1-TB databases.
* A logical server named SalesSrv01A contains a geo-replicated Azure SQL database named SalesSQLDb1. SalesSQLDb1 is in an elastic pool named SalesSQLDb1Pool. SalesSQLDb1 uses database firewall rules and contained database users.
* An application named SalesSQLDb1App1 uses SalesSQLDb1.
The manufacturing office contains two on-premises SQL Server 2016 servers named SERVER2 and SERVER3. The servers are nodes in the same Always On availability group. The availability group contains a database named ManufacturingSQLDb1. Database administrators have two Azure virtual machines in HubVNet named VM1 and VM2 that run Windows Server 2019 and are used to manage all the Azure databases.
Licensing Agreement
Litware is a Microsoft Volume Licensing customer that has License Mobility through Software Assurance.
Current Problems
SalesSQLDb1 experiences performance issues that are likely due to out-of-date statistics and frequent blocking queries.
Requirements
Planned Changes
Litware plans to implement the following changes:
* Implement 30 new databases in Azure, which will be used by time-sensitive manufacturing apps that have varying usage patterns. Each database will be approximately 20 GB.
* Create a new Azure SQL database named ResearchDB1 on a logical server named ResearchSrv01. ResearchDB1 will contain Personally Identifiable Information (PII) data.
* Develop an app named ResearchApp1 that will be used by the research department to populate and access ResearchDB1.
* Migrate ManufacturingSQLDb1 to the Azure virtual machine platform.
* Migrate the SERVER1 databases to the Azure SQL Database platform.
Technical Requirements
Litware identifies the following technical requirements:
* Maintenance tasks must be automated.
* The 30 new databases must scale automatically.
* The use of an on-premises infrastructure must be minimized.
* Azure Hybrid Use Benefits must be leveraged for Azure SQL Database deployments.
* All SQL Server and Azure SQL Database metrics related to CPU and storage usage and limits must be analyzed by using Azure built-in functionality.
Security and Compliance Requirements
Litware identifies the following security and compliance requirements:
* Store encryption keys in Azure Key Vault.
* Retain backups of the PII data for two months.
* Encrypt the PII data at rest, in transit, and in use.
* Use the principle of least privilege whenever possible.
* Authenticate database users by using Active Directory credentials.
* Protect Azure SQL Database instances by using database-level firewall rules.
* Ensure that all databases hosted in Azure are accessible from VM1 and VM2 without relying on public endpoints.
Business Requirements
Litware identifies the following business requirements:
* Meet an SLA of 99.99% availability for all Azure deployments.
* Minimize downtime during the migration of the SERVER1 databases.
* Use the Azure Hybrid Use Benefits when migrating workloads to Azure.
* Once all requirements are met, minimize costs whenever possible.
Drag and Drop Question
You need to implement statistics maintenance for SalesSQLDb1. The solution must meet the technical requirements.
Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Answer:

Explanation:
Automating Azure SQL DB index and statistics maintenance using Azure Automation:
1. Create Azure automation account (Step 1)
2. Import SQLServer module (Step 2)
3. Add Credentials to access SQL DB
This provides a secure way to hold the login name and password that will be used to access the Azure SQL database.
4. Add a runbook to run the maintenance (Step 3)
Steps:
1. Click on "runbooks" at the left panel and then click "add a runbook"
2. Choose "create a new runbook" and then give it a name and choose "Powershell" as the type of the runbook and then click on "create"

5. Schedule task (Step 4)
Steps:
1. Click on Schedules
2. Click on "Add a schedule" and follow the instructions to choose existing schedule or create a new schedule.
Reference:
https://techcommunity.microsoft.com/t5/azure-database-support-blog/automating-azure-sql-db-index-and-statistics-maintenance-using/ba-p/368974
SIMULATION
You have a legacy application written for Microsoft SQL Server 2012. The application will be the only application that accesses db1.
You need to ensure that db1 is compatible with all the features and syntax of SQL Server 2012.
To complete this task, sign in to the virtual machine. You may need to use SQL Server Management Studio and the Azure portal.
Answer:
View or change the compatibility level of a database
You can view or change the compatibility level of a database in SQL Server, Azure SQL Database, or Azure SQL Managed Instance by using SQL Server Management Studio or Transact-SQL.
Use SQL Server Management Studio
To view or change the compatibility level of a database by using SQL Server Management Studio (SSMS):
Step 1: Connect to the appropriate server or instance hosting your database [here, db1].
Step 2: Select the server name in Object Explorer.
Step 3: Expand Databases, and, depending on the database, either select a user database or expand System Databases and select a system database.
Step 4: Right-click the database, and then select Properties.
The Database Properties dialog box opens.
Step 5: In the Select a page pane, select Options.
The current compatibility level is displayed in the Compatibility level list box.
Step 6: To change the compatibility level, select a different option from the list [Select SQL Server 2012]
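Equivalently, the compatibility level can be set with Transact-SQL; 110 is the compatibility level that corresponds to SQL Server 2012:

ALTER DATABASE db1 SET COMPATIBILITY_LEVEL = 110;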
Reference:
https://learn.microsoft.com/en-us/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database
You have the following Azure Data Factory pipelines:
- Ingest Data from System1
- Ingest Data from System2
- Populate Dimensions
- Populate Facts
Ingest Data from System1 and Ingest Data from System2 have no dependencies. Populate Dimensions must execute after Ingest Data from System1 and Ingest Data from System2.
Populate Facts must execute after the Populate Dimensions pipeline. All the pipelines must execute every eight hours.
What should you do to schedule the pipelines for execution?
Answer: D
Explanation: (Visible to DumpTOP members only)