Microsoft DP-600 Daily Practice Exam New 2025 Updated 112 Questions [Q63-Q85]

Use Valid DP-600 Exam - Actual Exam Questions & Answers

Microsoft DP-600 Exam Syllabus Topics:

Topic 1 - Explore and analyze data: Covers performing exploratory analytics and querying data by using SQL.
Topic 2 - Plan, implement, and manage a solution for data analytics: Covers planning a data analytics environment, implementing and managing a data analytics environment, and managing the analytics development lifecycle.
Topic 3 - Implement and manage semantic models: Covers designing and building semantic models, and optimizing enterprise-scale semantic models.
Topic 4 - Prepare and serve data: Covers creating objects in a lakehouse or warehouse, copying data, transforming data, and optimizing performance.

QUESTION 63
You have a Fabric warehouse that contains a table named Sales.Orders. Sales.Orders contains the following columns.
You need to write a T-SQL query that will return the following columns.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
For PeriodDate, which returns the first day of the month for OrderDate, use DATEFROMPARTS, because it constructs a date from its individual components (year, month, day). For DayName, which returns the name of the day for OrderDate, use DATENAME with the weekday date part to get the full name of the weekday.
The complete SQL query should look like this:
SELECT OrderID, CustomerID,
       DATEFROMPARTS(YEAR(OrderDate), MONTH(OrderDate), 1) AS PeriodDate,
       DATENAME(weekday, OrderDate) AS DayName
FROM Sales.Orders
Select DATEFROMPARTS for PeriodDate and weekday for DayName in the answer area.

QUESTION 64
You have a Fabric notebook that has the Python code and output shown in the following exhibit.
Which type of analytics are you performing?
A. predictive
B. descriptive
C. prescriptive
D. diagnostic
The Python code and output shown in the exhibit display a histogram, which is a representation of the distribution of data. This kind of analysis is descriptive analytics, which is used to describe or summarize the features of a dataset. Descriptive analytics answers the question of "what has happened" by providing insight into past data through tools such as the mean, median, mode, standard deviation, and graphical representations like histograms.
References: Descriptive analytics and the use of histograms to visualize data distribution are basic concepts in data analysis, often covered in introductory analytics and Python programming resources.
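The exhibit itself is not reproduced in this export. As a purely hypothetical illustration of the pattern the explanation describes, a notebook cell along the following lines would be descriptive analytics: it only summarizes data that already exists. The column name SalesAmount and the sample values are invented for this sketch.

import pandas as pd
import matplotlib.pyplot as plt

# Invented sample data; the real exhibit's data is not shown in this export.
sales = pd.DataFrame({"SalesAmount": [12.5, 30.0, 45.2, 30.0, 18.9, 52.1, 44.0, 27.3]})

# A histogram summarizes the distribution of past observations ("what has happened").
sales["SalesAmount"].hist(bins=5)
plt.title("Distribution of SalesAmount")
plt.xlabel("SalesAmount")
plt.ylabel("Frequency")
plt.show()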
QUESTION 65
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a semantic model named Model1. You discover that the following query performs slowly against Model1. You need to reduce the execution time of the query.
Solution: You replace line 4 by using the following code:
Does this meet the goal?
A. Yes
B. No

Topic 1, Litware, Inc. Case Study

Overview
Litware, Inc. is a manufacturing company that has offices throughout North America. The analytics team at Litware contains data engineers, analytics engineers, data analysts, and data scientists.

Existing Environment
Litware has been using a Microsoft Power BI tenant for three years. Litware has NOT enabled any Fabric capacities and features.

Fabric Environment
Litware has data that must be analyzed as shown in the following table.
The Product data contains a single table and the following columns.
The customer satisfaction data contains the following tables:
* Survey
* Question
* Response
For each survey submitted, the following occurs:
* One row is added to the Survey table.
* One row is added to the Response table for each question in the survey.
The Question table contains the text of each survey question. The third question in each survey response is an overall satisfaction score. Customers can submit a survey after each purchase.

User Problems
The analytics team has large volumes of data, some of which is semi-structured. The team wants to use Fabric to create a new data store.
Product data is often classified into three pricing groups: high, medium, and low. This logic is implemented in several databases and semantic models, but the logic does NOT always match across implementations.

Planned Changes
Litware plans to enable Fabric features in the existing tenant. The analytics team will create a new data store as a proof of concept (PoC). The remaining Litware users will only get access to the Fabric features once the PoC is complete. The PoC will be completed by using a Fabric trial capacity.
The following three workspaces will be created:
* AnalyticsPOC: Will contain the data store, semantic models, reports, pipelines, dataflows, and notebooks used to populate the data store
* DataEngPOC: Will contain all the pipelines, dataflows, and notebooks used to populate OneLake
* DataSciPOC: Will contain all the notebooks and reports created by the data scientists
The following will be created in the AnalyticsPOC workspace:
* A data store (type to be decided)
* A custom semantic model
* A default semantic model
* Interactive reports
The data engineers will create data pipelines to load data to OneLake either hourly or daily depending on the data source. The analytics engineers will create processes to ingest, transform, and load the data to the data store in the AnalyticsPOC workspace daily. Whenever possible, the data engineers will use low-code tools for data ingestion. The choice of which data cleansing and transformation tools to use will be at the data engineers' discretion.
All the semantic models and reports in the AnalyticsPOC workspace will use the data store as the sole data source.

Technical Requirements
The data store must support the following:
* Read access by using T-SQL or Python
* Semi-structured and unstructured data
* Row-level security (RLS) for users executing T-SQL queries
Files loaded by the data engineers to OneLake will be stored in the Parquet format and will meet Delta Lake specifications.
Data will be loaded without transformation in one area of the AnalyticsPOC data store. The data will then be cleansed, merged, and transformed into a dimensional model. The data load process must ensure that the raw and cleansed data is updated completely before populating the dimensional model.
The dimensional model must contain a date dimension. There is no existing data source for the date dimension. The Litware fiscal year matches the calendar year. The date dimension must always contain dates from 2010 through the end of the current year.
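Because there is no source for the date dimension, one option is to generate it. The following is a minimal PySpark sketch of that idea, assuming the dimension is built in a Fabric notebook; the target table name dbo.DimDate and the chosen attribute columns are illustrative only, not part of the case study.

import pyspark.sql.functions as F
from datetime import date
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # a Fabric notebook already provides "spark"

# One row per day from 2010-01-01 through the end of the current year.
end_of_year = str(date(date.today().year, 12, 31))

dim_date = (
    spark.range(1)
    .select(
        F.explode(
            F.sequence(F.to_date(F.lit("2010-01-01")), F.to_date(F.lit(end_of_year)))
        ).alias("Date")
    )
    .withColumn("Year", F.year("Date"))          # fiscal year = calendar year for Litware
    .withColumn("Month", F.month("Date"))
    .withColumn("DayName", F.date_format("Date", "EEEE"))
)

# dim_date.write.mode("overwrite").format("delta").saveAsTable("dbo.DimDate")  # hypothetical target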
The product pricing group logic must be maintained by the analytics engineers in a single location. The pricing group data must be made available in the data store for T-SQL queries and in the default semantic model. The following logic must be used (a sketch of one possible implementation appears after the security requirements below):
* List prices that are less than or equal to 50 are in the low pricing group.
* List prices that are greater than 50 and less than or equal to 1,000 are in the medium pricing group.
* List prices that are greater than 1,000 are in the high pricing group.

Security Requirements
Only Fabric administrators and the analytics team must be able to see the Fabric items created as part of the PoC. Litware identifies the following security requirements for the Fabric items in the AnalyticsPOC workspace:
* Fabric administrators will be the workspace administrators.
* The data engineers must be able to read from and write to the data store. No access must be granted to datasets or reports.
* The analytics engineers must be able to read from, write to, and create schemas in the data store. They also must be able to create and share semantic models with the data analysts and view and modify all reports in the workspace.
* The data scientists must be able to read from the data store, but not write to it. They will access the data by using a Spark notebook.
* The data analysts must have read access to only the dimensional model objects in the data store. They also must have access to create Power BI reports by using the semantic models created by the analytics engineers.
* The date dimension must be available to all users of the data store.
* The principle of least privilege must be followed.
Both the default and custom semantic models must include only tables or views from the dimensional model in the data store.
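A minimal sketch of how the pricing-group rule above could live in one shared place, written here as PySpark run from a notebook. In practice this could equally be a single T-SQL view in the data store; the table name Product and the column name ListPrice are assumptions for the sketch.

import pyspark.sql.functions as F
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Fabric notebook

def with_pricing_group(df, price_col="ListPrice"):
    """Applies the single, shared pricing-group rule from the requirements."""
    return df.withColumn(
        "PricingGroup",
        F.when(F.col(price_col) <= 50, "low")        # <= 50 -> low
         .when(F.col(price_col) <= 1000, "medium")   # > 50 and <= 1,000 -> medium
         .otherwise("high"),                         # > 1,000 -> high
    )

product = with_pricing_group(spark.read.table("Product"))  # assumed source table name
# product.write.mode("overwrite").format("delta").saveAsTable("dbo.DimProduct")  # hypothetical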
Litware already has the following Microsoft Entra security groups:
* FabricAdmins: Fabric administrators
* AnalyticsTeam: All the members of the analytics team
* DataAnalysts: The data analysts on the analytics team
* DataScientists: The data scientists on the analytics team
* DataEngineers: The data engineers on the analytics team
* AnalyticsEngineers: The analytics engineers on the analytics team

Report Requirements
The data analysts must create a customer satisfaction report that meets the following requirements:
* Enables a user to select a product to filter customer survey responses to only those who have purchased that product
* Displays the average overall satisfaction score of all the surveys submitted during the last 12 months up to a selected date
* Shows data as soon as the data is updated in the data store
* Ensures that the report and the semantic model only contain data from the current and previous year
* Ensures that the report respects any table-level security specified in the source data store
* Minimizes the execution time of report queries

QUESTION 66
You have the source data model shown in the following exhibit. The primary keys of the tables are indicated by a key symbol beside the columns involved in each key.
You need to create a dimensional data model that will enable the analysis of order items by date, product, and customer.
What should you include in the solution? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

QUESTION 67
You have a Fabric tenant that contains a warehouse named Warehouse1. Warehouse1 contains three schemas named schemaA, schemaB, and schemaC. You need to ensure that a user named User1 can truncate tables in schemaA only.
How should you complete the T-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
GRANT ALTER ON SCHEMA::schemaA TO User1;
The ALTER permission allows a user to modify the schema of an object, and granting ALTER on a schema allows the user to perform operations such as TRUNCATE TABLE on any object within that schema. It is the correct permission to grant to User1 for truncating tables in schemaA.
References:
* GRANT Schema Permissions
* Permissions That Can Be Granted on a Schema

QUESTION 68
You create a semantic model by using Microsoft Power BI Desktop. The model contains one security role named SalesRegionManager and the following tables:
* Sales
* SalesRegion
* SalesAddress
You need to modify the model to ensure that users assigned the SalesRegionManager role cannot see a column named Address in SalesAddress.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Explanation:
To ensure that users assigned the SalesRegionManager role cannot see the Address column in the SalesAddress table, perform these steps in sequence:
1. Open the model in Tabular Editor.
2. Select the Address column in SalesAddress.
3. Set Object Level Security to None for SalesRegionManager.

QUESTION 69
You have a Fabric tenant that contains a new semantic model in OneLake. You use a Fabric notebook to read the data into a Spark DataFrame. You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.explain()
Does this meet the goal?
A. Yes
B. No
The df.explain() method does not meet the goal of evaluating the data to calculate these statistics. It is used to display the physical plan that Spark will execute.
References: The correct usage of the explain() function can be found in the PySpark documentation.
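For contrast, here is a short sketch of PySpark expressions that would satisfy the stated goal, assuming df is the Spark DataFrame already loaded by the notebook:

# Returns the requested statistics for the string and numeric columns of df.
df.summary("min", "max", "mean", "stddev").show()

# df.describe() is a close alternative: it reports count, mean, stddev, min, and max.
df.describe().show()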
QUESTION 70
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a Delta table named Customer. When you query Customer, you discover that the query is slow to execute. You suspect that maintenance was NOT performed on the table.
You need to identify whether maintenance tasks were performed on Customer.
Solution: You run the following Spark SQL statement:
DESCRIBE DETAIL customer
Does this meet the goal?
A. Yes
B. No

QUESTION 71
You have source data in a folder on a local computer. You need to create a solution that will use Fabric to populate a data store. The solution must meet the following requirements:
* Support the use of dataflows to load and append data to the data store.
* Ensure that Delta tables are V-Order optimized and compacted automatically.
Which type of data store should you use?
A. a lakehouse
B. an Azure SQL database
C. a warehouse
D. a KQL database
A lakehouse (A) is the type of data store you should use. It supports dataflows to load and append data and ensures that Delta tables are V-Order optimized and compacted automatically.
References: The capabilities of a lakehouse and its support for Delta tables are described in the lakehouse and Delta table documentation.

QUESTION 72
You have a data warehouse that contains a table named Stage.Customers. Stage.Customers contains all the customer record updates from a customer relationship management (CRM) system. There can be multiple updates per customer. You need to write a T-SQL query that will return the customer ID, name, postal code, and the last updated time of the most recent row for each customer ID.
How should you complete the code? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Explanation:
* In the ROW_NUMBER() function, choose OVER (PARTITION BY CustomerID ORDER BY LastUpdated DESC).
* In the WHERE clause, choose WHERE X = 1.
To select the most recent row for each customer ID, use the ROW_NUMBER() window function partitioned by CustomerID and ordered by LastUpdated in descending order. This assigns a row number of 1 to the most recent update for each customer. By selecting rows where the row number (X) is 1, you get the latest update per customer.
References:
* Use the OVER clause to aggregate data per partition
* Use window functions
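The same "most recent row per customer" pattern can also be expressed in PySpark; the sketch below mirrors the T-SQL logic described above. Reading Stage.Customers as a Spark table and the column names CustomerName and PostalCode are assumptions for illustration.

import pyspark.sql.functions as F
from pyspark.sql import SparkSession, Window

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Fabric notebook

# Rank each customer's updates so that the most recent row gets X = 1.
w = Window.partitionBy("CustomerID").orderBy(F.col("LastUpdated").desc())

latest = (
    spark.read.table("Stage.Customers")           # assumed to be exposed to Spark
    .withColumn("X", F.row_number().over(w))
    .where("X = 1")                               # keep only the latest update per customer
    .select("CustomerID", "CustomerName", "PostalCode", "LastUpdated")
)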
QUESTION 73
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a table named Table1. You are creating a new data pipeline. You plan to copy external data to Table1. The schema of the external data changes regularly.
You need the copy operation to meet the following requirements:
* Replace Table1 with the schema of the external data.
* Replace all the data in Table1 with the rows in the external data.
You add a Copy data activity to the pipeline. What should you do for the Copy data activity?
A. From the Source tab, add additional columns.
B. From the Destination tab, set Table action to Overwrite.
C. From the Settings tab, select Enable staging.
D. From the Source tab, select Enable partition discovery.
E. From the Source tab, select Recursively.

QUESTION 74
What should you recommend using to ingest the customer data into the data store in the AnalyticsPOC workspace?
A. a Spark notebook
B. a pipeline that contains a KQL activity
C. a stored procedure
D. a dataflow
For ingesting the customer data into the data store in the AnalyticsPOC workspace, a dataflow (D) should be recommended. Dataflows are designed to ingest, cleanse, transform, and load data, and they allow for the low-code ingestion and transformation of data required by Litware's technical requirements.
References: You can learn more about dataflows and their use in Microsoft's Power BI documentation.

QUESTION 75
You have a Fabric tenant that contains a machine learning model registered in a Fabric workspace. You need to use the model to generate predictions by using the predict function in a Fabric notebook.
Which two languages can you use to perform model scoring? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.
A. T-SQL
B. DAX
C. Spark SQL
D. PySpark
The two languages you can use to perform model scoring in a Fabric notebook using the predict function are Spark SQL (option C) and PySpark (option D). These are both part of the Apache Spark ecosystem and are supported for machine learning tasks in a Fabric environment.
References: You can find more information about model scoring and the supported languages in the context of Fabric notebooks in the official documentation on Azure Synapse Analytics.
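As a hedged PySpark-side sketch of what such scoring can look like, assuming the SynapseML MLFlowTransformer wrapper that Fabric notebooks commonly use for the PREDICT function; the model name, version, and feature columns below are placeholders, not values from the question.

from synapse.ml.predict import MLFlowTransformer

# Wraps a model registered in the Fabric workspace so it can score a Spark DataFrame.
model = MLFlowTransformer(
    inputCols=["feature1", "feature2"],   # hypothetical feature columns
    outputCol="predictions",
    modelName="my_registered_model",      # hypothetical registered model name
    modelVersion=1,
)

scored_df = model.transform(df)           # df: a Spark DataFrame containing the rows to score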
QUESTION 76
You have a Fabric tenant that contains 30 CSV files in OneLake. The files are updated daily.
You create a Microsoft Power BI semantic model named Model1 that uses the CSV files as a data source. You configure incremental refresh for Model1 and publish the model to a Premium capacity in the Fabric tenant.
When you initiate a refresh of Model1, the refresh fails after running out of resources.
What is a possible cause of the failure?
A. Query folding is occurring.
B. Only refresh complete days is selected.
C. XMLA Endpoint is set to Read Only.
D. Query folding is NOT occurring.
E. The data type of the column used to partition the data has changed.
A possible cause of the failure is that query folding is NOT occurring (D). Query folding helps optimize refresh by pushing the query logic down to the source system, reducing the amount of data processed and transferred and thereby conserving resources.
References: The Power BI documentation on incremental refresh and query folding provides detailed information on this topic.

QUESTION 77
You are creating a dataflow in Fabric to ingest data from an Azure SQL database by using a T-SQL statement. You need to ensure that any foldable Power Query transformation steps are processed by the Microsoft SQL Server engine.
How should you complete the code? To answer, drag the appropriate values to the correct targets. Each value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.

QUESTION 78
You have a Fabric tenant that contains a warehouse. You use a dataflow to load a new dataset from OneLake to the warehouse.
You need to add a Power Query step to identify the maximum values for the numeric columns.
Which function should you include in the step?
A. Table.MaxN
B. Table.Max
C. Table.Range
D. Table.Profile
The Table.Max function should be used in the Power Query step to identify the maximum values for the numeric columns. This function is designed to calculate the maximum value across each column in a table, which suits the requirement of finding the maximum values for numeric columns.
References: For detailed information on Power Query functions, including Table.Max, refer to the Power Query M function reference.

QUESTION 79
You have a Fabric workspace named Workspace1 that contains a dataflow named Dataflow1. Dataflow1 contains a query that returns the data shown in the following exhibit.
You need to transform the date columns into attribute-value pairs, where columns become rows. You select the VendorID column.
Which transformation should you select from the context menu of the VendorID column?
A. Group by
B. Unpivot columns
C. Unpivot other columns
D. Split column
E. Remove other columns
The transformation you should select from the context menu of the VendorID column to transform the date columns into attribute-value pairs, where columns become rows, is Unpivot columns (B). This transformation turns the selected columns into rows with two new columns, one for the attribute (the original column names) and one for the value (the data from the cells).
References: Techniques for unpivoting columns are covered in the Power Query documentation, which explains how to use the transformation in data modeling.

QUESTION 80
You have a Fabric tenant that contains a semantic model. The model uses Direct Lake mode.
You suspect that some DAX queries load unnecessary columns into memory. You need to identify the frequently used columns that are loaded into memory.
What are two ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct answer is worth one point.
A. Use the Analyze in Excel feature.
B. Use the Vertipaq Analyzer tool.
C. Query the $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS dynamic management view (DMV).
D. Query the DISCOVER_MEMORYGRANT dynamic management view (DMV).
The Vertipaq Analyzer tool (B) and querying the $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS dynamic management view (DMV) (C) can help identify which columns are frequently loaded into memory. Both methods provide insight into the storage and retrieval aspects of the semantic model.
References: The Power BI documentation on Vertipaq Analyzer and DMV queries offers detailed guidance on how to use these tools for performance analysis.

QUESTION 81
You have a Fabric tenant that contains a lakehouse named Lakehouse1. Lakehouse1 contains a subfolder named Subfolder1 that contains CSV files. You need to convert the CSV files into the Delta format with V-Order optimization enabled.
What should you do from Lakehouse explorer?
A. Use the Load to Tables feature.
B. Create a new shortcut in the Files section.
C. Create a new shortcut in the Tables section.
D. Use the Optimize feature.
To convert CSV files into the Delta format with V-Order optimization enabled, use the Optimize feature (D) from Lakehouse explorer. This allows you to optimize the file organization for the most efficient querying.
References: The process for converting and optimizing file formats within a lakehouse is discussed in the lakehouse management documentation.
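As a rough notebook-side counterpart to the Lakehouse explorer options discussed above, the sketch below loads the CSV files into a Delta table and then runs table maintenance with V-Order, assuming Fabric's OPTIMIZE ... VORDER Spark SQL syntax is available in the workspace; the table name sales_from_csv is a placeholder.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Fabric notebook

# Load the CSV files from the lakehouse Files area into a Delta table.
df = spark.read.option("header", "true").csv("Files/Subfolder1/")
df.write.mode("overwrite").format("delta").saveAsTable("sales_from_csv")

# Compact the table's files and apply V-Order as a maintenance step.
spark.sql("OPTIMIZE sales_from_csv VORDER")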
QUESTION 82
You have a Fabric tenant that contains a Microsoft Power BI report named Report1. Report1 is slow to render. You suspect that an inefficient DAX query is being executed.
You need to identify the slowest DAX query, and then review how long the query spends in the formula engine as compared to the storage engine.
Which five actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Explanation:
To identify the slowest DAX query and analyze the time it spends in the formula engine compared to the storage engine, perform the following actions in sequence:
1. From Performance analyzer, capture a recording.
2. View the Server Timings tab.
3. Enable Query Timings and Server Timings. Run the query.
4. View the Query Timings tab.
5. Sort the Duration (ms) column in descending order by DAX query time.

QUESTION 83
You are creating a semantic model in Microsoft Power BI Desktop. You plan to make bulk changes to the model by using the Tabular Model Definition Language (TMDL) extension for Microsoft Visual Studio Code.
You need to save the semantic model to a file. Which file format should you use?
A. PBIP
B. PBIX
C. PBIT
D. PBIDS

QUESTION 84
You have a Fabric tenant that contains a complex semantic model. The model is based on a star schema and contains many tables, including a fact table named Sales.
You need to create a diagram of the model. The diagram must contain only the Sales table and related tables.
What should you use from Microsoft Power BI Desktop?
A. data categories
B. Data view
C. Model view
D. DAX query view

QUESTION 85
You have a Fabric tenant that contains a workspace named Workspace1. Workspace1 is assigned to a Fabric capacity.
You need to recommend a solution to provide users with the ability to create and publish custom Direct Lake semantic models by using external tools. The solution must follow the principle of least privilege.
Which three actions in the Fabric Admin portal should you include in the recommendation? Each correct answer presents part of the solution.
NOTE: Each correct answer is worth one point.
A. From the Tenant settings, set Allow XMLA Endpoints and Analyze in Excel with on-premises datasets to Enabled.
B. From the Tenant settings, set Allow Azure Active Directory guest users to access Microsoft Fabric to Enabled.
C. From the Tenant settings, select Users can edit data models in the Power BI service.
D. From the Capacity settings, set XMLA Endpoint to Read Write.
E. From the Tenant settings, set Users can create Fabric items to Enabled.
F. From the Tenant settings, enable Publish to Web.
For users to create and publish custom Direct Lake semantic models by using external tools, while following the principle of least privilege, the actions to include are enabling XMLA endpoints (A), allowing users to edit data models in the Power BI service (C), and setting XMLA Endpoint to Read Write in the Capacity settings (D).
References: More information can be found in the Admin portal documentation for the Power BI service, which details tenant and capacity settings.
Test Engine to Practice DP-600 Test Questions: https://www.exams4sures.com/Microsoft/DP-600-practice-exam-dumps.html