A company has an Azure SQL Database defined as part of its Azure subscription. The Automatic tuning settings are configured, as shown below.
Would the setting of “Drop Index” be ON for the database?
A company has an application that stores its data in an Azure Cosmos DB account. The database currently has around 100 GB of data. Each entry in a collection in the database is shown below. { OrderId: number, OrderDescriptionId: number, ProductName: string, OrderValue: number } The partition key for the collection is set as OrderId. Users report that queries take a long time to execute when retrieving data using the ProductName attribute. You have to resolve the issue. You decide to create a lookup collection that uses ProductName as the partition key. Would this resolve the issue?
A company has an application that stores its data in an Azure Cosmos DB account. The database currently has around 100 GB of data. Each entry in a collection in the database is shown below. { OrderId: number, OrderDescriptionId: number, ProductName: string, OrderValue: number } The partition key for the collection is set as OrderId. Users report that queries take a long time to execute when retrieving data using the ProductName attribute. You have to resolve the issue. You decide to create a lookup collection that uses ProductName as the partition key and OrderId as a value. Would this resolve the issue?
A company has an application that stores its data in an Azure Cosmos DB account. The database currently has around 100 GB of data. Each entry in a collection in the database is shown below. { OrderId: number, OrderDescriptionId: number, ProductName: string, OrderValue: number } The partition key for the collection is set as OrderId. Users report that queries take a long time to execute when retrieving data using the ProductName attribute. You have to resolve the issue. You decide to change the partition key to include the ProductName. Would this resolve the issue?
You need to create a new Azure Databricks cluster. This cluster would connect to Azure Data Lake Storage Gen2 by using Azure Active Directory (Azure AD) integration. Which of the following would you use as the Cluster Mode?
You need to create a new Azure Databricks cluster. This cluster will connect to Azure Data Lake Storage Gen2 using Azure Active Directory (Azure AD) integration. Which of the following advanced options would you enable?
You currently have an Azure Storage Account and an Azure SQL Database defined as part of your Azure subscription. You need to move data from the Azure Storage Account to the SQL database using Azure Data Factory. You have to ensure that the following requirements are met. • Ensure that the data remains in the same region as the Azure Storage Account and the Azure SQL Database at all times. • Minimize administrative effort. Which of the following would you use as the Integration Runtime type?
You have to implement Azure Stream Analytics Functions as part of your data streaming solution. The solution has the following requirements. • Segment the data stream into distinct time segments that do not repeat or overlap. • Segment the data stream into distinct time segments that repeat and can overlap. • Segment the data stream to produce an output when an event occurs. Which of the following windowing functions would you use for the following requirement? “Segment the data stream into distinct time segments that do not repeat or overlap.”
You have to implement Azure Stream Analytics Functions as part of your data streaming solution. The solution has the following requirements. • Segment the data stream into distinct time segments that do not repeat or overlap. • Segment the data stream into distinct time segments that repeat and can overlap. • Segment the data stream to produce an output when an event occurs. Which of the following windowing functions would you use for the following requirement? “Segment the data stream into distinct time segments that repeat and can overlap.”
You have to implement Azure Stream Analytics Functions as part of your data streaming solution. The solution has the following requirements. • Segment the data stream into distinct time segments that do not repeat or overlap. • Segment the data stream into distinct time segments that repeat and can overlap. • Segment the data stream to produce an output when an event occurs. Which of the following windowing functions would you use for the following requirement? “Segment the data stream to produce an output when an event occurs.”
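For reference, the windowing questions above are testing the window functions offered by the Stream Analytics query language. The sketch below shows their syntax with the documented behavior of each noted in comments; the input name ipslabinput, the output name ipslaboutput, and the timestamp column EventTime are hypothetical placeholders, not names from the questions.

```sql
-- Tumbling window: fixed-size, contiguous time segments that do not repeat or overlap
SELECT OrderId, COUNT(*) AS OrderCount
INTO ipslaboutput
FROM ipslabinput TIMESTAMP BY EventTime
GROUP BY OrderId, TumblingWindow(minute, 5)

-- Hopping window: fixed-size segments that repeat on a schedule and can overlap
-- (window size 5 minutes, hop size 1 minute)
-- GROUP BY OrderId, HoppingWindow(minute, 5, 1)

-- Sliding window: output is produced only when an event enters or exits the window
-- GROUP BY OrderId, SlidingWindow(minute, 5)
```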
You have JSON files stored in an Azure Data Lake Storage Gen2 account. The JSON files contain the FirstName and LastName of customers. You need to use Azure Databricks to copy the data in the JSON files to an Azure SQL Data Warehouse. A new column must be created which concatenates the FirstName and LastName values. You have the following components in place in Azure. • A destination table in the SQL Data Warehouse • An Azure Blob storage container • A service principal Which of the following are actions you would perform to transfer the data onto the Azure SQL Data Warehouse table? Choose 5 answers from the options given below.
You have created an instance of Azure Databricks. You have created a cluster and a notebook. The notebook will use R as the primary language, but you also need to be able to switch the notebook to support Scala and SQL. Which of the following can be used to switch between languages in the notebook?
You have an Azure Data Lake Storage Gen2 account. You have several CSV files loaded into the account. Each file has a header row. The rows after the header row are properly formatted with a carriage return (\r) and line feed (\n). You need to load the files daily as a batch into Azure SQL Data Warehouse using PolyBase. You have to skip the header row when the files are imported. Which of the following actions would you take to implement this requirement? Choose 3 answers from the options given below.
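As background on the mechanism being tested, the DELIMITEDTEXT external file format in a dedicated SQL pool accepts a FIRST_ROW option that tells PolyBase which row to start reading from. A minimal sketch is shown below; the format name is a hypothetical placeholder.

```sql
-- Hypothetical external file format that starts reading at row 2, skipping the header row
CREATE EXTERNAL FILE FORMAT csv_skip_header
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = ',',
        STRING_DELIMITER = '"',
        FIRST_ROW = 2,          -- row 1 is the header, so start at row 2
        USE_TYPE_DEFAULT = TRUE
    )
);
```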
A company is planning to create an Azure Cosmos DB account. This account will contain a database and a collection. Around 10,000 JSON records will be written to the collection every 24 hours. The company wants to set a consistency level for the database that would meet the following requirements. • Enable monotonic reads and writes within a session. • Provide fast throughput. • Provide the lowest latency. Which of the following should be set as the consistency level for the database?
A company has an Azure SQL Data Warehouse. They have a table named ipslab_salesfact that contains data for the past 12 months. The data is partitioned by month. The table contains around a billion rows and has clustered columnstore indexes. At the beginning of each month, you need to remove the data from the table that is older than 12 months. Which of the following actions would you implement for this requirement? Choose 3 answers from the options given below.
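For context on how monthly partitions are typically aged out of a dedicated SQL pool fact table, the sketch below shows the general partition-switch pattern. The staging table name is a hypothetical placeholder; it is assumed to be empty and to share the schema and partition scheme of ipslab_salesfact.

```sql
-- Move the oldest monthly partition out of the fact table in a metadata-only operation
ALTER TABLE dbo.ipslab_salesfact
    SWITCH PARTITION 1 TO dbo.ipslab_salesfact_stage PARTITION 1;

-- The switched-out data can now be discarded without touching the remaining 12 months
TRUNCATE TABLE dbo.ipslab_salesfact_stage;
```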
You have an Azure SQL Data Warehouse. You have used PolyBase to create a table named [Ext].[ipslabitems] to query Parquet files stored in Azure Data Lake Storage Gen2. The external table has been defined with 3 columns. You have now discovered that the Parquet files contain a fourth column named ItemID. Which of the following commands can you use to add the fourth column to the external table?
You are planning to create a dimension table in an Azure SQL Data Warehouse. The data in the table will be less than 1 GB. You need to ensure that the table meets the following requirements. • Minimize data movement. • Provide the fastest query time. Which of the following would you choose as the table type?
You have an Azure SQL Database named ipslabdb. The database contains a table named ipslabcustomer. The table has a column named customerID that is of the type varchar(22). You have to implement masking for the customerID, which would meet the following requirements. • The first two prefix characters must be exposed. • The last four characters must be exposed. • All other characters must be masked. You decide to implement data masking and use a credit card function mask. Would this fulfill the requirement?
You have an Azure SQL Database named ipslabdb. The database contains a table named ipslabcustomer. The table has a column named customerID that is of the type varchar(22). You have to implement masking for the customerID, which would meet the following requirements. • The first two prefix characters must be exposed. • The last four characters must be exposed. • All other characters must be masked. You decide to implement data masking and use a random number function mask. Would this fulfill the requirement?
You have an Azure SQL Database named ipslabdb. The database contains a table named ipslabcustomer. The table has a column named customerID that is of the type varchar(22). You have to implement masking for the customerID, which would meet the following requirements. • The first two prefix characters must be exposed. • The last four characters must be exposed. • All other characters must be masked. You decide to implement data masking and use an email function mask. Would this fulfill the requirement?
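For reference, Dynamic Data Masking in Azure SQL Database also offers a custom string ("partial") mask alongside the credit card, random number, and email masks mentioned above. A minimal sketch of its syntax, using the table and column names from the questions, is shown below; the padding string is an arbitrary example.

```sql
-- Custom string mask: expose the first 2 and last 4 characters, mask everything in between
ALTER TABLE dbo.ipslabcustomer
ALTER COLUMN customerID ADD MASKED WITH (FUNCTION = 'partial(2,"XXXXXXXXXXXXXXXX",4)');
```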
You have an Azure Data Lake Storage Gen2 account. Your user account has contributor access to the storage account, and you have the application ID and access key. You need to use PolyBase to load data into an Azure SQL Data Warehouse, and you must configure PolyBase to connect the data warehouse to the storage account. Which of the following would you need to create for this requirement? Choose 3 answers from the options given below.
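As background on what PolyBase needs before it can read from Azure Data Lake Storage Gen2, the sketch below shows the usual sequence of database-scoped objects. All object names, the secret, and the abfss location are hypothetical placeholders, and the credential is shown in its storage-account-key form.

```sql
-- A database master key protects the credential secret (created once per database)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword123!>';

-- Database scoped credential holding the storage account access key
CREATE DATABASE SCOPED CREDENTIAL adls_credential
WITH IDENTITY = 'user', SECRET = '<storage-account-access-key>';

-- External data source pointing at the ADLS Gen2 container
CREATE EXTERNAL DATA SOURCE adls_source
WITH (
    TYPE = HADOOP,
    LOCATION = 'abfss://data@ipslabstorage.dfs.core.windows.net',
    CREDENTIAL = adls_credential
);

-- External file format describing the files to be read
CREATE EXTERNAL FILE FORMAT csv_format
WITH (FORMAT_TYPE = DELIMITEDTEXT, FORMAT_OPTIONS (FIELD_TERMINATOR = ','));
```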
A company has an application that allows developers to share and compare code. The conversations for the code snippets, the code snippets themselves, and the links shared are all stored in an Azure SQL Database instance. The application also allows users to search for historical conversations and code snippets. Matches to previous code snippets also take place in the application; this comparison is made via Transact-SQL functions. If the application finds a match, a link to the match is added to the conversation. Currently, the following issues are occurring within the application. • Delays occur during live conversations. • A delay occurs before the matching link appears after the code snippet is added to the conversation. Which of the following can be used to resolve the below issue? “There are delays which occur during live conversations.”
A company has an application that allows developers to share and compare code. The conversations for the code snippets, the code snippets themselves, and the links shared are all stored in an Azure SQL Database instance. The application also allows users to search for historical conversations and code snippets. Matches to previous code snippets also take place in the application; this comparison is made via Transact-SQL functions. If the application finds a match, a link to the match is added to the conversation. Currently, the following issues are occurring within the application. • Delays occur during live conversations. • A delay occurs before the matching link appears after the code snippet is added to the conversation. Which of the following can be used to resolve the below issue? “There is a delay which occurs before the matching link appears after the code snippet is added to the conversation.”
You have an Azure SQL Data Warehouse. You plan to use PolyBase to load data from CSV files located in Azure Data Lake Storage Gen2 by using an external table. You need to monitor for files that have invalid schema errors. Which of the following is an error you would monitor for?
The security team in your company currently uses Azure Databricks to analyze data emitted from various sources. You have to send the Apache Spark-level events, the Spark Structured Streaming metrics, and the application metrics to Azure Monitor. Which of the following would you implement for this requirement? Choose 3 answers from the options given below.
You need to enable Transparent Data Encryption for an Azure SQL database. Which of the following steps would you perform for this requirement? Choose 4 answers from the options given below.
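For context, the classic Transparent Data Encryption enablement sequence on SQL Server consists of four T-SQL steps, sketched below with hypothetical object names. On Azure SQL Database with service-managed keys, TDE is on by default and only the final ALTER DATABASE statement (or its portal/PowerShell equivalent) applies.

```sql
-- 1. Create a master key in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword123!>';

-- 2. Create a certificate protected by the master key
CREATE CERTIFICATE ipslabTdeCert WITH SUBJECT = 'TDE certificate';

-- 3. Create a database encryption key protected by the certificate
USE ipslabdb;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE ipslabTdeCert;

-- 4. Turn encryption on for the database
ALTER DATABASE ipslabdb SET ENCRYPTION ON;
```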
An application is currently making use of a database on the Azure platform. Below is a snippet of the code base.
private static readonly string ipslabendpointUrl = ConfigurationManager.AppSettings["EndpointUrl"];
private static readonly SecureString ipslabkey = ToSecureString(ConfigurationManager.AppSettings["AuthorizationKey"]);
var ipslab_client = new CosmosClient(new Uri(ipslabendpointUrl), ipslabkey);
Database database = await ipslab_client.CreateDatabaseAsync(new Database { Id = "ipslabdb" });
Which of the following is the type of database the code is connecting to?
An application is currently making use of a database on the Azure platform. Below is a snippet of the code base.
private static readonly string ipslabendpointUrl = ConfigurationManager.AppSettings["EndpointUrl"];
private static readonly SecureString ipslabkey = ToSecureString(ConfigurationManager.AppSettings["AuthorizationKey"]);
var ipslab_client = new DocumentClient(new Uri(ipslabendpointUrl), ipslabkey);
Database database = await ipslab_client.CreateDatabaseAsync(new Database { Id = "ipslabdb" });
Which of the following is the key type used in the code?
A company is planning to set up an Azure SQL database to store sensitive data. The company wants to monitor data usage and data copied from the system to prevent data leakage. The company also wants to configure the Azure SQL database to email a specific user when the data leakage occurs. Which of the following activities would you need to perform? Choose 3 answers from the options given below.
Your company currently has an enterprise data warehouse in Azure Synapse Analytics. You have to monitor the solution to see whether the data warehouse needs to be scaled up based on the current workloads. Which of the following metrics would you monitor for this requirement?
A company wants to implement a lambda architecture on Microsoft Azure. The following are the key requirements for each architecture layer. Data storage • The data store should serve as a repository for high volumes of files. • The files can be large and of different formats. • It should be optimized for big data analytics workloads. • The data should be organized using a hierarchical structure. Batch processing • This layer should provide a managed solution for in-memory computation processing. • It should provide support for a variety of programming languages. • It should provide the ability to resize and terminate the cluster automatically. Analytical data store • This layer must provide support for SQL language. • It must implement native columnar storage. • It should support parallel processing. Which of the following should be used as a technology for the “Data Storage” layer?
A company wants to implement a lambda architecture on Microsoft Azure. The following are the key requirements for each architecture layer. Data storage • The data store should serve as a repository for high volumes of files. • The files can be large and of different formats. • It should be optimized for big data analytics workloads. • The data should be organized using a hierarchical structure. Batch processing • This layer should provide a managed solution for in-memory computation processing. • It should provide support for a variety of programming languages. • It should provide the ability to resize and terminate the cluster automatically. Analytical data store • This layer must provide support for SQL language. • It must implement native columnar storage. • It should support parallel processing. Which of the following should be used as a technology for the “Batch processing” layer?
A company wants to implement a lambda architecture on Microsoft Azure. The following are the key requirements for each architecture layer. Data storage • The data store should serve as a repository for high volumes of files. • The files can be large and of different formats. • It should be optimized for big data analytics workloads. • The data should be organized using a hierarchical structure. Batch processing • This layer should provide a managed solution for in-memory computation processing. • It should provide support for a variety of programming languages. • It should provide the ability to resize and terminate the cluster automatically. Analytical data store • This layer must provide support for SQL language. • It must implement native columnar storage. • It should support parallel processing. Which of the following should be used as a technology for the “Analytical data store” layer?
Case Study Overview: Ipslabs is an online training provider. They also provide a yearly gaming competition for their students. The competition is held every month in different locations. Current Environment: The company currently has the following environment in place. • The racing cars for the competition send their telemetry data to a MongoDB database. The telemetry data has around 100 attributes. • A custom application is then used to transfer the data from the MongoDB database to a SQL Server 2017 database. The attribute names are changed when they are sent to the SQL Server database. • Another application named “Ipslab workflow” is then used to perform analytics on the telemetry data to look for improvements on the racing cars. • The SQL Server 2017 database has a table named “cardata” which has around 1 TB of data. “Ipslab workflow” performs the required analytics on the data in this table. Large aggregations are performed on a column of the table. Proposed Environment: The company now wants to move the environment to Azure. Below are the key requirements. • The racing car data will now be moved to Azure Cosmos DB and Azure SQL Database. The data must be written to the closest Azure data center and must converge in the least amount of time. • The query performance for data in the Azure SQL database must be stable without the need for administrative overhead. • Transparent data encryption must be enabled for all data stores wherever possible. • An Azure Data Factory pipeline will be used to move data from the Cosmos DB database to the Azure SQL database. If there is a delay of more than 15 minutes for the data transfer, then configuration changes need to be made to the pipeline workflow. • The telemetry data must be monitored for any sort of performance issues. • The Request Units for Cosmos DB must be adjusted to meet the demand while also minimizing costs. • The data in the Azure SQL database must be protected via the following requirements: only the last four digits of the values in the column carID must be shown, and a zero value must be shown for all values in the column carWeight. Which of the following should be used as the API for the Cosmos DB account?
Refer to the Ipslabs case study above. Which of the following would you use as the consistency level for the database?
Refer to the Ipslabs case study above, with one additional requirement: the data for analytics will be moved to an Azure SQL Data Warehouse. You need to build the Azure SQL Data Warehouse data store. Which of the following would you use as the underlying table type?
Refer to the Ipslabs case study above, including the Azure SQL Data Warehouse requirement. You need to build the Azure SQL Data Warehouse data store. Which of the following would you use as the underlying index type?
Refer to the Ipslabs case study above. Which of the following masking functions should be used for the carID column?
Refer to the Ipslabs case study above. Which of the following masking functions should be used for the carWeight column?
Refer to the Ipslabs case study above. Which of the following should be included in the Data Factory pipeline?
Refer to the Ipslabs case study above. The team is monitoring the Data Factory pipeline and can see that the Cosmos DB to SQL database run is taking 45 minutes. Which of the following can be carried out to improve the performance of the job?
Refer to the Ipslabs case study above. Which of the following can be used to satisfy the case study requirement? “The query performance for data in the Azure SQL database must be stable without the need for administrative overhead.”
Refer to the Ipslabs case study above. You need to monitor the telemetry data being sent to Cosmos DB so that you can decide on the amount of Request Units to provision for Cosmos DB. Which of the following metrics must you monitor? Choose 2 answers from the options given below.
You have the following query defined in Azure Stream Analytics.
WITH
step1 AS (SELECT * FROM ipslabinput1 PARTITION BY OrderId INTO 10),
step2 AS (SELECT * FROM ipslabinput2 PARTITION BY OrderId INTO 10)
SELECT * INTO ipslaboutput FROM step1 PARTITION BY OrderId
UNION step2 PARTITION BY OrderId
Would the above query join two streams of partitioned data?
You have the following query defined in Azure Stream Analytics.
WITH
step1 AS (SELECT * FROM ipslabinput1 PARTITION BY OrderId INTO 10),
step2 AS (SELECT * FROM ipslabinput2 PARTITION BY OrderId INTO 10)
SELECT * INTO ipslaboutput FROM step1 PARTITION BY OrderId
UNION step2 PARTITION BY OrderId
Must the stream scheme key and count match those of the output scheme?
You have the following query defined in Azure Stream Analytics.
WITH
step1 AS (SELECT * FROM ipslabinput1 PARTITION BY OrderId INTO 10),
step2 AS (SELECT * FROM ipslabinput2 PARTITION BY OrderId INTO 10)
SELECT * INTO ipslaboutput FROM step1 PARTITION BY OrderId
UNION step2 PARTITION BY OrderId
Would the supply of 60 streaming units optimize the performance of the query?
A company wants to use the Azure Databricks service. There is a need to create clusters based on the following configuration. • Cluster A – Here the cluster needs to be configured to terminate automatically after 120 minutes. • Cluster B – Here an environment needs to be created for each notebook. • Cluster C – Here a group of data engineers will be sharing the same cluster. Which of the following cluster types would you set for Cluster A?
A company wants to use the Azure Databricks service. There is a need to create clusters based on the following configuration. • Cluster A – Here the cluster needs to be configured to terminate automatically after 120 minutes. • Cluster B – Here an environment needs to be created for each notebook. • Cluster C – Here a group of data engineers will be sharing the same cluster. Which of the following cluster types would you set for Cluster B?
A company wants to use the Azure Databricks service. There is a need to create clusters based on the following configuration. • Cluster A – Here the cluster needs to be configured to terminate automatically after 120 minutes. • Cluster B – Here an environment needs to be created for each notebook. • Cluster C – Here a group of data engineers will be sharing the same cluster. Which of the following cluster types would you set for Cluster C?
A company has an Azure SQL Database. They want to enable diagnostics logging for the database. Which of the following can be used to store the diagnostic logs for the database? Choose 2 answers from the options given below.
Referring to the same Automatic tuning settings shown earlier, would the setting of “Create Index” be ON for the database?
A team currently manages Azure HDInsight clusters. The team spends quite a lot of time on creating and destroying clusters. They want to implement a solution that can be used to deploy Azure HDInsight clusters with minimal effort. Which of the following can they implement for this requirement?
A company needs to configure data synchronization between their on-premises Microsoft SQL Server database and an Azure SQL Database. The synchronization process must include the following. • Be able to perform an initial data synchronization to the Azure SQL Database with minimal downtime. • Be able to perform bi-directional synchronization after the initial synchronization is complete. Which of the following would you consider as the synchronization solution?
You need to migrate data from an Azure Blob storage account to an Azure SQL Data warehouse. Which of the following actions do you need to implement for this requirement? Choose 4 answers from the options given below.
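As background on the final step of a PolyBase-based load from Blob storage, once an external table over the files exists, the data is usually materialised into the warehouse with a CREATE TABLE AS SELECT statement. The table, schema, and column names below are hypothetical placeholders.

```sql
-- Load the external (Blob-backed) table into a distributed warehouse table
CREATE TABLE dbo.FactSales
WITH (
    DISTRIBUTION = HASH(OrderId),
    CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT * FROM ext.FactSales_Staging;
```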
You need to fulfill the below requirement of the case study. Overview: Ipslabs is an online training provider. Current Environment: The company currently has Microsoft SQL databases that are split into different categories or tiers. Some of the databases are used by internal users, some by external partners, and some by external distributors.
Below is the list of applications, tiers and their individual requirements.
Below are the current requirements of the company. • The databases in Tier 3 and Tiers 6 to 8 must use database density on the same server and elastic pools in a cost-effective manner. • The applications must have access to data from internal and external sources whilst ensuring data is encrypted at rest and in transit. • The databases in Tier 3 and Tiers 6 to 8 must have a recovery strategy in case the server goes offline. • The Tier 1 applications must have their databases stored on the Premium P2 tier. • The Tier 2 applications must have their databases stored on the Standard S4 tier. • Data will be migrated from the on-premises databases to Azure SQL Databases using Azure Data Factory. The pipeline must support continued data movement and migration. • The application access for Tier 7 and 8 must be restricted to the database only. • For Tier 4 and Tier 5 databases, the backup strategy must include the following: a transactional log backup every hour, a differential backup every day, and a full backup every week. • Backup strategies must be in place for all standalone Azure SQL databases using methods available with Azure SQL databases. • Tier 1 databases must implement the following data masking logic: for data type ipslabA – mask 4 or fewer string data type characters; for data type ipslabB – expose the first letter and mask the domain; for data type ipslabC – mask everything except characters at the beginning and the end. • All certificates and keys are internally managed in on-premises data stores. • For Tier 2 databases, if there are any conflicts in the data transferred from on-premises, preference should be given to the on-premises data. • Monitoring must be set up on every database. • Applications with Tiers 6 through 8 must ensure that unexpected resource storage usage is immediately reported to IT data engineers. • An Azure SQL Data Warehouse will be used to gather data from multiple internal and external databases. • The Azure SQL Data Warehouse must be optimized to use data from its cache. • The below metrics must be available when it comes to the cache: Metric ipslabA – low cache hit %, high cache usage %; Metric ipslabB – low cache hit %, low cache usage %; Metric ipslabC – high cache hit %, high cache usage %. • The reporting data for external partners must be stored in Azure Storage. The data should be made available during regular business hours in connecting regions. • The reporting for Tier 9 needs to be moved to Event Hubs. • The reporting for Tier 10 needs to be moved to Azure Blobs. The following issues have been identified in the setup. • The external partners have control over the data formats, types, and schemas. • For external-based clients, the queries cannot be changed or optimized. • The database development staff are familiar with the T-SQL language. • Because of the size and amount of data, some applications and reporting features are not performing at SLA levels. You need to fulfill the following requirement: “Applications with Tiers 6 through 8 must ensure that unexpected resource storage usage is immediately reported to IT data engineers.” Which of the following would you implement for this requirement?
You have to access Azure Blob Storage from Azure Databricks using secrets stored in a key vault. You already have the storage account, the blob container and Azure key vault in place. You decide to implement the following steps. • Add the secret to the key vault. • Create a Databricks workspace and add the access keys. • Access the blob container from Azure Databricks. Would these steps fulfill the requirement?